I had admired his work for more than a decade, but I did not meet Pietro Enrico Di Prampero in person until 2010, at the French Football Federation. Our research unit invited Dr Di Prampero to discuss the premise of the metabolic power concept with all our staff. Before his formal talk, we had the chance to spend some informal time together, sharing a nice lunch. I will forever remember his responses to a series of short questions during our discussion:
Me: Hey, Pietro. Many of your papers have been cited more than 600 times, and your H-index is off the chart. Obviously, in academic terms, you’re a rock star. So what’s the paper you’re most proud of?
P.E. Di Prampero: Thanks, but I really don’t know. In fact, I’m not exactly sure what you mean … citation counts? H-what? Martin, I have simply tried to put on paper what I thought was correct and helpful for our community. The rest really doesn’t matter, right?
Me: … Well … sure … I guess. …
In the current “publish or perish” academic research landscape, such a discussion may sound completely bizarre. Today, bulimic academics (me included) list every publication they have ever authored on their personal websites, advertise their Google Scholar page with their H-index in their CV, and share copies of paper acceptance notifications on social media (often without any content). Many prioritize quantity over quality and substantial contribution to the field, using various strategies (“Hey, I am sure that with these data we can do another cool paper” or “Let’s form an alliance with other researchers to increase our publication rate”).
A crucial question then remains for all of us today: What is the motive behind our drive to publish? Where is the “why”?1,2 Are we doing this to increase our academic profile? Or are we truly trying to bring something new and useful to practitioners? Are we doing this for ourselves, or for others and our field as a whole? As editors, we lose count of the number of times we are asked by students about the status of a paper that they “critically need published in order to complete their PhD due in 3 weeks.” Although asking this question does not mean that the paper cannot be part of something bigger, such motivations are often not what they should be.
Following previous International Journal of Sports Physiology and Performance editorials on the importance of understanding the real needs of practitioners,3 asking the right questions, and generating relevant research4 to bridge the gap between science and practice,5–7 one of the most important issues to keep in mind when producing a paper is that it should be “usable” and allow effective translation of the findings to the field. The great Aaron Coutts recently insisted on the importance of clearly writing the “Practical Applications” section of a manuscript.8 But what about the actual study design, the data presentation, and analysis?
Too often, researchers end up compromising the potential applicability of their study to increase its chances of being published. This includes ensuring that “there will be some results” (they look for large-magnitude effects and a large enough sample size to mechanically reach their small P value), oversimplifying intervention protocols to gain “control and feasibility,” and, sometimes, partially masking or cherry-picking data to make the paper “suitable” for publication. Although such strategies are effective in terms of publication probability, they systematically prevent the reader from finding relevant information that would assist practitioners, coaches, and athletes. As an example, practitioners would be interested in knowing the individual physiological and psychological responses to partial sleep deprivation after an away match. This is a complex context, as it includes stress related to the match, flights, suboptimal nutrition before sleeping from 6 to 10 AM, and so forth. To tackle this question, researchers have typically examined the effect of complete sleep deprivation in a lab and reported group average responses (bar graphs, with no individual responses shown9). Of course, reproducing an exact real-life scenario is challenging, but there remain elements that must be considered even though they may reduce the likelihood of paper acceptance. We need to be clear on what we want when it comes to publishing. If it is having an impact that is meaningful,3 that is, a real effect on the field of sport performance, we (as researchers, reviewers, and editors) need to understand that less-controlled real-life scenarios, appropriately analyzed and presented, are very likely more relevant than laboratory-based studies with poorly interpreted data.4
If you were to ask a book editor for tips on writing the book you want to write, they would start by suggesting that you be clear on the following: (1) Why do you want to write it? (2) Are you sufficiently experienced and knowledgeable to write it, based on your background? (3) What is it going to bring to the current literature and field of understanding? and (4) Who is your audience? This applies 100% to our field (see Figure 1). As researchers, we need to ask ourselves more often why, how, what, and, more importantly, for whom we are publishing. Is it for ourselves or others? Is it for science and understanding or for improving our LinkedIn and ResearchGate profiles? Of course, the response is likely (and should be) “for both.” Although generating better research outcomes oriented at improving our field of knowledge will also benefit the researcher, publishing selfishly, only for the sake of publishing, compromises the relevance of the manuscripts needed to reach our objective and will not take us anywhere. The latter approach makes more “noise” and hides the important signal.
Clearly, the giants in our field became who they are because they produced top-quality seminal papers. Unwavering in their ethics, they did not compromise their methods or ideas to satisfy reviewers and journal editors. They showed less hubris and more humility.10 It is time for us to think of others first again.
Acknowledgment
Warm thanks to the irreplaceable Paul B. Laursen (HIITScience and AUT University, Auckland, New Zealand) for his comments on a draft of this manuscript.
References
1. Clubb J. Starting with why in sports science. Sports Discovery Blog. May 6, 2018. http://sportsdiscovery.net/journal/2018/05/06/starting-why-sports-science-golden-circle/. Accessed July 10, 2020.
2. Sinek S. Start With Why: How Great Leaders Inspire Everyone to Take Action. New York, NY: Portfolio; 2009.
3. Buchheit M. Chasing the 0.2. Int J Sports Physiol Perform. 2016;11(4):417–418. PubMed ID: 27164725 doi:10.1123/ijspp.2016-0220
4. Buchheit M. Houston, we still have a problem. Int J Sports Physiol Perform. 2017;12(8):1111–1114. doi:10.1123/ijspp.2017-0422
5. Haugen T. Key success factors for merging sport science and best practice. Int J Sports Physiol Perform. 2019;15(3):1. doi:10.1123/ijspp.2019-0940
6. Sandbakk Ø. Let’s close the gap between research and practice to discover new land together! Int J Sports Physiol Perform. 2018;13(8):961. PubMed ID: 30189759 doi:10.1123/ijspp.2018-0550
7. Chamari K. The crucial role of elite athletes and expert coaches with academic profiles in developing sound sport science. Int J Sports Physiol Perform. 2019;14(4):413. PubMed ID: 30862224 doi:10.1123/ijspp.2019-0095
8. Coutts AJ. Building a bridge between research and practice—the importance of the practical application. Int J Sports Physiol Perform. 2020;15(4):449. PubMed ID: 32182588 doi:10.1123/ijspp.2020-0143
9. Clubb J. Why we need to rethink using bar graphs. Sports Discovery Blog. July 2, 2020. http://sportsdiscovery.net/journal/2020/07/02/why-we-need-to-rethink-using-bar-graphs/. Accessed July 10, 2020.
10. Foster C. Sport science: progress, hubris, and humility. Int J Sports Physiol Perform. 2019;14(2):141–143. PubMed ID: 30663915 doi:10.1123/ijspp.2018-0982