In sports, life sometimes looks relatively easy: After the race, the athlete is either on the podium or is not. A team either wins the Champions League or does not. Somebody either breaks a world record or does not. There are winners and there are losers. How simple can it be? Many of us know that it is not that simple. Is the winner really the best athlete? Did the training intervention give a performance benefit, and which performance-determining variable was improved? There is a lot of uncertainty in our day-to-day practice, while the world around us is asking for unambiguous answers. Life in sport (science) is not easy at all! In 2006, the International Journal of Sports Physiology and Performance (IJSPP) was launched to help us with these kinds of questions. As you can read in the mission statement, the journal promotes the publication of research in sport physiology and related disciplines that has direct practical application to enhancing sport performance, preventing decrements in performance, or enhancing recovery of athletes. Controlled experimental and observational research of a comprehensive or systematic nature is welcome at IJSPP, provided that appropriate standards of scientific methodology and analysis are met. This latter issue is not new. Several centuries ago, Roger Bacon recognized that absolute proof was not possible in science. One could only gauge the probability that a given answer to a question was right or wrong.
The scientific process always starts with a question and the desire to find answers to that question. We want to discover the truth, or at least come as close to it (with high probability) as possible. For the editorial team of IJSPP, the question examined in a submitted study always guides the first judgment of a paper. The second judgment concerns the scientific methodology: the process used to find the answers and the analysis done to justify them. Recently, there have been extensive debates about commonly used statistical analyses. The pros and cons of magnitude-based inferences1,2 and null hypothesis significance testing3 have been discussed extensively. Since we, the editorial team of IJSPP, are not qualified statisticians but users of statistical methods, we do not express a preference for a specific method, beyond believing that the meaningfulness of observed results must be tested. Accordingly, we ask potential reviewers of manuscripts submitted to IJSPP to do the same. Read manuscripts with great care; judge the quality of the question, scientific methodology, and analyses, but avoid rejecting manuscripts solely on the basis of the statistical method used. Sport scientists wanting to submit a manuscript to IJSPP are asked to put extensive effort into reporting the statistical results of their study and the corresponding interpretation.
Researchers who wish to submit their work to IJSPP and want to use null hypothesis significance testing are advised to read the American Statistical Association statement on statistical significance and P values4 and the recent editorial in The American Statistician.5 As stated in our submission guidelines, exact P values should be reported. In addition, we agree with Amrhein, Greenland, McShane, and more than 800 other researchers6 that we should not treat P values or other statistical measures categorically and that we should embrace uncertainty. In most cases, the studies we conduct are proofs of concept rather than perfect replications of real-world scenarios. For example, to study the effect of a certain intervention on performance, we might ask a sample of trained cyclists to twice complete a submaximal exercise bout followed by a fixed-duration time trial. However, it remains uncertain whether the same results would have been obtained if the intervention were employed by all elite cyclists competing in the Tour of Flanders. As we are uncertain about the degree of transfer and generalizability of our study findings, we should report this uncertainty as well. Our readers can then decide for themselves whether they are willing to accept this uncertainty when using the research outcomes in sport practice.
In summary, high-quality sport physiology research using different types of statistical methods (eg, magnitude-based inferences, Bayesian statistics, null hypothesis significance testing) can be submitted to IJSPP and will receive an unbiased review, provided the following guidelines are met:
- It poses an interesting research question that fits within the scope of IJSPP.
- It employs sound scientific methodology and analysis.
- Estimates are reported and interpreted with their corresponding uncertainty (confidence or compatibility intervals,6 with the level stated, ie, a 90% or 95% interval).
- If P values are reported, they are reported exactly, that is, P = .064 instead of P > .05 (a minimal illustrative sketch follows this list).
- Categorization is avoided.
- Figures and tables are used purposefully to strengthen the analysis.
- The likely real-world or practical/clinical/sporting significance is discussed alongside the statistical methods.
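To make the reporting guidance above concrete, here is a minimal, hypothetical sketch, in Python with NumPy and SciPy, of how an effect estimate from a paired design such as the cyclist time-trial example might be reported with an exact P value and a 95% compatibility interval. The data values are invented purely for illustration; IJSPP does not prescribe any particular software or analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical time-trial times (s) for 8 trained cyclists, before and after an intervention.
# These numbers are invented for illustration only.
pre = np.array([2412.0, 2388.0, 2440.0, 2401.0, 2456.0, 2395.0, 2428.0, 2417.0])
post = np.array([2398.0, 2375.0, 2431.0, 2380.0, 2449.0, 2371.0, 2420.0, 2402.0])

diff = post - pre                     # change in time-trial time (negative = faster)
n = diff.size
mean_diff = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(n)   # standard error of the mean change

# Exact P value from a paired t test (report P = .xxx rather than P < .05 or P > .05)
t_stat, p_value = stats.ttest_rel(post, pre)

# 95% compatibility (confidence) interval for the mean change
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean_diff, scale=sem)

print(f"Mean change: {mean_diff:.1f} s, "
      f"95% CI [{ci_low:.1f}, {ci_high:.1f}] s, P = {p_value:.3f}")
```

Whatever software or statistical framework is used, the principle is the same: report the estimate, the interval and its level, and the exact P value, so that readers can weigh the uncertainty for themselves.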
References
1. Sainani KL. The problem with “magnitude-based inference.” Med Sci Sports Exerc. 2018;50(10):2166–2176. PubMed ID: 29683920 doi:10.1249/MSS.0000000000001645
2. Hopkins WG, Batterham AM. The vindication of magnitude-based inference. Sportscience. 2018;22:19–27.
3. Szucs D, Ioannidis JPA. When null hypothesis significance testing is unsuitable for research: a reassessment. Front Hum Neurosci. 2017;11:390. PubMed ID: 28824397 doi:10.3389/fnhum.2017.00390
4. Wasserstein RL, Lazar NA. The ASA’s statement on p-values: context, process, and purpose. Am Stat. 2016;70(2):129–133. doi:10.1080/00031305.2016.1154108
5. Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond “p < 0.05.” Am Stat. 2019;73(suppl 1):1–19. doi:10.1080/00031305.2019.1583913
6. Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature. 2019;567(7748):305–307. PubMed ID: 30894741 doi:10.1038/d41586-019-00857-9