Low Prevalence of A Priori Power Analyses in Motor Behavior Research

Brad McKay, Department of Kinesiology, McMaster University, Hamilton, ON, Canada (https://orcid.org/0000-0002-7408-2323)
Abbey Corson, School of Human Kinetics, University of Ottawa, Ottawa, ON, Canada
Mary-Anne Vinh, School of Human Kinetics, University of Ottawa, Ottawa, ON, Canada
Gianna Jeyarajan, School of Interdisciplinary Sciences, McMaster University, Hamilton, ON, Canada
Chitrini Tandon, School of Interdisciplinary Sciences, McMaster University, Hamilton, ON, Canada
Hugh Brooks, School of Human Kinetics, University of Ottawa, Ottawa, ON, Canada
Julie Hubley, School of Human Kinetics, University of Ottawa, Ottawa, ON, Canada
Michael J. Carter,* Department of Kinesiology, McMaster University, Hamilton, ON, Canada (https://orcid.org/0000-0002-0675-4271)

A priori power analyses can ensure studies are unlikely to miss interesting effects. Recent metascience has suggested that kinesiology research may be underpowered and selectively reported. Here, we examined whether power analyses are being used to ensure informative studies in motor behavior. We reviewed every article published in three motor behavior journals between January 2019 and June 2021. Power analyses were reported in 13% of studies (k = 636) that tested a hypothesis. No study targeted the smallest effect size of interest. Most studies with a power analysis relied on estimates from previous experiments, pilot studies, or benchmarks to determine the effect size of interest. Studies without a power analysis reported support for their main hypothesis 85% of the time, while studies with a power analysis found support 76% of the time. The median sample sizes were n = 17.5 without a power analysis and n = 16 with a power analysis, suggesting the typical study design was underpowered for all but the largest plausible effect size. At present, power analyses are not being used to optimize the informativeness of motor behavior research. Adoption of this widely recommended practice may greatly enhance the credibility of the motor behavior literature.
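The sample-size claims above can be illustrated with a minimal sketch of an a priori power calculation. This is not code from the paper: it uses the standard normal approximation to a two-sided, two-sample t-test (exact t-based software such as G*Power returns slightly larger numbers), and the function names `n_per_group` and `detectable_d` are illustrative choices.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a standardized
    effect d (two-sided, two-sample test; normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

def detectable_d(n, alpha=0.05, power=0.80):
    """Smallest standardized effect detectable at the target power
    with n participants per group (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.sqrt(2) * (z(1 - alpha / 2) + z(power)) / math.sqrt(n)

# A "medium" effect (d = 0.5) needs roughly 63 participants per group,
# while a typical group of n = 16 (near the medians reported above)
# only reaches 80% power for effects around d = 1.0.
print(n_per_group(0.5))            # ≈ 63 per group
print(round(detectable_d(16), 2))  # ≈ 0.99
```

Under these assumptions, the typical motor behavior study reviewed here has 80% power only for very large effects, consistent with the abstract's conclusion.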

