Appropriate statistical analysis is essential for accurate and reliable research. Statistical practices have an immediate impact on the perceived results of a single study, but they also have longer-term effects on the dissemination of information among scientists and the cumulative nature of research. To quantify potential problems facing the field of motor learning, we systematically reviewed publications from seven journals over the past 2 years to find experiments that tested the effects of different training conditions on delayed retention and transfer tests (i.e., classic motor learning paradigms). Eighteen studies were included. These studies had small sample sizes (Mdn n/group = 11.0, interquartile range [IQR] = 9.6–15.5), multiple dependent variables (Mdn = 2, IQR = 2–4), and many statistical tests per article (Mdn = 83.5, IQR = 55.8–112.5). The observed effect sizes were large (Mdn d = 0.71, IQR = 0.49–1.11). However, the distribution of effect sizes was biased, t(16) = 3.48, p < .01. These metadata indicate problems with the way motor learning research is conducted (or at least published). We recommend several solutions to address these issues: a priori power calculations, prespecified analyses, data sharing, and dissemination of null results. Furthermore, we hope these data will spark serious action from all stakeholders (researchers, editorial boards, and publishers) in the field.
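To illustrate the recommended a priori power calculation, the following is a minimal sketch (not taken from the article) using the standard two-sample normal-approximation formula, n per group = 2((z₁₋α/2 + z₁₋β)/d)². Applied to the review's median effect size (d = 0.71), it shows that roughly three times the observed median group size (n = 11) would be needed for 80% power; the function name and defaults are illustrative, and the normal approximation slightly underestimates the n an exact t-test calculation would give.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """A priori sample size per group for a two-sided, two-sample t-test,
    via the large-sample normal approximation: n = 2*((z_a + z_b)/d)^2.

    Illustrative sketch; exact t-based calculations give slightly larger n.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)


# Median effect size from the review (d = 0.71) vs. median group size (n = 11):
print(n_per_group(0.71))  # -> 32 per group for 80% power, about 3x the observed median n
```

The same function reproduces familiar benchmarks, e.g., a "medium" effect of d = 0.5 requires about 63 participants per group under this approximation.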
Keith Lohse, Taylor Buchanan, and Matthew Miller are with the School of Kinesiology, Auburn University, Auburn, AL.