Clinical Scenario: Lower-extremity injuries in the United States cost millions of dollars each year. Athletes should be screened for neuromuscular deficits and trained to correct them. The tuck jump assessment (TJA) is a plyometric tool that can be used to screen athletes for such deficits. Clinical Question: Does the TJA demonstrate both interrater and intrarater reliability in healthy individuals? Summary of Key Findings: Four of the 5 articles included in this critically appraised topic showed good to excellent reliability; however, caution should be taken in interpreting these results. Although composite scores of the TJA were found to be reliable, individual flaws do not demonstrate reliability on their own, with the exception of knee valgus at landing. Aspects of the TJA itself, including rater training, scoring system, playback speed, volume, and number of views allotted, need to be standardized before the reliability of this clinical assessment can be further researched. Clinical Bottom Line: The TJA has shown varying levels of reliability, from poor to excellent, for both interrater and intrarater reliability, given current research. Strength of Recommendation: According to the Centre for Evidence Based Medicine levels of evidence, there is level 2b evidence for research into the reliability of the TJA. This evidence has been demonstrated in elite, adolescent, and college-level athletics in the United Kingdom, Spain, and the United States. The recommendation of level 2b was chosen because these studies utilized cohort design for interrater and intrarater reliability across populations. An overall grade of B was recommended because there were consistent level 2 studies.
Marissa L. Mason, Marissa N. Clemons, Kaylyn B. LaBarre, Nicole R. Szymczak and Nicole J. Chimera
Bryan L. Riemann and George J. Davies
Context: Previous investigations have examined the reliability, normalization, and underlying projection mechanics of the seated single-arm shot-put (SSASP) test. Although the test is believed to reflect test limb strength, there have been no assessments determining whether test performance is directly associated with upper-extremity strength. Objective: To determine the relationship between isokinetic pushing force and SSASP performance and conduct a method comparison analysis of limb symmetry indices between the 2 tests. Design: Controlled laboratory study. Setting: Biomechanics laboratory. Patients (or Other Participants): Twenty-four healthy and physically active men (n = 12) and women (n = 12). Intervention(s): Participants completed the SSASP and isokinetic pushing tests using their dominant and nondominant arms. Main Outcome Measures: SSASP distance and isokinetic peak force. Results: Significant moderate to strong relationships were revealed between the SSASP distances and isokinetic peak forces for both limbs. The Bland–Altman analysis results demonstrated significantly (P < .002) greater limb symmetry indices for the SSASP (both medicine balls) than the isokinetic ratios, with biases ranging from −0.094 to −0.159. The limits of agreement results yielded intervals ranging from ±0.241 to ±0.340 and ±0.202 to ±0.221 from the biases. Conclusions: These results support the notion that the SSASP test reflects upper-extremity strength. The incongruency of the limb symmetry indices between the 2 tests is likely reflective of the differences in the movement patterns and coordination requirements of the 2 tests.
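The Bland–Altman analysis reported above reduces to a short calculation: the bias is the mean of the paired differences and the 95% limits of agreement sit 1.96 standard deviations either side of it. The sketch below illustrates this with invented limb symmetry index values, not the study's data; the function and variable names are ours.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman method comparison for paired measurements.

    Returns the bias (mean difference) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the differences).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # sample SD of the differences
    return bias, (bias - half_width, bias + half_width)

# Illustrative limb symmetry indices (nondominant / dominant) -- invented
ssasp_lsi = [0.91, 0.95, 0.88, 0.97, 0.93, 0.90]
isokinetic_lsi = [1.02, 1.08, 0.99, 1.10, 1.05, 1.01]

bias, (lo, hi) = bland_altman(ssasp_lsi, isokinetic_lsi)
print(f"bias = {bias:.3f}, 95% LoA = ({lo:.3f}, {hi:.3f})")
```

A negative bias, as in the study's reported range of −0.094 to −0.159, simply means the first method's symmetry indices run systematically below the second's.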
Eoin Everard, Mark Lyons and Andrew J. Harrison
Context: Dynamic movement-based screens, such as the Landing Error Scoring System (LESS), are becoming more widely used in research and practical settings. Currently, 3 studies have examined the reliability of the LESS. These studies have reported good interrater and intrarater reliability. However, all 3 studies involved raters who were founders of the LESS. Therefore, it is unclear whether the reported reliability reflects what would be observed with practitioners who lack such specialized and intimate knowledge of the screen and use only the standardized set of instructions. Objective: To investigate the interrater and intrarater reliability of the final score and the individual scoring criteria of the LESS. Design: Reliability protocol. Setting: Controlled laboratory. Participants: Two raters scored 30 male participants (age = 21.8 [3.9] y; height = 1.75 [0.46] m; mass = 75.5 [6.6] kg) involved in a variety of college sports. Main Outcome Measure: Two raters using only the standardized scoring sheet assessed the interrater reliability of the total score and individual scoring criteria independently of each other. The principal author scored the videos again 6 weeks later for the intrarater reliability component of the study. Intervention: Participants performed a drop box landing from a 30-cm box, which was recorded with a video camera from the front and side views. Results: The intraclass correlation coefficients for interrater and intrarater reliability of the total scores were excellent (intraclass correlation coefficients = .95 and .96, respectively; SEM = 1.01 and 1.02). The individual scoring criteria of the LESS showed moderate to perfect agreement using kappa statistics (κ = .41–1.0). Conclusion: The final score and individual scoring criteria of the LESS have acceptable reliability with raters using the standardized scoring sheet. Practitioners using only the standardized scoring sheet should feel confident that the LESS is a reliable tool.
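The kappa statistics used above for the individual LESS items correct raw percentage agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters' binary error/no-error scores follows; the scores are invented for illustration, not taken from the study.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's marginals.
    """
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    categories = set(r1) | set(r2)
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

# Illustrative binary scores for one LESS item (1 = error present) -- invented
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohens_kappa(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")
```

By the conventional Landis–Koch benchmarks, the study's range of κ = .41–1.0 spans moderate through perfect agreement.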
Caitlin Brinkman, Shelby E. Baez, Francesca Genoese and Johanna M. Hoch
Clinical Scenario: Patients after sports-related injury experience deficits in self-efficacy. Goal setting may be an appropriate psychoeducation technique to enhance self-efficacy after sports-related injury. Clinical Question: Does goal setting–enhanced rehabilitation improve self-efficacy compared with traditional rehabilitation alone in individuals with sports-related injury? Summary of Key Findings: Two randomized controlled trials were included. Both studies assessed changes in self-efficacy before and after a goal-setting intervention following sports-related injury in an athletic population, and both used the Sports Injury Rehabilitation Beliefs Survey to evaluate self-efficacy. Clinical Bottom Line: There is currently consistent, good-quality, patient-oriented evidence that supports the use of goal setting to improve self-efficacy in patients undergoing rehabilitation for sports-related injury compared with the standard of care. Future research should examine the optimal timing for the implementation of goal setting to enhance self-efficacy following sports-related injury. Strength of Recommendation: A grade of A is recommended by the Strength of Recommendation Taxonomy for consistent, good-quality, patient-oriented evidence.
Mhairi K. MacLean and Daniel P. Ferris
The authors tested 4 young healthy subjects walking with a powered knee exoskeleton to determine if it could reduce the metabolic cost of locomotion. Subjects walked with a loaded and an unloaded backpack, on a treadmill with inclinations of 0° and 15°, and outdoors over varied natural terrain. Participants walked at a self-selected speed (average 1.0 m/s) for all conditions, except incline treadmill walking (average 0.5 m/s). The authors hypothesized that the knee exoskeleton would reduce the metabolic cost of walking uphill and with a load compared with walking without the exoskeleton. The knee exoskeleton reduced metabolic cost by 4.2% in the 15° incline with the backpack load. All other conditions had an increase in metabolic cost when using the knee exoskeleton compared with not using the exoskeleton. There was more variation in metabolic cost over the outdoor walking course with the knee exoskeleton than without it. These findings indicate that powered assistance at the knee is more likely to decrease the metabolic cost of walking in uphill conditions and during loaded walking rather than in level conditions without a backpack load. Differences in positive mechanical work demand at the knee for varying conditions may explain the differences in metabolic benefit from the exoskeleton.
Zachary W. Bell, Scott J. Dankel, Robert W. Spitz, Raksha N. Chatakondi, Takashi Abe and Jeremy P. Loenneke
Context: The perceived tightness scale is suggested to be an effective method for setting subocclusive pressures with practical blood flow restriction. However, the reliability of this scale is unknown and is important as the reliability will ultimately dictate the usefulness of this method. Objective: To determine the reliability of the perceived tightness scale and investigate if the reliability differs by sex. Design: Within-participant, repeated-measures. Setting: University laboratory. Participants: Twenty-four participants (12 men and 12 women) were tested over 3 days. Main Outcome Measures: Arterial occlusion pressure (AOP) and the pressure at which the participants rated a 7 out of 10 on the perceived tightness scale in the upper arm and upper leg. Results: The percentage coefficient of variation for the measurement was approximately 12%, with no effect of sex in the upper (median δ [95% credible interval]: 0.016 [−0.741, 0.752]) or lower body (median δ [95% credible interval]: 0.266 [−0.396, 0.999]). This would produce an overestimation/underestimation of ∼25% from the mean perceived pressure in the upper body and ∼20% in the lower body. Participants rated pressures above their AOP for the upper body and below for the lower body. At the group level, there were differences in participants’ ratings for their relative AOP (7 out of 10) between day 1 and days 2 and 3 for the lower body, but no differences between sexes for the upper or lower body. Conclusions: The use of the perceived tightness scale does not provide reliable estimates of relative pressures over multiple visits. This method resulted in a wide range of relative AOPs within the same individual across days. This may preclude the use of this scale to set the pressure for those implementing practical blood flow restriction in the laboratory, gym, or clinic.
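The roughly 12% coefficient of variation reported above is a within-participant statistic: each person's day-to-day spread in rated pressure divided by their own mean. A minimal sketch of that calculation, with invented cuff pressures rather than the study's data, is:

```python
import numpy as np

def within_participant_cv(pressures):
    """Mean within-participant percentage coefficient of variation.

    `pressures` is an (n_participants, n_visits) array of the cuff
    pressures each participant rated as 7/10 on the tightness scale.
    CV per participant = SD across visits / mean across visits.
    """
    p = np.asarray(pressures, dtype=float)
    cv = p.std(axis=1, ddof=1) / p.mean(axis=1)  # one CV per participant
    return 100 * cv.mean()

# Illustrative ratings over 3 visits (arbitrary pressure units) -- invented
ratings = [[150, 170, 145],
           [130, 160, 140],
           [180, 150, 165]]

cv_pct = within_participant_cv(ratings)
print(f"CV = {cv_pct:.1f}%")
```

A CV of this size implies that a pressure chosen from a single perceived-tightness rating can sit well above or below the pressure the same person would select on another day, which is the practical concern the abstract raises.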
Mindi Fisher, Ryan Tierney, Anne Russ and Jamie Mansell
Clinical Question: In concussed patients, will having attention deficit hyperactivity disorder (ADHD) or learning difficulties (LD), versus not having ADHD or LD, cause higher symptom severity scores or invalid baseline protocols? Clinical Bottom Line: Research supports the concept that there is a difference at baseline for individuals with ADHD and/or LD compared with those who do not have these conditions.
Brittany M. Ingram, Melissa C. Kay, Christina B. Vander Vegt and Johna K. Register-Mihalik
Clinical Scenario: Current studies have identified body checking as the most common cause of sports-related concussion in ice hockey across all divisions and levels. As a result, many hockey organizations, particularly in youth sports, have implemented rules making body checking to the head, face, and/or neck illegal. One such rule in Canada makes age 13 the first age at which individuals can engage in body checking. Despite these changes, the effectiveness of their implementation on the incidence of concussion in Canadian male youth ice hockey players remains unclear. Clinical Question: What is the effect of body checking policy changes on concussion incidence in male youth ice hockey players? Summary of Key Findings: Of the 3 included studies, 2 studies reported a decrease in the incidence of concussion once a body checking policy change was implemented. The third study showed an increase; however, it is important to note that this may be due, in part, to increased awareness leading to better reporting of injuries. Clinical Bottom Line: Current evidence supports a relationship between body checking policy implementation and decreased concussion incidence; however, more research is needed to understand the long-term implications of policy change and the effects in other leagues. In addition, further data are needed to differentiate between increased concussion incidence resulting from concussion education efforts that may improve disclosure and increased concussion incidence as a direct result of policy changes. Strength of Recommendation: Grade B evidence exists that policy changes regarding body checking decrease concussion incidence in male youth ice hockey players.
Corey P. Ochs, Melissa C. Kay and Johna K. Register-Mihalik
Clinical Scenario: Collision sports are often at higher risk of concussion due to the physical nature and style of play. Typically, initial clinical recovery occurs within 7 to 10 days; however, even this time frame may result in significant time lost from play. Little has been done in previous research to analyze how individual game performance may be affected upon return to play postconcussion. Focused Clinical Question: Upon return-to-play clearance, how does sport-related concussion affect game performance of professional athletes in collision sports? Summary of Key Findings: All 3 studies included found no significant change in individual performance of professional collision-sport athletes upon returning to play from concussive injury. One of the studies indicated that there was no difference in performance between NFL athletes who did not miss a single game (returned within 7 d) and those who missed at least 1 game. One study indicated that although there was no change in performance of NFL players upon returning to play from sustained concussion, there was a decline in performance in the 2 weeks before the diagnosed injury and appearing on the injury report. The final study indicated that there was no difference in performance or style of play of NHL athletes who missed time due to concussive injury when compared with athletes who missed games for a noninjury factor. Clinical Bottom Line: There was no change in performance upon return from concussive injury, suggesting that players appear to be acutely recovered from the respective concussion before returning to play. This suggests that current policies and management properly evaluate and treat concussed athletes of these professional sports. Strength of Recommendation: Grade C evidence exists that there is no change in individual game performance in professional collision-sport athletes before and after suffering a concussion.
Manuel Trinidad-Fernández, Manuel González-Sánchez and Antonio I. Cuesta-Vargas
Context: Several studies have shown that scapular kinematics are altered in many disorders that affect the shoulder. Describing scapular motion relative to the thorax continues to be a scientific and clinical challenge. Objective: To check the validity and reliability of a new, minimally invasive method of tracking the internal and external rotation of the scapula using ultrasound imaging combined with the signal provided by a 3-dimensional electromagnetic sensor. Design: A cross-sectional study with a repeated-measures descriptive test–retest design was employed to evaluate this new tracking method. The new method was validated in vitro, and the reliability of data over repeated measures between scapula positions was calculated in vivo. Setting: University laboratory. Participants: A total of 30 healthy men and women. Main Outcome Measure: The validity of the scapula rotation tracking was assessed on the in vitro model with a Pearson correlation between the rotations obtained by the new method's 2-dimensional cross-correlation algorithm and those obtained with reference image analysis software. The reliability of the tracking of the scapula rotation was measured using the intraclass correlation coefficient. Results: In the in vitro validation, the correlation of rotations obtained by the 2 methods was good (r = .77, P = .01). The in vivo reliability was excellent (intraclass correlation coefficient = .88; 95% confidence interval, .82–.93) in the test–retest analysis of 8 measures. The intrarater analysis of variance showed no significant differences between the measures (P = .85, F = 0.46). Conclusion: Ultrasound imaging combined with a motion sensor has been shown to be a reliable and valid method for measuring internal and external rotation of the scapula during separation of the upper limb.
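The test–retest intraclass correlation coefficients reported in several of the abstracts above come from a two-way ANOVA decomposition. As a rough illustration only, the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measure); the abstracts do not state which ICC form was used, and the angle data are invented.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute-agreement, single measure.

    `data` is an (n_subjects, k_sessions) array of scores.
    """
    x = np.asarray(data, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between sessions
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative test-retest scapular rotation angles (degrees) -- invented
angles = [[32.1, 33.0], [28.4, 27.9], [35.6, 36.2], [30.0, 31.1]]

icc_val = icc_2_1(angles)
print(f"ICC(2,1) = {icc_val:.2f}")
```

The absolute-agreement form penalizes systematic shifts between sessions as well as random error, which is why it is a common choice for test–retest designs like the one described here.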