Context: There is an increased emphasis on the need to capture and incorporate self-reported function to make clinical decisions when providing patient-centered care. Response shift (RS), or a change in an individual’s self-evaluation of a construct, may affect the accurate assessment of change in self-reported function throughout the course of rehabilitation. A systematic review of this phenomenon may provide valuable information regarding the accuracy of self-reported function. Objectives: To systematically locate and synthesize the existing evidence regarding RS during care for various orthopedic conditions. Evidence Acquisition: Electronic databases (PubMed, MEDLINE, CINAHL, SPORTDiscus, and Psychology & Behavioral Sciences Collection) were searched from inception to November 2016. Two investigators independently assessed methodological quality using the modified Downs and Black Quality Index. The quality of evidence was assessed using the Strength-of-Recommendation Taxonomy. The magnitude of RS was examined through effect sizes. Evidence Synthesis: Nine studies were included (7 high quality and 2 low quality) with a median Downs and Black Quality Index score of 81.25% (range = 56.25%–93.75%). Overall, the studies demonstrated weak to strong effect sizes (range = −1.58 to 0.33), indicating the potential for RS. Of the 36 point estimates calculated, 22 (61.11%), 2 (5.56%), and 12 (33.33%) were associated with weak, moderate negative, and strong negative effect sizes, respectively. Conclusions: There is grade B evidence that a weak RS, in which individuals initially underestimate their disability, may occur in people undergoing rehabilitation for an orthopedic condition. It is important for clinicians to be aware of the potential shift in their patients’ internal standards, as it can affect the evaluation of health-related quality of life changes during the care of orthopedic conditions.
A shift in the internal standards of the patient can lead to subsequent misclassification of health-related quality of life changes that can adversely affect clinical decision making.
Cameron J. Powden, Matthew C. Hoch and Johanna M. Hoch
Megan N. Houston, Johanna M. Hoch and Matthew C. Hoch
Context: Postinjury, college athletes have reported elevated levels of fear. However, it is unclear how a history of ankle sprain impacts injury-related fear. Objective: The aim of this study was to determine if Fear-Avoidance Beliefs Questionnaire (FABQ) scores differ between college athletes with a history of a single ankle sprain, those with recurrent ankle sprains, and healthy controls. Design: Cross-sectional design. Setting: National Collegiate Athletic Association institutions. Patients: From a large database of college athletes, 75 participants with a history of a single ankle sprain, 44 with a history of recurrent ankle sprains (≥2), and 28 controls with no injury history were included. Main Outcome Measures: Participants completed an injury history questionnaire and the FABQ. On the injury history form, the participants were asked to indicate if they had ever sustained an ankle sprain and, if yes, to describe how many. FABQ scores ranged from 0 to 66 with higher scores representing greater fear. Results: Athletes with a history of recurrent ankle sprains (median, 28.00; interquartile range, 18.25–38.00) reported higher levels of fear than those with a history of a single ankle sprain (21.00; 8.00–31.00; P = .03; effect size = 0.199) and healthy controls (5.50; 0.00–25.00; P < .001; effect size = 0.431). Athletes with a history of a single sprain reported greater fear than healthy controls (P = .01, effect size = 0.267). Conclusions: College athletes with a history of ankle sprain exhibited greater levels of fear on the FABQ than healthy controls. These findings suggest that ankle sprains in general may increase injury-related fear and that those with a history of recurrent sprains are more vulnerable.
Matthew C. Hoch and Patrick O. McKeon
Megan N. Houston and Matthew C. Hoch
Emily H. Gabriel, Cameron J. Powden and Matthew C. Hoch
Context: The Y-Balance Test (YBT) and Star Excursion Balance Test (SEBT) are commonly used to detect deficits in dynamic postural control. There is a lack of literature on the differences in reach distances and efficiency of the tests. Objective: To compare the reach distances of the YBT and SEBT. An additional aim was to compare the time necessary to administer the 2 tests and utilize a discrete event simulation to determine the number of participants who could be screened within different scenarios. Design: Cross-sectional. Setting: Laboratory. Patients: Twenty-four physically active individuals between the ages of 18–35 years volunteered to participate in this study (M/F: 11/13; age 22.78 [2.63] y, height 173.27 [10.96] cm, mass 68.22 [4.32] kg). Intervention: The participants reported to the laboratory on one occasion and performed the YBT and SEBT. The anterior, posteromedial, and posterolateral reach distances were recorded for each test. In addition, the time to administer each test was recorded in seconds. Main Outcome Measures: The average reach distances and time for each test were used for analysis. Paired t tests were utilized to compare the reach distances and time to administer the 2 tests. A discrete event simulation was used to determine how many participants could be screened using each test. Results: The anterior reach for the SEBT (64.52% [6.07%]) was significantly greater than the YBT (61.66% [6.37%]; P < .01). The administration time for the YBT (512.42 [123.97] s) was significantly longer than the administration time for the SEBT (364.96 [69.46] s; P < .01). The discrete event simulation revealed more participants could be screened using the SEBT when compared with the YBT for every situation. Conclusion: Scores on the anterior reach of the SEBT are larger when compared with the YBT.
The discrete event simulation can successfully be used to determine how many participants could be screened with a certain amount of resources given the use of a specific test.
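The abstract does not specify the simulation's parameters, so the following is only an illustrative sketch of how a discrete event simulation could estimate screening throughput. The mean administration times (364.96 s for the SEBT, 512.42 s for the YBT) come from the study; the tester count, session length, and participant pool are assumed values.

```python
import heapq

def screened(test_time_s, n_testers, session_s, n_participants):
    """Minimal discrete event simulation of a screening session.

    Each tester is a resource; a min-heap tracks when each tester is
    next free. Participants are assigned to the earliest-available
    tester until the session clock runs out or everyone is screened.
    """
    free_at = [0.0] * n_testers  # all testers free at t = 0
    heapq.heapify(free_at)
    done = 0
    for _ in range(n_participants):
        start = heapq.heappop(free_at)   # earliest-available tester
        finish = start + test_time_s
        if finish > session_s:
            break                        # no tester can finish in time
        done += 1
        heapq.heappush(free_at, finish)
    return done

# Mean administration times from the study; 2 testers and a 2-hour
# session are hypothetical scenario parameters.
sebt = screened(364.96, n_testers=2, session_s=2 * 3600, n_participants=50)
ybt = screened(512.42, n_testers=2, session_s=2 * 3600, n_participants=50)
```

Under these assumed conditions the shorter SEBT administration time lets more participants be screened per session, consistent with the study's finding that the SEBT outperformed the YBT in every scenario simulated.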
Francesca Genoese, Shelby E. Baez, Nicholas Heebner, Matthew C. Hoch and Johanna M. Hoch
Context: Deficits in reaction time, decreased self-reported knee function, and elevated levels of injury-related fear have been observed in individuals who sustain anterior cruciate ligament injury. Understanding the relationship between these variables may provide the impetus to further investigate effective intervention strategies to address these deficits in individuals after anterior cruciate ligament reconstruction (ACLR). Objective: To examine the relationship between injury-related fear and lower-extremity visuomotor reaction time (VMRT) in individuals with a history of ACLR. A secondary purpose was to determine the relationship between self-reported knee function and lower-extremity VMRT in individuals with a history of ACLR. Design: Cross-sectional study. Setting: Laboratory. Participants: Twenty participants between the ages of 18–35 years, with a history of unilateral ACLR within the last 10 years, who injured their knee playing or training for organized or recreational sports. Main Outcome Measures: Scores on the Athlete Fear Avoidance Questionnaire (AFAQ), the Fear-Avoidance Beliefs Questionnaire (FABQ), and the Knee Injury and Osteoarthritis Outcome Score (KOOS), and reaction time (in seconds) on the lower-extremity VMRT task using the FitLight Trainer™, bilaterally. Spearman rho correlations examined the relationship between the dependent variables. Results: There was a moderate positive correlation between VMRT and FABQ-total (r = .62, P < .01), FABQ-sport (r = .56, P = .01), and FABQ-physical activity (r = .64, P < .01) for the injured limb. Correlations between FABQ scores and VMRT for the uninjured limb were weak positive correlations (r = .36–.41, P > .05). Weak correlations between the KOOS subscales, the AFAQ, and VMRT were observed for the injured limb (P > .05). Conclusions: Individuals with a history of ACLR who exhibited elevated levels of injury-related fear demonstrated slower VMRT.
There were no relationships between self-reported knee function and VMRT. Future research should explore interventions to address injury-related fear and VMRT in individuals after ACLR.
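Spearman rho, the statistic used in the study above, is the Pearson correlation computed on the ranks of the data rather than the raw values, which makes it robust to nonnormal score distributions such as FABQ totals. A minimal sketch (with hypothetical data; ties are not averaged here, which an analysis of real data would need):

```python
import math

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data.
    Sketch only -- tied values are not assigned averaged ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical illustration: fear scores and reaction times that rise
# together monotonically yield rho = 1.0.
fear = [10, 25, 40, 55]           # hypothetical FABQ scores
rt = [0.41, 0.45, 0.52, 0.60]     # hypothetical reaction times (s)
rho = spearman_rho(fear, rt)
```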
Matthew C. Hoch, Johanna M. Hoch and Megan N. Houston
The study objective was to develop a shortened version of the Foot and Ankle Ability Measure (FAAM) for individuals with chronic ankle instability (CAI). Forty individuals with CAI completed the FAAM Activities of Daily Living and Sport subscales and the Short Form-12. Analyses were completed for item reduction followed by dimensionality, coverage redundancy, and internal consistency of a reduced-item instrument. Validity was examined through correlations with the original FAAM and Short Form-12. A 12-item FAAM was created which demonstrated strong internal consistency and convergent/divergent validity. The Quick-FAAM may provide an alternative patient-reported outcome for CAI which requires less administration time.
Johanna M. Hoch, Shelby E. Baez, Robert J. Cramer and Matthew C. Hoch
Context: The modified Disablement in the Physically Active scale (mDPA) has become a commonly utilized patient-reported outcome instrument for physically active patients. However, the factor structure of this instrument has not been verified in individuals with chronic ankle instability (CAI). Furthermore, additional evidence examining the mDPA in individuals with CAI is warranted. Objective: The purpose of this study was to verify the factor structure of the mDPA and compare the physical summary component (PSC) and mental summary component (MSC) in those with and without CAI. Design: Cross-sectional. Setting: Laboratory. Participants: A total of 118 CAI and 81 healthy controls from a convenience sample participated. Intervention: Not applicable. Main Outcome Measures: All subjects completed the 16-item mDPA that included the PSC and MSC; higher scores represent greater disablement. To examine the model fit of the mDPA, single-factor and 2-factor (PSC and MSC) structures were tested. Group differences were examined with independent t tests (P ≤ .05) and Hedges’ g effect sizes (ESs). Results: Model fit indices showed the 2-factor structure to possess adequate fit to the data, χ2(101) = 275.58, P < .001, comparative fit index = .91, root mean square error of approximation = .09 (95% confidence interval [CI], .08–.11), and standardized root mean square residual = .06. All items loaded significantly and in expected directions on respective subscales (λ range = .59–.87, all Ps < .001). The CAI group reported greater disablement as indicated by PSC (CAI: 11.45 [8.30] and healthy: 0.62 [1.80], P < .001, ES = 1.67; 95% CI, 1.33–1.99) and MSC (CAI: 1.75 [2.58] and healthy: 0.58 [1.46], P < .001, ES = 0.53; 95% CI, 0.24–0.82) scores. Conclusions: The 2-factor structure of the mDPA was verified. Individuals with CAI reported greater disablement on the PSC compared with healthy controls. The moderate ES on the MSC between groups warrants further investigation.
Overall, these results indicate the mDPA is a generic patient-reported outcome instrument that can be utilized with individuals who have CAI.
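The group differences above are reported as Hedges' g, which is Cohen's d (the mean difference divided by the pooled SD) multiplied by a small-sample bias correction. A minimal sketch using the standard pooled-SD formula and the means, SDs, and group sizes reported in the abstract (recomputed values may differ from the published ones in the last digit due to rounding):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                       / (n1 + n2 - 2))
    d = (m1 - m2) / pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' correction factor
    return d * j

# Means [SDs] and group sizes from the abstract (CAI n=118, healthy n=81)
g_psc = hedges_g(11.45, 8.30, 118, 0.62, 1.80, 81)  # ≈ 1.66 (abstract: 1.67)
g_msc = hedges_g(1.75, 2.58, 118, 0.58, 1.46, 81)   # ≈ 0.53 (abstract: 0.53)
```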
Matthew C. Hoch, Lauren A. Welsch, Emily M. Hartley, Cameron J. Powden and Johanna M. Hoch
Context: The Y-Balance Test (YBT) is a dynamic balance assessment used as a preseason musculoskeletal screen to determine injury risk. While the YBT has demonstrated excellent test-retest reliability, it is unknown if YBT performance changes following participation in a competitive athletic season. Objective: Determine if a competitive athletic season affects YBT performance in field hockey players. Design: Pretest-posttest. Setting: Laboratory. Participants: Twenty NCAA Division I women's field hockey players (age = 19.55 ± 1.30 y; height = 165.10 ± 5.277 cm; mass = 62.62 ± 4.64 kg) from a single team volunteered. Participants had to be free from injury throughout the entire study and participate in all athletic activities. Interventions: Participants completed data collection sessions prior to (preseason) and following the athletic season (postseason). Between data collections, participants competed in the fall competitive field hockey season, which was ~3 months in duration. During data collection, participants completed the YBT bilaterally. Main Outcome Measures: The independent variable was time (preseason, postseason) and the dependent variables were normalized reach distances (anterior, posteromedial, posterolateral, composite) and between-limb symmetry for each reach direction. Differences between preseason and postseason were examined using paired t tests (P ≤ .05) as well as Bland-Altman limits of agreement. Results: Four players sustained a lower extremity injury during the season and were excluded from analysis. There were no significant differences between preseason and postseason reach distances for any reach directions on either limb (P ≥ .31) or in the between-limb symmetries (P ≥ .52). The limits of agreement analyses determined there was a low mean bias across measurements (≤1.67%); however, the 95% confidence intervals indicated there was high variability within the posterior reach directions over time (±4.75% to ±14.83%).
Conclusion: No changes in YBT performance were identified following a competitive field hockey season in Division I female athletes. However, the high variability within the posterior reach directions over time may limit the usefulness of these directions for injury risk stratification.
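The Bland-Altman analysis used above summarizes pre-to-post agreement as a mean bias and 95% limits of agreement (bias ± 1.96 × SD of the paired differences). A minimal sketch with hypothetical reach-distance data; the study's actual values are not reproduced here:

```python
import math

def limits_of_agreement(pre, post):
    """Bland-Altman analysis of paired measurements (e.g. preseason vs
    postseason reach distances): returns the mean bias and the 95%
    limits of agreement, bias +/- 1.96 * SD of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical normalized reach distances (% limb length)
pre = [85.0, 92.0, 88.0, 90.0]
post = [86.0, 92.0, 89.0, 90.0]
bias, (lower, upper) = limits_of_agreement(pre, post)
```

A small bias with wide limits of agreement, as the study found for the posterior directions, means the measure is stable on average but individual athletes can change substantially in either direction between sessions.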