Search Results

You are looking at 1–10 of 42 items for

  • Author: Matthew C. Hoch x
  • Refine by Access: All Content x
Restricted access

Megan N. Houston, Johanna M. Hoch, and Matthew C. Hoch

Context: Postinjury, college athletes have reported elevated levels of fear. However, it is unclear how a history of ankle sprain impacts injury-related fear. Objective: The aim of this study was to determine if Fear-Avoidance Beliefs Questionnaire (FABQ) scores differ between college athletes with a history of a single ankle sprain, those with recurrent ankle sprains, and healthy controls. Design: Cross-sectional design. Setting: National Collegiate Athletic Association institutions. Patients: From a large database of college athletes, 75 participants with a history of a single ankle sprain, 44 with a history of recurrent ankle sprains (≥2), and 28 controls with no injury history were included. Main Outcome Measures: Participants completed an injury history questionnaire and the FABQ. On the injury history form, the participants were asked to indicate if they had ever sustained an ankle sprain and, if yes, to describe how many. FABQ scores ranged from 0 to 66 with higher scores representing greater fear. Results: Athletes with a history of recurrent ankle sprains (median, 28.00; interquartile range, 18.25–38.00) reported higher levels of fear than those with a history of a single ankle sprain (21.00; 8.00–31.00; P = .03; effect size = 0.199) and healthy controls (5.50; 0.00–25.00; P < .001; effect size = 0.431). Athletes with a history of a single sprain reported greater fear than healthy controls (P = .01, effect size = 0.267). Conclusions: College athletes with a history of ankle sprain exhibited greater levels of fear on the FABQ than healthy controls. These findings suggest that ankle sprains in general may increase injury-related fear and that those with a history of recurrent sprains are more vulnerable.
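The comparisons above report medians with interquartile ranges, which points to a rank-based analysis (e.g., Mann-Whitney U); a common effect-size convention for such tests is r = Z/√N. The abstract does not state which formula was used, so the sketch below is illustrative only (the function name `mann_whitney_r` is mine, and no tie correction is applied to the normal approximation):

```python
import math

def mann_whitney_r(a, b):
    """Rank-biserial-style effect size r = Z / sqrt(N) from a Mann-Whitney U
    test, using the normal approximation without tie correction.
    Illustrative sketch; the original analysis may have differed."""
    combined = sorted(a + b)
    # Assign average ranks (1-based); tied values share their mean rank.
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + 1 + j) / 2
        i = j
    n1, n2 = len(a), len(b)
    rank_sum_a = sum(rank_of[x] for x in a)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return z / math.sqrt(n1 + n2)
```

The reported values (0.199–0.431) are consistent in scale with r, which is conventionally read as small (≈0.1) to moderate (≈0.3) to large (≈0.5), though the study's exact effect-size computation is not given in the abstract.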

Open access

Cameron J. Powden, Matthew C. Hoch, and Johanna M. Hoch

Context: There is an increased emphasis on the need to capture and incorporate self-reported function to make clinical decisions when providing patient-centered care. Response shift (RS), or a change in an individual’s self-evaluation of a construct, may affect the accurate assessment of change in self-reported function throughout the course of rehabilitation. A systematic review of this phenomenon may provide valuable information regarding the accuracy of self-reported function. Objectives: To systematically locate and synthesize the existing evidence regarding RS during care for various orthopedic conditions. Evidence Acquisition: Electronic databases (PubMed, MEDLINE, CINAHL, SPORTDiscus, and Psychology & Behavioral Sciences Collection) were searched from inception to November 2016. Two investigators independently assessed methodological quality using the modified Downs and Black Quality Index. The quality of evidence was assessed using the Strength-of-Recommendation Taxonomy. The magnitude of RS was examined through effect sizes. Evidence Synthesis: Nine studies were included (7 high quality and 2 low quality) with a median Downs and Black Quality Index score of 81.25% (range = 56.25%–93.75%). Overall, the studies demonstrated weak to strong effect sizes (range = −1.58 to 0.33), indicating the potential for RS. Of the 36 point estimates calculated, 22 (61.11%), 2 (5.56%), and 12 (33.33%) were associated with weak, moderate negative, and strong negative effect sizes, respectively. Conclusions: There is grade B evidence that a weak RS, in which individuals initially underestimate their disability, may occur in people undergoing rehabilitation for an orthopedic condition. It is important for clinicians to be aware of the potential shift in their patients’ internal standards, as it can affect the evaluation of health-related quality of life changes during the care of orthopedic conditions.
A shift in the internal standards of the patient can lead to subsequent misclassification of health-related quality of life changes that can adversely affect clinical decision making.
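Studies of response shift commonly quantify it with a "then-test" design: after treatment, patients retrospectively re-rate their baseline, and the shift is expressed as a standardized difference between the retrospective ("then") rating and the original pretest rating. The review does not specify a single formula, so the sketch below (function name mine, standardizing by the pretest SD) is one hedged illustration of how a negative effect size can indicate initially underestimated disability:

```python
from statistics import mean, stdev

def then_test_effect_size(pretest, thentest):
    """Response-shift effect size via the then-test design:
    (mean of retrospective 'then' ratings - mean of pretest ratings),
    divided by the pretest standard deviation. One common convention;
    treat as illustrative, not the review's exact method."""
    return (mean(thentest) - mean(pretest)) / stdev(pretest)
```

For example, if patients' retrospective baseline ratings are systematically lower than their original pretest ratings, the effect size is negative, matching the direction of the weak negative shifts reported above.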

Restricted access

Courtney J. DeFeo, Nathan Morelli, and Matthew C. Hoch

Clinical Scenario: Postural control deficits are one of the most common impairments associated with sport-related concussion (SRC). The Modified Balance Error Scoring System (mBESS) is one of the current standard clinical measures for assessing these deficits; however, it is dependent upon observer-rated measurements. Advancements in inertial measurement units (IMUs) lend themselves to be a viable option in objectifying postural control assessments, such as the mBESS. Clinical Question: Are IMU-based measures of the mBESS more effective than observer-rated measures of the mBESS in identifying patients with SRC? Summary of Key Findings: Following a systematic search, three studies were included. One study compared observer-rated measures of the Balance Error Scoring System and mBESS to instrumented measures of both tests and determined that the instrumented mBESS had the highest diagnostic accuracy. The results of the second study determined that IMU-based measures were successful in both classifying group and identifying task errors. The final study found that using IMUs increased sensitivity of the mBESS, specifically the double-limb stance, to group classification. Clinical Bottom Line: Instrumentation of the mBESS using IMUs provides more objective and sensitive measures of postural control in patients with SRC. Strength of Recommendation: Due to the consistent, good-quality evidence used to answer this critically appraised topic, the grade of A is recommended by the Strength of Recommendation Taxonomy.

Restricted access

Emily H. Gabriel, Cameron J. Powden, and Matthew C. Hoch

Context: The Y-Balance Test (YBT) and Star Excursion Balance Test (SEBT) are commonly used to detect deficits in dynamic postural control. There is a lack of literature on the differences in reach distances and efficiency of the tests. Objective: To compare the reach distances of the YBT and SEBT. An additional aim was to compare the time necessary to administer the 2 tests and utilize a discrete event simulation to determine the number of participants who could be screened within different scenarios. Design: Cross-sectional. Setting: Laboratory. Patients: Twenty-four physically active individuals between the ages of 18–35 years volunteered to participate in this study (M/F: 11/13; age 22.78 [2.63] y, height 173.27 [10.96] cm, mass 68.22 [4.32] kg). Intervention: The participants reported to the laboratory on one occasion and performed the YBT and SEBT. The anterior, posteromedial, and posterolateral reach distances were recorded for each test. In addition, the time to administer each test was recorded in seconds. Main Outcome Measures: The average reach distances and time for each test were used for analysis. Paired t tests were utilized to compare the reach distances and time to administer the 2 tests. A discrete event simulation was used to determine how many participants could be screened using each test. Results: The anterior reach for the SEBT (64.52% [6.07%]) was significantly greater than the YBT (61.66% [6.37%]; P < .01). The administration time for the YBT (512.42 [123.97] s) was significantly longer than the administration time for the SEBT (364.96 [69.46] s; P < .01). The discrete event simulation revealed more participants could be screened using the SEBT when compared with the YBT for every situation. Conclusion: Scores on the anterior reach of the SEBT are larger when compared with the YBT.
The discrete event simulation can successfully be used to determine how many participants could be screened with a certain amount of resources given the use of a specific test.
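A minimal version of such a discrete event simulation can be sketched as a set of identical screening stations, each occupied for one administration time (≈365 s for the SEBT, ≈512 s for the YBT, per the means above). The study's actual simulation scenarios and parameters are not given here, so the function below (`screened_in_session`, a name I have chosen) uses fixed durations purely for illustration; a fuller model would sample administration times from a distribution:

```python
import heapq

def screened_in_session(test_seconds, n_stations, session_seconds):
    """Count participants screened in one session using a minimal discrete
    event simulation: a min-heap of station free times; each screening
    occupies a station for a fixed duration. Illustrative sketch only."""
    free_at = [0.0] * n_stations  # every station is free at time zero
    heapq.heapify(free_at)
    screened = 0
    while True:
        start = heapq.heappop(free_at)       # earliest available station
        finish = start + test_seconds
        if finish > session_seconds:
            # earliest possible finish already exceeds the session,
            # so no station can complete another screening
            return screened
        screened += 1
        heapq.heappush(free_at, finish)
```

With one station and a 1-hour session, this fixed-duration model screens 9 participants with the SEBT versus 7 with the YBT, mirroring the direction of the reported result.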

Restricted access

Megan N. Houston and Matthew C. Hoch

Open access

Amy R. Barchek, Shelby E. Baez, Matthew C. Hoch, and Johanna M. Hoch

Clinical Scenario: Physical activity is vital for human health. Musculoskeletal injury may inhibit adults from participating in physical activity, and injured adults may accumulate less physical activity than adults without a history of musculoskeletal injury. Clinical Question: Do individuals with a history of ankle or knee musculoskeletal injury participate in less objectively measured physical activity compared with healthy controls? Summary of Key Findings: Four studies were included. Two studies concluded patients who have undergone an anterior cruciate ligament reconstruction (ACLR) spent less time in moderate to vigorous physical activity levels when compared with healthy controls, but still achieved the daily recommended amount of physical activity. One study determined that participants with chronic ankle instability (CAI) took fewer steps per day compared with the control group. The fourth study determined patients with patellofemoral pain were less physically active than healthy controls as they took fewer steps per day and spent less time participating in mild and high activity. Clinical Bottom Line: There is consistent, high-quality evidence that demonstrates individuals with a history of ankle or knee musculoskeletal injury participate in less objectively measured physical activity compared with healthy individuals. Strength of Recommendation: Due to the nature of the study designs of the included articles in this critically appraised topic, we recommend a grade of 3B.

Restricted access

Matthew C. Hoch, Lauren A. Welsch, Emily M. Hartley, Cameron J. Powden, and Johanna M. Hoch

Context: The Y-Balance Test (YBT) is a dynamic balance assessment used as a preseason musculoskeletal screen to determine injury risk. While the YBT has demonstrated excellent test-retest reliability, it is unknown if YBT performance changes following participation in a competitive athletic season. Objective: Determine if a competitive athletic season affects YBT performance in field hockey players. Design: Pretest-posttest. Setting: Laboratory. Participants: 20 NCAA Division I women's field hockey players (age = 19.55 ± 1.30 y; height = 165.10 ± 5.28 cm; mass = 62.62 ± 4.64 kg) from a single team volunteered. Participants had to be free from injury throughout the entire study and participate in all athletic activities. Interventions: Participants completed data collection sessions prior to (preseason) and following the athletic season (postseason). Between data collections, participants competed in the fall competitive field hockey season, which was ~3 months in duration. During data collection, participants completed the YBT bilaterally. Main Outcome Measures: The independent variable was time (preseason, postseason) and the dependent variables were normalized reach distances (anterior, posteromedial, posterolateral, composite) and between-limb symmetry for each reach direction. Differences between preseason and postseason were examined using paired t tests (P ≤ .05) as well as Bland-Altman limits of agreement. Results: 4 players sustained a lower extremity injury during the season and were excluded from analysis. There were no significant differences between preseason and postseason reach distances for any reach directions on either limb (P ≥ .31) or in the between-limb symmetries (P ≥ .52). The limits of agreement analyses determined there was a low mean bias across measurements (≤1.67%); however, the 95% confidence intervals indicated there was high variability within the posterior reach directions over time (±4.75% to ±14.83%).
Conclusion: No changes in YBT performance were identified following a competitive field hockey season in Division I female athletes. However, the variability within the posterior reach directions over time may contribute to the limited use of these directions for injury risk stratification.
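The Bland-Altman analysis above reduces to a mean bias and 95% limits of agreement computed from the paired preseason-postseason differences. The standard form of that computation is sketched below (the study's exact procedure may differ in detail, and the function name is mine):

```python
from statistics import mean, stdev

def bland_altman(preseason, postseason):
    """Bland-Altman agreement statistics for paired measurements:
    mean bias and 95% limits of agreement (bias ± 1.96 × SD of the
    paired differences). Standard formulation, shown for illustration."""
    diffs = [post - pre for pre, post in zip(preseason, postseason)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, bias - half_width, bias + half_width
```

A small bias with wide limits, as reported for the posterior reach directions, means the average change is near zero but individual athletes vary considerably between sessions.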

Restricted access

Johanna M. Hoch, Shelby E. Baez, Robert J. Cramer, and Matthew C. Hoch

Context: The modified Disablement in the Physically Active scale (mDPA) has become a commonly utilized patient-reported outcome instrument for physically active patients. However, the factor structure of this instrument has not been verified in individuals with chronic ankle instability (CAI). Furthermore, additional evidence examining the mDPA in individuals with CAI is warranted. Objective: The purpose of this study was to verify the factor structure of the mDPA and compare the physical summary component (PSC) and mental summary component (MSC) in those with and without CAI. Design: Cross-sectional. Setting: Laboratory. Participants: A total of 118 individuals with CAI and 81 healthy controls from a convenience sample participated. Intervention: Not applicable. Main Outcome Measures: All subjects completed the 16-item mDPA that included the PSC and MSC; higher scores represent greater disablement. To examine the model fit of the mDPA, single-factor and 2-factor (PSC and MSC) structures were tested. Group differences were examined with independent t tests (P ≤ .05) and Hedges’ g effect sizes (ESs). Results: Model fit indices showed the 2-factor structure to possess adequate fit to the data, χ2(101) = 275.58, P < .001, comparative-fit index = .91, root mean square error of approximation = .09 (95% confidence interval [CI], .08–.11), and standardized root mean square residual = .06. All items loaded significantly and in expected directions on respective subscales (λ range = .59–.87, all Ps < .001). The CAI group reported greater disablement as indicated from PSC (CAI: 11.45 [8.30] and healthy: 0.62 [1.80], P < .001, ES = 1.67; 95% CI, 1.33–1.99) and MSC (CAI: 1.75 [2.58] and healthy: 0.58 [1.46], P < .001, ES = 0.53; 95% CI, 0.24–0.82) scores. Conclusions: The 2-factor structure of the mDPA was verified. Individuals with CAI reported greater disablement on the PSC compared with healthy controls. The moderate ES on the MSC between groups warrants further investigation.
Overall, these results indicate the mDPA is a generic patient-reported outcome instrument that can be utilized with individuals who have CAI.
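The group comparisons above use Hedges' g, which is Cohen's d multiplied by a small-sample bias correction J = 1 − 3/(4·df − 1). The standard computation is sketched below (illustrative only; the confidence-interval procedure the study used is not reproduced here, and the function name is mine):

```python
import math
from statistics import mean, stdev

def hedges_g(group_a, group_b):
    """Hedges' g: standardized mean difference using the pooled SD,
    multiplied by the small-sample correction J = 1 - 3/(4*df - 1).
    Standard formula, shown for illustration."""
    na, nb = len(group_a), len(group_b)
    df = na + nb - 2
    pooled_sd = math.sqrt(
        ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / df
    )
    cohens_d = (mean(group_a) - mean(group_b)) / pooled_sd
    correction = 1 - 3 / (4 * df - 1)
    return cohens_d * correction
```

By the usual conventions, the reported PSC effect (1.67) is large and the MSC effect (0.53) moderate, which is consistent with the abstract's interpretation.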