Search Results

Showing 1–10 of 51 items for: "orthopedics"
Open access

Cameron J. Powden, Matthew C. Hoch and Johanna M. Hoch

…the ability to use PRO data in clinical decision making. Limitations of Review: This systematic review was not without limitations. The electronic search was conducted within databases thought to be most relevant to RS and orthopedics. It is possible that articles relevant to this review were not…

Restricted access

Thomas G. Ribble, Michael H. Santare and Freeman Miller

Finite element models of the proximal femur at birth, at 2 years of age, and at 8 years of age were constructed to investigate stress patterns under different loading conditions. These loading conditions represent typical activities of a normally developing child and abnormal activity associated with muscle spasticity. The hypothesis is that the shear stresses in the growth plate correlate with the neck-shaft angle associated with valgus and normal development. Loads for the finite element models were derived from a separate muscle model used to calculate the forces across the hip joint for an arbitrary subject and activity. Results show an inverse relationship between the relative magnitude of the shear stress in the growth plate and the developing neck-shaft angle. The relatively high shear stresses generated by normal activity in the 2-year-old's growth plate correlate with the decrease in neck-shaft angle that accompanies normal development. Conversely, lower shear stresses are generated in the growth plate by loading conditions representing spasticity. These lower-magnitude shear stresses correlate with a valgus deformity, which is often observed clinically.

Restricted access

John Andrew Badagliacco and Andrew Karduna

Context: The effect of overhead throwing on shoulder proprioception is not well understood. A better understanding of this relationship is needed to protect overhead athletes from an increased risk of injury. Objective: To investigate proprioceptive alterations in the overhead thrower's shoulder. Design: Cross-sectional study. Independent variables were limb (dominant and nondominant), group (thrower or control), and target angle. Dependent variables were joint position sense and range of motion. Setting: An orthopedic biomechanics lab and university athletic training facility. Participants: Twelve Division I baseball pitchers and 13 nonthrowing control subjects. Intervention: Shoulder proprioception was assessed using an active joint repositioning task administered with an iPod Touch. Main Outcome Measures: Root mean square error and constant error of repositioning angles were used to assess accuracy and directional patterns, respectively. Results: Both groups demonstrated significantly higher joint acuity at the 80° external rotation target angle compared with 60° (1.5° [0.5°], P = .01). There were no differences in accuracy between groups. Constant error revealed differing repositioning patterns between limbs for the pitchers and also between groups for the dominant side. Whereas the throwing shoulder overshot the target angles by an average of 0.4°, all nonthrowing shoulders undershot by an average of 2.7°. Conclusions: There is no difference in shoulder joint position sense accuracy between throwers and nonthrowers, although both groups display increased accuracy closer to their end range of external rotation. The throwing shoulder demonstrates a different repositioning pattern, overshooting the desired target angle, while all other shoulders undershoot.
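The two error measures named in this abstract, root mean square error and constant error, are standard computations on signed angle differences; a minimal Python sketch with invented repositioning angles (not the study's data) shows why only the signed constant error distinguishes overshooting from undershooting:

```python
import math

def repositioning_errors(reproduced, target):
    """RMSE (unsigned accuracy) and constant error (signed bias) in degrees."""
    diffs = [r - target for r in reproduced]
    # RMSE: magnitude of error regardless of direction
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # Constant error: positive = overshoot, negative = undershoot
    constant_error = sum(diffs) / len(diffs)
    return rmse, constant_error

# Hypothetical reproduced angles (deg) for an 80-degree target
trials = [81.0, 79.5, 82.0]
rmse, ce = repositioning_errors(trials, 80.0)
```

A shoulder that alternately overshoots and undershoots can have a large RMSE yet a constant error near zero, which is why the abstract reports both.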

Restricted access

Tristan Rodik and Brendon McDermott

Clinical Scenario:

Lateral epicondylitis (LE) is a relatively common pathology capable of producing chronic debilitation in a variety of patients. A newer treatment for orthopedic conditions is platelet-rich plasma (PrP) local injection.

Focused Clinical Question:

Is PrP a more appropriate injection therapy for LE than other common injections such as corticosteroid or whole blood?

Summary of Key Findings:

Four studies were included: 1 randomized controlled trial (RCT), 2 double-blind RCTs, and 1 cohort study. Two studies compared PrP injection with corticosteroid injection; one involved a 2-y follow-up and the other a 1-y follow-up. A third study compared PrP injection with whole-blood injection over a 6-mo follow-up. The final study included a PrP-injection group and a control group. The 2 studies comparing PrP with corticosteroid injections, with 2-y and 1-y follow-ups, both favored PrP over corticosteroid injection in terms of pain reduction and functional improvement. The third study favored PrP injections over whole-blood injections at 6 mo regarding pain reduction. All studies demonstrated significant improvements with PrP over comparison injections or no injection.

Clinical Bottom Line:

PrP injections provide more favorable pain and function outcomes than whole blood and corticosteroid injections for 1–2 y after injection.

Strength of Recommendation:

Consistent findings from RCTs suggest level 1b evidence in support of PrP injection as a treatment for LE.

Restricted access

Sara J. Golec and Alison R. Valier

Clinical Scenario: Health care clinicians are encouraged to practice according to the best available evidence for the purpose of improving patient outcomes. Clinical practice guidelines are one form of evidence that has been developed to enhance the care that patients receive for particular conditions. Low back pain is a common condition in rehabilitation medicine that places a significant financial burden on the health care system. Patients with low back pain often suffer great pain and disability that can last a long time, making effective and efficient care a priority. Several guidelines for the treatment of low back pain have been created; however, there is no consensus on whether following these guidelines reduces the pain and disability experienced by patients. Clinical Question: Does adherence to clinical practice guidelines for patients with nonspecific low back pain reduce pain and disability? Summary of Key Findings: A total of 4 studies of level 3 or higher were found. All 4 studies noted an improvement in disability following guideline-adherent care. Two studies reported greater reduction in pain with guideline-adherent care, and 2 did not. Clinical Bottom Line: Moderate evidence exists to support adherence to clinical practice guidelines to improve pain and disability ratings in patients with nonspecific low back pain.

Restricted access

Johanna M. Hoch, Cori W. Sinnott, Kendall P. Robinson, William O. Perkins and Jonathan W. Hartman

Context: There is a lack of literature to support the diagnostic accuracy and cut-off scores of commonly used patient-reported outcome measures (PROMs) and clinician-oriented outcomes such as postural-control assessments (PCAs) when treating post-ACL reconstruction (ACLR) patients. These scores could help tailor treatments, enhance patient-centered care, and identify individuals in need of additional rehabilitation. Objective: To determine whether differences in 4 PROMs and 3 PCAs exist between post-ACLR and healthy participants, and to determine the diagnostic accuracy and cut-off scores of these outcomes. Design: Case control. Setting: Laboratory. Participants: A total of 20 post-ACLR and 40 healthy control participants. Main Outcome Measures: The participants completed 4 PROMs (the Disablement in the Physically Active Scale [DPA], the Fear-Avoidance Beliefs Questionnaire [FABQ], the Knee injury and Osteoarthritis Outcome Score [KOOS] subscales, and the Tampa Scale of Kinesiophobia [TSK-11]) and 3 PCAs (the Balance Error Scoring System [BESS], the modified Star Excursion Balance Test [SEBT], and static balance on an instrumented force plate). Mann-Whitney U tests examined differences between groups. Receiver operating characteristic (ROC) curves were employed to determine sensitivity and specificity. The area under the curve (AUC) was calculated to determine the diagnostic accuracy of each instrument. The Youden Index was used to determine cut-off scores. Alpha was set a priori at P < .05. Results: There were significant differences between groups for all PROMs (P < .05). There were no differences in PCAs between groups. The cut-off scores should be interpreted with caution for some instruments, as the scores may not be clinically applicable. Conclusions: Post-ACLR participants have decreased self-reported function and health-related quality of life. The PROMs are capable of discriminating between groups. Clinicians should consider using the cut-off scores in clinical practice. Further use of the instruments to examine detriments after completion of standard rehabilitation may be warranted.
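The Youden Index mentioned in this abstract is a standard way to pick a cut-off score from a ROC analysis: scan candidate thresholds and keep the one maximizing sensitivity + specificity − 1. A minimal sketch, using made-up scores for illustration (not the study's data) and assuming cases score lower than controls:

```python
def youden_cutoff(cases, controls):
    """Return (best_cutoff, best_j), assuming cases tend to score LOWER.

    A participant is classified positive when their score <= cutoff.
    """
    best_cutoff, best_j = None, -1.0
    for t in sorted(set(cases + controls)):
        sensitivity = sum(s <= t for s in cases) / len(cases)
        specificity = sum(s > t for s in controls) / len(controls)
        j = sensitivity + specificity - 1  # Youden's J statistic
        if j > best_j:
            best_cutoff, best_j = t, j
    return best_cutoff, best_j

# Hypothetical PROM-style scores (lower = worse self-reported function)
post_aclr = [62, 70, 75, 68, 80]   # cases
healthy = [90, 95, 88, 92, 85]     # controls
cutoff, j = youden_cutoff(post_aclr, healthy)
```

With perfectly separated groups, as in this toy example, J reaches 1.0; in practice, a low maximum J is one reason a cut-off score may not be clinically applicable, as the abstract cautions.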

Restricted access

Lori A. Michener

Outcome measures can be classified as clinician rated and patient rated. Clinician-rated measures predominantly assess impairments, whereas patient-rated measures, also known as patient-based measures, are designed to evaluate the impact of the injury on a patient’s daily activities, work, and recreation. Currently, there is a greater reliance on clinician-rated impairment measures for clinical decision making, specifically with treatment planning and assessing outcomes of care. To comprehensively evaluate the effect of an injury, patient-rated outcome measures must be used because they allow for the assessment of a patient’s ability to perform daily activities and participate in work and recreation that is affected by an injury. Clinician-rated impairment measures should be used to guide the development of a treatment program, and patient-rated measures should be used for both treatment-program development and assessing treatment outcomes in daily clinical practice. The purposes of this article are to describe patient- and clinician-rated outcome measures and to provide guidance and illustrate the benefits of the use of these measures in clinical decision making and documenting outcomes of care.

Restricted access

Michael S. Guss, John P. Begly, Austin J. Ramme, David P. Taormina, Michael E. Rettig and John T. Capo

Context: Major League Baseball (MLB) players are at risk of hook of hamate fractures. There is a paucity of data assessing the effect of a hook of hamate fracture on MLB players' future athletic performance. Objective: To determine whether MLB players who sustain hook of hamate fractures demonstrate decreased performance upon return to competition when compared with their preinjury performance and with that of matched controls. Design: Retrospective case-control study. Setting: Retrospective database study. Participants: 18 MLB players who sustained hook of hamate fractures. Methods: Data for 18 MLB players with hook of hamate fractures incurred over 26 seasons (1989–2014) were obtained from injury reports, press releases, and player profiles. Player age, position, number of years in the league, mechanism of injury, and treatment were recorded. Individual season statistics for the 2 seasons immediately prior to injury and the 2 seasons after injury for the main performance variable, Wins Above Replacement, were obtained. Eighteen controls matched by player position, age, and performance statistics were identified. A performance comparison of the cohorts was performed. Main Outcome Measures: Postinjury performance compared with preinjury performance and with matched controls. Results: Mean age at the time of injury was 25.1 years, with a mean of 4.4 seasons of MLB experience prior to injury. All injuries were sustained to the nondominant batting hand. All players underwent operative intervention. There was no significant change in Wins Above Replacement or isolated power when preinjury and postinjury performance were compared. When compared with matched controls, no significant decline in Wins Above Replacement was found in the first or second season after injury. Conclusion: MLB players sustaining hook of hamate fractures can reasonably expect to return to their preinjury performance levels following operative treatment.

Restricted access

Jacopo A. Vitale, Giuseppe Banfi, Andrea Galbiati, Luigi Ferini-Strambi and Antonio La Torre

Purpose: To evaluate actigraphy-based sleep quality and perceived recovery before and after a night game in top-level volleyball athletes. Methods: Sleep parameters were collected by actigraphy for 3 consecutive nights in 24 elite athletes (12 male and 12 female; mean age [SD] = 26.0 [3.4] y, age range = 20–33 y) during the 2016–17 competitive season. Data from 1 night before and 2 nights after an official night match were studied, and athletes' subjective perception of recovery was evaluated with the Total Quality Recovery scale. The following actigraphic parameters were studied: time in bed, sleep latency, sleep efficiency, wake after sleep onset, total sleep time, immobility time, moving time, and fragmentation index. Results: The analysis highlighted significant differences for all sleep variables. Total sleep time was lower on the first night after the match than on the prematch night (P = .02) and the second postmatch night (P = .0009) (P = .0001, F(2,23) = 22.93, ηp² = .66). Similarly, sleep efficiency was lower immediately after the night competition than on both the prematch night (P = .03) and the second postmatch night (P = .0003) (P = .0005, F(2,23) = 8.93, ηp² = .44). The same differences were observed in the perceived recovery values (P = .001, F(2,23) = 13.37, ηp² = .54). Conclusions: Coaches and medical staff should use these findings to develop a greater knowledge of how sleep differs across phases of competition and to implement behavioral and sleep-hygiene strategies in top-level athletes.
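Two of the actigraphic parameters named in this abstract follow directly from a per-epoch sleep/wake series: total sleep time is the sum of sleep epochs, and sleep efficiency is conventionally total sleep time as a percentage of time in bed. A minimal sketch with an invented night of per-minute epochs (not the study's data):

```python
def sleep_summary(epochs):
    """epochs: list of 1 (asleep) / 0 (awake), one entry per minute in bed."""
    time_in_bed = len(epochs)                 # minutes between lights-off and rise
    total_sleep_time = sum(epochs)            # minutes scored as sleep
    sleep_efficiency = 100 * total_sleep_time / time_in_bed
    return total_sleep_time, sleep_efficiency

# Hypothetical night: 480 min in bed with a 30-min wake bout
night = [1] * 400 + [0] * 30 + [1] * 50
tst, se = sleep_summary(night)
```

The abstract's finding that both total sleep time and sleep efficiency dropped on the first postmatch night corresponds to more 0-epochs (wake) within a similar time in bed.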

Restricted access

İlker Eren, Nazan Canbulat, Ata Can Atalar, Şule Meral Eren, Ayla Uçak, Önder Çerezci and Mehmet Demirhan

Context: The ideal rehabilitation method following arthroscopic capsulolabral repair surgery for anterior shoulder instability has not yet been established. Although rapid and slow protocols have been compared previously, home- and hospital-based protocols have not. Objective: The aim of this prospective nonrandomized controlled clinical trial was to compare the clinical outcomes of home-based and hospital-based rehabilitation programs following arthroscopic Bankart repair. Design: Nonrandomized controlled trial. Setting: Orthopedics and physical therapy units of a single institution. Patients: Fifty-four patients (49 males and 5 females) with an average age of 30.5 (9.1) years, who underwent arthroscopic capsulolabral repair, met the inclusion criteria, and had at least 1 year of follow-up, were allocated into 2 groups: home-based (n = 33) and hospital-based (n = 21). Interventions: Both groups received identical rehabilitation programs. Patients in the home-based group were called for follow-up every 3 weeks. Patients in the hospital-based group attended therapy every other day for a total of 6 to 8 weeks. Both groups were followed identically after the eighth week, and the rehabilitation program continued for 6 months. Main Outcome Measures: Clinical outcomes were assessed using the Disabilities of the Arm, Shoulder and Hand (DASH), Constant, and Rowe scores. The Mann–Whitney U test was used to compare the results between groups, and the Wilcoxon test was used to determine the progress within each group. Results: Groups were matched for age and gender (P = .61, P = .69). The average number of treatment sessions was 13.8 (7.3) for patients in the hospital-based group. Preoperative DASH (27.46 [11.81] vs 32.53 [16.42], P = .22), Constant (58.23 [14.23] vs 54.17 [10.46], P = .13), and Rowe (51.72 [15.36] vs 43.81 [19.16], P = .12) scores were similar between groups. Postoperative scores at the sixth month were significantly improved in each group (P = .001, P = .001, and P = .001). No significant difference was observed between the 2 groups in clinical scores at any time point. Conclusions: A controlled home-based exercise program is as effective as hospital-based rehabilitation following arthroscopic capsulolabral repair for anterior shoulder instability.