Search Results

You are looking at 1–10 of 15 items for

  • Author: Andrew Murray
Open access

Andrew Murray

Historically, adolescents were removed from their parents to be prepared as warriors; the process repeats itself in modern times, with athletic performance as the outcome. This review considers the process of developing athletes and managing load against the backdrop of differing approaches to conserving and maximizing the available talent. It outlines the typical training “dose” that adolescent athletes receive across a number of sports and the typical “response” when that dose is excessive or poorly managed. It also examines the best approaches to quantifying load and injury risk, acknowledging the relative strengths and weaknesses of subjective and objective approaches. Evidence-based decision making is emphasized, with the appropriate monitoring techniques determined by both the sporting context and the individual situation. Ultimately, a systematic approach to training-load monitoring is recommended for adolescent athletes, both to maximize their athletic development and to give coaches and practitioners an opportunity for learning, reflection, and enhancement of performance knowledge.

Restricted access

Andrew M. Murray and Matthew C. Varley

Purpose:

To investigate the influence of score line, level of opposition, and timing of substitutes on the activity profile of rugby sevens players and describe peak periods of activity.

Methods:

Velocity and distance data were measured via 10-Hz GPS from 17 international-level male rugby sevens players on 2–20 occasions over 4 tournaments (24 matches). Movement data were reported as total distance (TD), high-speed-running distance (HSR, 4.17–10.0 m/s), and the occurrence of maximal accelerations (Accel, ≥2.78 m/s²). A rolling 1-min sample period was used.
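The rolling 1-min sample period described above amounts to a sliding-window sum over the 10-Hz velocity trace. The sketch below is illustrative only, using a synthetic trace and the speed bands quoted in the abstract; the window mechanics are an assumption, not the authors' code.

```python
def rolling_peaks(velocities_ms, hz=10, window_s=60, hsr_lo=4.17, hsr_hi=10.0):
    """Peak total distance and peak HSR distance (metres) over any rolling
    1-min window of a velocity trace sampled at `hz` Hz."""
    dt = 1.0 / hz
    dist = [v * dt for v in velocities_ms]          # metres covered per sample
    hsr = [d if hsr_lo <= v <= hsr_hi else 0.0      # only samples in the HSR band
           for v, d in zip(velocities_ms, dist)]
    n = hz * window_s                               # samples per window
    td_sum, hsr_sum = sum(dist[:n]), sum(hsr[:n])
    peak_td, peak_hsr = td_sum, hsr_sum
    for i in range(n, len(dist)):                   # slide one sample at a time
        td_sum += dist[i] - dist[i - n]
        hsr_sum += hsr[i] - hsr[i - n]
        peak_td = max(peak_td, td_sum)
        peak_hsr = max(peak_hsr, hsr_sum)
    return peak_td, peak_hsr

# Toy trace: 60 s at 3 m/s (below the HSR band), then 60 s at 5 m/s (inside it).
peak_td, peak_hsr = rolling_peaks([3.0] * 600 + [5.0] * 600)
```

Sliding one sample at a time (rather than one minute at a time) is what lets the peak period start anywhere in the match.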

Results:

Regardless of score line or opponent ranking, there was a moderate to large reduction in average and peak TD and HSR between match halves. A close halftime score line was associated with greater HSR distance in the 1st minute of the 1st and 2nd halves compared with when winning. When playing against higher- compared with lower-ranked opposition, players covered moderately greater TD in the 1st minute of the 1st half (difference = 26%; 90% confidence limits = 6, 49). Compared with players who played a full match, substitutes who came on late in the 2nd half had higher average HSR and Accel by a small magnitude (31%; 5, 65 and 34%; 6, 69, respectively) and higher average TD by a moderate magnitude (16%; 5, 28).

Conclusions:

Match score line, opposition, and substitute timing can influence the activity profile of rugby sevens players. Players are likely to perform more running against higher-ranked opponents and when the score line is close. This information may influence team selection.

Restricted access

Hugh H.K. Fullagar, Andrew Govus, James Hanisch and Andrew Murray

Purpose:

To investigate the recovery time course of customized wellness markers (sleep, soreness, energy, and overall wellness) in response to match play in American Division I-A college football players.

Methods:

A retrospective research design was used. Wellness data were collected and analyzed across 2 American college football seasons. Perceptions of soreness, sleep, energy, and overall wellness were obtained for the day before each game (GD−1) and the days after each game (GD+2, GD+3, and GD+4). Standardized effect-size (ES) analyses ± 90% confidence intervals were used to interpret the magnitude of the mean differences between all time points for the start, middle, and finish of the season, using the following qualitative descriptors: 0–0.19 trivial, 0.2–0.59 small, 0.6–1.19 moderate, 1.2–1.99 large, ≥2.0 very large.
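The qualitative descriptor scale is a straightforward threshold lookup; a minimal sketch, reading the final band as ≥2.0 so that the bands are contiguous:

```python
def es_descriptor(d):
    """Qualitative label for a standardized effect size, per the
    trivial/small/moderate/large/very-large scale."""
    m = abs(d)
    if m < 0.2:
        return "trivial"
    if m < 0.6:
        return "small"
    if m < 1.2:
        return "moderate"
    if m < 2.0:
        return "large"
    return "very large"
```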

Results:

Overall wellness showed small ES reductions on GD+2 (d = 0.22 ± 0.09, likely [94.8%]), GD+3 (d = 0.37 ± 0.15, very likely), and GD+4 (d = 0.29 ± 0.12, very likely) compared with GD−1. There were small ES reductions for soreness between GD−1 and GD+2, GD+3, and GD+4 (d = 0.21 ± 0.09, likely; d = 0.29 ± 0.12, very likely; and d = 0.30 ± 0.12, very likely, respectively). Small ES reductions were also evident between GD−1 and GD+3 (d = 0.21 ± 0.09, likely) for sleep. Feelings of energy showed small ES reductions on GD+3 (d = 0.27 ± 0.11, very likely) and GD+4 (d = 0.22 ± 0.09, likely) compared with GD−1.

Conclusion:

All wellness markers were likely to very likely to be worse on GD+3 and GD+4 than on GD−1. These findings show that perceptual wellness takes longer than 4 d to return to pregame levels and should therefore be considered when prescribing training and/or recovery.

Restricted access

Hugh H.K. Fullagar, Robert McCunn and Andrew Murray

While there are various avenues for performance improvement in college American football (AF), there is no comprehensive evaluation of the collective array of resources around performance, physical conditioning, injury, and training/game characteristics to guide future research and inform practitioners. Accordingly, the aim of the present review was to provide a current examination of these areas in college AF. Recent studies show a wide range of body compositions and strength characteristics between players, which appear to be influenced by playing position, level of play, training history/programming, and time of season. Collectively, game demands may require a combination of upper- and lower-body strength and power production, rapid acceleration (positive and negative), change of direction, high running speed, high-intensity and repetitive collisions, and muscle-strength endurance. These may be affected by the timing of and between plays and/or coaching style. AF players appear to possess limited nutrition and hydration practices, which may be disadvantageous to performance. AF injuries appear to stem from a multitude of intrinsic factors (strength, movement quality, and previous injury), with potential contributions from extrinsic factors such as playing surface type, travel, time of season, playing position, and training load. Future proof-of-concept studies are required to quantify game demands with regard to game style, type of opposition, and key performance indicators. Moreover, more research is required to understand the efficacy of recovery and nutrition interventions. Finally, assessment of the relationship between external/internal-load constructs and injury risk is warranted.

Restricted access

Neil Gibson, James White, Mhari Neish and Andrew Murray

Purpose:

The study aimed to assess whether exposure to ischemic preconditioning (IPC) in a trained population would affect land-based maximal sprinting performance over 30 m.

Methods:

Twenty-five well-trained participants regularly involved in invasion-type team sports were recruited for a randomized crossover study. Participants underwent both an IPC and a placebo treatment, each involving 3 periods of 5-min occlusion applied unilaterally (3 × 5-min occlusion to each leg) at 220 mmHg (IPC) or 50 mmHg (placebo). Each period of occlusion was followed by 5 min of reperfusion. After treatment, 3 maximal 30-m sprints from a standing start were undertaken, separated by 1 min of recovery. Split times were recorded at 10, 20, and 30 m.

Results:

No significant effects of the IPC treatment on sprint speed were observed at any of the split timings (P > .05); however, a small negative effect was observed in female participants. Calculated effect sizes of the treatment were trivial (<0.2).

Conclusions:

Results from the current study suggest there to be no benefit to team-sport players in using IPC as a means of enhancing sprint performance over a distance of 30 m. While IPC has been shown to be beneficial to sprint activities in other sports such as swimming, further research is required to elucidate whether this is the case over distances associated with land-based events in track and field or in events reliant on repeated-sprint ability.

Restricted access

Sarah Dempster, Rhiannon Britton, Andrew Murray and Ricardo J. S. Costa

The aims of this study were to assess the dietary intake and monitor self-reported recovery quality and clinical symptomology of a male ultra-endurance runner who completed a multiday ultra-endurance running challenge covering 4,254 km from northern Scotland to the Moroccan Sahara desert over 78 consecutive days. Food and fluid intakes were recorded and analyzed through dietary analysis software. Body mass (BM) was determined before and after running each day, and before sleep. Clinical symptomology and perceived recovery quality were recorded each day. Whole-blood hemoglobin and serum ferritin were determined before and after the challenge. Total daily energy intake (mean ± SD: 23.2 ± 3.2 MJ·day−1) and macronutrient intake (182 ± 31 g·day−1 protein, 842 ± 115 g·day−1 carbohydrate, 159 ± 55 g·day−1 fat) met consensus nutritional guidelines for endurance performance. Total daily water intake through foods and fluids was 4.8 ± 2.0 L·day−1. Water and carbohydrate intake rates during running were 239 ± 143 ml·h−1 and 56 ± 19 g·h−1, respectively. Immediately after running, carbohydrate and protein intakes were 1.3 ± 1.0 g·kg BM−1 and 0.4 ± 0.2 g·kg BM−1, respectively. Daily micronutrient intakes ranged from 109% to 662% of UK RNIs. Prerunning BM was generally maintained throughout. Overall exercise-induced BM loss averaged 0.8 ± 1.0%, although BM losses of ≥2% occurred in the latter stages, a reflection of the warmer climate. Varying degrees of self-reported perceived recovery quality and clinical symptomology occurred throughout the challenge. This case study highlights oscillations in dietary habits across 78 consecutive days of ultra-endurance running, dependent on changes in ambient conditions and course topography. Nevertheless, nutrition and hydration status were maintained throughout the challenge. Despite a dietary iron intake above the RNI plus iron supplementation, deficiency symptoms were not prevented.

Restricted access

Nick B. Murray, Tim J. Gabbett and Andrew D. Townshend

Objectives:

To examine the difference between absolute and relative workloads, injury likelihood, and the acute:chronic workload ratio (ACWR) in elite Australian football.

Design:

Single-cohort, observational study.

Methods:

Forty-five elite Australian football players from 1 club participated. Running workloads were tracked using Global Positioning System technology and categorized using either (1) absolute, predefined speed thresholds or (2) relative, individualized speed thresholds. Players were divided into 3 equal groups based on maximum velocity: (1) faster, (2) moderate, or (3) slower. One- and 4-wk workloads were calculated, along with the ACWR. Injuries were recorded if they were noncontact in nature and resulted in “time loss.”

Results:

Faster players demonstrated a significant overestimation of very-high-speed running (HSR) when compared with their relative thresholds (P = .01; effect size = −0.73). Similarly, slower players demonstrated an underestimation of high-speed (P = .06; effect size = 0.55) and very-high-speed (P = .01; effect size = 1.16) running when compared with their relative thresholds. For slower players, (1) greater amounts of relative very-high-speed running carried a greater risk of injury than lesser amounts (relative risk [RR] = 8.30; P = .04) and (2) greater absolute high-speed chronic workloads were associated with an increase in injury likelihood (RR = 2.28; P = .16), whereas greater relative high-speed chronic workloads were associated with a decrease in injury likelihood (RR = 0.33; P = .11). Faster players with a very-high-speed ACWR >2.0 had a greater risk of injury than those between 0.49 and 0.99 for both absolute (RR = 10.31; P = .09) and relative (RR = 4.28; P = .13) workloads.

Conclusions:

The individualization of velocity thresholds significantly alters the amount of very-high-speed running performed and should be considered in the prescription of training load.
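The abstract does not give the ACWR calculation; a common rolling-average formulation (acute load = the most recent week, chronic load = the mean of the last 4 weeks) can be sketched as follows. The load values are illustrative.

```python
def acwr(weekly_loads):
    """Acute:chronic workload ratio: last week's load divided by the mean
    of the most recent 4 weeks (a common rolling-average formulation;
    the abstract does not specify the exact method)."""
    if len(weekly_loads) < 4:
        raise ValueError("need at least 4 weeks of load data")
    acute = weekly_loads[-1]
    chronic = sum(weekly_loads[-4:]) / 4.0
    return acute / chronic

# Steady weeks of very-high-speed running distance (m) followed by a spike:
ratio = acwr([300, 300, 300, 750])
```

A spike week inflates both numerator and denominator, which is why the ratio here lands below 750/300: the spike itself is counted in the chronic load.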

Restricted access

Nick B. Murray, Tim J. Gabbett and Andrew D. Townshend

Objectives:

To investigate the relationship between the proportion of preseason training sessions completed and load and injury during the ensuing Australian Football League season.

Design:

Single-cohort, observational study.

Methods:

Forty-six elite male Australian football players from 1 club participated. Players were divided into 3 equal groups based on the amount of preseason training completed (high [HTL], >85% sessions completed; medium [MTL], 50–85% sessions completed; and low [LTL], <50% sessions completed). Global positioning system (GPS) technology was used to record training and game loads, with all injuries recorded and classified by club medical staff. Differences between groups were analyzed using a 2-way (group × training/competition phase) repeated-measures ANOVA, along with magnitude-based inferences. Injury incidence was expressed as injuries per 1000 h.

Results:

The HTL and MTL groups completed a greater proportion of in-season training sessions (81.1% and 74.2%) and matches (76.7% and 76.1%) than the LTL group (56.9% and 52.7%). Total distance and player load were significantly greater during the first half of the in-season period for the HTL (P = .03, ES = 0.88) and MTL (P = .02, ES = 0.93) groups than for the LTL group. The relative risk of injury for the LTL group (26.8/1000 h) was 1.9 times greater than that for the HTL group (14.2/1000 h) (χ² = 3.48, df = 2, P = .17).
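Injury incidence per 1000 h and the relative risk between groups follow directly from the reported rates; a minimal sketch (the injury and exposure counts in the first example are hypothetical):

```python
def incidence_per_1000h(n_injuries, exposure_h):
    """Injuries per 1000 h of exposure."""
    return n_injuries / exposure_h * 1000.0

# Hypothetical example: 5 injuries over 500 h of training exposure.
example_rate = incidence_per_1000h(5, 500)

# Relative risk is the ratio of the groups' incidence rates; with the
# rates reported for this study (LTL 26.8 vs HTL 14.2 per 1000 h):
rr = 26.8 / 14.2
```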

Conclusions:

Completing a greater proportion of preseason training resulted in higher training loads and greater participation in training and competition during the competitive phase of the season.

Open access

Daniel Martínez-Silván, Jaime Díaz-Ocejo and Andrew Murray

Purpose:

To analyze the influence of training exposure and the utility of self-report questionnaires on predicting overuse injuries in adolescent endurance athletes.

Methods:

Five adolescent male endurance athletes (15.7 ± 1.4 y) from a full-time sports academy answered 2 questionnaires (the Recovery Cue questionnaire [RC-q] and the Oslo Sports Trauma Research questionnaire [OSTRC-q]) on a weekly basis for 1 season (37 wk) to detect signs of overtraining and underrecovery (RC-q) and early symptoms of lower-limb injuries (OSTRC-q). All overuse injuries were analyzed retrospectively to determine which variations in the questionnaires in the weeks preceding injury were best associated with injury. Overuse incidence rates were calculated based on training exposure.

Results:

Lower-limb overuse injuries accounted for 73% of total injuries. The incidence rate for overuse training-related injuries was 10 injuries/1000 h. Strong correlations were observed between individual running exposure and overuse injury incidence (r² = .66), number of overuse injuries (r² = .69), and days lost (r² = .66). A change of 20% or more in the RC-q score in the preceding week was associated with 67% of the lower-limb overuse injuries. Musculoskeletal symptoms were detected in advance by the OSTRC-q in only 27% of the episodes.
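The 20%-change criterion can be sketched as a simple week-over-week screen. Interpreting "change" as relative change from the preceding week is an assumption here, since the abstract does not define it, and the scores below are hypothetical.

```python
def flag_rcq_changes(weekly_scores, threshold=0.20):
    """Indices of weeks whose RC-q score moved by at least `threshold`
    (default 20%) relative to the preceding week's score."""
    flags = []
    for i in range(1, len(weekly_scores)):
        prev, cur = weekly_scores[i - 1], weekly_scores[i]
        if prev != 0 and abs(cur - prev) / abs(prev) >= threshold:
            flags.append(i)
    return flags

# Hypothetical weekly RC-q scores: indices 2 and 4 show >= 20% swings.
flags = flag_rcq_changes([50, 52, 40, 41, 55])
```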

Conclusion:

Training exposure (especially running exposure) was related to overuse injuries, suggesting that monitoring training load is a key factor in injury prevention. Worsening scores on the RC-q (but not the OSTRC-q) may be an indicator of overuse injury in adolescent endurance runners when used longitudinally.

Restricted access

Andrew D. Govus, Aaron Coutts, Rob Duffield, Andrew Murray and Hugh Fullagar

Context:

The relationship between pretraining subjective wellness and external and internal training load in American college football is unclear.

Purpose:

To examine the relationship of pretraining subjective wellness (sleep quality, muscle soreness, energy, wellness Z score) with player load and session rating of perceived exertion training load (s-RPE-TL) in American college football players.

Methods:

Subjective wellness (measured using 5-point Likert-scale questionnaires), external load (derived from GPS and accelerometry), and s-RPE-TL were collected during 3 typical training sessions per week for the second half of an American college football season (8 wk). The relationship of pretraining subjective wellness with player load and s-RPE-TL was analyzed using linear mixed models with a random intercept for athlete and a random slope for training session. Standardized mean differences (SMDs) denote the effect magnitude.

Results:

A 1-unit increase in wellness Z score and energy was associated with trivial 2.3% (90% confidence interval [CI] 0.5, 4.2; SMD 0.12) and 2.6% (90% CI 0.1, 5.2; SMD 0.13) increases in player load, respectively. A 1-unit increase in muscle soreness (players felt less sore) corresponded to a trivial 4.4% (90% CI −8.4, −0.3; SMD −0.05) decrease in s-RPE-TL.

Conclusion:

Measuring pretraining subjective wellness may provide information about players’ capacity to perform in a training session and could be a key determinant of their response to the imposed training demands in American college football. Hence, monitoring subjective wellness may aid in the individualization of training prescription in American college football players.
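A wellness Z score of this kind typically standardizes each athlete's composite rating against their own history; the abstract does not specify the standardization window, so the per-athlete mean and sample SD below are an assumption, and the scores are hypothetical.

```python
import statistics

def wellness_z(todays_score, history):
    """Standardize today's composite wellness rating against the athlete's
    own prior scores (individual mean and sample SD); an assumed, common
    approach rather than the authors' documented method."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return (todays_score - mu) / sd

# A rating of 4 against a personal history centred on 3 yields a high z score.
z = wellness_z(4.0, [3.0, 3.5, 2.5, 3.0, 3.0])
```

Standardizing within athlete, rather than across the squad, is what makes a given rating interpretable relative to that player's own normal range.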