Search results: 1–10 of 16 items for author Heidi R. Thornton
Heidi R. Thornton, André R. Nelson, Jace A. Delaney, Fabio R. Serpiello, and Grant M. Duthie
Purpose: To establish the interunit reliability of a range of global positioning system (GPS)-derived movement indicators, to determine the variation between manufacturers, and to investigate the difference between software-derived and raw data. Methods: A range of movement variables were obtained from 27 GPS units from 3 manufacturers (GPSports EVO, 10 Hz, n = 10; STATSports Apex, 10 Hz, n = 10; and Catapult S5, 10 Hz, n = 7) that measured the same team-sport simulation session while positioned on a sled. The interunit reliability was determined using the coefficient of variation (%) and 90% confidence limits, whereas between-manufacturers comparisons and comparisons of software versus raw processed data were established using standardized effect sizes and 90% confidence limits. Results: The interunit reliability for both software and raw processed data ranged from good to poor (coefficient of variation = 0.2%; ±1.5% to 78.2%; ±1.5%), with distance, speed, and maximal speed exhibiting the best reliability. There were substantial differences between manufacturers, particularly for threshold-based acceleration and deceleration variables (effect sizes; ±90% confidence limits: −2.0; ±0.1 to 1.9; ±0.1), and there were substantial differences between data-processing methods for a range of movement indicators. Conclusions: The interunit reliability of most movement indicators was deemed good regardless of processing method, suggesting that practitioners can have confidence within systems. Standardized data-processing methods are recommended due to the large differences between data outputs from various manufacturer-derived software.
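The interunit reliability analysis above rests on a coefficient of variation computed across units that recorded the same session. A minimal sketch of that calculation follows; it is a simplified (non-log-transformed) CV without confidence limits, and the helper name is ours, not from the paper:

```python
import statistics

def interunit_cv(unit_values):
    """Coefficient of variation (%) across GPS units recording the
    same session: sample SD divided by the mean, times 100.
    Simplified sketch; the study's approach also derives 90%
    confidence limits."""
    mean = statistics.mean(unit_values)
    sd = statistics.stdev(unit_values)
    return sd / mean * 100
```

For example, ten units reporting total distances clustered tightly around the same value would yield a CV near the 0.2% end of the range reported above, whereas threshold-based acceleration counts can diverge far more.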
Lee Taylor, Christopher J. Stevens, Heidi R. Thornton, Nick Poulos, and Bryna C.R. Chrismas
Purpose: To determine how a cooling vest worn during a warm-up could influence selected performance (countermovement jump [CMJ]), physical (global positioning system [GPS] metrics), and psychophysiological (body temperature and perceptual) variables. Methods: In a randomized, crossover design, 12 elite male World Rugby Sevens Series athletes completed an outdoor (wet bulb globe temperature 23–27°C) match-specific externally valid 30-min warm-up wearing a phase-change cooling vest (VEST) and without (CONTROL), on separate occasions 7 d apart. CMJ was assessed before and after the warm-up, with GPS indices and heart rate monitored during the warm-ups, while core temperature (Tc; ingestible telemetric pill; n = 6) was recorded throughout the experimental period. Measures of thermal sensation (TS) and thermal comfort (TC) were obtained pre-warm-up and post-warm-up, with rating of perceived exertion (RPE) taken post-warm-up. Results: Athletes in VEST had a lower ΔTc (mean [SD]: VEST = 1.3°C [0.1°C]; CONTROL = 2.0°C [0.2°C]) from pre-warm-up to post-warm-up (effect size; ±90% confidence limit: −1.54; ±0.62) and Tc peak (mean [SD]: VEST = 37.8°C [0.3°C]; CONTROL = 38.5°C [0.3°C]) at the end of the warm-up (−1.59; ±0.64) compared with CONTROL. Athletes in VEST demonstrated a decrease in ΔTS (−1.59; ±0.72) and ΔTC (−1.63; ±0.73) pre-warm-up to post-warm-up, with a lower RPE post-warm-up (−1.01; ±0.46) than CONTROL. Changes in CMJ and GPS indices were trivial between conditions (effect size < 0.2). Conclusions: Wearing the vest prior to and during a warm-up can elicit favorable alterations in physiological (Tc) and perceptual (TS, TC, and RPE) warm-up responses, without compromising the utilized warm-up characteristics or physical-performance measures.
Heidi R. Thornton, Jace A. Delaney, Grant M. Duthie, and Ben J. Dascombe
Purpose:
To investigate the ability of various internal and external training-load (TL) monitoring measures to predict injury incidence among positional groups in professional rugby league athletes.
Methods:
TL and injury data were collected across 3 seasons (2013–2015) from 25 players competing in National Rugby League competition. Daily TL data included session rating of perceived exertion (sRPE-TL), total distance (TD), high-speed-running distance (>5 m/s), and high-metabolic-power distance (HPD; >20 W/kg). Rolling sums were calculated, nontraining days were removed, and athletes’ corresponding injury status was marked as “available” or “unavailable.” Linear (generalized estimating equations) and nonlinear (random forest; RF) statistical methods were adopted.
Results:
Injury risk factors varied according to positional group. For adjustables, the TL variables associated most highly with injury were 7-d TD and 7-d HPD, whereas for hit-up forwards they were sRPE-TL ratio and 14-d TD. For outside backs, 21- and 28-d sRPE-TL were identified, and for wide-running forwards, sRPE-TL ratio. The individual RF models showed that the importance of the TL variables in injury incidence varied between athletes.
Conclusions:
Differences in risk factors were recognized between positional groups and individual athletes, likely due to varied physiological capacities and physical demands. Furthermore, these results suggest that robust machine-learning techniques can appropriately monitor injury risk in professional team-sport athletes.
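The rolling-sum preparation described in the Methods above can be sketched as follows. This is illustrative only: the window lengths of 7, 14, 21, and 28 d correspond to the variables reported, and the helper name is ours:

```python
def rolling_sum(daily_loads, window_days):
    """Rolling sum of a daily training-load series over the preceding
    window (e.g. 7-d TD, 28-d sRPE-TL). Early days sum over however
    many observations are available so far."""
    return [
        sum(daily_loads[max(0, i - window_days + 1): i + 1])
        for i in range(len(daily_loads))
    ]
```

In the study, nontraining days were removed before these sums were computed, and each day's value was paired with the athlete's injury status.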
Heidi R. Thornton, Jace A. Delaney, Grant M. Duthie, and Ben J. Dascombe
In professional team sports, the collection and analysis of athlete-monitoring data are common practice, with the aim of assessing fatigue and subsequent adaptation responses, examining performance potential, and minimizing the risk of injury and/or illness. Athlete-monitoring systems should be underpinned by appropriate data analysis and interpretation, to enable the rapid reporting of simple and scientifically valid feedback. Using the correct scientific and statistical approaches can improve the confidence of decisions made from athlete-monitoring data. However, little research has discussed and proposed an outline of the process involved in the planning, development, analysis, and interpretation of athlete-monitoring systems. This review discusses a range of methods often employed to analyze athlete-monitoring data to facilitate and inform decision-making processes. There is a wide range of analytical methods and tools that practitioners may employ in athlete-monitoring systems, as well as several factors that should be considered when collecting these data, methods of determining meaningful changes, and various data-visualization approaches. Underpinning a successful athlete-monitoring system is the ability of practitioners to communicate and present important information to coaches, ultimately resulting in enhanced athletic performance.
Heidi R. Thornton, Jace A. Delaney, Grant M. Duthie, and Ben J. Dascombe
Purpose: To investigate the influence of daily and exponentially weighted moving training loads on subsequent nighttime sleep. Methods: Sleep of 14 professional rugby league athletes competing in the National Rugby League was recorded using wristwatch actigraphy. Physical demands were quantified using GPS technology, including total distance, high-speed distance, acceleration/deceleration load (SumAccDec; AU), and session rating of perceived exertion (AU). Linear mixed models determined effects of acute (daily) and subacute (3- and 7-d) exponentially weighted moving averages (EWMA) on sleep. Results: Higher daily SumAccDec was associated with increased sleep efficiency (effect-size correlation; ES = 0.15; ±0.09) and sleep duration (ES = 0.12; ±0.09). Greater 3-d EWMA SumAccDec was associated with increased sleep efficiency (ES = 0.14; ±0.09) and an earlier bedtime (ES = 0.14; ±0.09). An increase in 7-d EWMA SumAccDec was associated with heightened sleep efficiency (ES = 0.15; ±0.09) and earlier bedtimes (ES = 0.15; ±0.09). Conclusions: The direction of the associations between training loads and sleep varied, but the strongest relationships showed that higher training loads increased various measures of sleep. Practitioners should be aware of the increased requirement for sleep during intensified training periods, using this information in the planning and implementation of training and individualized recovery modalities.
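The exponentially weighted moving averages (EWMA) used above are commonly computed with a smoothing factor λ = 2/(N + 1), where N is the chosen time decay (3 or 7 d here). A minimal sketch under that assumption; the authors' exact initialization is not stated in the abstract, so seeding with the first day's load is our choice:

```python
def ewma(daily_loads, n_days):
    """Exponentially weighted moving average of daily training load,
    with smoothing factor lambda = 2 / (n_days + 1), seeded with the
    first day's load."""
    lam = 2 / (n_days + 1)
    smoothed, prev = [], daily_loads[0]
    for load in daily_loads:
        prev = load * lam + (1 - lam) * prev
        smoothed.append(prev)
    return smoothed
```

A shorter decay (3 d) weights recent sessions more heavily than a longer one (7 d), which is why the two subacute measures can relate to sleep differently.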
Jace A. Delaney, Heidi R. Thornton, Grant M. Duthie, and Ben J. Dascombe
Background:
Rugby league coaches adopt replacement strategies for their interchange players to maximize running intensity; however, it is important to understand the factors that may influence match performance.
Purpose:
To assess the independent factors affecting running intensity sustained by interchange players during professional rugby league.
Methods:
Global positioning system (GPS) data were collected from all interchanged players (starters and nonstarters) in a professional rugby league squad across 24 matches of a National Rugby League season. A multilevel mixed-model approach was employed to establish the effect of various technical (attacking and defensive involvements), temporal (bout duration, time in possession, etc), and situational (season phase, recovery cycle, etc) factors on the relative distance covered and average metabolic power (Pmet) during competition. Significant effects were standardized using correlation coefficients, and the likelihood of the effect was described using magnitude-based inferences.
Results:
Superior intermittent running ability resulted in very likely large increases in both relative distance and Pmet. As the length of a bout increased, both measures of running intensity exhibited a small decrease. There were at least likely small increases in running intensity for matches played after short recovery cycles and against strong opposition. During a bout, the number of collision-based involvements increased running intensity, whereas time in possession and ball time out of play decreased demands.
Conclusions:
These data demonstrate a complex interaction of individual- and match-based factors that require consideration when developing interchange strategies, and the manipulation of training loads during shorter recovery periods and against stronger opponents may be beneficial.
Heidi R. Thornton, Jace A. Delaney, Grant M. Duthie, Brendan R. Scott, William J. Chivers, Colin E. Sanctuary, and Ben J. Dascombe
Purpose:
To identify contributing factors to the incidence of illness for professional team-sport athletes, using training load (TL), self-reported illness, and well-being data.
Methods:
Thirty-two professional rugby league players (26.0 ± 4.8 y, 99.1 ± 9.6 kg, 1.84 ± 0.06 m) were recruited from the same club. Players participated in prescribed training and responded to a series of questionnaires to determine the presence of self-reported illness and markers of well-being. Internal TL was determined using the session rating of perceived exertion. These data were collected over 29 wk, across the preparatory and competition macrocycles.
Results:
The predictive models developed identified increases in internal TL (strain >2282 AU, weekly TL >2786 AU, and monotony >0.78 AU) as the best predictors of when athletes are at increased risk of self-reported illness. In addition, a reduction in overall well-being (<7.25 AU) in the presence of the increased internal TL stated above was highlighted as a contributor to self-reported-illness occurrence.
Conclusions:
These results indicate that self-report data can be successfully used to provide a novel understanding of the interactions between competition-associated stressors experienced by professional team-sport athletes and their susceptibility to illness. This may help coaching staff more effectively monitor players during the season and potentially implement preventive measures to reduce the likelihood of illnesses occurring.
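The weekly TL, monotony, and strain thresholds above follow Foster's session-RPE derivatives: monotony is the mean daily load divided by its standard deviation over the week, and strain is the weekly load multiplied by monotony. A hedged sketch; the function name and the use of the sample SD are our assumptions:

```python
import statistics

def monotony_and_strain(daily_loads):
    """Foster-style training monotony and strain from one week of
    daily sRPE training loads (AU).
    monotony = mean daily load / SD of daily load
    strain   = total weekly load * monotony"""
    monotony = statistics.mean(daily_loads) / statistics.stdev(daily_loads)
    strain = sum(daily_loads) * monotony
    return monotony, strain
```

A week of uniformly high loads produces high monotony and strain even when no single session is extreme, which is the pattern these illness thresholds are designed to flag.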
Farhan Juhari, Dean Ritchie, Fergus O’Connor, Nathan Pitchford, Matthew Weston, Heidi R. Thornton, and Jonathan D. Bartlett
Context: Team-sport training requires the daily manipulation of intensity, duration, and frequency, with preseason training focusing on meeting the demands of in-season competition and training on maintaining fitness. Purpose: To provide information about daily training in Australian football (AF), this study aimed to quantify session intensity, duration, and intensity distribution across different stages of an entire season. Methods: Intensity (session ratings of perceived exertion; CR-10 scale) and duration were collected from 45 professional male AF players for every training session and game. Each session’s rating of perceived exertion was categorized into a corresponding intensity zone: low (<4.0 arbitrary units), moderate (≥4.0 and <7.0), or high (≥7.0). Linear mixed models were constructed to estimate session duration, intensity, and distribution between the 3 preseason and 4 in-season periods, with effects assessed using magnitude-based inferences. Results: The distribution of the mean session intensity across the season was 29% low intensity, 57% moderate intensity, and 14% high intensity. While 96% of games were high intensity, 44% and 49% of skills training sessions were low intensity and moderate intensity, respectively. Running had the highest proportion of high-intensity training sessions (27%). Preseason displayed higher training-session intensity (effect size [ES] = 0.29–0.91) and duration (ES = 0.33–1.44), while in-season game intensity (ES = 0.31–0.51) and duration (ES = 0.51–0.82) were higher. Conclusions: By using a cost-effective monitoring tool, this study provides information about the intensity, duration, and intensity distribution of all training types across different phases of a season, thus allowing a greater understanding of the training and competition demands of Australian footballers.
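The zone cut-offs above amount to a simple categorization of each session's CR-10 rating. A sketch under the abstract's boundary definitions; the function name is ours:

```python
def intensity_zone(session_rpe):
    """Classify a CR-10 session RPE into the study's zones:
    low (<4.0 AU), moderate (>=4.0 and <7.0), high (>=7.0)."""
    if session_rpe < 4.0:
        return "low"
    if session_rpe < 7.0:
        return "moderate"
    return "high"
```

Counting sessions per zone over a season phase then yields the intensity distributions reported in the Results.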
Tannath J. Scott, Heidi R. Thornton, Macfarlane T.U. Scott, Ben J. Dascombe, and Grant M. Duthie
Purpose: To compare relative and absolute speed and metabolic thresholds for quantifying match output in elite rugby league. Methods: Twenty-six professional players competing in the National Rugby League were monitored with global positioning systems (GPS) across a rugby-league season. Absolute speed (moderate-intensity running [MIRTh > 3.6 m/s] and high-intensity running [HIRTh > 5.2 m/s]) and metabolic (>20 W/kg) thresholds were compared with individualized ventilatory (first [VT1IFT] and second [VT2IFT]) thresholds estimated from the 30-15 Intermittent Fitness Test (30-15IFT), as well as the metabolic threshold associated with VT2IFT (HPmetVT2), to examine differences in match-play demands. Results: VT2IFT mean values represent 146%, 138%, 167%, and 144% increases in the HIR dose across adjustables, edge forwards, middle forwards, and outside backs, respectively. Distance covered above VT2IFT was almost certainly greater (ES range = 0.79–1.03) than that above absolute thresholds across all positions. Trivial to small differences were observed between VT1IFT and MIRTh, while small to moderate differences were reported between HPmetVT2 and HPmetTh. Conclusions: These results reveal that the speed at which players begin to run at higher intensities depends on individual capacities and attributes. As such, using absolute HIR speed thresholds underestimates the physical HIR load. Moreover, absolute MIR and high metabolic thresholds may over- or underestimate the work undertaken above these thresholds depending on the respective fitness of the individual. Therefore, using relative thresholds enables better prescription and monitoring of external training loads based on measured individual physical capacities.
Heidi R. Thornton, Grant M. Duthie, Nathan W. Pitchford, Jace A. Delaney, Dean T. Benton, and Ben J. Dascombe
Purpose:
To investigate the effects of a training camp on the sleep characteristics of professional rugby league players compared with a home period.
Methods:
During a 7-d home and 13-d camp period, time in bed (TIB), total sleep time (TST), sleep efficiency (SE), and wake after sleep onset were measured using wristwatch actigraphy. Subjective wellness and training loads (TL) were also collected. Differences in sleep and TL between the 2 periods and the effect of daytime naps on nighttime sleep were examined using linear mixed models. Pearson correlations assessed the relationship of changes in TL on individuals’ TST.
Results:
During the training camp, TST (–85 min), TIB (–53 min), and SE (–8%) were reduced compared with home. Those who undertook daytime naps showed increased TIB (+33 min), TST (+30 min), and SE (+0.9%). Increases in daily total distance and training duration above individual baseline means during the training camp shared moderate (r = –.31) and trivial (r = –.04) negative relationships, respectively, with TST.
Conclusions:
Sleep quality and quantity may be compromised during training camps; however, daytime naps may be beneficial for athletes without being detrimental to nighttime sleep.