With the ongoing development of microtechnology, player tracking has become one of the most important components of load monitoring in team sports. The 3 main objectives of player tracking are better understanding of practice (provide an objective, a posteriori evaluation of external load and locomotor demands of any given session or match), optimization of training-load patterns at the team level, and decision making on individual players’ training programs to improve performance and prevent injuries (eg, top-up training vs unloading sequences, return to play progression). This paper discusses the basics of a simple tracking approach and the need to integrate multiple systems. The limitations of some of the most used variables in the field (including metabolic-power measures) are debated, and innovative and potentially new powerful variables are presented. The foundations of a successful player-monitoring system are probably laid on the pitch first, in the way practitioners collect their own tracking data, given the limitations of each variable, and how they report and use all this information, rather than in the technology and the variables per se. Overall, the decision to use any tracking technology or new variable should always be considered with a cost/benefit approach (ie, cost, ease of use, portability, manpower/ability to affect the training program).
Martin Buchheit and Ben Michael Simpson
Twan ten Haaf, Selma van Staveren, Erik Oudenhoven, Maria F. Piacentini, Romain Meeusen, Bart Roelands, Leo Koenderman, Hein A.M. Daanen, Carl Foster and Jos J. de Koning
To investigate whether monitoring of easily measurable stressors and symptoms can be used to distinguish early between acute fatigue (AF) and functional overreaching (FOR).
The study included 30 subjects (11 female, 19 male; age 40.8 ± 10.8 y, VO2max 51.8 ± 6.3 mL·kg⁻¹·min⁻¹) who participated in an 8-d cycling event covering 1300 km with 18,500 m of climbing. Performance was measured before and after the event using a maximal incremental test. Subjects whose performance decreased after the event were classified as FOR, the others as AF. Mental and physical well-being, internal training load, resting heart rate, temperature, and mood were measured daily during the event. Differences between AF and FOR were analyzed using mixed-model ANOVAs. Logistic regression was used to determine the best predictors of FOR after 3 and 6 d of cycling.
Fifteen subjects were classified as FOR and 14 as AF (1 excluded). Although total group changes were observed during the event, no differences between AF and FOR were found for individual monitoring parameters. The combination of questionnaire-based changes in fatigue and readiness to train after 3 d cycling correctly predicted 78% of the subjects as AF or FOR (sensitivity = 79%, specificity = 77%).
Monitoring changes in fatigue and readiness to train, using simple visual analog scales, can be used to identify subjects likely to become FOR after only 3 d of cycling. Hence, we encourage athlete support staff to monitor not only fatigue but also the subjective integrated mental and physical readiness to perform.
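The reported approach can be sketched as a logistic model over day-3 changes in fatigue and readiness to train, with sensitivity and specificity computed from the resulting confusion matrix. The coefficients below are illustrative placeholders, not those fitted in the study:

```python
import math

def predict_for(delta_fatigue, delta_readiness, b0=-0.5, b1=0.08, b2=-0.06):
    """Logistic model over day-3 changes in fatigue and readiness-to-train
    visual analog scales. Coefficients are illustrative placeholders only."""
    z = b0 + b1 * delta_fatigue + b2 * delta_readiness
    p = 1.0 / (1.0 + math.exp(-z))
    return p >= 0.5  # True -> predicted functional overreaching (FOR)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

With real data, the coefficients would be fitted to the observed AF/FOR labels; the study reports 79% sensitivity and 77% specificity for the fitted model.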
Daniel Martínez-Silván, Jaime Díaz-Ocejo and Andrew Murray
To analyze the influence of training exposure and the utility of self-report questionnaires on predicting overuse injuries in adolescent endurance athletes.
Five adolescent male endurance athletes (15.7 ± 1.4 y) from a full-time sports academy answered 2 questionnaires (the Recovery Cue questionnaire [RC-q] and the Oslo Sports Trauma Research questionnaire [OSTRC-q]) on a weekly basis for 1 season (37 wk) to detect signs of overtraining and underrecovery (RC-q) and early symptoms of lower-limb injuries (OSTRC-q). All overuse injuries were retrospectively analyzed to determine which changes in the questionnaires in the weeks preceding injury were most strongly associated with its occurrence. Overuse incidence rates were calculated based on training exposure.
Lower-limb overuse injuries accounted for 73% of total injuries. The incidence rate for overuse training-related injuries was 10 injuries/1000 h. Strong correlations were observed between individual running exposure and overuse injury incidence (r² = .66), number of overuse injuries (r² = .69), and days lost (r² = .66). A change of 20% or more in the RC-q score in the preceding week was associated with 67% of the lower-limb overuse injuries. Musculoskeletal symptoms were detected in advance by the OSTRC-q in only 27% of the episodes.
Training exposure (especially running exposure) was related to overuse injuries, suggesting that monitoring training load is a key factor for injury prevention. Worsening scores in the RC-q (but not the OSTRC-q) may be an indicator of overuse injury in adolescent endurance runners when the questionnaire is used longitudinally.
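The two metrics reported above, injuries per 1000 h of exposure and a change of 20% or more in the weekly RC-q score, are simple to compute; a minimal sketch (function names are ours):

```python
def overuse_incidence_rate(n_injuries, exposure_hours):
    """Overuse injuries per 1000 h of training exposure."""
    return 1000.0 * n_injuries / exposure_hours

def rcq_flag(prev_score, current_score, threshold=0.20):
    """Flag a week-over-week change in RC-q score of 20% or more,
    the cutoff the study found associated with 67% of injuries."""
    return abs(current_score - prev_score) / prev_score >= threshold
```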
Training quantification is basic to evaluate an endurance athlete’s responses to training loads, ensure adequate stress/recovery balance, and determine the relationship between training and performance. Quantifying both external and internal workload is important, because external workload does not measure the biological stress imposed by the exercise sessions. Generally used quantification methods include retrospective questionnaires, diaries, direct observation, and physiological monitoring, often based on the measurement of oxygen uptake, heart rate, and blood lactate concentration. Other methods in use in endurance sports include speed measurement and the measurement of power output, made possible by recent technological advances such as power meters in cycling and triathlon. Among subjective methods of quantification, rating of perceived exertion stands out because of its wide use. Concurrent assessments of the various quantification methods allow researchers and practitioners to evaluate stress/recovery balance, adjust individual training programs, and determine the relationships between external load, internal load, and athletes’ performance. This brief review summarizes the most relevant external- and internal-workload-quantification methods in endurance sports and provides practical examples of their implementation to adjust the training programs of elite athletes in accordance with their individualized stress/recovery balance.
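Among the subjective methods mentioned, session RPE quantifies internal load as the CR-10 rating multiplied by session duration in minutes; Foster's related monotony index (mean daily load divided by its standard deviation) is a common companion metric, though it is not detailed in this abstract. A minimal sketch:

```python
import statistics

def session_rpe_load(rpe, duration_min):
    """Session-RPE internal load: CR-10 rating x duration in minutes,
    in arbitrary units (AU)."""
    return rpe * duration_min

def weekly_monotony(daily_loads):
    """Training monotony: mean daily load / SD of daily load.
    Requires variable daily loads (SD must be nonzero)."""
    return statistics.mean(daily_loads) / statistics.stdev(daily_loads)
```

For example, a 60-min session rated 6 on the CR-10 scale yields a load of 360 AU.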
Samuel Robertson, Jonathan D. Bartlett and Paul B. Gastin
Decision-support systems are used in team sport for a variety of purposes including evaluating individual performance and informing athlete selection. A particularly common form of decision support is the traffic-light system, where color coding is used to indicate a given status of an athlete with respect to performance or training availability. However, despite relatively widespread use, there remains a lack of standardization with respect to how traffic-light systems are operationalized. This paper addresses a range of pertinent issues for practitioners relating to the practice of traffic-light monitoring in team sports. Specifically, the types and formats of data incorporated in such systems are discussed, along with the various analysis approaches available. Considerations relating to the visualization and communication of results to key stakeholders in the team-sport environment are also presented. In order for the efficacy of traffic-light systems to be improved, future iterations should look to incorporate the recommendations made here.
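One common operationalization, not standardized, as the paper notes, maps each monitoring variable to a color via its z-score against the athlete's own baseline. The 1.0- and 1.5-SD cutoffs below are illustrative assumptions, not recommendations from the paper:

```python
def traffic_light(value, baseline_mean, baseline_sd, amber=1.0, red=1.5):
    """Map a monitoring value to a traffic-light status via its z-score
    against the athlete's own baseline. Cutoffs are illustrative."""
    z = (value - baseline_mean) / baseline_sd
    if abs(z) >= red:
        return "red"
    if abs(z) >= amber:
        return "amber"
    return "green"
```

The choice of cutoffs, and whether deviations in both directions should flag, is exactly the kind of decision the paper argues should be standardized and communicated to stakeholders.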
Darren J. Burgess
Research describing load-monitoring techniques for team sport is plentiful. Much of this research is conducted retrospectively and typically involves recreational or semielite teams. Load-monitoring research conducted on professional team sports is largely observational. Challenges exist for the practitioner in implementing peer-reviewed research into the applied setting. These challenges include match scheduling, player adherence, manager/coach buy-in, sport traditions, and staff availability. External-load monitoring often attracts questions surrounding technology reliability and validity, while internal-load monitoring makes some assumptions about player adherence, as well as carrying some uncertainty around the impact these measures have on player performance. This commentary outlines examples of load-monitoring research, discusses the issues associated with the application of this research in an elite team-sport setting, and suggests practical adjustments to the existing research where necessary.
Tim J. Gabbett and Rod Whiteley
The authors have observed that in professional sporting organizations the staff responsible for physical preparation and medical care typically practice in relative isolation and display tension as regards their attitudes toward training-load prescription (much more and much less training, respectively). Recent evidence shows that relatively high chronic training loads, when they are appropriately reached, are associated with reduced injury risk and better performance. Understanding this link between performance and training loads removes this tension but requires a better understanding of the relationship between the acute:chronic workload ratio (ACWR) and its association with performance and injury. However, there remain many questions in the area of ACWR, and we are likely at an early stage of our understanding of these parameters and their interrelationships. This opinion paper explores these themes and makes recommendations for improving performance through better synergies in support-staff approaches. Furthermore, aspects of the ACWR that remain to be clarified—the role of shared decision making, risk:benefit estimation, and clearer accountability—are discussed.
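The ACWR itself is typically computed as the mean load of the most recent week divided by the mean load of the most recent 4 weeks (the rolling-average model; exponentially weighted variants also exist). A minimal sketch of the rolling-average form, with window lengths as parameters:

```python
def acwr(daily_loads, acute_days=7, chronic_days=28):
    """Rolling-average acute:chronic workload ratio: mean load over the
    most recent acute window divided by mean load over the most recent
    chronic window (chronic window includes the acute one)."""
    if len(daily_loads) < chronic_days:
        raise ValueError("need at least one full chronic window of data")
    acute = sum(daily_loads[-acute_days:]) / acute_days
    chronic = sum(daily_loads[-chronic_days:]) / chronic_days
    return acute / chronic
```

For example, 3 weeks at 100 AU/d followed by a week at 200 AU/d gives an ACWR of 1.6, a sudden spike relative to the chronic base.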
James J. Malone, Ric Lovell, Matthew C. Varley and Aaron J. Coutts
Athlete-tracking devices that include global positioning system (GPS) and microelectrical mechanical system (MEMS) components are now commonplace in sport research and practice. These devices provide large amounts of data that are used to inform decision making on athlete training and performance. However, the data obtained from these devices are often provided without clear explanation of how these metrics are obtained. At present, there is no clear consensus regarding how these data should be handled and reported in a sport context. Therefore, the aim of this review was to examine the factors that affect the data produced by these athlete-tracking devices and to provide guidelines for collecting, processing, and reporting of data. Many factors including device sampling rate, positioning and fitting of devices, satellite signal, and data-filtering methods can affect the measures obtained from GPS and MEMS devices. Therefore researchers are encouraged to report device brand/model, sampling frequency, number of satellites, horizontal dilution of precision, and software/firmware versions in any published research. In addition, details of inclusion/exclusion criteria for data obtained from these devices are also recommended. Considerations for the application of speed zones to evaluate the magnitude and distribution of different locomotor activities recorded by GPS are also presented, alongside recommendations for both industry practice and future research directions. Through a standard approach to data collection and procedure reporting, researchers and practitioners will be able to make more confident comparisons from their data, which will improve the understanding and impact these devices can have on athlete performance.
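Speed-zone analysis of the kind discussed here amounts to binning instantaneous speed samples into locomotor categories and accumulating distance per bin. The cutoffs below are illustrative assumptions, which is precisely the standardization problem the review raises:

```python
def distance_in_zones(speeds_ms, dt=0.1, zone_bounds=(2.0, 4.0, 5.5, 7.0)):
    """Accumulate distance (m) per locomotor zone from GPS speed samples.
    speeds_ms: instantaneous speeds (m/s) at a fixed sampling interval dt (s).
    zone_bounds: illustrative walk/jog/run/high-speed/sprint cutoffs (m/s);
    zone definitions are not standardized across studies or devices."""
    dists = [0.0] * (len(zone_bounds) + 1)
    for v in speeds_ms:
        zone = sum(v >= b for b in zone_bounds)  # number of cutoffs reached = zone index
        dists[zone] += v * dt  # distance covered during this sample
    return dists
```

Note that results depend on the sampling rate and any upstream filtering, which is why the review recommends reporting device model, sampling frequency, and software/firmware versions.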
Marco Cardinale and Matthew C. Varley
The need to quantify aspects of training to improve training prescription has been the holy grail of sport scientists and coaches for many years. Recently, there has been an increase in scientific interest, possibly due to technological advancements and better equipment to quantify training activities. Over the last few years the number of studies assessing training load in various athletic cohorts has grown, with a bias toward subjective reports and/or quantifications of external load. There is an evident lack of extensive longitudinal studies employing objective internal-load measurements, possibly due to the cost and invasiveness of the measures necessary to quantify objective internal loads. Advances in technology might help in developing better wearable tools able to ease the difficulties and costs associated with conducting longitudinal observational studies in athletic cohorts and possibly provide better information on the biological implications of specific external-load patterns. Considering the recent technological developments for monitoring training load and the extensive use of various tools for research and applied work, the aim of this work was to review the applications, challenges, and opportunities of various wearable technologies.
Alan J. Metcalfe, Paolo Menaspà, Vincent Villerius, Marc Quod, Jeremiah J. Peiffer, Andrew D. Govus and Chris R. Abbiss
To describe the within-season external workloads of professional male road cyclists for optimal training prescription.
Training and racing of 4 internationally competitive professional male cyclists (age 24 ± 2 y, body mass 77.6 ± 1.5 kg) were monitored for 12 mo before the world team-time-trial championships. Three within-season phases leading up to the team-time-trial world championships on September 20, 2015, were defined as phase 1 (Oct–Jan), phase 2 (Feb–May), and phase 3 (June–Sept). Distance and time were compared between training and racing days and over each of the phases. Times spent in absolute (<100, 100–300, 300–500, >500 W) and relative (0–1.9, 2.0–4.9, 5.0–7.9, >8 W/kg) power zones were also compared for the whole season and between phases 1–3.
Total distance (3859 ± 959 vs 10,911 ± 620 km) and time (240.5 ± 37.5 vs 337.5 ± 26 h) were lower (P < .01) in phase 1 than in phase 2. Total distance decreased (P < .01) from phase 2 to phase 3 (10,911 ± 620 vs 8411 ± 1399 km, respectively). Mean absolute (236 ± 12.1 vs 197 ± 3 W) and relative (3.1 ± 0 vs 2.5 ± 0 W/kg) power output were higher (P < .05) during racing than during training.
Volume and intensity differed between training and racing over each of 3 distinct within-season phases.
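Time in power zones, as reported here, can be computed by classifying each power sample against the absolute and relative cutoffs listed in the abstract (treating the absolute zones as contiguous at 100/300/500 W). A sketch assuming 1-Hz power-meter data:

```python
def time_in_power_zones(power_w, body_mass_kg, dt=1.0):
    """Time (s) spent in absolute and relative power zones.
    power_w: power samples (W) at a fixed interval dt (s).
    Cutoffs follow the abstract: absolute 100/300/500 W (zones assumed
    contiguous); relative 2.0/5.0/8.0 W/kg."""
    abs_bounds = (100, 300, 500)
    rel_bounds = (2.0, 5.0, 8.0)
    abs_time = [0.0] * 4
    rel_time = [0.0] * 4
    for p in power_w:
        abs_time[sum(p >= b for b in abs_bounds)] += dt  # zone index = cutoffs reached
        rel_time[sum(p / body_mass_kg >= b for b in rel_bounds)] += dt
    return abs_time, rel_time
```

For a 77.6-kg rider, a steady 250 W falls in the second absolute zone (100–300 W) and the second relative zone (2.0–4.9 W/kg).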