Decision-support systems are used in team sport for a variety of purposes including evaluating individual performance and informing athlete selection. A particularly common form of decision support is the traffic-light system, where color coding is used to indicate a given status of an athlete with respect to performance or training availability. However, despite relatively widespread use, there remains a lack of standardization with respect to how traffic-light systems are operationalized. This paper addresses a range of pertinent issues for practitioners relating to the practice of traffic-light monitoring in team sports. Specifically, the types and formats of data incorporated in such systems are discussed, along with the various analysis approaches available. Considerations relating to the visualization and communication of results to key stakeholders in the team-sport environment are also presented. In order for the efficacy of traffic-light systems to be improved, future iterations should look to incorporate the recommendations made here.
Samuel Robertson, Jonathan D. Bartlett, and Paul B. Gastin
Darren J. Burgess
Research describing load-monitoring techniques for team sport is plentiful. Much of this research is conducted retrospectively and typically involves recreational or semielite teams. Load-monitoring research conducted on professional team sports is largely observational. Challenges exist for the practitioner in implementing peer-reviewed research into the applied setting. These challenges include match scheduling, player adherence, manager/coach buy-in, sport traditions, and staff availability. External-load monitoring often attracts questions surrounding technology reliability and validity, while internal-load monitoring makes some assumptions about player adherence, as well as carrying some uncertainty around the impact these measures have on player performance. This commentary outlines examples of load-monitoring research, discusses the issues associated with the application of this research in an elite team-sport setting, and suggests practical adjustments to the existing research where necessary.
Tim J. Gabbett and Rod Whiteley
The authors have observed that in professional sporting organizations the staff responsible for physical preparation and medical care typically practice in relative isolation and display tension in their attitudes toward training-load prescription (favoring much more and much less training, respectively). Recent evidence shows that relatively high chronic training loads, when they are reached appropriately, are associated with reduced injury risk and better performance. Understanding this link between performance and training loads removes this tension but requires a better understanding of the acute:chronic workload ratio (ACWR) and its association with performance and injury. However, many questions remain in the area of the ACWR, and we are likely at an early stage of our understanding of these parameters and their interrelationships. This opinion paper explores these themes and makes recommendations for improving performance through better synergies in support-staff approaches. Furthermore, aspects of the ACWR that remain to be clarified—the role of shared decision making, risk:benefit estimation, and clearer accountability—are discussed.
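The ACWR discussed above is, in its simplest common formulation, a ratio of rolling averages of daily training load. A minimal Python sketch follows; the 7-day acute and 28-day chronic windows are widely used conventions, not values specified in this abstract.

```python
# Sketch of the acute:chronic workload ratio (ACWR): the ratio of a
# short-term (acute) rolling mean of daily load to a long-term
# (chronic) rolling mean. Window lengths here are common conventions.

def rolling_mean(loads, window):
    """Mean of the most recent `window` daily loads (shorter at the start)."""
    out = []
    for i in range(len(loads)):
        recent = loads[max(0, i - window + 1): i + 1]
        out.append(sum(recent) / len(recent))
    return out

def acwr(daily_loads, acute_days=7, chronic_days=28):
    """Daily ACWR values for a sequence of daily training loads."""
    acute = rolling_mean(daily_loads, acute_days)
    chronic = rolling_mean(daily_loads, chronic_days)
    return [a / c if c > 0 else None for a, c in zip(acute, chronic)]
```

With a constant daily load the ratio sits at 1.0; a sudden spike in recent load pushes the ratio above 1, which is the kind of acute overload the literature associates with elevated injury risk.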
James J. Malone, Ric Lovell, Matthew C. Varley, and Aaron J. Coutts
Athlete-tracking devices that include global positioning system (GPS) and microelectrical mechanical system (MEMS) components are now commonplace in sport research and practice. These devices provide large amounts of data that are used to inform decision making on athlete training and performance. However, the data obtained from these devices are often provided without clear explanation of how these metrics are obtained. At present, there is no clear consensus regarding how these data should be handled and reported in a sport context. Therefore, the aim of this review was to examine the factors that affect the data produced by these athlete-tracking devices and to provide guidelines for collecting, processing, and reporting data. Many factors including device sampling rate, positioning and fitting of devices, satellite signal, and data-filtering methods can affect the measures obtained from GPS and MEMS devices. Therefore, researchers are encouraged to report device brand/model, sampling frequency, number of satellites, horizontal dilution of precision, and software/firmware versions in any published research. In addition, details of inclusion/exclusion criteria for data obtained from these devices are also recommended. Considerations for the application of speed zones to evaluate the magnitude and distribution of different locomotor activities recorded by GPS are also presented, alongside recommendations for both industry practice and future research directions. Through a standard approach to data collection and procedure reporting, researchers and practitioners will be able to make more confident comparisons from their data, which will improve the understanding and impact these devices can have on athlete performance.
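The reporting items recommended above can be treated as a simple completeness checklist. The sketch below is illustrative only; the field names are assumptions, not a schema from the review.

```python
# Sketch of a minimal GPS/MEMS device-metadata checklist covering the
# reporting items recommended above (brand/model, sampling frequency,
# satellites, HDOP, software/firmware). Field names are illustrative.

REQUIRED_FIELDS = [
    "brand_model",
    "sampling_frequency_hz",
    "num_satellites",
    "hdop",                 # horizontal dilution of precision
    "software_version",
    "firmware_version",
]

def missing_metadata(record):
    """Return the recommended reporting fields absent from `record`."""
    return [f for f in REQUIRED_FIELDS if record.get(f) is None]
```

Running such a check before publication or internal reporting makes it easy to spot which of the recommended details a dataset's documentation still lacks.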
Marco Cardinale and Matthew C. Varley
The need to quantify aspects of training to improve training prescription has been the holy grail of sport scientists and coaches for many years. Recently, there has been an increase in scientific interest, possibly due to technological advancements and better equipment to quantify training activities. Over the last few years there has been an increase in the number of studies assessing training load in various athletic cohorts, with a bias toward subjective reports and/or quantifications of external load. There is an evident lack of extensive longitudinal studies employing objective internal-load measurements, possibly due to the cost and invasiveness of the measures necessary to quantify objective internal loads. Advances in technology might help in developing better wearable tools able to ease the difficulties and costs associated with conducting longitudinal observational studies in athletic cohorts and possibly provide better information on the biological implications of specific external-load patterns. Considering the recent technological developments for monitoring training load and the extensive use of various tools for research and applied work, the aim of this work was to review applications, challenges, and opportunities of various wearable technologies.
Alan J. Metcalfe, Paolo Menaspà, Vincent Villerius, Marc Quod, Jeremiah J. Peiffer, Andrew D. Govus, and Chris R. Abbiss
To describe the within-season external workloads of professional male road cyclists for optimal training prescription.
Training and racing of 4 international competitive professional male cyclists (age 24 ± 2 y, body mass 77.6 ± 1.5 kg) were monitored for 12 mo before the world team-time-trial championships. Three within-season phases leading up to the team-time-trial world championships on September 20, 2015, were defined as phase 1 (Oct–Jan), phase 2 (Feb–May), and phase 3 (June–Sept). Distance and time were compared between training and racing days and over each of the various phases. Times spent in absolute (<100, 100–300, 300–500, >500 W) and relative (0–1.9, 2.0–4.9, 5.0–7.9, >8 W/kg) power zones were also compared for the whole season and between phases 1–3.
Total distance (3859 ± 959 vs 10911 ± 620 km) and time (240.5 ± 37.5 vs 337.5 ± 26 h) were lower (P < .01) in phase 1 than phase 2. Total distance decreased (P < .01) from phase 2 to phase 3 (10911 ± 620 vs 8411 ± 1399 km, respectively). Mean absolute (236 ± 12.1 vs 197 ± 3 W) and relative (3.1 ± 0 vs 2.5 ± 0 W/kg) power output were higher (P < .05) during racing than training, respectively.
Volume and intensity differed between training and racing over each of 3 distinct within-season phases.
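Time in power zones, as quantified in the study above, amounts to binning power-meter samples into contiguous intensity bands. A minimal Python sketch follows; contiguous absolute zone edges (<100, 100–300, 300–500, >500 W) are assumed here, and the 1-Hz sampling interval is an assumption rather than a detail given in the abstract.

```python
# Sketch of binning power-meter samples into absolute power zones,
# assuming contiguous zone edges (<100, 100-300, 300-500, >500 W)
# and a fixed sampling interval (1 s by default).

ZONES = [
    ("<100 W", 0, 100),
    ("100-300 W", 100, 300),
    ("300-500 W", 300, 500),
    (">500 W", 500, float("inf")),
]

def time_in_zones(power_samples, sample_seconds=1):
    """Return seconds spent in each zone for a list of power readings (W)."""
    totals = {name: 0 for name, _, _ in ZONES}
    for p in power_samples:
        for name, lo, hi in ZONES:
            if lo <= p < hi:
                totals[name] += sample_seconds
                break
    return totals
```

The same structure works for the relative (W/kg) zones by dividing each sample by body mass before binning.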
Samuel N. Cheuvront, Robert W. Kenefick, and Edward J. Zambraski
A common practice in sports science is to assess hydration status using the concentration of a single spot urine collection taken at any time of day for comparison against concentration (specific gravity, osmolality, color) thresholds established from first morning voids. There is strong evidence that this practice can be confounded by fluid intake, diet, and exercise, among other factors, leading to false positive/negative assessments. Thus, the purpose of this paper is to provide a simple explanation as to why this practice leads to erroneous conclusions and should be curtailed in favor of consensus hydration assessment recommendations.
Alisa Nana, Gary J. Slater, Arthur D. Stewart, and Louise M. Burke
Dual-energy X-ray absorptiometry (DXA) is rapidly becoming more accessible and popular as a technique to monitor body composition, especially in athletic populations. Although studies in sedentary populations have investigated the validity of DXA assessment of body composition, few studies have examined the issues of reliability in athletic populations, and most studies that involve DXA measurements of body composition provide little information on their scanning protocols. This review presents a summary of the sources of error and variability in the measurement of body composition by DXA and develops a theoretical model of best practice to standardize the conduct and analysis of a DXA scan. Components of this protocol include standardization of subject presentation (subjects rested, overnight-fasted, and in minimal clothing) and positioning on the scanning bed (centrally aligned in a standard position using custom-made positioning aids), as well as manipulation of the automatic segmentation of regional areas of the scan results. Body composition assessment implemented with such a protocol ensures a high level of precision, while still being practical in an athletic setting. This ensures that any small changes in body composition are confidently detected and correctly interpreted. The reporting requirements for studies involving DXA scans of body composition include details of the DXA machine and software, subject presentation and positioning protocols, and analysis protocols.
Gareth A. Wallis and Anna Wittekind
The consumption of carbohydrate before, during, and after exercise is a central feature of the athlete’s diet, particularly for those competing in endurance sports. Sucrose is a carbohydrate present within the diets of athletes. Whether sucrose, by virtue of its component monosaccharides glucose and fructose, exerts a meaningful advantage for athletes over other carbohydrate types or blends is unclear. This narrative review examines the literature on the influence of sucrose, relative to other carbohydrate types, on exercise performance or the metabolic factors that may underpin exercise performance. Inference from the research to date suggests that sucrose appears to be as effective as other highly metabolizable carbohydrates (e.g., glucose, glucose polymers) in providing an exogenous fuel source during endurance exercise, stimulating the synthesis of liver and muscle glycogen during exercise recovery, and improving endurance exercise performance. Nonetheless, gaps exist in our understanding of the metabolic and performance consequences of sucrose ingestion before, during, and after exercise relative to other carbohydrate types or blends, particularly when more aggressive carbohydrate intake strategies are adopted. While further research is recommended and discussed in this review, based on the currently available scientific literature it would seem that sucrose should continue to be regarded as one of a variety of options available to help athletes achieve their specific carbohydrate-intake goals.