Stephen Crowcroft, Erin McCleave, Katie Slattery, and Aaron J. Coutts
Purpose: To assess measurement sensitivity and diagnostic characteristics of athlete-monitoring tools to identify performance change.
Methods: Fourteen nationally competitive swimmers (11 male, 3 female; age 21.2 ± 3.2 y) recorded daily monitoring over 15 mo. The self-report group (n = 7) reported general health, energy levels, motivation, stress, recovery, soreness, and wellness. The combined group (n = 7) recorded sleep quality, perceived fatigue, total quality recovery (TQR), and heart-rate variability. The week-to-week change in mean weekly values was expressed as a coefficient of variation (CV%). Reliability was assessed on 3 occasions and expressed as the typical error CV%. Week-to-week change was divided by the reliability of each measure to calculate the signal-to-noise ratio. The diagnostic characteristics for both groups were assessed with receiver-operating-characteristic (ROC) curve analysis, where the area under the curve (AUC), Youden index, sensitivity, and specificity of measures were reported. A minimum AUC of .70 and a lower confidence interval (CI) >.50 classified a “good” diagnostic tool to assess performance change.
Results: Week-to-week variability was greater than reliability for soreness (3.1), general health (3.0), wellness% (2.0), motivation (1.6), sleep (2.6), TQR (1.8), fatigue (1.4), R-R interval (2.5), and LnRMSSD:RR (1.3). Only general health was a “good” diagnostic tool to assess decreased performance (AUC .70; 95% CI, .61–.80).
Conclusions: Many monitoring variables are sensitive to changes in fitness and fatigue. However, no single monitoring variable could discriminate performance change. As such, the use of a multidimensional system that may better account for variations in fitness and fatigue should be considered.
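The signal-to-noise approach described above can be sketched in a few lines. This is a minimal illustration, not the study's analysis code: the weekly values and the reliability (typical error) CV% below are hypothetical, assumed only to show how week-to-week variation is divided by measurement noise.

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (CV%) of a series of weekly mean scores."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical weekly mean soreness ratings (arbitrary scale).
weekly_soreness = [4.1, 3.6, 4.4, 3.9, 4.7, 3.5]

# Assumed typical error (reliability CV%) from repeated assessments.
reliability_cv = 2.5  # illustrative value, not from the study

signal = cv_percent(weekly_soreness)   # week-to-week variability
snr = signal / reliability_cv          # >1: change exceeds measurement noise
print(round(snr, 2))
```

A ratio above 1 indicates that the week-to-week change in the measure is larger than its measurement error, which is the sensitivity criterion the abstract reports for soreness, general health, and the other listed variables.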
Stephen Crowcroft, Katie Slattery, Erin McCleave, and Aaron J. Coutts
Purpose: To assess a coach’s subjective assessment of their athletes’ performances and whether the use of athlete-monitoring tools could improve on the coach’s prediction to identify performance changes. Methods: Eight highly trained swimmers (7 male and 1 female, age 21.6 [2.0] y) recorded perceived fatigue, total quality recovery, and heart-rate variability over a 9-month period. Prior to each race of the swimmers’ main 2 events, the coach (n = 1) was presented with their previous race results and asked to predict their race time. All race results (n = 93) with matching coach’s predictions were recorded and classified as a dichotomous outcome (0 = no change; 1 = performance decrement or improvement [a change greater than the smallest meaningful change in either direction]). A generalized estimating equation was used to assess the coach’s accuracy and the contribution of monitoring variables to the model fit. The probability from generalized estimating equation models was assessed with receiver operating characteristic curves to identify the model’s accuracy from the area under the curve analysis. Results: The coach’s predictions had the highest diagnostic accuracy to identify both decrements (area under the curve: 0.93; 95% confidence interval, 0.88–0.99) and improvements (area under the curve: 0.89; 95% confidence interval, 0.83–0.96) in performance. Conclusions: These findings highlight the high accuracy of a coach’s subjective assessment of performance. Furthermore, the findings provide a future benchmark for athlete-monitoring systems to be able to improve on a coach’s existing understanding of swimming performance.
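The dichotomous outcome coding used in the model above can be illustrated as follows. The race times and threshold are hypothetical; the study derived each swimmer's smallest meaningful change from their own data.

```python
def classify_performance(race_time, baseline_time, smc):
    """Code a race as 0 (no change) or 1 (meaningful change): a change
    larger than the smallest meaningful change (smc), in either direction,
    counts as a performance decrement or improvement."""
    change = race_time - baseline_time
    return 1 if abs(change) > smc else 0

# Hypothetical example: 52.80-s race vs a 52.10-s previous result,
# with an assumed smallest meaningful change of 0.3 s.
outcome = classify_performance(52.80, 52.10, 0.3)
print(outcome)  # 1: the +0.70-s change exceeds the threshold (a decrement)
```

The resulting 0/1 outcomes are what the generalized estimating equation models and the coach's predictions were evaluated against in the ROC analysis.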
Stephanie J. Shell, Brad Clark, James R. Broatch, Katie Slattery, Shona L. Halson, and Aaron J. Coutts
Purpose: This study aimed to independently validate a wearable inertial sensor designed to monitor training and performance metrics in swimmers. Methods: A total of 4 male (21 y, 1 national and 3 international) and 6 female (22 y, 1 national and 5 international) swimmers completed 15 training sessions in an outdoor 50-m pool. Swimmers were fitted with a wearable device (TritonWear, 9-axis inertial measurement unit with triaxial accelerometer, gyroscope, and magnetometer), placed under the swim cap on top of the occipital protuberance. Video footage was captured for each session to establish criterion values. Absolute error, standardized effect, and Pearson correlation coefficient were used to determine the validity of the wearable device against video footage for total swim distance, total stroke count, mean stroke count, and mean velocity. A Fisher exact test was used to analyze the accuracy of stroke-type identification. Results: Total swim distance was underestimated by the device relative to video analysis. Absolute error was consistently higher for total and mean stroke count, and mean velocity, relative to video analysis. Across all sessions, the device incorrectly detected total time spent in backstroke, breaststroke, butterfly, and freestyle by 51% (15%). The device did not detect time spent in drill. Intraclass correlation coefficient results demonstrated excellent intrarater reliability between repeated measures across all swimming metrics. Conclusions: The wearable device investigated in this study does not accurately measure the swimming metrics of distance, stroke count, and velocity, nor detect stroke type. Its use as a training-monitoring tool in swimming is limited.
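Two of the validity statistics named above, absolute error and the Pearson correlation coefficient, can be sketched against a video criterion. The per-session distances below are hypothetical, used only to show the device-versus-criterion comparison.

```python
import math

def mean_absolute_error(device, criterion):
    """Mean absolute error of device values against the video criterion."""
    return sum(abs(d - c) for d, c in zip(device, criterion)) / len(device)

def pearson_r(x, y):
    """Pearson correlation coefficient between device and criterion values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-session total swim distances (m): device vs video.
device = [3950, 4100, 3800, 4250, 3900]
video  = [4000, 4200, 3900, 4300, 4000]

print(mean_absolute_error(device, video), round(pearson_r(device, video), 3))
```

Note that a device can correlate strongly with the criterion while still carrying a systematic error (here, consistently underestimating distance), which is why both statistics are reported.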
Erin L. McCleave, Katie M. Slattery, Rob Duffield, Stephen Crowcroft, Chris R. Abbiss, Lee K. Wallace, and Aaron J. Coutts
Purpose: To examine whether concurrent heat and intermittent hypoxic training can improve endurance performance and physiological responses relative to independent heat or temperate interval training. Methods: Well-trained male cyclists (N = 29) completed 3 weeks of moderate- to high-intensity interval training (4 × 60 min·wk−1) in 1 of 3 conditions: (1) heat (HOT: 32°C, 50% relative humidity, 20.8% fraction of inspired oxygen), (2) heat + hypoxia (H+H: 32°C, 50% relative humidity, 16.2% fraction of inspired oxygen), or (3) temperate environment (CONT: 22°C, 50% relative humidity, 20.8% fraction of inspired oxygen). Performance 20-km time trials (TTs) were conducted in both temperate (TTtemperate) and assigned condition (TTenvironment) before (base), immediately after (mid), and after a 3-week taper (end). Measures of hemoglobin mass, plasma volume, and blood volume were also assessed. Results: The 20-km TT performance improved to a similar extent across all groups in both TTtemperate (mean ±90% confidence interval HOT, −2.8% ±1.8%; H+H, −2.0% ±1.5%; CONT, −2.0% ±1.8%) and TTenvironment (HOT, −3.3% ±1.7%; H+H, −3.1% ±1.6%; CONT, −3.2% ±1.1%). Plasma volume (HOT, 3.8% ±4.7%; H+H, 3.3% ±4.7%) and blood volume (HOT, 3.0% ±4.1%; H+H, 4.6% ±3.9%) were both increased at mid in HOT and H+H over CONT. Increased hemoglobin mass was observed in H+H only (3.0% ±1.8%). Conclusion: Three weeks of interval training in heat, concurrent heat and hypoxia, or temperate environments improve 20-km TT performance to the same extent. Despite indications of physiological adaptations, the addition of independent heat or concurrent heat and hypoxia provided no greater performance benefits in a temperate environment than temperate training alone.
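The performance changes above are percentage changes in 20-km TT time, where a negative value means a faster (improved) time. A minimal sketch with hypothetical times:

```python
def percent_change(pre_s, post_s):
    """Percentage change in time-trial time; negative = faster (improved)."""
    return 100 * (post_s - pre_s) / pre_s

# Hypothetical 20-km TT times in seconds: 32:00 at baseline, 31:06 after.
pre, post = 32 * 60, 31 * 60 + 6
print(round(percent_change(pre, post), 1))  # -2.8, a ~2.8% improvement
```

Expressing each rider's change this way, rather than in raw seconds, is what allows the group means (e.g. HOT −2.8% ±1.8%) to be compared directly across conditions.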
Erin L. McCleave, Katie M. Slattery, Rob Duffield, Philo U. Saunders, Avish P. Sharma, Stephen Crowcroft, and Aaron J. Coutts
Purpose: To determine whether combining training in heat with “Live High, Train Low” hypoxia (LHTL) further improves thermoregulatory and cardiovascular responses to a heat-tolerance test compared with independent heat training. Methods: A total of 25 trained runners (peak oxygen uptake = 64.1 [8.0] mL·min−1·kg−1) completed 3-wk training in 1 of 3 conditions: (1) heat training combined with “LHTL” hypoxia (H+H; FiO2 = 14.4% [3000 m], 13 h·d−1; train at <600 m, 33°C, 55% relative humidity [RH]), (2) heat training (HOT; live and train <600 m, 33°C, 55% RH), and (3) temperate training (CONT; live and train <600 m, 13°C, 55% RH). Heat adaptations were determined from a 45-min heat-response test (33°C, 55% RH, 65% velocity corresponding to the peak oxygen uptake) at baseline and immediately and 1 and 3 wk postexposure (baseline, post, 1 wkP, and 3 wkP, respectively). Core temperature, heart rate, sweat rate, sodium concentration, plasma volume, and perceptual responses were analyzed using magnitude-based inferences. Results: Submaximal heart rate (effect size [ES] = −0.60 [−0.89; −0.32]) and core temperature (ES = −0.55 [−0.99; −0.10]) were reduced in HOT until 1 wkP. Sweat rate (ES = 0.36 [0.12; 0.59]) and sweat sodium concentration (ES = −0.82 [−1.48; −0.16]) were, respectively, increased and decreased until 3 wkP in HOT. Submaximal heart rate (ES = −0.38 [−0.85; 0.08]) was likely reduced in H+H at 3 wkP, whereas CONT had unclear physiological changes. Perceived exertion and thermal sensation were reduced across all groups. Conclusions: Despite greater physiological stress from combined heat training and “LHTL” hypoxia, thermoregulatory adaptations are limited in comparison with independent heat training. The combined stimuli provide no additional physiological benefit during exercise in hot environments.
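The effect sizes (ES) reported above are standardized mean changes. A minimal Cohen's d sketch, using hypothetical heart-rate values (the study's actual data and inference framework are not reproduced here):

```python
import math
import statistics

def cohens_d(pre, post):
    """Standardized mean change (Cohen's d): the mean difference divided by
    the pooled SD of pre- and post-exposure values; negative = reduction."""
    pooled_sd = math.sqrt(
        (statistics.variance(pre) + statistics.variance(post)) / 2
    )
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

# Hypothetical submaximal heart rates (beats/min) before and after training.
hr_pre  = [158, 162, 155, 160, 164, 157]
hr_post = [152, 156, 150, 154, 158, 151]
print(round(cohens_d(hr_pre, hr_post), 2))
```

A negative d, as in the submaximal heart-rate and core-temperature results for HOT, indicates a reduction after the training block; the bracketed intervals in the abstract are the uncertainty bounds around each estimate.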