Almost half of the record 98 events being held at the 2014 Sochi Winter Olympic Games either were not held 20 years ago at Lillehammer or have been substantially modified. The Olympics as a global sports event are not static but must adapt and evolve in response to changing demands, just as the remarkable athletes who compete in them do. While the Winter Olympics program has steadily grown since Chamonix in 1924, the rate of development has greatly accelerated in the last 20 years. Three factors seem to be instrumental. First, the Winter Olympics program has become more gender balanced. Female hockey teams are battling for gold, and this year women will compete in ski jumping for the first time. Most Winter Olympic sports now have equal numbers of events for men and women, although female participation still lags somewhat behind. Second, many traditional events have been modified by sport-governing bodies toward a more “TV friendly” format. Time-trial starts have been replaced by mass or group starts, and “sprint” and team events have been added to spice up traditional sports like cross-country skiing and speed skating. Finally, “extreme” sports like half-pipe and ski-cross have crossed over from the X Games to the Olympics, with some arguing that the Olympics need these popular sports more than these sports need the Olympics. All of these changes create new research questions for sport scientists who are also willing to adapt and evolve.
An Olympic Games is a measurable test of a nation's sporting power. Medal counts are the object of intense scrutiny after every Olympiad. Most countries celebrate any medal with national glee, since 60% of competing countries will win none. In 2012, 10% of the competing countries won 75% of all medals. Despite this concentration among a few countries, more countries are winning more medals now than 20 years ago, thanks in part to athlete-support and -development programs arising around the globe. Small but strong sporting countries like Norway are typified by fairly large variation in medal results from Olympiad to Olympiad and a high concentration of results in a few sports. These are important factors to consider when evaluating national performance and interpreting the medal count. Medal conversion, the proportion of top-8 placements that result in podium placements, may provide a measure of the competitiveness of athlete-support programs in this international zero-sum game, where the cost of winning Olympic gold keeps rising whether measured in dollars or human capital.
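As a minimal sketch of the medal-conversion idea, the metric reduces to a simple ratio of podium placements to top-8 placements; the figures in the example are invented for illustration, not actual national results.

```python
def medal_conversion(podium: int, top8: int) -> float:
    """Share of top-8 ("finalist") placements converted into podium finishes."""
    if top8 == 0:
        return 0.0
    return podium / top8

# Hypothetical example: a nation with 30 top-8 placements and 12 medals
print(medal_conversion(12, 30))  # 0.4
```

A higher ratio suggests an athlete-support program that converts competitive depth (top-8 finishes) into medals more efficiently, which is the comparison the metric is meant to enable.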
Successful endurance training involves the manipulation of training intensity, duration, and frequency, with the implicit goals of maximizing performance, minimizing the risk of negative training outcomes, and timing peak fitness and performances to be achieved when they matter most. Numerous descriptive studies of the training characteristics of nationally or internationally competitive endurance athletes training 10 to 13 times per week seem to converge on a typical intensity distribution in which about 80% of training sessions are performed at low intensity (<2 mM blood lactate), with about 20% dominated by periods of high-intensity work, such as interval training at approx. 90% VO2max. Endurance athletes appear to self-organize toward a high-volume training approach with careful application of high-intensity training incorporated throughout the training cycle. Training intensification studies performed on already well-trained athletes do not provide convincing evidence that a greater emphasis on high-intensity interval training in this highly trained population yields long-term performance gains. The predominance of low-intensity, long-duration training, in combination with fewer, highly intensive bouts, may be complementary in terms of optimizing adaptive signaling and technical mastery at an acceptable level of stress.
Stephen Seiler and Øystein Sylta
The purpose of this study was to compare physiological responses and perceived exertion among well-trained cyclists (n = 63) performing 3 different high-intensity interval-training (HIIT) prescriptions differing in work-bout duration and accumulated duration but all prescribed with maximal session effort. Subjects (male, mean ± SD 38 ± 8 y, VO2peak 62 ± 6 mL · kg–1 · min–1) completed up to 24 HIIT sessions over 12 wk as part of a training-intervention study. Sessions were prescribed as 4 × 16, 4 × 8, or 4 × 4 min with 2-min recovery periods (8 sessions of each prescription, balanced over time). Power output, HR, and RPE were collected during and after each work bout. Session RPE was reported after each session. Blood lactate samples were collected throughout the 12 wk. Physiological and perceptual responses during >1400 training sessions were analyzed. HIIT sessions were performed at 95% ± 5%, 106% ± 5%, and 117% ± 6% of 40-min time-trial power during 4 × 16-, 4 × 8-, and 4 × 4-min sessions, respectively, with peak HR in each work bout averaging 89% ± 2%, 91% ± 2%, and 94% ± 2% HRpeak. Blood lactate concentrations were 4.7 ± 1.6, 9.2 ± 2.4, and 12.7 ± 2.7 mmol/L. Despite the common prescription of maximal session effort, RPE and sRPE increased with decreasing accumulated work duration (AWD), tracking relative HR. Only 8% of 4 × 16-min sessions reached RPE 19–20, vs 61% of 4 × 4-min sessions. The authors conclude that within the HIIT duration range, performing at “maximal session effort” over a reduced AWD is associated with higher perceived exertion both acutely and postexercise. This may have important implications for HIIT prescription choices.
Stephen Seiler and Ralph Beneke
Arne Guellich and Stephen Seiler
To compare the intensity distribution during cycling training among elite track cyclists who improved or decreased in ergometer power at 4 mM blood lactate during a 15 wk training period.
51 young male German cyclists (17.4 ± 0.5 y; 30 international, 21 national junior finalists) performed cycle ergometer testing at the onset and at the end of a 15 wk basic preparation period, and reported their daily volumes of defined exercise types and intensity categories. Training organization was compared between two subgroups who improved (Responders, n = 17; ΔPLa4⋅kg-1 = +11 ± 4%) or who decreased in ergometer performance (Non-Responders, n = 17; ΔPLa4⋅kg-1 = –7 ± 6%).
Responders and Non-Responders did not differ significantly in the time invested in non-cycling-specific training or in the total cycling distance performed. They did, however, differ in their cycling intensity distribution. Responders accumulated significantly more distance at low intensity (<2 mM blood lactate), while Non-Responders performed more training at near-threshold intensity (3–6 mM). Cycling intensity distribution accounted for approx. 60% of the variance in changes in ergometer performance over time. Performance at t1 combined with workout intensity distribution explained over 70% of the performance variance at t2.
Variation in lactate profile development is explained to a substantial degree by variation in training intensity distribution in elite cyclists. Training at <2 mM blood lactate appears to play an important role in improving the power-output-to-blood-lactate relationship. Excessive training at near-threshold intensity (3–6 mM blood lactate) may negatively impact lactate threshold development. Further research is required to explain the underlying adaptation mechanisms.
Thomas Haugen, Espen Tønnessen and Stephen Seiler
A review of published studies monitoring sprint performance reveals considerable variation in start distance behind the initial timing gate. The aim of the current study was to generate correction factors across varying flying-start distances used in sprint testing with photocells.
Forty-four well-trained junior soccer players (age 18.2 ± 1.0 y, height 175 ± 8 cm, body mass 68.4 ± 8.9 kg) performed sprint testing on an indoor sprint track. They were allocated to 3 groups based on sprint-performance level. Times for 10- and 20-m sprints with foot placement ranging from 0.5 to 15 m back from the initial timing gate were recorded twice for each athlete.
Correction-factor equation coefficients were generated for each of the 3 analyzed groups, derived from the phase-decay equation y = (y0 − PL) × exp(−k × x) + PL, where y = time difference (0.5-m flying start as reference), x = flying-start distance, y0 is the y value at x = 0, PL (plateau) is the y value as x approaches infinity, and k is the rate constant, expressed in reciprocal of the x-axis units (m−1). R2 was ≥.998 across all athlete groups and sprint distances, demonstrating excellent goodness of fit. Within-group time differences were significant (P < .05) across all flying-start distance checkpoints for all groups. Between-groups time-saving differences up to 0.04 s were observed between the fastest and the slowest groups (P < .05).
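The phase-decay correction can be sketched in a few lines of code. The coefficients below (y0, PL, k) are hypothetical placeholders chosen for illustration, not the published group-specific values.

```python
import math

def flying_start_correction(x: float, y0: float, plateau: float, k: float) -> float:
    """Time difference (s) relative to the 0.5-m flying-start reference,
    from the phase-decay model y = (y0 - PL) * exp(-k * x) + PL, x in meters."""
    return (y0 - plateau) * math.exp(-k * x) + plateau

# Hypothetical coefficients for illustration only (not the published values)
y0, PL, k = 0.0, -0.30, 0.35
for x in (0.5, 5.0, 15.0):
    print(f"flying start {x:4.1f} m -> correction {flying_start_correction(x, y0, PL, k):+.3f} s")
```

The decaying exponential captures the key behavior: time savings grow with flying-start distance but level off toward the plateau, so comparisons between studies require the correction appropriate to each start distance.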
Small changes in flying-start distances can cause time differences larger than the typical gains made from specific training, or even the difference between the fastest and slowest elite team-sport athletes. The presented correction factors should facilitate more meaningful comparisons of published sprint-performance results.
Thomas Haugen, Espen Tønnessen and Stephen Seiler
Human upper performance limits in the 100-m sprint remain the subject of much debate. The aim of this commentary is to highlight the vulnerabilities of prognoses from historical trends by shedding light on the mechanical and physiological limitations associated with human sprint performance. Several conditions work against the athlete as sprint velocity increases: air resistance and the braking impulse in each stride increase, while ground-contact time typically decreases with increasing running velocity. Moreover, muscle-force production declines with increasing speed of contraction. Individual stature (leg length) strongly limits stride length, such that conditioning of senior sprinters with optimized technique must mainly target enhanced stride frequency. More muscle mass means more power and thereby greater ground-reaction forces in sprinting. However, as the athlete gets heavier, the energy cost of accelerating that mass also increases. This probably explains why body-mass index among world-class sprinters shows low variability, averaging 23.7 ± 1.5 and 20.4 ± 1.4 kg/m2 for male and female sprinters, respectively. Performance development of world-class athletes indicates that ~8% improvement from the age of 18 represents the current maximum trainability of sprint performance. However, drug abuse is a huge confounding factor in such analyses, and the available evidence suggests that we are already very close to “the citius end” of 100-m sprint performance.
Øystein Sylta, Espen Tønnessen and Stephen Seiler
The authors directly compared 3 frequently used methods of heart-rate-based training-intensity-distribution (TID) quantification in a large sample of training sessions performed by elite endurance athletes.
Twenty-nine elite cross-country skiers (16 male, 13 female; 25 ± 4 y; 70 ± 11 kg; 76 ± 7 mL · min−1 · kg−1 VO2max) conducted 570 training sessions during a ~14-d altitude-training camp. Three analysis methods were used: time in zone (TIZ), session goal (SG), and a hybrid session-goal/time-in-zone (SG/TIZ) approach. The proportion of training in zone 1, zone 2, and zone 3 was quantified using total training time or frequency of sessions, and simple conversion factors across different methods were calculated.
With the TIZ and SG/TIZ methods, 96.1% and 95.5% of total training time, respectively, was spent in zone 1 (P < .001), with 2.9% vs 3.6% in zone 2 and 1.1% vs 0.8% in zone 3 (P < .001). Using SG, this corresponded to 86.6% zone-1 and 11.1%/2.4% zone-2/3 sessions. Estimated conversion factors from TIZ or SG/TIZ to SG and vice versa were 0.9/1.1, respectively, in the low-intensity training range (zone 1) and 3.0/0.33 in the high-intensity training range (zones 2 and 3).
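The reported conversion factors amount to simple per-zone multiplications. The sketch below applies them to the TIZ shares from this abstract, assuming zones 2 and 3 may be pooled as reported; it is an illustration of how the factors are used, not part of the published analysis.

```python
# Conversion factors reported in the abstract: multiplying a TIZ zone share
# by the factor approximates the SG-based share, and vice versa.
TIZ_TO_SG = {"zone1": 0.9, "zone2_3": 3.0}
SG_TO_TIZ = {"zone1": 1.1, "zone2_3": 0.33}

def convert(tid: dict, factors: dict) -> dict:
    """Apply per-zone conversion factors to a training-intensity distribution."""
    return {zone: share * factors[zone] for zone, share in tid.items()}

# TIZ shares from the abstract, as fractions of total training time
tiz = {"zone1": 0.961, "zone2_3": 0.029 + 0.011}
print(convert(tiz, TIZ_TO_SG))  # ≈ 86.5% zone 1, 12% zones 2-3, close to the SG figures
```

The converted zone-1 share (~86.5%) lands close to the 86.6% obtained directly with the SG method, which is exactly what the conversion factors are meant to accomplish when comparing studies that used different quantification methods.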
This study provides a direct comparison and practical conversion factors across studies employing different methods of TID quantification associated with the most common heart-rate-based analysis methods.
Øystein Sylta, Espen Tønnessen and Stephen Seiler
The purpose of this study was to validate the accuracy of self-reported (SR) training duration and intensity distribution in elite endurance athletes.
Twenty-four elite cross-country skiers (25 ± 4 y, 67.9 ± 9.88 kg, 75.9 ± 6.50 mL · min−1 · kg−1) SR all training sessions during an ~14-d altitude-training camp. Heart rate (HR) and some blood lactate measurements were collected during 466 training sessions. SR training was compared with recorded training duration from HR monitors, and SR intensity distribution was compared with expert analysis (EA) of all session data.
SR training was nearly perfectly correlated with recorded training duration (r = .99), but SR training was 1.7% lower than recorded training duration (P < .001). SR training duration was also nearly perfectly correlated (r = .95) with recorded training duration >55% HRmax, but SR training was 11.4% higher than recorded training duration >55% HRmax (P < .001) due to SR inclusion of time <55% HRmax. No significant differences were observed in intensity distribution in zones 1–2 between SR and EA comparisons, but small discrepancies were found in zones 3–4 (P < .001).
This study provides evidence that elite endurance athletes report their training data accurately, although some small discrepancies were observed due to the lack of an SR “gold standard.” Daily self-report is a valid method of quantifying training duration and intensity distribution in elite endurance athletes. However, additional common reporting guidelines would further enhance accuracy.