A study of a sample provides only an estimate of the true (population) value of an outcome statistic. A report of the study therefore usually includes an inference about the true value. Traditionally, a researcher makes an inference by declaring the value of the statistic statistically significant or nonsignificant on the basis of a P value derived from a null-hypothesis test. This approach is confusing and can be misleading, depending on the magnitude of the statistic, error of measurement, and sample size. The authors use a more intuitive and practical approach based directly on uncertainty in the true value of the statistic. First they express the uncertainty as confidence limits, which define the likely range of the true value. They then deal with the real-world relevance of this uncertainty by taking into account values of the statistic that are substantial in some positive and negative sense, such as beneficial or harmful. If the likely range overlaps substantially positive and negative values, they infer that the outcome is unclear; otherwise, they infer that the true value has the magnitude of the observed value: substantially positive, trivial, or substantially negative. They refine this crude inference by stating qualitatively the likelihood that the true value will have the observed magnitude (eg, very likely beneficial). Quantitative or qualitative probabilities that the true value has the other 2 magnitudes or more finely graded magnitudes (such as trivial, small, moderate, and large) can also be estimated to guide a decision about the utility of the outcome.
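The inference described above can be sketched numerically. The following is a minimal illustration, not the authors' software: it assumes a normal sampling distribution for the statistic, and the function names, example values, and probability cutoffs for the qualitative terms are assumptions for demonstration only.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mbi(observed, conf_limit, smallest_important, z90=1.645):
    """Chances that the true value is substantially positive, trivial,
    or substantially negative. `conf_limit` is the half-width of the
    90% confidence interval of the observed statistic."""
    se = conf_limit / z90  # standard error implied by the 90% limits
    p_positive = 1.0 - norm_cdf((smallest_important - observed) / se)
    p_negative = norm_cdf((-smallest_important - observed) / se)
    p_trivial = 1.0 - p_positive - p_negative
    return p_positive, p_trivial, p_negative

def qualitative(p):
    """Map a probability to a qualitative likelihood term (assumed scale)."""
    for cutoff, term in [(0.995, "most likely"), (0.95, "very likely"),
                         (0.75, "likely"), (0.25, "possibly"),
                         (0.05, "unlikely"), (0.005, "very unlikely")]:
        if p >= cutoff:
            return term
    return "most unlikely"

# Hypothetical example: observed gain 1.2%, 90% confidence limits +/-1.1%,
# smallest beneficial change 0.5%
pos, triv, neg = mbi(1.2, 1.1, 0.5)
```

With these illustrative numbers the chance of benefit is about 0.85, so the outcome would be reported as likely beneficial rather than merely "significant" or "nonsignificant".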
Alan M. Batterham and William G. Hopkins
Ryan J. Hamilton, Carl D. Paton, and William G. Hopkins
In a recent study competitive road cyclists experienced substantial gains in sprint and endurance performance when sessions of high-intensity interval training were added to their usual training in the competitive phase of a season. The current study reports the effect of this type of training on performance of 20 distance runners randomized to an experimental or control group for 5 to 7 weeks of training. The experimental group replaced part of their usual competitive-phase training with 10 × 30-minute sessions consisting of 3 sets of explosive single-leg jumps (20 for each leg) alternating with 3 sets of resisted treadmill sprints (5 × 30-second efforts alternating with 30-second recovery). Before and after the training period all runners completed an incremental treadmill test for assessment of lactate threshold and maximum running speed, 2 treadmill runs to exhaustion for prediction of 800- and 1500-m times, and a 5-km outdoor time trial. Relative to the control group, the mean changes (±90% confidence limits) in the experimental group were: maximum running speed, 1.8% (±1.1%); lactate-threshold speed, 3.5% (±3.4%); predicted 800-m speed, 3.6% (±1.8%); predicted 1500-m speed, 3.7% (±3.0%); and 5-km time-trial speed, 1.2% (±1.1%). We conclude that high-intensity resistance training in the competitive phase is likely to produce beneficial gains in performance for most distance runners.
Andrea J. Braakhuis, Kelly Meredith, Gregory R. Cox, William G. Hopkins, and Louise M. Burke
A routine activity for a sports dietitian is to estimate energy and nutrient intake from an athlete’s self-reported food intake. Decisions made by the dietitian when coding a food record are a source of variability in the data. The aim of the present study was to determine the variability in estimation of the daily energy and key nutrient intakes of elite athletes, when experienced coders analyzed the same food record using the same database and software package. Seven-day food records from a dietary survey of athletes in the 1996 Australian Olympic team were randomly selected to provide 13 sets of records, each set representing the self-reported food intake of an endurance, team, weight restricted, and sprint/power athlete. Each set was coded by 3–5 members of Sports Dietitians Australia, making a total of 52 athletes, 53 dietitians, and 1456 athlete-days of data. We estimated within- and between-athlete and dietitian variances for each dietary nutrient using mixed modeling, and we combined the variances to express variability as a coefficient of variation (typical variation as a percent of the mean). Variability in the mean of 7-day estimates of a nutrient was 2- to 3-fold less than that of a single day. The variability contributed by the coder was less than the true athlete variability for a 1-day record but was of similar magnitude for a 7-day record. The most variable nutrients (e.g., vitamin C, vitamin A, cholesterol) had ~3-fold more variability than least variable nutrients (e.g., energy, carbohydrate, magnesium). These athlete and coder variabilities need to be taken into account in dietary assessment of athletes for counseling and research.
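The combination of variance components into a coefficient of variation, and the shrinkage of day-to-day variability when intakes are averaged over 7 days, can be sketched as follows. This is a simplified illustration under the usual variance-components model; the numbers are invented for demonstration and are not from the study.

```python
import math

def typical_cv(variance, mean):
    """Express a variance as a coefficient of variation (% of the mean)."""
    return 100.0 * math.sqrt(variance) / mean

def cv_of_n_day_mean(between_var, within_var, mean, n_days):
    """CV of an athlete's mean intake over n days: the day-to-day
    (within-athlete) variance shrinks by a factor of n when averaged,
    while the between-athlete variance is unaffected."""
    return typical_cv(between_var + within_var / n_days, mean)

# Invented figures: mean energy intake 12 MJ/d, between-athlete SD 1 MJ,
# within-athlete day-to-day SD 3 MJ
cv_1day = cv_of_n_day_mean(1.0**2, 3.0**2, 12.0, 1)  # single-day record
cv_7day = cv_of_n_day_mean(1.0**2, 3.0**2, 12.0, 7)  # mean of a 7-day record
```

With within-athlete variation dominating, the CV of the 7-day mean comes out roughly half that of a single day, consistent with the 2- to 3-fold reduction reported above.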
Louise M. Burke, Gary Slater, Elizabeth M. Broad, Jasmina Haukka, Sofie Modulon, and William G. Hopkins
We undertook a dietary survey of 167 Australian Olympic team athletes (80 females and 87 males) competing in endurance sports (n = 41), team sports (n = 31), sprint- or skill-based sports (n = 67), and sports in which athletes are weight-conscious (n = 28). Analysis of their 7-day food diaries provided mean energy intakes, nutrient intakes, and eating patterns. Higher energy intakes relative to body mass were reported by male athletes compared with females, and by endurance athletes compared with other athletes. Endurance athletes reported substantially higher intakes of carbohydrate (CHO) than other athletes, and were among the athletes most likely to consume CHO during and after training sessions. Athletes undertaking weight-conscious sports reported relatively low energy intakes and were least likely to consume CHO during a training session or in the first hour of recovery. On average, athletes reported eating on ~5 separate occasions each day, with a moderate relationship between the number of daily eating occasions and total energy intake. Snacks, defined as food or drink consumed between main meals, provided 23% of daily energy intake and were chosen from sources higher in CHO and lower in fat and protein than foods chosen at meals. The dietary behaviors of these elite athletes were generally consistent with guidelines for sports nutrition, but intakes during and after training sessions were often sub-optimal. Although it is of interest to study the periodicity of fluid and food intake by athletes, it is difficult to compare across studies due to a lack of standardized terminology.
Brendan H. Lazarus, William G. Hopkins, Andrew M. Stewart, and Robert J. Aughey
Effects of fixture and team characteristics on match outcome in elite Australian football were quantified using data accessed at AFLtables.com for 5109 matches for seasons 2000 to 2013. Aspects of each match included number of days’ break between matches (≤7 d vs ≥8 d), location (home vs away), travel status (travel vs no travel), and differences between opposing teams’ mean age, body mass, and height (expressed as quintiles). A logistic-regression version of the generalized mixed linear model estimated each effect, which was assessed with magnitude-based inference using 1 extra win or loss in every 10 matches as the smallest important change. For every 10 matches played, the effects were days’ break, 0.1 ± 0.3 (90% CL) wins; playing away, 1.5 ± 0.6 losses; traveling, 0.7 ± 0.6 losses; and being in the oldest, heaviest, or shortest quintile, 1.9 ± 0.4, 1.3 ± 0.4, and 0.4 ± 0.4 wins, respectively. The effects of age and body-mass difference were not reduced substantially when adjusted for each other. All effects were clear, mostly at the 99% level. The effects of playing away, travel, and age difference were not unexpected, but the trivial effect of days’ break and the advantage of a heavier team will challenge current notions about balancing training with recovery and about team selection.
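The smallest important change of 1 extra win in every 10 matches can be translated to and from the log-odds scale on which the logistic-regression model works. A minimal sketch, assuming a baseline win probability of 0.5 for evenly matched teams (the function names are illustrative, not from the study's code):

```python
import math

def prob_to_logit(p):
    """Convert a win probability to log-odds."""
    return math.log(p / (1.0 - p))

# One extra win in every 10 matches at a 0.5 baseline means a win
# probability of 0.6, so the smallest important effect in log-odds is:
smallest_important_logit = prob_to_logit(0.6) - prob_to_logit(0.5)

def extra_wins_per_10(effect_logit, baseline_p=0.5):
    """Convert a logistic-regression effect (log-odds) back to
    extra wins per 10 matches at a given baseline win probability."""
    odds = baseline_p / (1.0 - baseline_p) * math.exp(effect_logit)
    return 10.0 * (odds / (1.0 + odds) - baseline_p)
```

Model effects reported in log-odds can then be expressed in the intuitive wins-per-10-matches units used in the abstract.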
Alireza Esmaeili, Andrew M. Stewart, William G. Hopkins, George P. Elias, and Robert J. Aughey
Detrimental changes in tendon structure increase the risk of tendinopathies. The aim of this study was to investigate the influence of individual internal and external training loads and leg dominance on changes in the Achilles and patellar tendon structure.
The internal structure of the Achilles and patellar tendons of both limbs of 26 elite Australian footballers was assessed using ultrasound tissue characterization at the beginning and the end of an 18-wk preseason. Linear-regression analysis was used to estimate the effects of training load on changes in the proportion of aligned and intact tendon bundles for each side. Standardization and magnitude-based inferences were used to interpret the findings.
Possibly to very likely small increases in the proportion of aligned and intact tendon bundles occurred in the dominant Achilles (initial value 81.1%; change, ±90% confidence limits 1.6%, ±1.0%), nondominant Achilles (80.8%; 0.9%, ±1.0%), dominant patellar (75.8%; 1.5%, ±1.5%), and nondominant patellar (76.8%; 2.7%, ±1.4%) tendons. Measures of training load had inconsistent effects on changes in tendon structure; eg, there were possibly to likely small positive effects on the structure of the nondominant Achilles tendon, likely small negative effects on the dominant Achilles tendon, and predominantly no clear effects on the patellar tendons.
The small and inconsistent effects of training load are indicative of the role of recovery between tendon-overloading (training) sessions and the multivariate nature of the tendon response to load, with leg dominance a possible influencing factor.
Samuel T. Howe, Robert J. Aughey, William G. Hopkins, and Andrew M. Stewart
Purpose: Can power law models accurately predict the peak intensities of rugby competition as a function of time? Methods: Match movement data were collected from 30 elite and 30 subelite rugby union athletes across competitive seasons, using wearable Global Navigation Satellite Systems and accelerometers. Each athlete’s peak rolling mean value of each measure (mean speed, metabolic power, and PlayerLoad™) for 8 durations between 5 seconds and 10 minutes was predicted by the duration with 4 power law (log–log) models, one for forwards and backs in each half of a typical match. Results: The log of peak exercise intensity and exercise duration (5–600 s) displayed strong linear relationships (R² = .967–.993) across all 3 measures. Rugby backs had greater predicted intensities for shorter durations than forwards, but their intensities declined at a steeper rate as duration increased. Random prediction errors for mean speed, metabolic power, and PlayerLoad were 5% to 6%, 7% to 9%, and 8% to 10% (moderate to large), respectively, for elite players. Systematic prediction errors across the range of durations were trivial to small for elite players, underestimating intensities for shorter (5–10 s) and longer (300–600 s) durations by 2% to 4% and overestimating 20- to 120-second intensities by 2% to 3%. Random and systematic errors were slightly greater for subelites compared to elites, with ranges of 4% to 12% and 2% to 5%, respectively. Conclusions: Peak intensities of professional rugby union matches can be predicted with adequate precision (trivial to small errors) for prescribing training drills of a given duration, irrespective of playing position, match half, level of competition, or measure of exercise intensity. However, practitioners should be aware of the substantial (moderate to large) prediction errors at the level of the individual player.
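A power law (log–log) model of the kind described fits a straight line to log intensity against log duration, giving intensity = exp(a) × duration^b. The sketch below is a generic least-squares version with invented peak-speed data, not the study's models or data:

```python
import math

def fit_power_law(durations, intensities):
    """Least-squares fit of log(intensity) = a + b*log(duration),
    i.e. intensity = exp(a) * duration**b."""
    xs = [math.log(d) for d in durations]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict(a, b, duration):
    """Predicted peak intensity for a drill of the given duration."""
    return math.exp(a) * duration ** b

# Invented peak rolling mean speeds (m/min) for windows of 5-600 s
durations = [5, 10, 20, 60, 120, 300, 600]
speeds = [210, 190, 170, 140, 128, 112, 100]
a, b = fit_power_law(durations, speeds)
```

A negative exponent b captures the decline of sustainable intensity with duration; once fitted, `predict` gives a target intensity for a training drill of any duration in the modeled range.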
Megan E. Anderson, Clinton R. Bruce, Steve F. Fraser, Nigel K. Stepto, Rudi Klein, William G. Hopkins, and John A. Hawley
Eight competitive oarswomen (age, 22 ± 3 years; mass, 64.4 ± 3.8 kg) performed three simulated 2,000-m time trials on a rowing ergometer. The trials, which were preceded by 24 hours of dietary and training control and 72 hours of caffeine abstinence, were conducted 1 hour after ingesting caffeine (6 or 9 mg · kg−1 body mass) or placebo. Plasma free fatty acid concentrations before exercise were higher with caffeine than placebo (0.67 ± 0.34 vs. 0.72 ± 0.36 vs. 0.30 ± 0.10 mM for 6 mg · kg−1 caffeine, 9 mg · kg−1 caffeine, and placebo, respectively; p < .05). Performance time improved 0.7% (95% confidence interval [CI] 0 to 1.5%) with 6 mg · kg−1 caffeine and 1.3% (95% CI 0.5 to 2.0%) with 9 mg · kg−1 caffeine. The first 500 m of the 2,000 m was faster with the higher caffeine dose than with placebo or the lower dose (1.53 ± 0.52 vs. 1.55 ± 0.62 and 1.56 ± 0.43 min; p = .02). We concluded that caffeine produces a worthwhile enhancement of performance in a controlled laboratory setting, primarily by improving the first 500 m of a 2,000-m row.