With recent technological and methodological innovation, complicated biomechanical and physiological testing protocols once restricted to well-funded research laboratories are now feasible in field conditions for athlete testing, training, rehabilitation, and applied research. Importantly, although accessibility has markedly increased in parallel with perceived decreases in assessment complexity, the intricacy of the underlying physiological, neuromuscular, or biomechanical variables has remained constant across field and laboratory methods. Unfortunately, distinguishing measurement reliability from the relevance of the overarching concept can be difficult when poor reliability of “output” variables is reported in research conclusions and potentially ascribed to the physiological/biomechanical concepts or computation methods themselves. The practical value of various measurements and implementations is certainly owed attention and critique within research; however, imprecise analyses and the resulting narrative can complicate consensus on their true utility. A more balanced approach should include critical examination of all sources of error, notably input data collection and the rigor of associated procedures, device (in)accuracy, and the variability of the athletes’ tested effort (biological error).
Although this phenomenon exists in every kind of physiological or biomechanical assessment, this commentary focuses on the reliability of the force–velocity–power (FvP) relationship in jumping. This is a pertinent focus given the increasing use of the century-old force–velocity (Fv) relationship concept and associated variables in research1 and the fact that 3 recent studies have challenged the reliability and subsequent utility of FvP profiling.2–4 Unfortunately, these studies give little to no consideration in their conclusions to an obvious source of their findings: insufficient measurement reliability or procedural standardization. Are FvP relationships and associated individual “profile” concepts fundamentally flawed and the associated testing methods unreliable (as often concluded in these studies), or is it also (and to what extent) an issue with the input data measurements?
This commentary paper discusses the distinction between measurement reliability, model or method validity, and concept relevance using the FvP profile in jumping as an illustration. The narrative is based on published results and accessible data sets examining reliability of jumping FvP profiles (both inputs and outputs) and on theoretical simulations that estimate the random error of output variables induced by increasing the variability of input measurements.
Reliability of FvP Profile Variables: Experimental Results
First, what is commonly termed the “FvP profile,” “FvP profiling,” or “FvP variables,” and referred to as a test of strength qualities, represents, in fact, the extraction of indices (eg, maximal theoretical force [F0] and velocity [v0], maximal power output [Pmax], and slope of the Fv relationship [SFv]) grounded in fundamental principles of muscle physiology. The Fv relationship describes the force production capacities of the neuromuscular system as a function of contraction/movement velocity.1,5–7 This relationship has been observed for almost a century in vitro on isolated muscles1 and in vivo in single-joint (eg, knee extension6), multijoint (eg, cycling,7 jumping,8 leg pressing,9 and bench pressing10), and whole-body (eg, rowing11 and sprinting12) movements. Although the methods or models used to determine this relationship are open to discussion, their physiological bases are robust. In any case, several research groups have reported mostly acceptable to good validity or reliability of the main jumping FvP relationship outputs across different methods, including “gold standard” force plates and dynamics principles8,13–17 and computation-based field methods using inverse dynamics approaches14–16,18,19 (Tables 1 and 2).
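For orientation, under the linear Fv model commonly fitted to multijoint data, these indices are linked as follows (a generic summary of the linear model, not the notation of any single study cited above):

```latex
F(\bar{v}) \;=\; F_0\left(1 - \frac{\bar{v}}{v_0}\right) \;=\; F_0 + S_{Fv}\,\bar{v},
\qquad S_{Fv} = -\frac{F_0}{v_0},
\qquad P_{max} = \frac{F_0\,v_0}{4}
```

That is, F0 and v0 are the intercepts of the fitted line with the force and velocity axes, and any error in the measured (force, velocity) points therefore propagates into all 4 indices at once.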
Table 1. Methods and Variability of Force–Velocity Profile “Input” Variables in Previous Studies

| Study | Device | Reliability | Measurement | Jump type | Range of loads | hpo: SEM, cm | hpo: CV | hpo: ICC | Jump height: SEM, cm | Jump height: CV | Jump height: ICC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Cuk et al8 | Pulley device | Interday | Force plate | SJ | −30% to +30% BM | — | — | — | — | — | — |
| | | | | CMJ | | — | — | — | — | — | — |
| Garcia-Ramos et al13 | Free weights | Interday | Force plate | SJ | 0 to 75 kg | — | — | — | — | — | — |
| | Smith machine | | | SJ | | — | — | — | — | — | — |
| | Free weights | | | CMJ | | — | — | — | — | — | — |
| | Smith machine | | | CMJ | | — | — | — | — | — | — |
| Jimenez-Reyes et al16 | Free weights | Intraday | Force plate | CMJ | — | — | — | — | — | — | — |
| | | | Computation-based method | CMJ | — | — | — | — | — | — | — |
| Janicijevic et al15 | Free weights | Intraday | Force plate | SJ | 0.5 to ∼61 (±12) kg (corresponding to an ∼10-cm jump) | — | — | — | — | — | — |
| | Free weights | Intraday | Computation-based method | SJ | | — | — | — | — | — | — |
| Fessl et al19 | Free weights | Interday | Computation-based method | SJ^a | 0% to 80% BM | — | — | — | — | ∼2.1% to 4.4% | ∼.97 to .99 |
| | | | | SJ^b | | — | — | — | — | ∼3.0% to 8.3% | ∼.77 to .93 |
| Valenzuela et al3 | Free weights | Interday | Computation-based method | SJ | 0% to 70% BM | ±∼1 to 3 | — | — | ∼0.65 to 2.42 | ∼3.8% to 7.8% | ∼.69 to .96 |
| | Smith machine | | | SJ | | | — | — | ∼0.71 to 2.42 | ∼4.4% to 7.8% | ∼.85 to .96 |
| Lindberg et al2 | Free weights | Interday | Force plate | SJ | 0.5 to 80 kg (or 80% BM) | ∼2 to 4 | 5% to 10% | — | 1.20 | 6.8% | — |
| | Free weights | | | CMJ | | | | | — | | |
| Kotani et al4 | Free weights | Interday | Force plate | SJ | 0% to 100% BM | — | — | — | ∼3.5 | ∼10% | — |

Abbreviations: BM, body mass; CMJ, countermovement jump; CV, coefficient of variation derived from SEM (in % of mean values); hpo, range of motion (push-off distance); ICC, intraclass correlation coefficient; SEM, standard error of measurement (in raw units); SJ, squat jump.

^a Task-experienced participants. ^b Task-inexperienced participants.
Table 2. Force–Velocity Relationship Quality and Reliability of “Output” Variables in Previous Studies

| Study | Method | Fv relationship, r² | Pmax: SEM | Pmax: CV, % | Pmax: ICC | F0: SEM | F0: CV, % | F0: ICC | v0: SEM, m/s | v0: CV, % | v0: ICC | SFv: SEM, N·s/m | SFv: CV, % | SFv: ICC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cuk et al8 | Force plate—SJ | 0.919 | 86 W | 5.4 | .93 | 127 N | 5 | .95 | 0.55 | 6.0 | .93 | 82 | 9.8 | .96 |
| | Force plate—CMJ | 0.987 | 59 W | 2.4 | .98 | 80 N | 3 | .98 | 0.11 | 3.3 | .96 | 52 | 5.7 | .98 |
| Garcia-Ramos et al13 | Free weights—SJ | 0.988 | 54 W | 3.8 | .93 | 164 N | 6.7 | .82 | 0.16 | 6.4 | .84 | 135 | 12.6 | .81 |
| | Smith machine—SJ | 0.960 | 60 W | 4.2 | .91 | 211 N | 7.6 | .75 | 0.16 | 7.5 | .85 | 192 | 14.0 | .82 |
| | Free weights—CMJ | 0.996 | 51 W | 2.4 | .97 | 83 N | 3.4 | .88 | 0.17 | 4.9 | .81 | 57 | 8.2 | .69 |
| | Smith machine—CMJ | 0.994 | 95 W | 4.5 | .90 | 99 N | 3.9 | .88 | 0.22 | 6.5 | .79 | 78 | 9.9 | .79 |
| Jimenez-Reyes et al16 | Force plate—CMJ | — | — | — | — | — | — | — | — | — | — | — | — | — |
| | Computation method—CMJ | — | — | 5.5 | .98 | — | 1.2 | .99 | — | 7.6 | .98 | — | 4.8 | .99 |
| Janicijevic et al15 | Force plate—90° knee | 0.976 | 52.9 W | 3.8 | .96 | 89.9 N | 3.7 | .95 | 0.161 | 7.0 | .85 | 103 | 9.3 | .90 |
| | Force plate—pref. knee angle | 0.986 | 66.5 W | 4.2 | .96 | 148.7 N | 5.7 | .88 | 0.242 | 9.9 | .68 | 162 | 14.7 | .69 |
| | Computation method—90° knee | 0.994 | 52.3 W | 3.3 | .96 | 71 N | 2.9 | .96 | 0.156 | 6.1 | .77 | 82 | 8.4 | .86 |
| | Computation method—pref. knee angle | 0.990 | 52.8 W | 3.2 | .97 | 110 N | 4.3 | .93 | 0.177 | 7.0 | .80 | 112 | 10.7 | .79 |
| Fessl et al19 | Computation method—90° knee | ≥0.95^a | — | 3.1 | .98 | — | 1.9 | .91 | — | 4.0 | .95 | — | 6.2 | .86 |
| | | ≥0.95^b | — | 5.1 | .78 | — | 3.9 | .89 | — | 7.9 | .70 | — | 10.9 | .83 |
| Valenzuela et al3 | Free weights—SJ | 0.96 (0.04) | 7.75 W/kg | 30.0 | .38 | 3.0 N/kg | 9.9 | .04 | 1.36 | 34.5 | .07 | — | 42.1 | −.30 |
| | Smith machine—SJ | 0.97 (0.03) | 3.47 W/kg | 11.0 | .75 | 1.0 N/kg | 3.4 | .95 | 0.57 | 12.6 | .77 | — | 12.1 | .84 |
| Lindberg et al2 | Force plate—SJ | 0.95–1.00 | — | ∼9.7 | ∼.84 | — | ∼9 | ∼.76 | — | ∼16 | ∼.57 | — | ∼26 | ∼.54 |
| | Force plate—CMJ | 0.95–1.00 | — | ∼9.8 | ∼.75 | — | ∼7 | ∼.85 | — | ∼16.5 | ∼.32 | — | ∼23.5 | ∼.45 |
| Kotani et al4 | Force plate—SJ | — | CV ∼24.5% and ICC ∼.48 | | | | | | | | | | | |

Abbreviations: CMJ, countermovement jump; CV, coefficient of variation derived from SEM (in % of mean values); F0, maximal theoretical force; hpo, range of motion; ICC, intraclass correlation coefficient; Pmax, maximal power output; SEM, standard error of measurement (in raw units); SFv, slope of the Fv relationship; SJ, squat jump; v0, maximal theoretical velocity.

^a Task-experienced participants. ^b Task-inexperienced participants.
In contrast, 3 recent studies have reported poor reliability of FvP profile outputs (notably v0 and SFv) obtained during jumping using various methods, including computation-based and reference methods.2–4 The interpretations and conclusions provided by the authors clearly challenge both the methods and the relevance of the FvP concept: “The squat jump Fv [ . . . ] profiles established with a force plate are not reliable. Therefore, these profiles are not recommended to be used to inform programming decisions,”4 and “Coaches and researchers should be aware of the poor reliability of the Fv variables obtained from vertical jumping,”2 or “Fv variables [ . . . ] seemed to present a low between-day reliability.”3 Unfortunately, the “take-home” messages and conclusions largely overlook input measurement reliability and questionable testing procedures as potential causes of error.
The reliability of outcome variables in Fv relationships (and in all testing protocols) varies as a function of their input measurements. On this point, there are several clear issues with the papers discussed earlier, including mixing constrained and unconstrained movements within the same testing session, lack of participant familiarity with unloaded/loaded jumps, large and potentially fatiguing testing volumes, inconsistent starting position (and thus range of motion, hpo), and variable jump height (h) between test and retest (Table 1). For example, Valenzuela et al3 reported a starting position with an approximate resolution of ±1 to 3 cm, and Lindberg et al2 a variability of 2 to 4 cm (ie, 5%–10%). Although the authors discussed this point as a potential explanation for their inferior reliability compared with previous studies, they still concluded that Fv profile variables are themselves generally unreliable. Of greater concern, Lindberg et al2 reported that intersession variability of h without additional load averaged 4.9% to 6.4% (ie, 1.9–2.5 cm), with 25% of the subjects exhibiting variability >8%2 (Figure 1). Comparable variability was reported by Kotani et al4 (∼3.5 cm; ∼10%) and Valenzuela et al3 (∼2.4 cm; ∼7.8%). Note that such variability (eg, when < ∼10%) can be acceptable when each measurement constitutes a strength output per se, independently of the others, but can be too high when several measurements obtained in different conditions are combined into more integrative indexes, as is the case for FvP profile variables. Both poorly standardized testing procedures and biological error can explain intersession variability in h, but in either case, the Fv relationship concept and the associated computational methods are not the (only) cause of output variability. The biological variation in ballistic capacity (ie, h) between 2 sessions is an interesting and important point to consider for FvP profiling in jumping; nevertheless, it was largely overlooked within the abstracts and conclusions of these papers as a core explanation for the poor reliability. Given the consistently acceptable reliability of jumping FvP relationships reported by various independent research groups8,13–16 (Table 2), an alternative conclusion should have considered the context in which the results were observed: if input data are highly variable, whether because of poor testing procedures (eg, associated with field testing) or large biological variation (eg, population or context specific), FvP outputs will be unreliable. This would have been the important message for sport practitioners.
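To make this error pathway explicit, a first-order propagation through the simple-method input equations24 (our illustrative derivation for a single loading condition, where M is the total moving mass and g the gravitational acceleration) gives:

```latex
\bar{v} = \frac{\sqrt{2gh}}{2}
\;\Rightarrow\;
\frac{\delta \bar{v}}{\bar{v}} = \frac{1}{2}\,\frac{\delta h}{h},
\qquad
\bar{F} = Mg\left(\frac{h}{h_{po}} + 1\right)
\;\Rightarrow\;
\frac{\delta \bar{F}}{\bar{F}} = \frac{h}{h + h_{po}}\left(\frac{\delta h}{h} - \frac{\delta h_{po}}{h_{po}}\right)
```

Each individual (force, velocity) point thus carries only a damped version of the input error; however, F0 and v0 are intercepts extrapolated well beyond the measured range, so small shifts in a few points can tilt the fitted line and produce the inflated output errors quantified in the next section.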
More surprisingly, after concluding that FvP profile variables in jumping are unreliable2 and subsequently cautioning against their use, the same authors published a training study aiming to improve physical performance measures based on athletes’ strengths and weaknesses per an FvP profiling approach.20 Their results (based on indices that were unreliable per their previously published work) clearly contrast with previous studies21,22 and lead to confusing conclusions: they challenged the value of considering the FvP profile in strength training without challenging the reliability of the indices on which training was individualized. In addition to affecting reliability, high variability in hpo can also affect concurrent validity when comparing force plate measurements with the computation method using a priori determined hpo values. For instance, Hicks et al23 reported high reliability in FvP outputs obtained from the computation method but lower concurrent validity against the gold standard for some variables, which can be partly explained by the 4% to 5% variability observed in hpo.23
Sensitivity of FvP Profile Variables to Measurement Noise: Theoretical Simulation
Given the apparent confusion on this topic, we set out to clearly illustrate the association between input error (procedure reliability) and output error. To this end, we generated a range of theoretical simulations of FvP outputs based on a validated biomechanical model of jumping.14,17,18,24 Our aim was to quantify the range of error in kinetic output variables (F0, v0, Pmax, and SFv) that might arise from noise in the 2 main kinematic input measurements (hpo and h, with magnitudes drawn from previous studies). Note that although hpo and h are the 2 main inputs (in addition to body mass),24 they represent indices of procedural rigor independent of the FvP profile concept itself: hpo variability reflects the quality of task standardization, and h variability is associated with biological variation in performance, including variability caused by limited familiarization or submaximal intent. Moreover, the results of these simulations are broadly applicable, as variability in h or squat depth is inevitably associated with error in FvP variables, whichever method is used to obtain them.
Theoretical simulations were based on 3000 virtual athletes (with F0 from 20 to 40 N/kg, v0 from 2 to 6 m/s, Pmax from 16 to 50 W/kg, and hpo from 0.25 to 0.45 m) performing 2 FvP jump tests with 5 loads (0%, 25%, 50%, 75%, and 100% body mass). The first simulated test was considered “perfect” (Fv curve r2 = 1): h obtained for each load was estimated from individual Fv relationships, body mass, and hpo values.17 In the second simulated test, random errors were included in hpo (simulating errors in squat depth standardization) and h for each loading condition (simulating biological variability) with different noise magnitudes: averaged raw error over all virtual subjects from 0 to 4.5 cm for hpo and h (ie, coefficient of variation from 0% to ∼13%). From these h and hpo values, push-off averaged force, velocity, and power were estimated and used to determine individual Fv relationships of the second test.14,18,24 Simulated athletes presenting an Fv relationship with r2 <.95 were removed, this liberal threshold corresponding to typical experimental guidelines.
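As an illustration of this procedure, the following minimal sketch reproduces its logic for a single virtual athlete: Monte Carlo noise on h and hpo, a linear Fv fit, and r² screening. It assumes the simple-method equations of Samozino et al24 for mean push-off force and velocity; the athlete parameters, noise grid, seed, and 2000-repetition count are illustrative choices, not the authors’ exact implementation.

```python
"""Minimal Monte Carlo sketch: propagate random noise in jump height (h) and
push-off distance (hpo) through the 'simple method' (ref 24) and observe the
resulting error in FvP outputs. One virtual athlete, for brevity."""

import numpy as np
from scipy.optimize import brentq

G = 9.81
rng = np.random.default_rng(42)

def noise_free_height(f0, v0, m, load, hpo):
    """Solve for the h at which the simple-method mean force equals the force
    prescribed by the athlete's linear Fv relationship, F = f0*m*(1 - v/v0)."""
    total_mass = m + load
    def residual(h):
        v_mean = np.sqrt(2 * G * h) / 2            # mean push-off velocity (m/s)
        f_simple = total_mass * G * (h / hpo + 1)  # simple-method mean force (N)
        return f_simple - f0 * m * (1 - v_mean / v0)
    return brentq(residual, 1e-4, 1.0)

def fit_fv(h, hpo, m, loads):
    """Compute (force, velocity) per load, fit the linear Fv relationship, and
    return F0 (N), v0 (m/s), Pmax (W), SFv (N.s/m), and r2 of the fit."""
    v = np.sqrt(2 * G * h) / 2
    f = (m + loads) * G * (h / hpo + 1)
    slope, intercept = np.polyfit(v, f, 1)
    r2 = np.corrcoef(v, f)[0, 1] ** 2
    return intercept, -intercept / slope, -intercept**2 / (4 * slope), slope, r2

# Illustrative athlete: F0 = 30 N/kg, v0 = 4 m/s, hpo = 0.35 m, body mass 75 kg
m, f0, v0, hpo = 75.0, 30.0, 4.0, 0.35
loads = np.array([0.0, 0.25, 0.50, 0.75, 1.00]) * m   # 0% to 100% body mass
h_true = np.array([noise_free_height(f0, v0, m, L, hpo) for L in loads])

for sd in (0.005, 0.010, 0.020, 0.045):               # input noise SD (m)
    v0_estimates = []
    for _ in range(2000):
        h_noisy = np.clip(h_true + rng.normal(0, sd, h_true.shape), 0.01, None)
        hpo_noisy = hpo + rng.normal(0, sd)
        _, v0_est, _, _, r2 = fit_fv(h_noisy, hpo_noisy, m, loads)
        if r2 >= 0.95:                                # typical acceptance threshold
            v0_estimates.append(v0_est)
    cv = 100 * np.std(v0_estimates) / np.mean(v0_estimates)
    print(f"input SD {100 * sd:.1f} cm -> v0 CV ~{cv:.1f}% "
          f"({len(v0_estimates)}/2000 fits kept)")
```

The qualitative pattern (output CV inflating and more fits being rejected as input noise grows) is the point of interest here; exact numbers depend on the assumed athlete and protocol.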
The simulations showed that the linear fit quality of the Fv regression decreased when variability in h and hpo increased: hpo and h variability < ∼4% to 5% seems acceptable to guarantee well-fitted Fv relationships, with <10% of Fv relationships classified at first glance as unacceptable (ie, r² < .95; Figure 2G, 2H, and 2I). Once the nonacceptable Fv relationships were removed, the simulations showed that, when aiming to estimate FvP outputs with a typical error <10% (ie, a common threshold in sport science), the noise in hpo and h must be < ∼4% to 5% (ie, both ∼1 to 1.2 cm; Figures 2 and 3). Unsurprisingly, h and hpo variability in the 3 aforementioned studies is clearly higher than these theoretical thresholds. Consequently, the inflated variability in output variables is partly caused by input variabilities that are too high to infer FvP variables, and not only by the FvP profile concept in jumping and the associated method per se. The recent study by Fessl et al19 provides practical support for these simulation results, with less task-habituated athletes presenting greater variability in both h and FvP variables. Specifically, ski-jumping athletes, exhibiting variability in h <4.5% across different loads, presented variability in FvP variables <6.5%, whereas inexperienced sports students, exhibiting 3.0% to 8.3% variability in h, presented variability in FvP variables from 3.9% to 10.9% (Tables 1 and 2). Importantly, errors that would otherwise be acceptable when examining h in isolation can directly contribute to unacceptable error when extrapolating FvP variables.
Relevance of Concepts, Validity of Methods, or Reliability of Measurements
More generally, the distinction between insufficient reliability of data measurements, validity of models or testing methods, and relevance of biomechanical and physiological concepts applies whatever physical quality is evaluated. For example, the earlier observations for FvP in jumping apply to sprint running12,25: should split times or instantaneous velocity be measured unreliably, FvP output variables will be similarly unreliable, and vice versa. In the same manner, VO2max will appear unreliable with unreliable gas exchange measurements (eg, an uncalibrated device, athletes not producing maximal effort, or other imprecise testing procedures), yet it would be misleading to conclude that the concept of VO2max itself is unreliable or flawed and subsequently broadly caution against its use for athletic testing. Nevertheless, it is worth noting that biological and technical/random variability is greater for mechanical outputs of the neuromuscular system estimated from several measurements performed in several conditions than for kinematic data characterizing the task configuration and the effort performed by the athlete in each isolated condition (in this example, the FvP variables and hpo and h, respectively). Moreover, whatever the methods used to determine the FvP relationship (eg, kinetic or kinematic measurements, dynamics or inverse dynamics approaches), greater variability is observed in force/acceleration data than in velocity/position data. However, despite often inevitable error inflation, kinetic data characterizing the force production capacities of the neuromuscular system allow a more detailed exploration of the factors underlying performance than kinematic data alone. Consequently, the interest of such approaches lies in balancing the value of the added insight against the magnified error. Although the interest of FvP relationship variables, in comparison with jump height alone, has been supported for comparing athletes’ force production capacities or for individualizing training,26 experimenters should nonetheless be cautious about variability in input measurements. If not, the potential gain in information will be outweighed by the magnified error, as seems to be the case in the different studies that concluded that FvP relationships are unreliable in general.2–4 The problem is not the FvP relationships themselves or the methods used but an unfavorable balance between measurement variability and the level of insight targeted.
Practical Applications
Before challenging concepts, approaches, models, or methods, balanced recommendations should list all sources of measurement uncertainty and provide suggestions to improve subsequent interventions. For FvP testing in jumping, noise can be reduced by averaging several trials per loading condition (or considering the best of several trials); increasing the velocity range explored and the number of experimental conditions used to draw the Fv relationship; using live feedback during testing to control the quality (r2) of the Fv relationship and redo incorrect trials (eg, by examining points with large residuals, notably if r2 < .95); following a warm-up that includes the range of loads to be used during measurement; and thoroughly standardizing the starting position (eg, with elastic bands or hard supports). More importantly, all participants must be thoroughly accustomed to maximal-intent loaded jumps19 and follow stringent technical criteria (eg, no countermovement during squat jumps; vertical motion only and landing with fully extended lower limbs when using flight time) to validate each trial. External encouragement should be maintained to ensure maximal effort while keeping the number of trials/loads low enough to avoid fatigue. When these different methodological considerations are scrupulously respected, h reliability of 2% to 5% within or between sessions can be observed,19,27–30 as well as acceptable reliability in FvP variables.8,13,15,19
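As one concrete way to implement the live-feedback suggestion above, the short sketch below refits the Fv relationship as loading conditions are completed, reports r², and flags the point with the largest residual as a candidate for a redo. It is a generic illustration; the function and variable names are ours, not from any cited software.

```python
"""Live quality-control sketch for Fv testing: after each completed load,
refit the linear Fv relationship, report r2, and flag the loading condition
with the largest residual as a candidate to redo. Illustrative only."""

import numpy as np

def check_fv_quality(force, velocity, r2_threshold=0.95):
    """force (N) and velocity (m/s): mean push-off values, one per load."""
    force, velocity = np.asarray(force), np.asarray(velocity)
    slope, intercept = np.polyfit(velocity, force, 1)
    residuals = force - (slope * velocity + intercept)
    r2 = np.corrcoef(velocity, force)[0, 1] ** 2
    worst = int(np.argmax(np.abs(residuals)))
    if r2 < r2_threshold:
        print(f"r2 = {r2:.3f} < {r2_threshold:.2f}: consider redoing "
              f"load {worst + 1} (residual {residuals[worst]:+.0f} N)")
    else:
        print(f"r2 = {r2:.3f}: Fv relationship acceptable")
    return r2, residuals

# Example with made-up mean force/velocity values from 5 loading conditions
check_fv_quality(force=[2200, 1950, 1800, 1400, 1150],
                 velocity=[0.9, 1.3, 1.6, 2.1, 2.5])
```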
Some validated field methods do not require costly laboratory devices or complex data processing and can easily be applied outside the laboratory. However, these “simple” methods cannot escape the influence of biological variability, measurement noise, methodological differences in testing conditions, or unfamiliarized participants. Regardless of the perceived simplicity of a method, testing requires rigorous standardization of procedures, timing (eg, day, week, month, season), and athlete and operator familiarization. These requirements are even more important when determining integrative indexes based on several measurements, conditions, or model assumptions. Otherwise, the outputs will, at best, lack utility for practice and, at worst, risk being misleading or counterproductive for health, performance, and science. In other words, whatever the cooking method, it is impossible to create a tasty dish from spoiled ingredients.
Conclusion
Overlooking potential sources of error is an ever-present risk in reliability research. Unfortunately, the conclusions of several recent articles assessing the reliability of FvP relationship testing focus almost exclusively on the basic concepts themselves and not on the reliability of the measurements and model inputs. Like many neuromuscular or physiological profiling methods, the FvP relationship, being based on different measurements obtained in several conditions, is unreliable when the input measurements (notably jumping impulse/height) are imprecise or when the testing conditions are insufficiently standardized (eg, push-off distance, participants’ familiarization with testing conditions), independent of the method used. Because mechanical outputs of the neuromuscular system are estimated from different measurements and conditions, measurement inaccuracies are magnified in FvP profiling. Consequently, extra caution is required to ensure precise data acquisition and, accordingly, acceptable output data. Finally, “simple” field-based measures and laboratory methods require the same methodological rigor, which does not imply that field testing is simple. Simple does not mean easy.
References
1. Hill AV. The heat of shortening and the dynamic constants of muscle. Proc R Soc Lond B Biol Sci. 1938;126(843):136–195. doi:10.1098/rspb.1938.0050
2. Lindberg K, Solberg P, Bjornsen T, et al. Force-velocity profiling in athletes: reliability and agreement across methods. PLoS One. 2021;16(2):e0245791. doi:10.1371/journal.pone.0245791
3. Valenzuela PL, Sánchez-Martínez G, Torrontegi E, Vázquez-Carrion J, Montalvo Z, Haff GG. Should we base training prescription on the force–velocity profile? Exploratory study of its between-day reliability and differences between methods. Int J Sports Physiol Perform. 2021;16(7):1001–1007. doi:10.1123/ijspp.2020-0308
4. Kotani Y, Lake J, Guppy SN, et al. Reliability of the squat jump force-velocity and load-velocity profiles. J Strength Cond Res. Published online May 5, 2021. doi:10.1519/JSC.0000000000004057
5. Jaric S. Force-velocity relationship of muscles performing multi-joint maximum performance tasks. Int J Sports Med. 2015;36(9):699–704. doi:10.1055/s-0035-1547283
6. Thorstensson A, Grimby G, Karlsson J. Force-velocity relations and fiber composition in human knee extensor muscles. J Appl Physiol. 1976;40(1):12–16. doi:10.1152/jappl.1976.40.1.12
7. Sargeant AJ, Hoinville E, Young A. Maximum leg force and power output during short-term dynamic exercise. J Appl Physiol Respir Environ Exerc Physiol. 1981;51:1175–1182.
8. Cuk I, Markovic M, Nedeljkovic A, Ugarkovic D, Kukolj M, Jaric S. Force-velocity relationship of leg extensors obtained from loaded and unloaded vertical jumps. Eur J Appl Physiol. 2014;114(8):1703–1714. doi:10.1007/s00421-014-2901-2
9. Yamauchi J, Ishii N. Relations between force-velocity characteristics of the knee-hip extension movement and vertical jump performance. J Strength Cond Res. 2007;21(3):703–709. doi:10.1519/R-20516.1
10. Garcia-Ramos A, Jaric S, Padial P, Feriche B. Force-velocity relationship of upper body muscles: traditional versus ballistic bench press. J Appl Biomech. 2016;32(2):178–185. doi:10.1123/jab.2015-0162
11. Sprague RC, Martin JC, Davidson CJ, Farrar RP. Force-velocity and power-velocity relationships during maximal short-term rowing ergometry. Med Sci Sports Exerc. 2007;39(2):358–364. doi:10.1249/01.mss.0000241653.37876.73
12. Morin JB, Samozino P, Murata M, Cross MR, Nagahara R. A simple method for computing sprint acceleration kinetics from running velocity data: replication study with improved design. J Biomech. 2019;94:82–87. doi:10.1016/j.jbiomech.2019.07.020
13. Garcia-Ramos A, Jaric S, Perez-Castilla A, Padial P, Feriche B. Reliability and magnitude of mechanical variables assessed from unconstrained and constrained loaded countermovement jumps. Sports Biomech. 2017;16(4):514–526. doi:10.1080/14763141.2016.1246598
14. Giroux C, Rabita G, Chollet D, Guilhem G. What is the best method for assessing lower limb force-velocity relationship? Int J Sports Med. 2015;36:143–149. doi:10.1055/s-0034-1385886
15. Janicijevic D, Knezevic OM, Mirkov DM, et al. Assessment of the force-velocity relationship during vertical jumps: influence of the starting position, analysis procedures and number of loads. Eur J Sport Sci. 2020;20(5):614–623. doi:10.1080/17461391.2019.1645886
16. Jimenez-Reyes P, Samozino P, Pareja-Blanco F, et al. Validity of a simple method for measuring force-velocity-power profile in countermovement jump. Int J Sports Physiol Perform. 2017;12(1):36–43. doi:10.1123/ijspp.2015-0484
17. Samozino P, Rejc E, Di Prampero PE, Belli A, Morin JB. Optimal force-velocity profile in ballistic movements—altius: citius or fortius? Med Sci Sports Exerc. 2012;44(2):313–322. doi:10.1249/MSS.0b013e31822d757a
18. Garcia-Ramos A, Perez-Castilla A, Morales-Artacho AJ, et al. Force-velocity relationship in the countermovement jump exercise assessed by different measurement methods. J Hum Kinet. 2019;67:37–47. doi:10.2478/hukin-2018-0085
19. Fessl I, Wiesinger HP, Kroll J. Power-force-velocity profiling as a function of used loads and task experience. Int J Sports Physiol Perform. 2022;17(5):694–700. doi:10.1123/ijspp.2021-0325
20. Lindberg K, Solberg P, Ronnestad BR, et al. Should we individualize training based on force-velocity profiling to improve physical performance in athletes? Scand J Med Sci Sports. 2021;31(12):2198–2210. doi:10.1111/sms.14044
21. Jimenez-Reyes P, Samozino P, Brughelli M, Morin JB. Effectiveness of an individualized training based on force-velocity profiling during jumping. Front Physiol. 2016;7:677. doi:10.3389/fphys.2016.00677
22. Simpson A, Waldron M, Cushion E, Tallent J. Optimised force-velocity training during pre-season enhances physical performance in professional rugby league players. J Sports Sci. 2021;39(1):91–100. doi:10.1080/02640414.2020.1805850
23. Hicks DS, Drummond C, Williams KJ. Measurement agreement between Samozino’s method and force plate force-velocity profiles during barbell and hexbar countermovement jumps. J Strength Cond Res. Published online October 15, 2021. doi:10.1519/JSC.0000000000004144
24. Samozino P, Morin JB, Hintzy F, Belli A. A simple method for measuring force, velocity and power output during squat jump. J Biomech. 2008;41(14):2940–2945. doi:10.1016/j.jbiomech.2008.07.028
25. Samozino P, Rabita G, Dorel S, et al. A simple method for measuring power, force, velocity properties, and mechanical effectiveness in sprint running. Scand J Med Sci Sports. 2016;26(6):648–658. doi:10.1111/sms.12490
26. Morin JB, Jimenez-Reyes P, Brughelli M, Samozino P. When jump height is not a good indicator of lower limb maximal power output: theoretical demonstration, experimental evidence and practical solutions. Sports Med. 2019;49(7):999–1006. doi:10.1007/s40279-019-01073-1
27. Balsalobre-Fernandez C, Glaister M, Lockey RA. The validity and reliability of an iPhone app for measuring vertical jump performance. J Sports Sci. 2015;33(15):1574–1579. doi:10.1080/02640414.2014.996184
28. Markovic G, Dizdar D, Jukic I, Cardinale M. Reliability and factorial validity of squat and countermovement jump tests. J Strength Cond Res. 2004;18(3):551–555. doi:10.1519/1533-4287(2004)18<551:RAFVOS>2.0.CO;2
29. Carroll KM, Wagle JP, Sole CJ, Stone MH. Intrasession and intersession reliability of countermovement jump testing in division-I volleyball athletes. J Strength Cond Res. 2019;33(11):2932–2935. doi:10.1519/JSC.0000000000003353
30. Vieira A, Blazevich AJ, Da Costa AS, Tufano JJ, Bottaro M. Validity and test-retest reliability of the Jumpo app for jump performance measurement. Int J Exerc Sci. 2021;14(7):677–686. PubMed ID: 34567382