Search Results

You are looking at 1–10 of 16 items for

  • Author: Alexander H.K. Montoye
Restricted access

Heart Rate Alters, But Does Not Improve, Calorie Predictions in Fitbit Activity Monitors

Alexander H.K. Montoye, John Vusich, John Mitrzyk, and Matt Wiersma

Background: Consumer-based activity monitors use accelerometers to estimate Calories (kcals), but it is unknown whether monitors that measure heart rate (HR) use HR in kcal prediction. Purpose: To determine whether kcal estimates differ between Fitbits that measure HR and those that do not. Methods: Participants (n = 23) wore five Fitbits and performed nine activities for five minutes each, split into four groupings (G1: sitting, standing, cycling at 50–150 W; G2: level (0%) and inclined (10%) walking at 1.1 m/s; G3: level (0%) and inclined (10%) walking at 1.4 m/s; G4: level (0%) and inclined (3%) jogging at 2.2–4.5 m/s) in the laboratory. Three Fitbits (Blaze, Charge HR, Alta HR) assessed steps, HR, and kcals, and two Fitbits (Alta, Flex2) assessed steps and kcals. Step, HR, and kcal data from the Fitbits were compared to criterion measures and between Fitbits measuring HR and Fitbits without HR. Results: Fitbits with HR had significantly higher kcal predictions (10.5–23.8% higher, p < .05) during inclined compared to level activities in G2–G4, whereas Fitbits without HR had similar kcal estimates between level and inclined activities. Mean absolute percent errors for kcal predictions were similar for Fitbits measuring HR (33.7–38.3%) and Fitbits without HR (32.4–36.6%). Conclusion: Fitbits measuring HR appear to use HR when predicting kcals. However, compared to criterion measures, kcal prediction accuracy was similarly poor for Fitbits with and without HR.
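
As context for the error metric reported in this abstract, below is a minimal sketch of how mean absolute percent error (MAPE) for device kcal estimates might be computed against a criterion measure; the function name and numbers are illustrative and are not data from the study.

```python
import numpy as np

def mape(device_kcals, criterion_kcals):
    """Mean absolute percent error of device estimates vs. a criterion measure."""
    device_kcals = np.asarray(device_kcals, dtype=float)
    criterion_kcals = np.asarray(criterion_kcals, dtype=float)
    return np.mean(np.abs(device_kcals - criterion_kcals) / criterion_kcals) * 100

# Illustrative values only (not study data): kcals for three activity bouts.
fitbit_hr = [38.0, 45.2, 29.5]   # hypothetical Fitbit-with-HR estimates
criterion = [30.1, 36.8, 27.4]   # hypothetical criterion (e.g., indirect calorimetry)
print(f"MAPE: {mape(fitbit_hr, criterion):.1f}%")
```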

Restricted access

Systematic Review of Accelerometer Responsiveness to Change for Measuring Physical Activity, Sedentary Behavior, or Sleep

Kimberly A. Clevenger and Alexander H.K. Montoye

Measurement of 24-hr movement behaviors is important for assessing adherence to guidelines, participation trends over time, group differences, and whether health-promoting interventions are successful. For a measurement tool to be useful, it must be valid, reliable, and able to detect change, the latter being a measurement property called responsiveness, sensitivity to change, or longitudinal validity. We systematically reviewed literature on the responsiveness of accelerometers for detecting change in 24-hr movement behaviors. Databases (PubMed, Scopus, and EBSCOhost) were searched for peer-reviewed papers published in English between 1998 and 2023. Quality/risk of bias was assessed using a customized tool. This study is registered at https://osf.io/qrn8a. Twenty-six papers met the inclusion/exclusion criteria, with an overall sample of 1,939 participants; narrative synthesis was used. Most studies focused on adults (n = 21), and almost half (n = 12) included individuals with specific medical conditions. Studies primarily took place in free-living settings (n = 21) and used research-grade accelerometers (n = 24) worn on the hip (n = 18), thigh (n = 7), or wrist (n = 9). Outcomes included physical activity (n = 19), sedentary time/behavior (n = 12), or sleep (n = 2) and were calculated using proprietary formulas (e.g., the Fitbit algorithm), cut points, and/or count-based methods. Most studies calculated responsiveness by comparing before versus after an intervention (n = 16). Six studies included a criterion measure to confirm that changes occurred. Limited research is available on the responsiveness of accelerometers for detecting change in 24-hr movement behaviors, particularly in youth populations, for sleep outcomes, and for commercial and thigh- or wrist-worn devices. The lack of a criterion measure precludes conclusions about responsiveness even in more frequently studied outcomes/populations.
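
The review does not prescribe a single responsiveness statistic, but one commonly used index for before-versus-after designs is the standardized response mean (mean change divided by the standard deviation of change). A minimal sketch with hypothetical pre/post values, offered only as an illustration of the concept:

```python
import numpy as np

def standardized_response_mean(pre, post):
    """SRM = mean(change) / SD(change); one common responsiveness index."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)

# Hypothetical daily MVPA minutes before and after an intervention (not study data).
pre_mvpa = [22, 35, 18, 40, 27]
post_mvpa = [30, 41, 25, 44, 33]
print(f"SRM: {standardized_response_mean(pre_mvpa, post_mvpa):.2f}")
```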

Restricted access

Utility of Activity Monitors and Thermometry in Assessing Sleep Stages and Sleep Quality

Natashia Swalve, Brianna Harfmann, John Mitrzyk, and Alexander H.K. Montoye

Activity monitors provide an inexpensive and convenient way to measure sleep, yet relatively few studies have validated these devices for measures of sleep quality or sleep stages or examined whether other measures, such as thermometry, could inform their accuracy. The purpose of this study was to compare one research-grade and four consumer-grade activity monitors on measures of sleep quality (sleep efficiency, sleep onset latency, and wake after sleep onset) and sleep stages (awake, sleep, light, deep, REM) against an electroencephalography criterion. The use of a skin temperature device was also explored to ascertain whether skin temperature monitoring might provide additional data to increase the accuracy of sleep determination. Twenty adults stayed overnight in a sleep laboratory during which sleep was assessed using electroencephalography and compared to data concurrently collected by five activity monitors (research-grade: ActiGraph GT9X Link; consumer-grade: Fitbit Charge HR, Fitbit Flex, Jawbone UP4, Misfit Flash) and a skin temperature sensor (iButton). The majority of the consumer-grade devices overestimated total sleep time and sleep efficiency while underestimating sleep onset latency, wake after sleep onset, and number of awakenings during the night, with similar results seen for the research-grade device. The Jawbone UP4 performed better than both the other consumer-grade devices and the research-grade device, having high levels of agreement overall and in epoch-by-epoch sleep stage data. Changes in temperature were moderately correlated with sleep stages, suggesting that the addition of skin temperature could increase the validity of activity monitors in sleep measurement.
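
For readers unfamiliar with epoch-by-epoch analysis, the sketch below shows one common way device sleep/wake scoring is summarized against an EEG criterion (accuracy, sensitivity to sleep, specificity to wake); the epoch labels are hypothetical and the metrics are not necessarily those reported in the study.

```python
import numpy as np

def epoch_agreement(device, criterion):
    """Epoch-by-epoch accuracy, sensitivity (sleep), and specificity (wake).

    Both inputs are sequences of 1 (sleep) / 0 (wake), one value per epoch.
    """
    device = np.asarray(device)
    criterion = np.asarray(criterion)
    accuracy = np.mean(device == criterion)
    sensitivity = np.mean(device[criterion == 1] == 1)  # sleep scored as sleep
    specificity = np.mean(device[criterion == 0] == 0)  # wake scored as wake
    return accuracy, sensitivity, specificity

# Hypothetical 10 epochs scored by a monitor and by EEG (not study data).
monitor = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]
eeg     = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
acc, sens, spec = epoch_agreement(monitor, eeg)
print(f"accuracy={acc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```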

Restricted access

Accuracy of Physical Activity Monitors for Steps and Calorie Measurement During Pregnancy Walking

Alexander H.K. Montoye, Jordana Dahmen, Nigel Campbell, and Christopher P. Connolly

Purpose: The purpose of this study was to validate consumer-based and research-grade physical activity (PA) monitors for step counting and Calorie expenditure during treadmill walking. Methods: Participants (n = 40, 24 in the second trimester and 16 in the third trimester) completed five 2-minute walking activities (1.5–3.5 miles/hour in 0.5 mile/hour increments) while wearing five PA monitors (right hip: ActiGraph Link [AG]; left hip: Omron HJ-720 [OM]; left front pants pocket: New Lifestyles NL 2000 [NL]; non-dominant wrist: Fitbit Flex [FF]; right ankle: StepWatch [SW]). Mean absolute percent error (MAPE) was used to determine device accuracy for step counting (all monitors) and Calorie expenditure (AG with Freedson equations and FF) compared to criterion measures (hand tally for steps, indirect calorimetry for Calories). Results: For step counting, the SW had MAPE ≤ 10% at all walking speeds, and the OM and NL had MAPE ≤ 10% at all speeds except 1.5 miles/hour. The AG had MAPE ≤ 10% only at 3.0–3.5 miles/hour, and the FF had high MAPE at all speeds. For Calories, the FF and AG had MAPE > 10% at all speeds, with the FF overestimating Calories expended. Trimester did not affect PA monitor accuracy for step counting but did affect accuracy for Calorie expenditure. Conclusion: The ankle-worn SW and hip-worn OM had high accuracy for measuring step counts at all treadmill walking speeds, whereas the NL had high accuracy at speeds ≥2.0 miles/hour. Conversely, the monitors tested for Calorie expenditure had poor accuracy, and their estimates should be interpreted cautiously for walking behavior.
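
To illustrate how step-count accuracy can be summarized per walking speed, the sketch below groups hypothetical device and hand-tally step counts by speed and computes MAPE within each speed; the column names and values are illustrative only, not study data.

```python
import pandas as pd

# Hypothetical trial-level data: steps recorded per 2-min walking bout.
trials = pd.DataFrame({
    "speed_mph": [1.5, 1.5, 2.0, 2.0, 3.5, 3.5],
    "device_steps": [150, 148, 200, 210, 260, 255],
    "tally_steps":  [170, 165, 205, 208, 262, 258],  # hand-tally criterion
})

# MAPE per walking speed: mean of |device - criterion| / criterion within each speed.
trials["ape"] = (trials["device_steps"] - trials["tally_steps"]).abs() / trials["tally_steps"] * 100
print(trials.groupby("speed_mph")["ape"].mean().rename("MAPE_%"))
```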

Restricted access

Understanding Physical Behaviors During Periods of Accelerometer Wear and Nonwear in College Students

Alexander H.K. Montoye, Kimberly A. Clevenger, Benjamin D. Boudreaux, and Michael D. Schmidt

Accelerometers are increasingly used to measure 24-hr movement behaviors but are sometimes removed intermittently (e.g., for sleep or bathing), resulting in missing data. This study compared physical behaviors between times a hip-placed accelerometer was worn versus not worn in a college student sample. Participants (n = 115) wore a hip-placed ActiGraph during waking times and a thigh-placed activPAL continuously for at least 7 days (mean ± SD 7.5 ± 1.1 days). Thirteen nonwear algorithms determined ActiGraph nonwear; days included in the analysis had to have at least 1 min in which the ActiGraph was classified as nonwear while the participant was classified as awake by the activPAL. activPAL data for steps, time in sedentary behaviors (SB), light-intensity physical activity (LPA), and moderate- to vigorous-intensity physical activity (MVPA) from ActiGraph wear times were then compared with activPAL data from ActiGraph nonwear times. Participants took more steps (10.2–11.8 steps/min) and had higher proportions of MVPA (5.0%–5.9%) during ActiGraph wear time than nonwear time (3.1–8.0 steps/min, 0.8%–1.3% in MVPA). Effects were variable for SB (62.6%–66.9% of wear, 45.5%–76.2% of nonwear) and LPA (28.2%–31.5% of wear, 23.0%–53.2% of nonwear) depending on the nonwear algorithm. Rescaling to a 12-hr day reduced SB and LPA error but increased MVPA error. Requiring a minimum wear time (e.g., 600 min/day) reduced error but resulted in 10%–22% of days being removed as invalid. In conclusion, missing data had minimal effect on MVPA but resulted in underestimation of SB and LPA. Strategies like scaling SB and LPA, but not MVPA, may improve physical behavior estimates from incomplete accelerometer data.
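
The 13 nonwear algorithms are not enumerated in this abstract, but many hip-worn accelerometer nonwear rules are variants of detecting long runs of consecutive zero counts. The sketch below is a simplified version of that idea (a fixed 60-min zero-count window with no tolerance for interruptions) and is not necessarily one of the algorithms tested.

```python
import numpy as np

def simple_nonwear(counts_per_min, window=60):
    """Flag minutes inside any run of >= `window` consecutive zero counts as nonwear.

    Simplified illustration of a zero-count nonwear rule; real algorithms
    typically allow short nonzero interruptions and add other refinements.
    """
    counts = np.asarray(counts_per_min)
    nonwear = np.zeros(len(counts), dtype=bool)
    run_start = None
    for i, c in enumerate(list(counts) + [1]):  # sentinel ends a trailing zero run
        if c == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= window:
                nonwear[run_start:i] = True
            run_start = None
    return nonwear

# Hypothetical data: 90 min of zeros, 30 min of activity, 20 min of zeros.
minutes = np.concatenate([np.zeros(90), np.full(30, 500), np.zeros(20)])
print(simple_nonwear(minutes).sum(), "min flagged as nonwear")  # -> 90
```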

Restricted access

Validity of a Wrist-Worn Activity Monitor During Resistance Training Exercises at Different Movement Speeds

Scott A. Conger, Alexander H.K. Montoye, Olivia Anderson, Danielle E. Boss, and Jeremy A. Steeves

Speed of movement has been shown to affect the validity of physical activity (PA) monitors during locomotion. Speed of movement may also affect the validity of accelerometer-based PA monitors during other types of exercise. Purpose: To assess the ability of the Atlas Wearables Wristband2 (a PA monitor developed specifically for resistance training [RT] exercise) to identify the individual RT exercise type and count repetitions during RT exercises at various movement speeds. Methods: Fifty male and female participants completed seven sets of 10 repetitions for five different upper/lower body RT exercises while wearing a Wristband2 on the left wrist. Each set was completed at a different metronome-paced speed, ranging from a slow speed of 4 sec·rep−1 to a fast speed of 1 sec·rep−1. Repeated-measures ANOVAs were used to compare exercise type identification and the number of repetitions counted among the seven speeds. Mean absolute percent error (MAPE) and bias were calculated for repetition counting. Results: For each exercise, there tended to be significant differences between the slower speeds and the fastest speed for activity type identification and repetition counting (p < .05). Across all exercises, the highest accuracy for activity type identification (91 ± 1.8% correct overall) and repetition counting (8.77 ± 0.17 of 10 reps overall) and the lowest MAPE (14 ± 1.7% overall) and bias (−1.23 ± 0.17 reps overall) occurred at the 1.5 sec·rep−1 speed (the second fastest speed tested). Conclusions: The validity of the Atlas Wearables Wristband2 for identifying exercise type and counting repetitions varied based on the speed of movement during RT exercises.
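
As an illustration of the analysis described, the sketch below runs a one-way repeated-measures ANOVA across speed conditions using statsmodels' AnovaRM; the table contents, column names, and number of participants are hypothetical and much smaller than in the study.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: repetitions counted per participant at each speed.
data = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "speed_sec_per_rep": ["4", "2", "1", "4", "2", "1", "4", "2", "1"],
    "reps_counted": [10, 9, 7, 10, 10, 8, 9, 9, 6],
})

# One-way repeated-measures ANOVA: do counted repetitions differ by speed?
result = AnovaRM(data, depvar="reps_counted", subject="participant",
                 within=["speed_sec_per_rep"]).fit()
print(result.anova_table)
```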

Restricted access

Physical Activity Monitor Accuracy for Overground Walking and Free-Living Conditions Among Pregnant Women

Christopher P. Connolly, Jordana Dahmen, Robert D. Catena, Nigel Campbell, and Alexander H.K. Montoye

Purpose: We aimed to determine the step-count validity of commonly used physical activity monitors for overground walking during pregnancy and during free-living conditions. Methods: Participants (n = 39, 12–38 weeks gestational age) completed six 100-step overground walking trials (three at a self-selected “normal pace”, three at a “brisk pace”) while wearing five physical activity monitors: Omron HJ-720 (OM), New Lifestyles 2000 (NL), Fitbit Flex (FF), ActiGraph Link (AG), and Modus StepWatch (SW). For each walking trial, monitor-recorded steps and criterion-measured steps were assessed. Participants also wore all activity monitors for an extended free-living period (72 hours), with the SW used as the criterion device. Mean absolute percent error (MAPE) was calculated for the overground walking and free-living protocols and compared across monitors. Results: For overground walking, the OM, NL, and SW performed well (<5% MAPE) for normal and brisk pace walking trials, and also when trials were analyzed by actual speeds. The AG and FF had significantly greater MAPE for overground walking trials (11.9–14.7%). Trimester affected device accuracy to some degree for the AG, FF, and SW, with error being lower in the third trimester than in the second. For the free-living period, the OM, NL, AG, and FF significantly underestimated (>32% MAPE) actual steps taken per day as measured by the criterion SW (M [SD] = 9,350 [3,910]). MAPE for the OM was particularly high (45.3%). Conclusion: The OM, NL, and SW monitors are valid measures for overground step counting during pregnancy walking. However, the OM and NL significantly underestimate steps taken by second- and third-trimester pregnant women in free-living conditions.

Free access

Reactions From the Experts: Implications of Open-Source ActiGraph Counts for Analyzing Accelerometer Data

Alexander H.K. Montoye, Samuel R. LaMunion, Jan C. Brønd, and Kimberly A. Clevenger

In 2022, it became possible to produce ActiGraph counts from raw accelerometer data without the use of ActiLife software. This supports the availability and use of transparent, open-source methods for producing physical behavior outcomes from accelerometer data. However, questions remain regarding the implications of the availability of open-source ActiGraph counts. This Expert Question and Answer paper solicited and summarized feedback from several noted physical behavior measurement experts on five questions related to open-source counts. The experts agreed that open-source, transparent, and translatable methods help with harmonization of accelerometer methods. However, there were mixed views on the importance of open-source counts and their place in the field moving forward. This Expert Question and Answer provides initial feedback, but additional research, both within this special issue and beyond, will help inform whether and how open-source counts are accepted and adopted for device-based physical behavior assessment.
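
As an illustration of the open-source pipeline under discussion, the sketch below generates 60-s activity counts from synthetic raw triaxial acceleration using ActiGraph's open-source agcounts Python package; the signal is simulated, and the exact function signature should be confirmed against the package documentation.

```python
import numpy as np
# Assumes ActiGraph's open-source `agcounts` package (pip install agcounts);
# check the package docs for the exact function signature and supported sample rates.
from agcounts.extract import get_counts

# Synthetic raw acceleration: 5 min of triaxial data at 30 Hz, in g units.
fs = 30
raw = np.random.normal(loc=0.0, scale=0.05, size=(fs * 60 * 5, 3))
raw[:, 2] += 1.0  # approximate gravity on one axis

# Generate activity counts in 60-s epochs from the raw signal.
counts = get_counts(raw, freq=fs, epoch=60)
print(counts.shape)  # one row per 60-s epoch, one column per axis
```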

Restricted access

Comparability of 24-hr Activity Cycle Outputs From ActiGraph Counts Generated in ActiLife and RStudio

Alexander H.K. Montoye, Kimberly A. Clevenger, Benjamin D. Boudreaux, and Michael D. Schmidt

Data from ActiGraph accelerometers have long been imported into ActiLife software, where the company’s proprietary “activity counts” are generated in order to derive physical behavior metrics. In 2022, ActiGraph released an open-source method to generate activity counts from any raw, triaxial accelerometer data using Python, which has since been translated into RStudio packages. However, it is unclear whether outcomes are comparable when generated in ActiLife and RStudio. Therefore, this technical note systematically compared activity counts and related physical behavior metrics generated from ActiGraph accelerometer data using ActiLife or available packages in RStudio and provides example code to ease implementation of such analyses in RStudio. In addition to comparing triaxial activity counts, physical behavior outputs (sleep, sedentary behavior, light-intensity physical activity, and moderate- to vigorous-intensity physical activity) were compared using multiple nonwear algorithms, epochs, cut points, sleep scoring algorithms, and accelerometer placement sites. Activity counts and physical behavior outcomes were largely the same between ActiLife and the tested packages in RStudio. However, peculiarities in how nonwear algorithms handle the first and last portions of a data file (i.e., partial first or last days of data collection), differences in rounding, and the handling of count values on the borderline between activity intensities resulted in small but inconsequential differences in some files. The hope is that researchers and both hardware and software manufacturers continue to push toward transparency in data analysis and interpretation, which will enhance comparability across devices and studies and help advance fields examining links between physical behavior and health.
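
As an example of the cut-point step mentioned above, the sketch below classifies 60-s vertical-axis counts into intensity categories using the widely cited Freedson (1998) adult thresholds; these are one common choice and not necessarily the cut points compared in the technical note.

```python
import numpy as np

def classify_freedson(counts_per_min):
    """Classify 60-s vertical-axis counts using Freedson (1998) adult cut points.

    Sedentary < 100, light 100-1951, moderate 1952-5724, vigorous >= 5725
    counts/min (one common convention; many other cut points exist).
    """
    counts = np.asarray(counts_per_min)
    labels = np.empty(counts.shape, dtype=object)
    labels[counts < 100] = "sedentary"
    labels[(counts >= 100) & (counts < 1952)] = "light"
    labels[(counts >= 1952) & (counts < 5725)] = "moderate"
    labels[counts >= 5725] = "vigorous"
    return labels

# Hypothetical minute-level counts.
print(classify_freedson([0, 150, 2500, 6000]))
```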

Restricted access

Using the Wrist-Worn Atlas Wristband2 Monitor to Objectively Measure Resistance Training Exercises

Jeremy A. Steeves, Scott A. Conger, Joe R. Mitrzyk, Trevor A. Perry, Elise Flanagan, Alecia K. Fox, Trystan Weisinger, and Alexander H.K. Montoye

Background: Devices for monitoring physical activity have focused mainly on measuring aerobic activity; however, the 2018 Physical Activity Guidelines for Americans also recommend muscle-resistance training two or more days per week. Recently, a wrist-worn activity monitor, the Atlas Wristband2, was developed to recognize resistance training exercises. Purpose: To assess the ability of the Wristband2 to identify the type and number of repetitions of resistance training exercises when worn on the left wrist, as directed by the manufacturer, and when worn on the right wrist. Methods: While wearing monitors on both wrists, 159 participants completed a circuit-style workout consisting of two sets of 12 repetitions of 14 different resistance training exercises. Data from the monitors were used to determine classification accuracies for identifying exercise type versus direct observation. The average repetitions and mean absolute error (MAE) for repetitions were calculated for each exercise. Results: The Wristband2 classification accuracy for exercise type was 78.4 ± 2.5%, ranging from 54.7 ± 3.4% (dumbbell [DB] bench press) to 97.5 ± 1.0% (DB biceps curls), when worn on the left wrist. An average of 11.0 ± 0.2 repetitions, ranging from 9.0 ± 0.3 repetitions (DB lunges) to 11.9 ± 0.1 repetitions (push-ups), were identified. Across all exercises, MAE ranged from 0.0 to 4.6 repetitions. When worn on the right wrist, exercise type classification accuracy dropped to 24.2 ± 5.1%, and repetitions counted decreased to 8.1 ± 0.8 out of 12. Conclusions: The Wristband2, worn on the left wrist, had acceptable exercise classification and repetition counting capabilities for many of the 14 exercises used in this study and may be a useful tool to objectively track resistance training.
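
To make the reported metrics concrete, the sketch below computes per-exercise classification accuracy and mean absolute error of repetition counts from device predictions versus direct observation; the records are hypothetical and not study data.

```python
import pandas as pd

# Hypothetical set-level records (not study data).
sets = pd.DataFrame({
    "observed_exercise":  ["biceps_curl", "biceps_curl", "bench_press", "bench_press"],
    "predicted_exercise": ["biceps_curl", "biceps_curl", "squat", "bench_press"],
    "observed_reps":  [12, 12, 12, 12],
    "predicted_reps": [12, 11, 9, 12],
})

sets["correct"] = sets["observed_exercise"] == sets["predicted_exercise"]
sets["abs_err"] = (sets["predicted_reps"] - sets["observed_reps"]).abs()

# Per-exercise classification accuracy and repetition-count MAE.
summary = sets.groupby("observed_exercise").agg(
    classification_accuracy=("correct", "mean"),
    repetition_mae=("abs_err", "mean"),
)
print(summary)
```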