Physical activity (PA) and sedentary behavior (SB) are important modifiable risk factors related to a range of health conditions, including mortality, cardiovascular and metabolic disease, and cancer (Biswas et al., 2015; Ekelund et al., 2016). Objective measures, using body-worn sensors, provide a detailed and accurate assessment of the amount of PA and SB undertaken by an individual in their daily life. In large-scale studies (e.g., n > 400; Wijndaele et al., 2015), use of self-report measures of both PA and SB is frequently justified on logistic rather than measurement considerations (Dall et al., 2017; Healy et al., 2011). However, self-report measures typically overestimate PA (e.g., by 20 to 40 minutes per day; Schaller, Rudolf, Dejonghe, Grieben, & Froboese, 2016) and underestimate SB (e.g., by two to four hours per day; Dall et al., 2017) and may be measuring different constructs of physical behaviors compared with objective monitors (Troiano, McClain, Brychta, & Chen, 2014).
Using objective measurement of PA and SB in large-scale studies incurs practical challenges different from those of self-report, and often requires an informed trade-off between collecting large volumes of data and the utility and relevance of the outcomes that can be derived from such data. Costs are incurred both in terms of equipment (monitors and attachment consumables) and in terms of deployment and retrieval (staff costs, travel reimbursement, and postage; Matthews, Hagstromer, Pober, & Bowles, 2012). Data loss occurs through lack of compliance (people not wearing the monitor) and uncertainty about data utility (assurance that collected data reflect actual behavior). Data loss can result in a smaller sample size than anticipated and potential selection bias, both in terms of the demographics of those who do comply with wear protocols and in terms of which days are measured (Matthews et al., 2012).
The first large-scale study to use objective monitoring was the National Health and Nutrition Examination Survey (NHANES, 2003-2004). Using a hip-worn ActiGraph monitor, a 68% compliance rate was achieved from returned monitors, with data loss from monitors either not calibrated on return (5%) or not worn for a minimum of four valid (10-hour) days (27%; Troiano et al., 2008); this did not include data lost from monitors that were not returned. To reduce data loss, recent large-scale studies have attempted to increase compliance by opting for a wrist-worn monitor. This was successful in increasing compliance from returned monitors (UK Biobank: 93% providing three days of valid data, Doherty et al., 2017; NHANES 2011-2012: 70–80% providing six days of valid wear, Troiano et al., 2014). However, important concerns have been raised about the face validity of wrist-worn monitors and their ability to provide accurate and interpretable measures of PA and SB, in particular time spent in postural sitting (Kooiman et al., 2015; Kozey-Keadle, Libertine, Lyden, Staudenmayer, & Freedson, 2011; Rosenberger et al., 2013). Thigh-worn monitors (such as activPAL), which are able to clearly distinguish postural sitting (Kozey-Keadle et al., 2011), have previously been used in some large studies but not in population cohort studies (e.g., Walking Away from Diabetes, n = 530 providing 67% with seven days of valid wear; AusDiab, n = 782 providing 79% with seven days of valid wear; Edwardson et al., 2017). The debate is, of course, whether any potential loss of data quality from monitor wear location is justified in order to provide a larger and potentially more representative sample of free-living PA and SB, and whether compliance should be the main aspect of methodology considered worthy of investment.
Specific protocols for successful objective data collection, including a level of detail which would allow replication by other studies, and covering the entire measurement chain, are rarely published in peer-reviewed articles (Edwardson et al., 2017; Wijndaele et al., 2015). The purpose of this brief report is to share the principles and details of the objective data collection protocol of PA and SB from one study (Seniors USP [Understanding Sedentary Patterns]). The protocol relies not only on increasing adherence but also on ensuring wear and data quality.
Methods
Briefly, the Seniors USP study (Shaw et al., 2017) collected objective PA and SB (primary outcome measure) data for at least nine days (for seven-day analysis) using the activPAL3 monitor (PAL Technologies Ltd, Glasgow, UK) from older adults in three existing cohorts within longitudinal studies (Lothian Birth Cohort 1936, Deary, Gow, Pattie, & Starr, 2012; West of Scotland Twenty-07 Study, Benzeval et al., 2009). The protocol and standard operating procedures (available at http://edshare.gcu.ac.uk/view/keywords/seniors%20usp%20sops.html) implemented a coherent package, which aimed to maximize both the volume and utility of the data collected. The key characteristics of the protocol were enabling 24-hour wear, minimizing data loss, and quality assurance. These key characteristics, along with details of the methods used to achieve them, are provided in Table 1.
Table 1. Key Characteristics of the Methodology and Design of the Seniors USP Study Which Contribute to Objective Activity Data Quality and Compliance

Enabling 24-Hour Wear

Monitor selection
• A monitor and wear-location combination was selected that allows the monitor to be worn 24 hours a day for at least one week (activPAL3 [PAL Technologies, Glasgow, UK] on the front of the thigh)

Waterproofing
• Monitor heat-sealed (P200-C heat sealer [Packer, Essex, UK]) within layflat plastic tubing (75 mm wide × 150 m long × 250 gauge [Packer, Essex, UK]) to eliminate reasons why the monitor might be removed (e.g., bathing, swimming)
• Opsite Flexifix [Smith & Nephew, London, UK] waterproof dressing placed over the sealed monitor

Reduce chances of skin irritation
• Appropriate materials used: hypoallergenic adhesive pad (PALstickie [PAL Technologies, Glasgow, UK]), medical-grade waterproof dressing (Opsite Flexifix), and food-safe plastic tubing (Packer layflat tubing)

Enhance comfort
• The hypoallergenic adhesive pad (PALstickie) provided padding between the skin and the monitor/waterproof tube and reduced the likelihood of skin irritation
• Edges of the waterproof pouch were trimmed with some material to spare, to avoid hard corners that might dig into the skin

Schedule appointments to avoid removal
• Scheduling appointments when the participant knew they were having medical treatment (e.g., involving hospitalization) or were scheduled to fly (to avoid the need to remove the monitor for airport security) was actively avoided

Minimizing Data Loss

Test all monitors before starting data collection
• All monitors were tested (four researchers wore 13 monitors per leg) for multiple days before being used for data collection
• Monitors that were not functioning correctly were identified, and assistance was sought from the manufacturer

Set minimum battery level for programming
• 4.1 V (a value obtained through experience) was used as the minimum battery level for programming the monitor, to avoid data loss through the monitor stopping recording

Use wide programming times
• Programmed to start recording immediately, to allow confirmation that the monitor was collecting data (flashing green light)
• Programmed to record for 14 days (the minimum required for full data collection was nine days) to allow for delays in starting to wear the monitor

Trained staff applied the monitor
• Ensured the monitors were applied appropriately
• Ensured the waterproof dressing was applied properly, minimizing the potential for water ingress (and consequent data loss through removal or the monitor stopping)
• Used a checklist on application, including re-checking that the monitor was recording data (flashing green light)
• Common misunderstandings/procedural shortcuts were pre-identified and addressed in training (e.g., highlighting tips and errors)

Communication between participants and research staff
• A central contact point was provided for participants to discuss concerns with a study researcher, which reduced inappropriate monitor removal

Communication between central experts and fieldworkers
• Key diagnostic data were logged by staff applying/removing the monitor, for example, battery level at programming and downloading, and whether the green light was flashing at application
• Key diagnostic data were recorded centrally (on a secure cloud server), allowing easy review by all staff
• Data were recorded to increase compliance and to allow identification of systematic errors/deviations from protocol and monitor malfunction
• A member of staff with experience of data collection using the monitor was assigned to triage technical issues with using the monitor
• Early and continuous quality assurance checks allowed identification of individual and systematic deviations from protocol, immediate feedback to staff, and engagement in the process

Reducing opportunities to lose the monitor
• The monitor was removed by a researcher, which reduced reliance on participants for data retrieval, for example, remembering to bring the monitor to an appointment, losing the monitor while not worn, or accidentally washing a monitor placed in a pocket

Quality Assurance

Increasing confidence the monitor was worn
• The monitor was removed by a researcher, allowing confirmation that the monitor was still worn after the end of the analysis period
• A message was provided to participants that the monitor should not require re-attachment during data collection; additional material to allow reattachment was not provided, and participants were not asked to prospectively record if the monitor was not worn
• Assurance that the monitor had not been reattached by the participant was provided by using attachment materials that are not commonly available to participants
• If the monitor was removed by the participant prior to the research appointment, the date and time of removal were requested retrospectively; this was close to the date of removal to allow for reasonable recall, and was then checked against the data record. Data processing was from midnight to midnight and not from the specific time of removal, so the day/date of removal was sufficient information

Data inspection
• Routinely performed by a single researcher close to the time of collection; difficult cases were resolved by discussion with a second researcher
• A hierarchical review process was used (weekly graphical display, daily graphical display, raw acceleration data) to speed up routine cases but maintain in-depth review when required
• Conducted with confidence that the monitor was on the leg during data collection (i.e., looking for issues in battery/monitor failure, or thresholds not being appropriate, e.g., known not to detect shuffling gait at slow speeds)
Enabling 24-Hour Wear
Enabling a 24-hour monitor wear protocol minimized data loss arising from participant compliance with reporting and/or identifying wear times; identifying and dealing with non-wear time is a source of data loss and debate in studies without a continuous wear protocol (Doherty et al., 2017; Edwardson et al., 2017). However, for studies using SB as an outcome measure, the trade-off is a requirement to identify sleep so that sleep time can be removed during data processing; we used paper diaries to record sleep/wake times. Monitor selection is crucial, as the body location where the monitor is worn must not only be comfortable and suitable for continuous wear but also provide robust information about the behavior of interest. The activPAL3 provides a recognized gold-standard measure of postural SB (Kozey-Keadle et al., 2011; Sellers, Dall, Grant, & Stansfield, 2016), is worn on the front of the thigh, and is suitable for long-term wear, including overnight, when attachment materials are used to reduce skin irritation. Based on reported reasons for lack of compliance in previous studies, further improvements in compliance can be made by making the monitor attachment comfortable to wear, waterproofing it effectively, and carefully scheduling research appointments to avoid times when the participant might be more likely to remove the monitor (e.g., flights).
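The diary-based sleep removal described above can be sketched in code. This is an illustrative sketch only, not the study's actual processing pipeline; the function name, the diary time format, and the idea of representing each midnight-to-midnight day by a single wake/sleep pair are assumptions for the purpose of the example.

```python
from datetime import datetime

def waking_window(date_str, wake_str, sleep_str):
    """Return (wake, sleep) datetimes for one midnight-to-midnight day.

    Illustrative only -- names and formats are assumed, not the study's code.
    date_str  -- the measurement day, e.g. "2015-03-02"
    wake_str  -- diary-reported wake time that morning, e.g. "07:30"
    sleep_str -- diary-reported sleep onset that evening, e.g. "23:15"
    """
    wake = datetime.strptime(f"{date_str} {wake_str}", "%Y-%m-%d %H:%M")
    sleep = datetime.strptime(f"{date_str} {sleep_str}", "%Y-%m-%d %H:%M")
    if sleep <= wake:
        # Diary inconsistency: flag the day for manual review rather than guess
        raise ValueError("sleep onset must follow wake time within the day")
    return wake, sleep

# Example: SB outcomes would be computed only within the waking window
wake, sleep = waking_window("2015-03-02", "07:30", "23:15")
waking_hours = (sleep - wake).total_seconds() / 3600  # 15.75 hours awake
```

A missing or incomplete diary leaves the waking window undefined, which is why incomplete diaries caused data loss for SB outcomes in this study.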
Minimizing Data Loss
Data loss was minimized by adopting a protocol that reduced the likelihood and effects of monitor failure. At the start of the project, after receipt from the manufacturers and prior to being deployed in the field, each monitor was tested once to ensure that it worked (individual calibration of activPAL monitors on each use is not required). Monitors were only programmed if they had a predefined minimum battery level. Wide programming limits (days recorded), including an immediate start, were selected to allow for minor variations in protocol and to confirm that the monitor was recording when attached. Eliminating extraneous data collected outside the study wear period is trivial in post-processing. Trained researchers attached the monitor, ensuring correct placement and reducing data loss through poor attachment. Although not strictly necessary, the monitor was also removed by trained staff; this reduced opportunities for loss through participant error and/or forgetfulness. Detailed standard operating procedures and staff training were developed to ensure consistent and effective implementation of the protocol. Communication was important. Participants were provided with a central study contact, which allowed discussion of concerns and avoided unnecessary monitor removal. Additionally, reciprocal communication between fieldworkers and central research staff allowed the identification of deviations from protocol and procedure at monitor return, which could then be addressed through immediate feedback and/or additional training of fieldworkers.
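The post-processing step of discarding data recorded outside the study wear period can be sketched as a simple filter on timestamped records. This is a minimal sketch under assumed names and data shapes (the activPAL export format is not reproduced here); it shows only why recording for 14 days while analyzing a shorter wear window costs nothing at analysis time.

```python
from datetime import datetime

def trim_to_wear_period(events, attached, removed):
    """Keep only events recorded between monitor attachment and removal.

    Illustrative only: `events` is assumed to be a list of
    (timestamp, activity_code) tuples, not the actual activPAL export.
    """
    return [(t, code) for (t, code) in events if attached <= t < removed]

fmt = "%Y-%m-%d %H:%M"
events = [
    (datetime.strptime("2015-03-01 09:00", fmt), "sit"),   # before attachment
    (datetime.strptime("2015-03-02 10:00", fmt), "stand"), # during wear period
    (datetime.strptime("2015-03-12 08:00", fmt), "walk"),  # after removal
]
attached = datetime.strptime("2015-03-01 14:00", fmt)
removed = datetime.strptime("2015-03-11 14:00", fmt)
worn = trim_to_wear_period(events, attached, removed)  # keeps only the middle event
```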
Quality Assurance
The protocol was designed to provide confidence that the monitor was worn for the entire measurement period; only datasets with continuous wear for all included days were analyzed, and therefore no data imputation was conducted. Attachment of the monitor with single-use attachment materials and removal of the monitor by a researcher allowed a high level of certainty of continuous monitor wear. Although it was possible that a monitor still worn on removal by the researcher had been removed and re-attached by the participant, reattachment with the single-use attachment materials is both difficult and noticeable. In addition, spare attachment materials were not provided to the participants, and the use of attachment materials not commonly available to participants meant that any participant reattachment would be identifiable by the researcher removing the monitor. In cases where the monitor was removed early, the participant's report of the date and time of removal was recorded retrospectively at the research appointment. This was considered acceptable because we required recall of a single removal event only to the precision of the day on which it occurred. In contrast to many other studies (Edwardson et al., 2017), we did not provide spare attachment materials or ask participants to record removal times prospectively. These measures were specifically adopted to encourage the expectation that monitors should not be removed. Although this will have prevented legitimate reattachment of the monitor if it was removed, it was balanced against increased certainty of wear/compliance. Ongoing quality assurance, conducted as monitors were returned, was performed by a single experienced researcher, with complicated cases resolved through discussion with a second researcher.
Quality assurance of downloaded data was conducted with certainty that the monitor had been worn, reducing the need to make assumptions about participant behavior (e.g., extended periods of sitting could be ascribed to the participant sitting, as the monitor was known to be worn). However, inconsistencies with reported wear time and unusual data patterns were investigated in a hierarchical manner (week view, 24-hour view, and raw acceleration), and data were eliminated if a technical source for the discrepancy was identified.
Results
Forty-four percent of older adults approached to take part in the study agreed to wear a monitor. Only two of the monitors issued to participants (n = 773) were not returned; in both cases, the monitor was removed early by the participant and subsequently lost. In this study, 700 datasets (91% of the 771 returned monitors) were included in analysis, under the very stringent inclusion criteria of 24-hour data and seven days of continuous wear; relaxing our inclusion criteria to four days of wear would have resulted in 97% of returned data being included. Most data loss was attributed to early monitor removal (n = 48); no reason for removal was recorded in 16 cases. Ten participants removed the monitor for unavoidable reasons, including skin irritation (n = 8) and serious life events not related to wearing the monitor (e.g., bereavement, n = 2). Twelve monitors were removed early due to procedural failures, including failure of attachment materials (n = 8), water ingress under the dressing (n = 2), and appointment scheduling errors (n = 2). Ten participants removed the monitor early for their own convenience, for a variety of reasons, such as attending a night out, taking a last-minute holiday, or playing with a grandchild. Other reasons for data loss were: monitor failure (n = 11; n = 3 serious, e.g., data corruption; n = 8 stopped early, i.e., low battery); removal during quality assurance (n = 5, e.g., a visible acceleration change in raw data did not trigger a change in monitor categorization); and missing/incomplete sleep diary (only relevant to SB outcome measures, n = 7).
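The counts above reconcile with the headline figures, which can be checked with a line of arithmetic (figures taken directly from this Results section; the category labels are paraphrases, not study variable names):

```python
# Reconciling reported data-loss counts with the headline inclusion figures
returned = 771
losses = {
    "early monitor removal": 48,
    "monitor failure": 11,
    "removed during quality assurance": 5,
    "missing/incomplete sleep diary": 7,
}
included = returned - sum(losses.values())  # 771 - 71 = 700 datasets
share = round(100 * included / returned)    # 700/771 = 90.8%, reported as 91%
```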
Discussion
In the Seniors USP study, 91% of datasets from returned monitors, with full seven days of data, were included in analysis, achieving a similar or higher proportion of data included from returned monitors, while simultaneously including more days of data, compared with national surveys using wrist-worn monitors (e.g., 93% including three days of data, UK Biobank, Doherty et al., 2017; 60–80% including six days of data, NHANES, Troiano et al., 2014). The rate of agreement to wear the monitor (44%) in the current study was similar to uptake of the wrist-worn monitor in UK Biobank (44%; Doherty et al., 2017). This also compares favorably with other large studies that used the activPAL monitor (e.g., 67% of n = 530 including seven days of data, Walking Away from Diabetes, Edwardson et al., 2017; 79% of n = 782 including seven days of data, AusDiab, Edwardson et al., 2017; 95% of n = 1,506 including five days of data, ActiFE-Ulm, Klenk et al., 2015). Participants were all recruited from established longitudinal cohorts, and although this was the first occasion on which cohort participants had been asked to wear an activity monitor, this may have made them more compliant with study procedures. Nevertheless, an extremely low number of monitors were not returned (2 out of 773), facilitated by encouraging the expectation that the monitor should not be removed and by asking participants to wear the monitor until a second research appointment. This also removed the burden of remembering to wear the monitor from the participant. A number of decisional trade-offs are apparent in our protocol. Specifying and encouraging 24-hour wear allowed continuous wear and certainty that the data reflected behavior; just five datasets were rejected during quality assurance. However, this was offset by the need in this study to remove sleep from analysis, as the primary outcome measure was SB. We selected a paper diary and lost seven datasets through incomplete diaries.
Automatic algorithms to detect time in bed overnight (Winkler et al., 2016) and to distinguish lying from sitting (Lyden, John, Dall, & Granat, 2016) are being developed and may allow inclusion of these data in future studies. Additionally, PA data from the monitors could have been included in analysis. Provision of additional materials to allow participants to reattach monitors during data collection is common practice in many studies (Edwardson et al., 2017). We did not provide additional attachment materials to participants, and lost 10 datasets through poor initial attachment of the monitor or degradation during use (falling off or water ingress); this should be balanced against encouraging an implicit expectation of continuous wear in participants, and the loss of only five datasets during quality assurance. The effectiveness (balance of sources of data loss) of not providing spare attachment materials may vary by study and population, depending on data collection duration and patterns of attachment degradation during use. Monitor attachment by a researcher (as opposed to by the participant) may have contributed to secure attachment, including through familiarity with the materials. However, some data loss was unavoidable, as monitors were removed for medical reasons (skin irritation) or because of serious life events. This will represent a source of data loss in any study. Datasets lost through early monitor removal for participant convenience are not unavoidable, but represent participant choice about compliance. Potentially, this could be addressed through communication of expectations; however, eight out of the ten datasets lost provided six days of data, which would have been included in other studies. Aspects of the objective measurement of PA/SB, for example, monitor removal for convenience, degradation of attachment materials during wear, and remembering to adhere to study protocols, are likely to be affected by the population being studied.
The generalizability of the components of the Seniors USP study protocol to other populations should be explored in future studies.
In studies wishing to assess the PA and SB of their participants, there is a clear need for the objective measurement of both PA and SB. However, collecting objective measures of posture and movement in very large studies (e.g., UK Biobank, n = ∼100,000; Doherty et al., 2017) is difficult and requires adequate investment. It is important to clarify in which procedural aspects to invest. The protocol described here (available from http://edshare.gcu.ac.uk/view/keywords/seniors%20usp%20sops.html) was successful in a study of 773 participants and has been adopted for use by larger studies (e.g., British Cohort Study 70, cohort n = 17,000; Elliott & Shepherd, 2006), demonstrating the potential for scaling up, although performance at that scale has not yet been evaluated. Although we report on individual items, it is important to understand that it is their combination that makes the protocol successful. In developing the protocol, we took a holistic approach, integrating the whole measurement and analysis chain; adopting some elements without understanding their co-dependency might not be as effective. The protocol components that incurred the highest costs were the staff costs of monitor attachment and removal by a researcher at separate appointments. It is acknowledged that these might be the most difficult and costly components to increase in scale for larger studies. However, some large national surveys have face-to-face research appointments to collect other data (e.g., UK Biobank prior to the activity monitoring component; Doherty et al., 2017), and it is feasible that monitor attachment could be integrated into such appointments. Additionally, the staff costs of research appointments should be offset against the costs involved in purchasing additional monitors to cover monitor losses/non-return, which can be substantial. In the Seniors USP study, the purchase of a single additional monitor would have covered the costs of 20 research visits.
This investment in the Seniors USP study, particularly the second appointment for monitor removal by a researcher, resulted in an extremely small number of monitors not being returned, which may represent the ideal scenario for reducing selection bias among consented participants. Alternative strategies, such as incentives paid to the participant on monitor return, may partially compensate for monitors not returned through lack of participant engagement; however, they are less able to compensate for monitors not returned because they were lost or damaged after removal by the participant or during transit in the postal service.
In summary, there is growing research demonstrating that the objective measurement of physical activity and sedentary behavior in large studies is feasible with a range of different monitors. Decisional trade-offs are made in protocols between data quantity (collecting representative data) and utility (derived outcomes that reflect actual behavior). Paying increased attention to reporting the explicit methodological details of monitor use, across a wide range of studies, will allow future researchers to make appropriate and informed methodological decisions.
Acknowledgments
The named authors present the study on behalf of the Seniors USP Team, which comprises Dawn A Skelton (PI), Sebastien Chastin, Simon Cox, Elaine Coulter, Iva Čukić, Philippa Dall, Ian Deary, Geoff Der, Manon Dontje, Claire Fitzsimons, Catharine Gale, Jason Gill, Malcolm Granat, Cindy Gray, Carolyn Greig, Elaine Hindle, Karen Laird, Gillian Mead, Nanette Mutrie, Victoria Palmer, Ratko Radakovic, Naveed Sattar, Richard Shaw, John Starr, Sally Stewart, and Sally Wyke. The Seniors USP (Understanding Sedentary Patterns) project is funded by the UK Medical Research Council (MRC) as part of the Lifelong Health and Wellbeing Initiative (LLHW) [MR/K025023/1]. The Lothian Birth Cohort 1936 (LBC1936) thank the cohort members, investigators, research associates, and team members. We also thank the radiographers at the Brain Research Imaging Centre, and the research nurses and Genetics Core staff at the Wellcome Trust Clinical Research Facility. LBC1936 data collection is supported by the Disconnected Mind project (funded by Age UK and MRC [Mr/M01311/1 and G1001245/96077]) and undertaken within the University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology (funded by the BBSRC and MRC as part of the LLHW [MR/K026992/1]). The West of Scotland Twenty-07 Study was funded by the MRC and the data were originally collected by the MRC Social and Public Health Sciences Unit (MC_A540_53462). We thank all of the cohort participants, and the survey staff and research nurses who carried it out. The data are employed here with the permission of the Twenty-07 Steering Committee.
References
Benzeval, M., Der, G., Ellaway, A., Hunt, K., Sweeting, H., West, P., … Macintyre, S. (2009). Cohort Profile: West of Scotland Twenty-07 Study: Health in the Community. International Journal of Epidemiology, 38(5), 1215–1223. PubMed doi:10.1093/ije/dyn213
Biswas, A., Oh, P.I., Faulkner, G.E., Bajaj, R.R., Silver, M.A., Mitchell, M.S., … Alter, D.A. (2015). Sedentary time and its association with risk for disease incidence, mortality, and hospitalization in adults: a systematic review and meta-analysis. Annals of Internal Medicine, 162, 123–132. PubMed doi:10.7326/M14-1651
Dall, P.M., Coulter, E.H., Fitzsimons, C.F., Skelton, D.A., Chastin, S., & Seniors USP Team (2017). TAxonomy of Self-reported Sedentary behaviour Tools (TASST) framework for development, comparison and evaluation of self-report tools: content analysis and systematic review. BMJ Open, 7(4), 013844. PubMed doi:10.1136/bmjopen-2016-013844
Deary, I.J., Gow, A.J., Pattie, A., & Starr, J.M. (2012). Cohort Profile: The Lothian Birth Cohorts of 1921 and 1936. International Journal of Epidemiology, 41(6), 1576–1584. PubMed doi:10.1093/ije/dyr197
Doherty, A., Jackson, D., Hammerla, N., Plötz, T., Olivier, P., Granat, M.H., … Wareham, N.J. (2017). Large scale population assessment of physical activity using wrist worn accelerometers: The UK Biobank Study. PLoS ONE, 12(2), e0169649. PubMed doi:10.1371/journal.pone.0169649
Edwardson, C.L., Winkler, E.A., Bodicoat, D.H., Yates, T., Davies, M.L., Dunstan, D.W., … Healy, G.N. (2017). Considerations when using the activPAL monitor in field-based research with adult populations. Journal of Sport and Health Science, 6(2), 162–178. doi:10.1016/j.jshs.2016.02.002
Ekelund, U., Steene-Johannessen, J., Brown, W.J., Fagerland, M.W., Owen, N., Powell, K.E., & Lancet Sedentary Behaviour Working Group (2016). Does physical activity attenuate, or even eliminate, the detrimental association of sitting time with mortality? A harmonised meta-analysis of data from more than 1 million men and women. Lancet, 388(10051), 1302–1310. PubMed doi:10.1016/S0140-6736(16)30370-1
Elliott, J., & Shepherd, P. (2006). Cohort Profile: 1970 British Birth Cohort (BCS1970). International Journal of Epidemiology, 35, 836–843. PubMed doi:10.1093/ije/dyl174
Healy, G.N., Clark, B.K., Winkler, E.A.H., Gardiner, P.A., Brown, W.J., & Matthews, C.E. (2011). Measurement of adults’ sedentary time in population-based studies. American Journal of Preventive Medicine, 41(2), 216–227. PubMed doi:10.1016/j.amepre.2011.05.005
Klenk, J., Kerse, N., Rapp, K., Nikolaus, T., Becker, C., Rothenbacher, D., & ActiFE Study Group (2015). Physical activity and different concepts of fall risk estimation in older people: Results of the ActiFE-Ulm study. PLoS ONE, 10, e0129098. doi:10.1371/journal.pone.0129098
Kooiman, T.J.M., Dontje, M.L., Sprenger, S.R., Krijnen, W.P., van der Schans, C.P., & de Groot, M. (2015). Reliability and validity of ten consumer activity trackers. BMC Sports Science, Medicine & Rehabilitation, 7, 24. PubMed doi:10.1186/s13102-015-0018-5
Kozey-Keadle, S., Libertine, A., Lyden, K., Staudenmayer, J., & Freedson, P.S. (2011). Validation of wearable monitors for assessing sedentary behavior. Medicine & Science in Sports & Exercise, 43(8), 1561–1567. PubMed doi:10.1249/MSS.0b013e31820ce174
Lyden, K., John, D., Dall, P., & Granat, M.H. (2016). Differentiating sitting and lying using a thigh-worn accelerometer. Medicine & Science in Sports & Exercise, 48(4), 742–747. PubMed doi:10.1249/MSS.0000000000000804
Matthews, C., Hagstromer, M., Pober, D.M., & Bowles, H.R. (2012). Best practices for using physical activity monitors in population-based research. Medicine & Science in Sports & Exercise, 44, S68–76. PubMed doi:10.1249/MSS.0b013e3182399e5b
Rosenberger, M.E., Haskell, W.L., Albinali, F., Mota, S., Nawyn, J., & Intille, S. (2013). Estimating activity and sedentary behaviour from an accelerometer on the hip or wrist. Medicine & Science in Sports & Exercise, 45(5), 964–975. PubMed doi:10.1249/MSS.0b013e31827f0d9c
Schaller, A., Rudolf, K., Dejonghe, L., Grieben, C., & Froboese, I. (2016). Influencing factors on the overestimation of self-reported physical activity: A cross-sectional analysis of low back pain patients and healthy controls. BioMed Research International, 2016, 1–11. PubMed doi:10.1155/2016/1497213
Sellers, C., Dall, P.M., Grant, P.M., & Stansfield, B. (2016). Validity and reliability of the activPAL3 for measuring posture and stepping in adults and young people. Gait & Posture, 43, 42–47. PubMed doi:10.1016/j.gaitpost.2015.10.020
Shaw, R.J., Čukić, I., Deary, I.J., Gale, C.R., Chastin, S.F.M., Dall, P.M., … Der, G. (2017). The influence of neighbourhoods and the social environment on sedentary behaviour in older adults in three prospective studies. International Journal of Environmental Research and Public Health, 14, 557. doi:10.3390/ijerph14060557
Troiano, R.P., Berrigan, D., Dodd, K.W., Mâsse, L.C., Tilert, T., & McDowell, M. (2008). Physical activity in the United States measured by accelerometer. Medicine & Science in Sports & Exercise, 40, 181–188. PubMed doi:10.1249/mss.0b013e31815a51b3
Troiano, R.P., McClain, J.J., Brychta, R.J., & Chen, K.Y. (2014). Evolution of accelerometer methods for physical activity research. British Journal of Sports Medicine, 48, 1019–1023. PubMed doi:10.1136/bjsports-2014-093546
Wijndaele, K., Westgate, K., Stephens, S.K., Blair, S.N., Bull, F.C., Chastin, S.F., … Healy, G.N. (2015). Utilization and harmonization of adult accelerometry data: Review and expert consensus. Medicine & Science in Sports & Exercise, 47(10), 2129–2139. PubMed doi:10.1249/MSS.0000000000000661
Winkler, E.A., Bodicoat, D.H., Healy, G.N., Bakrania, K., Yates, T., Owen, N., … Edwardson, C.L. (2016). Identifying adults’ valid waking wear time by automated estimation in activPAL data collected with a 24 h wear protocol. Physiological Measurement, 37(10), 1653–1668. PubMed doi:10.1088/0967-3334/37/10/1653