The balance error scoring system (BESS) is a human-scored, field-based balance test used in cases of suspected concussion. Recently developed instrumented alternatives to human scoring carry substantial advantages over traditional testing, but thus far report relatively abstract outcomes that may not be useful to clinicians or coaches. In contrast, the automated assessment of postural stability (AAPS) is a computerized system that tabulates error events in accordance with the original description of the BESS. This study compared AAPS and human-based BESS scores. A total of 25 healthy adults performed the modified BESS. Tests were scored twice each by 3 human raters and the computerized system. Interrater (between humans) and intermethod (AAPS vs human) agreement (intraclass correlation coefficient, ICC[2,1]) were calculated alongside Bland–Altman limits of agreement. Interrater analyses were significant (P < .01) and demonstrated good to excellent agreement. Intermethod agreement analyses were significant (P < .01), with agreement ranging from poor to excellent. Computerized scores were equivalent across rating occasions. Limits of agreement ranges for AAPS versus the human average exceeded the average limits of agreement ranges between human raters. Coaches and clinicians may consider a system such as AAPS to automate balance testing while maintaining the familiarity of human-based scoring, although scores should not yet be considered interchangeable with those of a human rater.
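The agreement statistics named in the abstract, ICC(2,1) from a two-way random-effects ANOVA and Bland–Altman limits of agreement, can be sketched as follows. This is a minimal illustration of the standard formulas, not the authors' analysis code; the function names and the synthetic example data are hypothetical.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: array of shape (n_subjects, k_raters).
    """
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Sums of squares for subjects (rows), raters (columns), and residual.
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)          # between-subjects mean square
    msc = ss_cols / (k - 1)          # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement for two score sets."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width

# Hypothetical example: error counts for 5 subjects from 2 raters.
human = np.array([10, 14, 8, 12, 20])
automated = np.array([11, 13, 9, 12, 19])
print(icc_2_1(np.column_stack([human, automated])))
print(bland_altman_limits(automated, human))
```

Wider limits of agreement for AAPS versus the human average than between human raters, as reported above, would indicate that individual automated scores can deviate further from a human score than two humans typically deviate from each other.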
Glass is with the Department of Otolaryngology, The Ohio State University, Columbus, OH. Napoli, Obeid, and Tucker are with the Department of Electrical and Computer Engineering, Temple University College of Engineering, Philadelphia, PA. Thompson and Tucker are with the Department of Physical Therapy, Temple University College of Public Health, Philadelphia, PA.