Two Grammy awards. The most downloaded song ever in the United Kingdom. The best-selling song of 2014 in the United States. The blueprint for success for Pharrell Williams’ hit song “Happy”? Nine fully recorded songs tried, tested, and discarded beforehand. So, what made version 10 a home run?
Let’s start with why it took so long to get there. Musical notes, lyrics, and chords can be assembled millions of ways to create melodies. Options are many, yet killer combinations are few. Similarly, exercises, intervals, and intensity prescriptions can be combined millions of different ways to create training sessions, yet designing the right session to maximize adaptation and buy-in for each unique athlete remains a fine art. With more academic research papers and technology options available than we can keep up with, extracting the signal from the noise to improve our decision making has never been more challenging.
The main problem with options is opinions. Everyone involved in the decision-making process inevitably has one, yet true success is defined by the response of the audience—for musicians, those who buy and enjoy their songs; for coaches, the athletes we hope will successfully adopt and execute our interventions en route to improved performance; and for sport scientists, the readers we hope will cite and apply our work.
However, how many times have we published brilliant research papers in high-impact journals that no one ever read; or bought a shiny, new, fully validated technology that just gathered dust; or constructed a perfect diet plan that no one followed? Perhaps this is where we’ve gone wrong—forgetting who these things are designed to serve and what their wants, needs, and preferences truly are. In the creative world, this is why effective design always begins with empathy1: empathy for the context of the audience, understanding their biggest problems, the nuances of their environment, and the barriers that most often get in the way of adopting solutions. In our case, empathy is understanding elements like the biggest gaps in our athletes’ armory, factors outside the daily training environment that influence their behavior, and idiosyncrasies of their personalities.
As scientists and applied practitioners, we are conditioned to start designing our research studies, training programs, or athlete testing strategies by writing a plan, often grounded in the nuts and bolts of our professions—numbers, principles, and scientific rationale. Likewise, user experience designers start with their plan—an empathy map2 (Figure 1). These maps plot 4 categories of behavioral information (says, thinks, feels, and does) along with potential barriers (pains) and benefits (needs) associated with engaging in what is being designed, all of which help transport designers into the bodies and brains of their target audience. Although these factors form part of many conversations around our research or training plans in sport science, how often do we systematically document and review them as rigorously as we would the research design of an experiment or the lengths of work:rest intervals in a conditioning session? And yet, this may be why we find ourselves wondering why an athlete has not bought into our fancy workout, even though it is fully evidence-based.
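For practitioners who prefer a concrete artifact, here is a minimal sketch (in Python) of how the 4 behavioral categories plus pains and needs could be documented as a structured, reviewable record rather than left to informal conversation. The field names mirror the map categories described above; the example athlete and entries are entirely hypothetical.

```python
# Minimal sketch: an empathy map (Figure 1) captured as a structured record
# that can be versioned and reviewed alongside the training plan itself.
from dataclasses import dataclass, field

@dataclass
class EmpathyMap:
    subject: str                                      # who we are designing for
    says: list[str] = field(default_factory=list)     # what they say out loud
    thinks: list[str] = field(default_factory=list)   # what they privately believe
    feels: list[str] = field(default_factory=list)    # emotional state
    does: list[str] = field(default_factory=list)     # observed behavior
    pains: list[str] = field(default_factory=list)    # barriers to adoption
    needs: list[str] = field(default_factory=list)    # benefits they seek

# Hypothetical example content, for illustration only
athlete_map = EmpathyMap(
    subject="U21 midfielder, in-season",
    says=["I never have time for the extra gym block"],
    thinks=["Monitoring data will be used against me"],
    feels=["Flat the morning after evening matches"],
    does=["Skips wellness questionnaires on travel days"],
    pains=["Early sessions after late kickoffs"],
    needs=["Sessions that fit around travel and study"],
)
print(athlete_map.pains)  # review barriers before prescribing the plan
```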
Getting to the hearts and minds of our audience requires going beyond the textbooks, the rigid scientific information, or the white papers detailing the validity of a technology. We must be involved, connected, and immersed in the experiences we seek to design for, make use of our past experiences, and “walk a mile in the shoes of our users.”
This journey starts with something as simple as immersing ourselves in a training session to understand the overall demands of what we are prescribing and the associated physiological and psychological load. Our learning then helps us calibrate the session content, volume, and intensity to the athlete’s specific context, as per our empathy map.
Similarly, as researchers, we should test and rehearse our study protocols ourselves (well-targeted pilot testing) before designing entire experiments. Pilot studies help us understand the feasibility of what looks great on paper and what the experimental burden to our participants really is. Having experience working in applied sport science also helps us select exercises and build protocols that are more representative of real-world context (demonstrating “ecological validity”), making our results more likely to be relevant and easily adopted by practitioners in the field after publication.
Finally, when thinking about employing technology, the accuracy and reliability of a device, as derived from typical validation reports, is just one part of the story. Only by testing the technology ourselves over a representative period (eg, daily across 2 consecutive weeks) can we fully appreciate the actual ease or burden of its use along with the real-life benefits (or lack thereof!) it brings. Is a device uncomfortable to wear, or do you forget it is there after a while? Does it provide useful information that positively affects behavior, or is it just interesting? Using a simple 2 × 2 matrix that plots ease of implementation against impact (Figure 2), we can quickly prioritize the technologies worth pursuing further. For example, having a full blood panel taken daily on your athletes might provide incredibly valuable information for optimizing training decisions, but the associated burdens of time, effort, cost, and discomfort make it completely impractical and unsustainable as a regular practice.
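As an illustration of that 2 × 2 triage, the following sketch scores each candidate technology on ease of implementation and impact and assigns it a quadrant. The 0-to-10 scales, the cutoff, the quadrant labels, and the example devices are all assumptions for demonstration—in practice, the scores would come from your own hands-on trial of each device, not from a vendor sheet.

```python
# Minimal sketch of the ease-of-implementation x impact matrix (Figure 2).
# Scales, cutoff, and example scores are hypothetical.
def quadrant(ease: float, impact: float, cutoff: float = 5.0) -> str:
    """Place a technology in one of 4 quadrants (0-10 scales)."""
    if impact >= cutoff:
        return "adopt now" if ease >= cutoff else "simplify first"
    return "nice-to-have" if ease >= cutoff else "drop"

candidates = {
    "wrist-worn HRV sensor": (8, 7),   # easy to wear, actionable data
    "daily full blood panel": (1, 9),  # valuable but impractical (see text)
    "novelty jump-mat app": (9, 2),    # effortless but merely interesting
}
for name, (ease, impact) in candidates.items():
    print(f"{name}: {quadrant(ease, impact)}")
```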
Although textbooks, journals, and statisticians instruct us to increase our sample sizes to detect meaningful effects from interventions, self-research is invaluable for teaching us what is feasible and, therefore, truly meaningful in the real world. After all, any idea is only as useful as its execution, and 1000 measurements from 1 person tell us far more about long-term usability than 1 measurement from 1000 people. An n = 1 approach also unlocks further benefits: no time lost recruiting and convincing others to participate and full awareness of the context accompanying your data to help you make sense of it.
Self-monitoring also allows us to have our cake and eat it too, helping us adapt the best-practice approach of scientific research to what actually works “in the wild.” Although measuring on waking each morning is the ideal scenario for monitoring heart-rate variability, real life does not always cooperate. After I (MB) had tracked heart-rate variability this way religiously for 7 years, my routine was challenged when I joined pro sports and began traveling extensively, and it was completely obliterated when my first child was born! Instead of abandoning heart-rate variability, these n = 1 experiences prompted me to seek a more feasible middle ground—understanding the minimum number of days of data needed to reasonably represent overall autonomic status.3 Luckily, we determined that 3 to 4 days a week may be enough, and just like that, I was back in the monitoring game.
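A minimal sketch of that middle ground follows, assuming morning rMSSD values in milliseconds and a 3-valid-day weekly threshold, consistent with the 3 to 4 days suggested by reference 3 (this is an illustration, not the paper’s own analysis; the values and function name are hypothetical).

```python
# Minimal sketch: a weekly HRV (rMSSD) average reported only when at least
# min_days mornings were actually recorded, so missed days do not force us
# to abandon monitoring altogether. Threshold and data are illustrative.
from statistics import mean
from typing import Optional

def weekly_hrv(rmssd_ms: list[Optional[float]], min_days: int = 3) -> Optional[float]:
    """Mean rMSSD for 1 week; None if too few valid mornings."""
    valid = [v for v in rmssd_ms if v is not None]
    return mean(valid) if len(valid) >= min_days else None

# A realistic week: 3 mornings missed (travel, newborn...), 4 recorded
week = [68.0, None, 72.5, None, 65.0, 70.1, None]
print(weekly_hrv(week))  # 68.9 -> still a usable weekly estimate
```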
The difference between those first 9 fully recorded versions of “Happy” and that 10th home run? You guessed it: designing with empathy.4 The smash hit was born only when Pharrell Williams realized he needed to truly embody the lead character from the movie he was creating it for (Gru in “Despicable Me”) instead of writing it from his own perspective.5 In this editorial, we have highlighted the benefits of incorporating empathy into our own practice as scientists, coaches, and researchers.6 Optimizing human performance begins by empathizing with the human beings in front of us. As the quotation often attributed to Plato goes, “Opinion is the lowest form of knowledge for it requires no understanding. The highest form of knowledge is empathy, for it requires us to suspend our egos and live in another’s world.”
References
1. Brown T. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. HarperCollins; 2009.
2. Gray D, Brown S, Macanufo J. Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers. O’Reilly Media Inc; 2010.
3. Plews DJ, Laursen PB, Le Meur Y, Hausswirth C, Kilding AE, Buchheit M. Monitoring training with heart-rate variability: how much compliance is needed for valid assessment? Int J Sports Physiol Perform. 2014;9(5):783–790. PubMed ID: 24334285 doi:10.1123/ijspp.2013-0455
5. Buchheit M. Whom do we publish for? Ourselves or others? Int J Sports Physiol Perform. 2020;15(8):1057–1058. doi:10.1123/ijspp.2020-0656