Taking Steps Toward Open Data in Motor Control, Learning, and Development

Keith R. Lohse
Program in Physical Therapy, Washington University School of Medicine, Saint Louis, MO, USA
Department of Neurology, Washington University School of Medicine, Saint Louis, MO, USA

Data sharing is increasingly becoming a scientific norm due to its potential to accelerate discovery and reduce research waste. Despite significant investment in biomedical research, much of it fails to produce actionable knowledge, with timelines to impactful outcomes often exceeding 15 years when successful. Limited access to existing data contributes to this inefficiency, leading to redundant data collection and poorly designed studies. Governments are implementing policies that encourage data sharing for federally funded research, but all research stands to benefit from data sharing regardless of the funding source. Although data sharing practices have improved in some fields, their adoption remains minimal in areas like motor control, learning, and development. This editorial discusses the advantages of data sharing, including accelerating discovery, enhancing collaboration, and improving transparency and reproducibility. The editorial acknowledges concerns related to privacy, recognition for data sharing, and the risk of data misuse or misinterpretation, discussing productive ways to ameliorate these concerns.

Data sharing is an important facet of modern science. Its benefits are substantial enough that numerous governments now make data sharing the default policy for federally funded research (e.g., Open Research Europe, n.d.; OSTP, 2022). Although there are reasonable limitations on precisely how and what data can be shared, data sharing is increasingly becoming a scientific norm, one that accelerates discovery and reduces research waste.

Billions of dollars are spent on biomedical research every year (Macleod et al., 2014; Røttingen et al., 2013). Research is, by definition, an uncertain endeavor, and many tested hypotheses are not supported. Unsurprisingly then, few studies lead to actionable new knowledge that directly advances a scientific theory or changes clinical practice, at least not immediately. Even when "successful," the time from proof-of-principle to outcomes like commercialization or policy change is estimated to be more than 15 years (Balas & Boren, 2000; Grant et al., 2003). Given these lengthy timelines, it is frightening that meta-scientific researchers estimate that 50%–80% of research funding is wasted, depending on the field of biomedicine (Glasziou & Chalmers, 2018; Macleod et al., 2014), and a major source of this waste is limited access to existing data (Chan et al., 2014). This waste can come either from a "file drawer" effect, in which not all data are published and available, or from the downstream effects that a lack of data has on future studies. For instance, a lack of access to existing data can lead to expensive new data collection ("re-inventing the wheel") and to poor study design: poor sample size planning, poor outcome selection, or poor patient stratification, all of which can produce misleading results (Chan et al., 2014; Winstein, 2018).

Although some fields have made strides in data sharing, the reuse of existing data generally remains minimal in biomedical research (Huie et al., 2018; Johnson et al., 2018; Winstein, 2018). To that end, I argue that motor control, learning, and development (MCLD) has an opportunity, as a field, to step up and explore how data sharing can improve our science. Although our research paradigms are diverse and our data may lack the inherent structure of something like the human genome, I think there are tremendous advantages to data sharing.

In this editorial, I will discuss three major benefits and three major concerns associated with data sharing, including how those concerns can be ameliorated. I generally argue for open data but recognize that open data is an ideal. "Open" implies that a potential user has (essentially) unrestricted access to the data, with minimal requirements other than proper attribution to the source of the data. However, not all data can or should be open, and I consider some of those issues. Along the way, I identify several resources for researchers interested in sharing their data: for example, resources for including data sharing in informed consent, drafting data sharing plans for grants, finding a suitable digital repository, and adhering to FAIR data principles.

Benefits of Open Data Sharing in Science

Accelerating Scientific Discovery

One of the most significant benefits of open data sharing is its ability to accelerate scientific discovery. When researchers have access to a wealth of data from previous studies, they can analyze and synthesize this information to generate new insights (e.g., Ferguson et al., 2014), validate findings by catching and fixing errors (e.g., Ohmann et al., 2021), and uncover patterns that may not have been apparent in individual data sets (e.g., Olafson et al., 2024). Open data thus allow scientists to build on the work of others and pool limited resources. For example, in the field of genomics, large-scale data sharing initiatives such as the Human Genome Project have led to breakthroughs in our understanding of genetic diseases, drug development, and personalized medicine (e.g., Choudhury et al., 2014). Similarly, in environmental science, pooling diverse data sets has enabled researchers to better understand the impacts of climate change and develop strategies for mitigation and adaptation (e.g., Lausch et al., 2015).
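
To make the idea of pooling and synthesizing shared data concrete, below is a minimal sketch that stacks individual-participant data from two hypothetical openly shared studies; the file names and column names (e.g., "reach_error_deg") are illustrative assumptions, not the format of any particular repository.

```python
# Minimal sketch: pooling individual-participant data from two hypothetical
# openly shared studies. All file and column names are illustrative.
import pandas as pd

study_a = pd.read_csv("study_a_trials.csv")  # hypothetical shared data set A
study_b = pd.read_csv("study_b_trials.csv")  # hypothetical shared data set B

# Tag each record with its source so "study" can be treated as a grouping factor.
study_a["study"] = "A"
study_b["study"] = "B"

# Keep only the variables the two studies share, then stack them.
shared_cols = ["participant_id", "group", "reach_error_deg", "study"]
pooled = pd.concat([study_a[shared_cols], study_b[shared_cols]], ignore_index=True)

# Simple descriptive synthesis across the pooled sample.
print(pooled.groupby(["study", "group"])["reach_error_deg"].describe())
```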

Enhancing Collaboration and Interdisciplinary Research

Open data sharing fosters collaboration and interdisciplinary research by breaking down barriers between different scientific teams, institutions, and disciplines. This collaboration can occur either from the top down or from the bottom up. For instance, we have seen "top-down" coordinated efforts in genetics, where underpowered and irreplicable studies of candidate genes have generally given way to consortia that plan large-scale data collection and leverage genome-wide methods (e.g., Border et al., 2019; Johnson et al., 2017). Data sharing in MCLD may not require research consortia or coordination on so grand a scale, although see efforts like the Psychological Science Accelerator or the "ManyLabs" projects from the Open Science Framework. Instead, MCLD may benefit from simpler "bottom-up," post hoc data sharing. Individuals independently sharing data can also facilitate collaboration and interdisciplinary research, without the need for coordination among many large groups. For instance, open data from past work can help trainees master data processing and analysis by recreating seminal findings before applying the methods in their own work. Data from past work can also help a researcher design a better study, yielding more accurate sample size calculations or identifying key moderators that make effects stronger or weaker (Chan et al., 2014; Glasziou & Chalmers, 2018).
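
As a small illustration of that study-design benefit, the sketch below estimates an effect size from a hypothetical openly shared data set and uses it to plan the sample size of a new study; the file name, column names, and the choice of a two-group t test are assumptions made for illustration.

```python
# Minimal sketch: sample size planning from a hypothetical openly shared data set.
import numpy as np
import pandas as pd
from statsmodels.stats.power import TTestIndPower

pilot = pd.read_csv("open_pilot.csv")  # hypothetical shared data set
g1 = pilot.loc[pilot["group"] == "training", "score"].to_numpy()
g2 = pilot.loc[pilot["group"] == "control", "score"].to_numpy()

# Cohen's d using a pooled standard deviation.
pooled_sd = np.sqrt(((len(g1) - 1) * g1.var(ddof=1) + (len(g2) - 1) * g2.var(ddof=1))
                    / (len(g1) + len(g2) - 2))
d = (g1.mean() - g2.mean()) / pooled_sd

# Participants per group for 80% power at alpha = .05 (two-sided t test).
n_per_group = TTestIndPower().solve_power(effect_size=abs(d), alpha=0.05, power=0.80)
print(f"Estimated d = {d:.2f}; ~{int(np.ceil(n_per_group))} participants per group")
```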

Improving Transparency and Reproducibility

Transparency and reproducibility are fundamental principles of scientific research, as exemplified by the motto of the British Royal Society, Nullius in Verba ("take nobody's word for it"). Similarly, the National Institutes of Health (NIH) argue for enhancing reproducibility through transparency: "When a result can be reproduced by multiple scientists, it validates the original results and readiness to progress to the next phase of research" ("Enhancing Reproducibility Through Rigor and Transparency," 2023). Here, I use the term reproducibility to mean achieving the same result using the same methods and data (see Table 1), which makes it distinct from replicability, in which a qualitatively similar result is achieved using the same methods on new data (Schloss, 2018; Whitaker, 2017). Open data thus also make it easier to rerun the analyses of a given study, ensuring that results are, at a minimum, reproducible and, ideally, even analytically robust to changes in the analytic plan, tests with different assumptions, or the exclusion of participants. Ensuring reproducibility and testing robustness increase confidence in the data before moving on to the next phase of research.

Table 1

Disambiguating Terms of Reproducibility, Robustness, Replicability, and Generalizability

If the data are...          Same           Same          Different      Different
and methods are...          Same           Different     Same           Different
and the result is...        Same           Same          Same           Same
Then the findings are...    Reproducible   Robust        Replicable     Generalizable

Note that “same” is not meant to imply numerically identical, but similar within some tolerance.
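
To connect Table 1 to practice, here is a minimal sketch of how a reader might check reproducibility (same data, same model) and then robustness (same data, a defensible change to the analysis) with shared data; the file name, variables, and exclusion rule are hypothetical.

```python
# Minimal sketch: reproducibility and robustness checks on shared data (see Table 1).
import pandas as pd
import statsmodels.formula.api as smf

dat = pd.read_csv("shared_analysis_data.csv")  # hypothetical shared data set

# Reproducibility: same data, same model as the published analysis.
original = smf.ols("rt ~ condition", data=dat).fit()
print(original.summary())

# Robustness: same data, a defensible change to the analytic plan
# (here, excluding implausibly fast responses).
trimmed = dat[dat["rt"] > 150]
robust = smf.ols("rt ~ condition", data=trimmed).fit()
print(robust.params)  # Does the condition effect survive the exclusion?
```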

Concerns and Limitations of Open Data Sharing

Privacy and Ethical Considerations for Participants

One of the primary concerns associated with open data sharing is the potential for breaches of privacy, along with ethical concerns around data security, particularly when dealing with sensitive information. It is thus critically important that, if any data are going to be shared, participants are made aware during informed consent of what data might be shared and how (NOT-OD-21-013: Final NIH Policy for Data Management and Sharing, n.d., p. 21). Openly available data must be deidentified (e.g., adhering to Health Insurance Portability and Accountability Act [HIPAA] "Safe Harbor" guidelines) and generally regarded as low risk, so that even if a participant's identity could be reconstructed, the harm of identification would be minimal. However, it is also important to note that data can be efficiently shared without being open.
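
For researchers preparing a data set for open sharing, the sketch below illustrates a basic deidentification pass; the column names are hypothetical, the rules only loosely follow HIPAA Safe Harbor logic, and any real deidentification plan should be reviewed with your institutional review board.

```python
# Minimal sketch: preparing a deidentified copy of a data set before sharing.
# Column names are hypothetical; rules loosely follow HIPAA Safe Harbor logic.
import pandas as pd

raw = pd.read_csv("raw_with_identifiers.csv")  # stays on secure storage

# Drop direct identifiers entirely.
deid = raw.drop(columns=["name", "email", "date_of_birth", "zip_code"])

# Replace internal IDs with arbitrary codes that cannot be linked back.
deid["participant_id"] = pd.factorize(deid["participant_id"])[0] + 1

# Safe Harbor treats ages over 89 as identifying; collapse them into a 90+ category.
deid["age"] = deid["age"].clip(upper=90)

deid.to_csv("deidentified_for_sharing.csv", index=False)
```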

There are many situations in which it is desirable to have at least some restrictions on data access, typically managed by the owner of a data repository. For instance, data repositories at the NIH in the United States require applications to access the data and place different stipulations on how the data can be used (e.g., data generally cannot be used for commercial purposes). Thus, although open data may be a laudable ideal, there are many situations where restrictions on access are reasonable and necessary. I encourage interested researchers to explore the NIH resources, for example, sample language to include in consent documents and guidance on drafting a data sharing plan that balances openness with data security (Data Management and Sharing Policy | Data Sharing, n.d.).

Recognition for Data Sharing and Intellectual Contributions

The 2023 State of Open Data survey showed that, over the last 8 years, researchers have remained concerned about not receiving sufficient credit for openly sharing their data (Hahnel et al., 2023). This concern can take multiple forms. For instance, researchers worry that if data are shared as a stand-alone product with a digital object identifier, this will not carry the same weight as peer-reviewed, theory-driven work based on those data. Related to that anxiety is the fear of "getting scooped" by other researchers, in which a different research team uses the available data to test a hypothesis the original authors had intended to explore in the future.

However, authors are generally not required to share all of the data collected in a project. For example, under the NIH Data Management and Sharing policy, researchers should "maximize the appropriate sharing of scientific data, which is defined as data commonly accepted in the scientific community as being of sufficient quality to validate and replicate the research findings" (Data Management and Sharing Policy | Data Sharing, n.d.). The spirit of these guidelines centers on analytical reproducibility: data only need to be shared in a format sufficient to recreate the analyses in a published paper. Authors can therefore share sufficient data to make their analyses reproducible, while retaining the raw data they intend to explore in other aims of a project.
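
As a sketch of what sharing "sufficient data" could look like in practice, the example below exports only the deidentified, analysis-ready variables that enter a published model, while the raw files remain in the lab's archive; all file and variable names are hypothetical.

```python
# Minimal sketch: exporting only the analysis-ready variables needed to
# reproduce a published result. File and variable names are hypothetical.
import pandas as pd

raw = pd.read_csv("full_raw_data.csv")  # raw data stay in the lab's archive

# Keep only the deidentified variables that enter the published models.
analysis_vars = ["participant_id", "group", "pretest_score", "posttest_score"]
shared = raw[analysis_vars].copy()
shared["change_score"] = shared["posttest_score"] - shared["pretest_score"]

shared.to_csv("shared_analysis_ready.csv", index=False)
```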

A final fear in this regard is that a new team of researchers may use the data with proper attribution, but not include the original researchers as coauthors of the new paper. That is, the data are successfully reused by a new team, but there is no collaboration between the new team and the old team. Unless coauthorship was somehow stipulated in a data use agreement, I think this is fair play by the new researchers. If someone can use my data in a creative way I did not think of, I still get recognition for my contribution of the data in the form of a citation, not as a coauthor. However, I do think community is a widely held scientific value (Merton, 1973), and in a practical sense, it really helps to have at least one of the original authors on your team (they probably know the data very well!), so my personal fear of someone wantonly using data I helped collect is low. I won't presume to speak for everyone, however, and if you have concerns about how people might use your data, you can always share the minimal amount of data needed for analytical reproducibility and manage access through repositories with different permissions.

Misuse, Misinterpretation, and Muddy Waters

Another concern with open data sharing is the potential for misuse or misinterpretation of data. When data are made publicly available, they can be accessed by individuals or groups who may not have the necessary expertise to analyze them correctly. The concern is that this can lead to the spread of misinformation, flawed conclusions, or the use of data for purposes the original researchers never intended. Depending on the sensitivity of the data, researchers can work with data repositories that place greater restrictions on access, ensuring that data only end up in the hands of qualified professionals. The journal Scientific Data maintains an excellent guide to both generalist and specialist data repositories (Data Repository Guidance | Scientific Data, n.d.).

I think the risk of misuse is relatively low in MCLD research. I don't think we have any analogues to climate change deniers trying to cherry-pick ecological/meteorological data, or anti-vaccine quacks digging through the Food and Drug Administration's Adverse Event Reporting System. I think the risk of misinterpretation is greater in our field. For instance, I personally possess the technical capacity to process and analyze biomechanical data, but I couldn't give you a theoretically driven interpretation of what it means, and I would probably get some very basic facts wrong. That is, however, where peer review comes in as an (admittedly leaky) filter to control quality in publishing. I may be able to fool myself that I understand biomechanics, but I doubt I could fool genuine experts. So, although misinterpretation is a concern, I am most concerned about open data muddying the waters in our field by lowering the cost of doing research.

Collecting data is difficult and time-consuming, and thus researchers (and funders) generally only want to invest resources in projects they think will be important. Open data reduce this cost, and as a result we may see an explosion of ill-conceived secondary analyses. Systematic reviews and meta-analyses are an illustrative example of this phenomenon: as more authors learned they could publish papers without having to collect data, there was an increase in low-quality and often redundant systematic reviews (Ioannidis, 2016). However, I think we can address this concern in two ways: one is practical and focused on data standards; the other is ethical and focused on individual action.

First, having more data will only muddy the waters if those data are disorganized. To borrow from Forscher's "Chaos in the Brickyard" allegory, having more bricks (data) to build your structure (test your theory) is only a problem when bricks are randomly made at high volume, leading to piles of bricks being valued above a carefully constructed edifice (what Forscher saw as the proliferation of low-quality experimental work; Forscher, 1963). However, if (meta)data are rigorously collected and archived according to the principles of Findability, Accessibility, Interoperability, and Reusability (i.e., FAIR data principles; Wilkinson et al., 2016), then those data become easy to find, understand, and use. That is, we could have an easily searchable catalogue of existing bricks (findable and accessible data) that you can repurpose to build your house (interoperability) without the need to commission a bunch of new bricks (reducing waste through reuse).
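
As one small, concrete step toward FAIR sharing, the sketch below writes a machine-readable metadata file (essentially a data dictionary) alongside a shared data set; the fields shown are generic assumptions rather than the required schema of any particular repository.

```python
# Minimal sketch: machine-readable metadata (a simple data dictionary) to
# accompany a shared data set, in the spirit of FAIR. Fields are illustrative.
import json

metadata = {
    "title": "Reach adaptation trials, healthy adults",  # hypothetical data set
    "creators": [{"name": "Example Lab", "orcid": None}],
    "license": "CC-BY-4.0",
    "files": ["shared_analysis_ready.csv"],
    "variables": {
        "participant_id": "Arbitrary participant code",
        "group": "training or control",
        "change_score": "Posttest minus pretest score (points)",
    },
    "keywords": ["motor learning", "open data"],
}

with open("dataset_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```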

Second, although having easier access to more data may lower the cost of doing a study, that does not necessarily lead to more low-quality work. While I acknowledge the role that incentives play in shaping individual action, I also believe that we are all autonomous agents with a reasonable degree of free will. If we value quality, rigor, and impact above quantity, publication, and notoriety, then open data allow us to do more thorough research, not less. As outlined in Table 1, having access to high-quality data can help us test the reproducibility and robustness of existing findings and perhaps even the replicability and generalizability of a finding if sufficient data are available. As Douglas Altman (1994) eloquently put it, "We need less research, better research, and research done for the right reasons." I wholeheartedly agree, and I see data sharing as in line with that statement, not opposed to it.

Acknowledgments

Lohse is an educational leadership team member for “Reproducible Rehabilitation” NIH/NICHD R25HD105583-01A1. Lohse would like to thank Matthew Miller, PhD, and James Finley, PhD, for comments on an earlier draft of this editorial.

References

• Altman, D.G. (1994). The scandal of poor medical research. BMJ, 308(6924), 283–284.

• Balas, E.A., & Boren, S.A. (2000). Managing clinical knowledge for health care improvement. Yearbook of Medical Informatics, 9(1), 65–70.

• Border, R., Johnson, E.C., Evans, L.M., Smolen, A., Berley, N., Sullivan, P.F., & Keller, M.C. (2019). No support for historical candidate gene or candidate gene-by-interaction hypotheses for major depression across multiple large samples. American Journal of Psychiatry, 176(5), 376–387.

• Chan, A.-W., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gøtzsche, P.C., Krumholz, H.M., Ghersi, D., & van der Worp, H.B. (2014). Increasing value and reducing waste: Addressing inaccessible research. The Lancet, 383(9913), 257–266.

• Choudhury, S., Fishman, J.R., McGowan, M.L., & Juengst, E.T. (2014). Big data, open science and the brain: Lessons learned from genomics. Frontiers in Human Neuroscience, 8, 239.

• Data management and sharing policy | Data sharing. (n.d.). Retrieved August 26, 2024, from https://sharing.nih.gov/data-management-and-sharing-policy

• Data repository guidance | Scientific data. (n.d.). Retrieved August 26, 2024, from https://www.nature.com/sdata/policies/repositories#general

• Enhancing reproducibility through rigor and transparency. (2023). https://grants.nih.gov/policy/reproducibility/index.htm

• Ferguson, A.R., Nielson, J.L., Cragin, M.H., Bandrowski, A.E., & Martone, M.E. (2014). Big data from small data: Data-sharing in the "long tail" of neuroscience. Nature Neuroscience, 17(11), 1442–1447.

• Forscher, B.K. (1963). Chaos in the Brickyard. Science, 142(3590), 339.

• Glasziou, P., & Chalmers, I. (2018). Research waste is still a scandal—An essay by Paul Glasziou and Iain Chalmers. BMJ, 363, Article k4645.

• Grant, J., Green, L., & Mason, B. (2003). Basic research and health: A reassessment of the scientific basis for the support of biomedical science. Research Evaluation, 12(3), 217–224.

• Hahnel, M., Smith, G., Schoenenberger, H., Scaplehorn, N., & Day, L. (2023). The state of open data 2023 [Report]. Digital Science.

• Huie, J.R., Almeida, C.A., & Ferguson, A.R. (2018). Neurotrauma as a big-data problem. Current Opinion in Neurology, 31(6), 702–708.

• Ioannidis, J.P.A. (2016). The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly, 94(3), 485–514.

• Johnson, E.C., Border, R., Melroy-Greif, W.E., de Leeuw, C.A., Ehringer, M.A., & Keller, M.C. (2017). No evidence that schizophrenia candidate genes are more associated with schizophrenia than noncandidate genes. Biological Psychiatry, 82(10), 702–708.

• Johnson, J.N., Hanson, K.A., Jones, C.A., Grandhi, R., Guerrero, J., & Rodriguez, J.S. (2018). Data sharing in neurosurgery and neurology journals. Cureus, 10(5), Article e2680.

• Lausch, A., Schmidt, A., & Tischendorf, L. (2015). Data mining and linked open data—New perspectives for data analysis in environmental research. Ecological Modelling, 295, 5–17.

• Macleod, M.R., Michie, S., Roberts, I., Dirnagl, U., Chalmers, I., Ioannidis, J.P.A., Salman, R.A.-S., Chan, A.-W., & Glasziou, P. (2014). Biomedical research: Increasing value, reducing waste. The Lancet, 383(9912), 101–104.

• Merton, R.K. (1973). The sociology of science: Theoretical and empirical investigations. University of Chicago Press.

• NOT-OD-21-013: Final NIH Policy for Data Management and Sharing. (n.d.). Retrieved August 13, 2024, from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-21-013.html

• Ohmann, C., Moher, D., Siebert, M., Motschall, E., & Naudet, F. (2021). Status, use and impact of sharing individual participant data from clinical trials: A scoping review. BMJ Open, 11(8), Article e049228.

• Olafson, E.R., Sperber, C., Jamison, K.W., Bowren, M.D., Jr., Boes, A.D., Andrushko, J.W., Borich, M.R., Boyd, L.A., Cassidy, J.M., Conforto, A.B., Cramer, S.C., Dula, A.N., Geranmayeh, F., Hordacre, B., Jahanshad, N., Kautz, S.A., Tavenner, B.P., MacIntosh, B.J., Piras, F., ... Kuceyeski, A.F. (2024). Data-driven biomarkers better associate with stroke motor outcomes than theory-based biomarkers. Brain Communications, 6(4), Article fcae254.

• Open Data, Software and Code Guidelines | Open Research Europe. (n.d.). Retrieved August 12, 2024, from https://open-research-europe.ec.europa.eu/for-authors/data-guidelines

• OSTP Issues Guidance to Make Federally Funded Research Freely Available Without Delay | OSTP. (2022, August 25). The White House. https://www.whitehouse.gov/ostp/news-updates/2022/08/25/ostp-issues-guidance-to-make-federally-funded-research-freely-available-without-delay/

• Røttingen, J.-A., Regmi, S., Eide, M., Young, A.J., Viergever, R.F., Årdal, C., Guzman, J., Edwards, D., Matlin, S.A., & Terry, R.F. (2013). Mapping of available health research and development data: What's there, what's missing, and what role is there for a global observatory? The Lancet, 382(9900), 1286–1307.

• Schloss, P.D. (2018). Identifying and overcoming threats to reproducibility, replicability, robustness, and generalizability in microbiome research. mBio, 9(3), Article e00525-18.

• Whitaker, K. (2017, September 26). Publishing a reproducible paper [Presentation]. figshare.

• Wilkinson, M.D., Dumontier, M., Jan Aalbersberg, I.J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L.B., Bourne, P.E., Bouwman, J., Brookes, A.J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C.T., Finkers, R., ... Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), Article 1.

• Winstein, C. (2018). Thoughts about the negative results of clinical trials in rehabilitation medicine. Kinesiology Review, 7(1), 58–63.

Lohse (lohse@wustl.edu) is corresponding author, https://orcid.org/0000-0002-7643-3887
