Responsible Use
The Leiden Manifesto
The Leiden Manifesto for Research Metrics was published in 2015 by five experts urging the responsible use of metrics; it is named after the conference where the idea came to fruition. The authors propose the following ten principles to guide research evaluation:
- Quantitative evaluation should support qualitative, expert assessment.
- Measure performance against the research missions of the institution, group, or researcher.
- Protect excellence in locally relevant research.
- Keep data collection and analytical processes open, transparent, and simple.
- Allow those evaluated to verify data and analysis.
- Account for variation by field in publication and citation practices.
- Base assessment of individual researchers on a qualitative judgment of their portfolio.
- Avoid misplaced concreteness and false precision.
- Recognize the systemic effects of assessment and indicators.
- Scrutinize indicators regularly and update them.
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a
Metric Tide Report
The Metric Tide, published in 2015 and commissioned by the Higher Education Funding Council for England (HEFCE), reports the findings of an independent review of the role of metrics in research assessment and management. Traditional metrics have long been used as indicators of research and researcher impact, but their use becomes problematic when they are taken out of context and accepted uncritically. The report holds that responsible metrics should be considered and understood along the following dimensions:
- Robustness - Is the metric using the best available and accurate data?
- Humility - Quantitative evaluation can complement, but not replace, expert assessment.
- Transparency - Is the collection of data and its analysis open to scrutiny?
- Diversity - Does the metric represent the landscape of research in any given field, and use appropriate indicators to support research and researchers?
- Reflexivity - Is the use of bibliometric analysis dynamic and open to change?
As the report puts it: “Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. If we as a sector can’t take full advantage of the possibilities of big data, then who can?”
Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. https://doi.org/10.13140/RG.2.1.4929.1363
SF DORA
The San Francisco Declaration on Research Assessment (SF DORA) recognizes the need to improve the ways in which the outputs of scholarly research are evaluated. The declaration was developed in 2012 during the Annual Meeting of the American Society for Cell Biology and published in 2013.
The declaration is a worldwide initiative covering all scholarly disciplines and stakeholders.
Signing the San Francisco Declaration on Research Assessment (DORA) is an important way for individuals and organizations to publicly affirm their commitment to improving research by strengthening research assessment.
The DORA roadmap for the next two years will focus on three strategic goals to enable signatories to take action:
- Increase awareness of the need to develop credible alternatives to the inappropriate uses of metrics in research assessment.
- Research and promote tools and processes that facilitate best practice in research assessment.
- Extend the reach and impact of DORA’s work across scholarly disciplines and in new areas of the world.
From SF DORA at www.sfdora.org.
Read the declaration at SF DORA: https://sfdora.org/read/.
Sign the declaration at SF DORA: https://sfdora.org/sign/.
Further Reading on Responsible Use: A Selected Bibliography
- Adair, S. M. (2006). Ethics in publishing: ghostwriting, conflicts of interest, and the impact factor. Pediatric Dentistry, 28(4), 309. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/16903437
- Alberts, B. (2013). Impact factor distortions. Science, 340(6134), 787. https://doi.org/10.1126/science.1240319
- Bohannon, J. (2016). Hate journal impact factors? New study gives you one more reason. Science. https://doi.org/10.1126/science.aag0643
- Charlier, P., Bridoux, V., Watier, L., Ménétrier, M., de la Grandmaison, G. L., & Hervé, C. (2012). Ethics requirements and impact factor. Journal of Medical Ethics, 38(4), 253–255. https://doi.org/10.1136/medethics-2011-100174
- Curry, S. (2018). Let’s move beyond the rhetoric: it’s time to change how we judge research. Nature, 554(7691), 147. https://doi.org/10.1038/d41586-018-01642-w
- Gallagher, A. (2011). The ethics of impact factors. Nursing Ethics, 18(1), 3–5. https://doi.org/10.1177/0969733010388353
- Heyeres, M., Tsey, K., Yang, Y., Yan, L., & Jiang, H. (2018). The characteristics and reporting quality of research impact case studies: A systematic review. Evaluation and Program Planning, 73, 10–23. https://doi.org/10.1016/j.evalprogplan.2018.11.002
- Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a
- Huggett, S. (2013). Journal bibliometrics indicators and citation ethics: a discussion of current issues. Atherosclerosis, 230(2), 275–277. https://doi.org/10.1016/j.atherosclerosis.2013.07.051
- Jones, J. F. X. (2013). The impact of impact factors and the ethics of publication. Irish Journal of Medical Science, 182(4), 541. https://doi.org/10.1007/s11845-013-1014-y
- Simons, K. (2008). The misused impact factor. Science, 322(5899), 165. https://doi.org/10.1126/science.1165316
- Teixeira da Silva, J. A. (2017). The ethics of peer and editorial requests for self-citation of their work and journal. Medical Journal, Armed Forces India, 73(2), 181–183. https://doi.org/10.1016/j.mjafi.2016.11.008