Problems with citation-based metrics


The biggest problem with this kind of metric is that it incentivizes scientific work in fields that are already ripe for receiving citations [@bhattacharya2020StagnationAndScientificIncentives]. Now that a multitude of scientists work on PCR, a paper using the technique lands in a community that finds it easy to cite and build upon. The earliest papers on PCR, however, may not have received many citations, simply because there was no field using the technique yet.

As a result, the benefits do not accrue to the people who took the risks early on, but to those who built on their work and collected the citations. Two examples are the development of PCR and of CRISPR: both were multi-decade endeavors that required a great deal of preliminary work before they became successful biological tools.

I bet a lot of theoretical work follows this pattern as well, with the difference that theory tends to be much cheaper than running experiments.

Another problem with citation-based metrics is that they also generate egoistic scientific incentives, since a potential collaboration will not distribute citations evenly across domains. If scientific collaboration yields better outputs than scientific competition, could it be that citation-based metrics incentivize competition over collaboration?


© 2021 Aquiles Carattino
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License