Problems with citation-based metrics
The biggest problem with this kind of metric is that it incentivizes scientific work in fields that are already ripe for receiving citations [@bhattacharya2020stagnation]. Now that a multitude of scientists are working on PCR, a paper using the technique falls into a community that will find it easy to cite and build upon. However, the very early papers on PCR may not have received many citations, simply because there was no field using the technique yet.
Therefore, the benefits do not go to the people who took risks early on, but to the people who built on their work and collected the citations. Two examples of this are the development of PCR and of CRISPR, both multi-decade endeavors that required a lot of preliminary work before they became successful biological tools.
I bet a lot of theoretical work falls into this pattern as well, with the difference that theoretical work tends to be much cheaper than running experiments.
Another problem with citation-based metrics is that they generate egoistic incentives, since a collaboration does not distribute citation credit evenly across the domains involved. If scientific collaboration yields better outputs than scientific competition, is it not plausible that citation-based metrics incentivize competition over collaboration?