Wicked rules can lead to a perverse community.
In science, the wicked rule has been evaluating scientists by how much they publish, not by the quality or merit of their work. As the editor of Science argues, judging researchers by the "mere number of a researcher's publications" disincentivizes risky and potentially ground-breaking research that lacks a short-term publication payoff. Yet this is exactly what is being done. And as a result, the number of scientific articles published every year has nearly tripled since 1990.
[Figure: Papers published per year. Source]
[Figure: Cumulative number of papers published in Biomedicine (source: PubMed via this).]
The same has happened in most countries and in most disciplines, although some have grown much faster than the average. Look :-)
[Figure: Annual number of papers in Yoga studies over time.]
Does this mean that our science is now three times better? Does it mean that our understanding of Nature grows three times faster than it did 25 years ago? Mmm. What it does mean is that we can now track only a small fraction of all the papers relevant to our research.
To compensate for the perversion inherent in this article-count approach, evaluators started weighting the count with the Journal Impact Factor, on the assumption that articles published in highly cited journals have a statistically higher chance of making an impact. To me, this is the perfect plot for a self-fulfilling prophecy. The warnings against this practice have become a clamor.
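For reference, the standard two-year impact factor is, roughly, a ratio computed per journal, not per article; nothing in it measures the quality of any particular paper published there:

$$
\mathrm{JIF}_{y} \;=\; \frac{\text{citations received in year } y \text{ by items the journal published in years } y-1 \text{ and } y-2}{\text{number of citable items the journal published in years } y-1 \text{ and } y-2}
$$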
As citation databases came online, another wicked parameter entered the scene:
h, the Hirsch index, was adopted nine years ago to overcome the raw publication-count criterion. It essentially grows linearly with one's professional age, yet it has become commonplace in the evaluation of proposals. And h keeps promoting multi-authored papers beyond reason, because it disregards both the number of authors and the position of the evaluated scientist in the author list (which usually reflects her or his relative scientific contribution to the study). Citations to an article of yours count the same whether you are the 1st or the 100th author. Therefore a paper with 100 authors has 100 times more impact on evaluations than a single-authored paper. And I am not being rhetorical here. Please, meet two of the highest-h scientists in Spain: 61k citations, 137 papers in 2013 alone, h = 112; 117k citations, 164 papers in 2013 alone, h = 155. I leave it to you to find the flaw.
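To make the flaw concrete, here is a minimal sketch of the h-index computation, using hypothetical citation counts. Notice that author counts and author positions never enter the calculation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # counts are sorted, so no later paper can
    return h

# Hypothetical example: a solo author and a member of a 100-author
# "author pool" with identical citation counts get identical h values,
# even though the pool member contributed a fraction of each paper.
solo_papers = [10, 8, 5, 4, 3]   # five single-authored papers
pool_papers = [10, 8, 5, 4, 3]   # five papers signed by 100 authors each

print(h_index(solo_papers))   # 4
print(h_index(pool_papers))   # 4
```

Variants that split the credit among co-authors have been proposed, but they are not the ones in common use in evaluations.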
Very predictably, the number of authors per paper has grown wildly, and former research groups have often turned into author pools, with the entire group signing every paper published by any of its members. One symptom of this is that few researchers dare to publish on their own today:
[Figure: Average number of authors per paper.]
[Figure: Percentage of single-authored papers over the last century.]
[Figure: Average number of articles per author. The left bar shows the average number of articles published by authors who stopped publishing within 15 years of their first publication. The blue bar on the right shows the articles published in the same timespan by researchers who continued publishing beyond 15 years, and the red bar on top shows those same researchers' articles after the 15th year. Researchers who continue publishing are those with high research output, and the output before the 15-year break contributes most of the overall totals. Source]
Ask any editor how many of their requests to review a manuscript are declined by peers, and you will learn that editors often end up doing the review themselves. There are too many papers for so few reviewers/authors. It is unsurprising, then, to find funny bugs like this in articles that were supposedly peer-reviewed.
Clearly, it is difficult to find objective (quantitative) criteria for quality. Alternatives such as subjectively judging the foreseeable impact of a given line of research are also risky. But other metrics exist (example); they just need to be adopted. And perhaps it is also time to question the reliance on objective parameters such as h when evaluating candidates.
Consequently, young researchers are pressured to publish as much as possible instead of as well as possible, which not only perverts the research system but also inflates a huge publication bubble. The warning lights have long been on. China has already realised the problem and may soon take action. Why not Europe? Will we wait until this bubble bursts?
Wicked rules pervert communities. So let's just adopt better rules.