Wednesday, January 11, 2012

Holy Cow, Peer Review

Looking at it as dispassionately as possible, one could conclude that peer review is the only remaining significant raison d’être of formal scientific publishing in journals. Imagine that scientists, collectively, decided that sharing results was of paramount importance (a truism), but that peer review was no longer considered important. If you imagine that, the whole publishing edifice would suddenly look very different. More like ArXiv (where, by the way, I found this interesting article).

A recent report estimates that the “total revenues for the scientific, technical and medical publishing market are estimated to rise by 15.8% over the next three years – from $26bn in 2011 to just over $30bn in 2014.” If we assume an annual output of 1 million articles, this revenue – which, for practical purposes, equals the cost to science of access to research publications – equates to a cost of $3000 per article, and even if the output is 1.5 million articles, it’s still $2000 per article.

So the real question is: is peer review worth that much? It’s not that peer review has no benefits at all; at issue is the cost to science of such benefits as there may be. And although post-publication peer review could easily be done, by those who feel the inclination to do so, when and where it seems worth the effort, it may not happen very often, of course, as there are few incentives. Isn't an endorsement system like ArXiv's a viable alternative?

Of course, ArXiv-oid publishing platforms also carry a cost, but per article it’s likely to be only a small fraction of the amounts mentioned above. In the case of ArXiv it is about $7 per article, each of which is also completely Open Access. Seven dollars! That’s the size of a rounding error on the amounts of $2000–$3000.

Peer review made sense in an era when publishing necessarily claimed expensive resources, such as paper to print on, physical distribution, shelf space in libraries, et cetera. One had to be careful and spend those resources on articles that were likely to be worth it, and even then restrict what was spent on individual articles by imposing maximum lengths and the like. Also, finding the articles worth reading was difficult and the choices and guidance journal editors and editorial boards made were welcome.

How all this has changed with the advent of the Web. There is hardly any need for restrictions on the number and length of articles anymore, and searching – not to mention finding – articles that are relevant to the specific project a researcher is working on has become dramatically easier. As a result, the filtering and selecting functions of journals have become rather redundant.

“All very well, but what about the quality assurance that peer review provides?” Well, it is debatable whether peer review provides that reliably, though I’m willing to accept that it might. However, given its costs, can we really not cope without this quality assurance, in the light of the benefits of universal and inexpensive Open Access that ArXiv-oid platforms could bring? Are we not coping right now? We all know that almost all articles eventually meet their accepting journal editor, and it’s difficult to imagine that every article we find with a literature web search is of sufficient ‘quality’ (whatever that means anyway) for our purposes. And yes, we will encounter ‘rubbish’ articles. Don’t we now, with nigh-universal peer review? But we deal with outliers in data all the time, and it is my conviction that we can deal with outliers in the literature just as well. In any case, ArXiv-oid platforms with an endorsement system will to a large degree prevent excesses.

Scientists are people, and as such not too well equipped to make completely rational choices. Besides, the ‘ego-system’ of qualifying for grants, tenure, et cetera, has its own rationality (akin to the prisoner's dilemma). But the prospect of being able to save tens of billions of dollars each year – savings that could be used for research, and that are not far off the annual NIH research budget, even after allowing generous sums for running ArXiv-oids with endorsement systems instead of peer review – must be food for some serious thought. Let's see if we can think this through. It's not fair to expect scientists themselves to break the cycle. But funding bodies?

I realise that what I'm proposing here is the 'furthest point', but that's where we have to hook up the tightrope, if we want to be able to traverse the chasm separating today from what might be, no?

Jan Velterop


  1. I share your interest in peer review and changing distribution systems. Just for the record, you need an extra 0 in your costs for article distribution: $30 billion STM revenues divided by 1.5 million articles would mean $20,000 per article. One thing to bear in mind, though, is that academic papers are about 55% of the total market for STM revenues. See the link at the bottom of this comment for an STM market analysis report. I wrote a blog post here about some thoughts on the journal system:

    STM market analysis report

    Richard Price, Founder,

    1. Thank you, Richard, for peer-reviewing my blog post, since that is effectively what you have done. You are right, I did drop a zero (to my eternal shame). So with that zero restored, and the total STM journal revenues corrected to about half of the mentioned total of $30 billion, that makes the price we collectively pay for each published peer-reviewed article something in the order of $10,000 (more if the number of articles published each year is actually less than 1.5 million).

      Your correction of my mistake makes it clear that the real difference between a system based on pre-publication peer review and one based on an endorsement system like ArXiv's, is much greater – about 5 times greater than I originally surmised.


    2. Looking at these figures again, and applying the 'sniff test' (intuition), the $10,000 per article seems wrong. Yet I did make a mistake in my calculation, so it must mean that either the $30 billion mentioned in The Bookseller refers to very much more than the revenues of STM journal publishing, or (and maybe 'and') the estimate of 1.5 million new articles a year is way too low. My intuition tells me the real cost to Academia of the average peer-reviewed published article is much closer to the $2000 I first mentioned than to the $10,000 that comes out of the calculation. The core of the argument stands, though: the cost of pre-publication peer review publishing is very much higher, per article, than what would realistically be possible with a more widespread endorsement system similar to the one ArXiv employs.
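
    The back-of-envelope arithmetic in this thread can be checked in a few lines (a sketch only; the revenue total, the 'about half' journal share, and the article count are the estimates discussed above, not independent figures):

```python
# Figures quoted in the post and comments (estimates, not verified data):
TOTAL_STM_REVENUE = 30e9    # projected 2014 STM revenues (The Bookseller)
JOURNAL_SHARE = 0.5         # journals as roughly half of total STM revenue
ARTICLES_PER_YEAR = 1.5e6   # assumed annual output of articles
ARXIV_COST_PER_ARTICLE = 7  # approximate per-article cost of ArXiv

def cost_per_article(revenue, articles):
    """Revenue divided by output: the implied cost to science per article."""
    return revenue / articles

all_stm = cost_per_article(TOTAL_STM_REVENUE, ARTICLES_PER_YEAR)
journals_only = cost_per_article(TOTAL_STM_REVENUE * JOURNAL_SHARE, ARTICLES_PER_YEAR)

print(f"All STM revenue:    ${all_stm:,.0f} per article")        # $20,000
print(f"Journal share only: ${journals_only:,.0f} per article")  # $10,000
print(f"Multiple of ArXiv's cost: {journals_only / ARXIV_COST_PER_ARTICLE:,.0f}x")
```

    Either way, the gap is three orders of magnitude: even the corrected $10,000 figure is more than a thousand times ArXiv's per-article cost.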

  2. Jan,

    I'm sure you know that paper, printing, and binding were never the majority of the costs, even way back when I started in STM publishing in the early 1990s, even when Mosaic was new.

    If peer and editorial review were simply "checking things over," then you might be right. However, journal editors also (or should) curate the content.

    Can researchers do this for themselves? Perhaps (although I have friends who are STM librarians who might debate that...:)).

    Given the mass of material being written—do they have the time to sort through it all themselves? Do they want to? Maybe it sounds appealing at first, but I was also reading posts today about consumer publishing, with readers complaining about having to wade through a huge mass of stuff that's not worth reading.

    Wouldn't it be more time-efficient to divide the labor? Have people who spend a large amount of time evaluating and sorting the material? The more you do it, the more expert at it you become (cf. Gladwell...).

    I'd suggest: Subjective judgment by people looked to FOR those judgments is one of the main value-adds of editorial work. (And if people are doing this as a profession, they need to earn a living at it.)

    Another main value-add is presentation—in the Internet world, that's Platform (and findability). Compare the presentation of the ArXiv with, say, the site aimed at commercial (non-Academic) Research and Development that Springer has been developing (Springer R&D). There's a lot of work done by a lot of developers, working full-time, to get even to this beta version.

    Scott Epstein
    (The above are my opinions only, and not Springer's.)

    1. Thanks for your comments, Scott. I wrote my post as a provocative one, of course. The reason why I think the matter requires attention is that I see a problem looming: the problem of what I call 'overwhelm'. Instead of dealing with that here, I have written a new post: The problem of 'Overwhelm'.

    2. Sorry, Scott, but I consider this "curation" to be of net negative value. If we go ahead and publish everything, I can pick what I want to read, using a combination of searching, personal recommendation and intelligent agents. It can be tailored to my own unique interest. By contrast, whatever choice an editor makes on behalf of all his readers can never be better than a compromise for me. Having humans do this for constituencies is silly when machines can do it for individuals.


  3. Shouldn't we first free access online to peer-reviewed papers before we think of freeing the papers from peer review? (Otherwise we might regret it, and peer review may simply have to be re-invented.)

    1. Stevan, thanks for your comment. I'm a fan of parallel processing, and in this case, parallel thinking. Opening access to peer-reviewed papers should just go on – and if at all possible, accelerate – while we (some of us) think further ahead. I'm talking about pre-publication peer review anyway. Scientific journal literature, including Open Access, green or gold, will be affected by the growing 'overwhelm' and the burden that puts on pre-publication peer-reviewers.

  4. We obsess over peer review the same way factories obsess over quality control. Our peer review mechanism is straight out of the industrial view of the world. What you describe is a post-industrial approach where we rely more on advanced computer technology, and less on linear and predictable processes.

    The idea is not new... See this older post of mine which refers back to an even older reference:

    Become independent of peer review

  5. "Looking at it as dispassionately as possible, one could conclude that peer review is the only remaining significant raison d’être of formal scientific publishing in journals."


    Imprimatur is the only value being offered by old-guard publishers. Imprimatur is the implied endorsement received by authors who publish in certain scientific journals, particularly in those that earned a high level of prestige during the pre-digital period of publication scarcity:

    Want to change the system? Your solution must have a plan for dealing with this problem, because all players (authors, funding agencies, tenure review committees, libraries, and readers) have a stake in maintaining this aspect of the status quo.