Let me start with a bit of context, all of which is well known, understood and widely discussed. The blame for the unaffordability of the ever-increasing volume of scholarly literature, whether because of high subscription prices or article processing charges for ‘gold’ open access, is often laid at the door of the publishers.
The blame, however, should rest with the academic preoccupation with the imperative of publisher-mediated prepublication peer review (PPR).
Of course, publishers, subscription-based ones as well as open access outfits, run a business that depends to a very large degree on being the organisers of PPR, and few of them would like to see the imperative disappear. The ‘need’ – real or perceived – for publisher-mediated PPR in the academic ecosystem is the main raison d’être of most publishers. It is also responsible for most of their costs (chiefly personnel costs), even though the reviewing itself is carried out by academics, not by publishers. The technical costs of publishing, at least of electronic publishing, are but a fraction of that (print and its distribution are quite expensive, but should be seen as an optional service, not as part of the essence of academic publishing).
Despite being the imperative in academia, publisher-mediated PPR has flaws, to say the least. Among the causes for deep concern are its anonymity and general lack of transparency, its highly variable quality, and the unrealistic expectations of what peer review can possibly deliver in the first place. The increasing number of journal articles being submitted is not making it any easier to find appropriate reviewers, either.
Originally, PPR was a perfectly rational approach to ensuring that scarce resources were not spent on the expensive business of printing and distributing paper copies of articles not deemed worth that expense. Unfortunately, the rather subjective judgment that approach requires led to unwelcome side effects, such as negative results not being published. In the era of electronic communication, with its very low marginal costs of dissemination, prepublication filtering seems anachronistic. Of course, the initial technical costs of publishing each article remain, but the amounts involved are but a fraction of the per-article costs of the traditional print-based system, and an even smaller fraction of the average per-article revenues many publishers make.
Now, with the publishers’ argument about avoiding the excessive costs of publishing largely gone, PPR is often presented as some sort of quality filter, protecting readers against unintentionally spending their valuable time and effort on unworthy literature. Researchers
must be a naïve lot, given the protection they seem to need. The upshot of PPR
seems to be that anything that is peer reviewed before publication, and does
get through the gates, is to be regarded as proper, worthwhile, and relevant
material. But is it? Can it be taken as read that everything in peer-reviewed
publications is beyond doubt? Should a researcher be reassured by the fact that
it has passed a number of filters that purport to keep scientific ‘rubbish’
out?
Of course they should. These filtering
mechanisms are there for a reason. They diminish the need for critical
thinking. Researchers should just believe what they read in ‘approved’
literature. They shouldn’t just question everything.
Or are these the wrong answers?
Isn’t it time that academics who rely on PPR ‘quality’ filters – and let us hope it’s a minority of them – stopped taking at face value what is presented in the ‘properly peer-reviewed and approved’ literature, and returned to the critical stance that is the hallmark of a true scientist: “why should I believe these results or these assertions?” The fact that an article is peer-reviewed in no way absolves researchers of the duty to apply professional skepticism to whatever they are reading.
Further review, post-publication, remains necessary. It’s part of the
fundamentals of the scientific method.
So, what about this: a system in which authors discuss their manuscripts, in depth and critically, with a few people whom they can identify and accept as their peers, and then ask those people to put their names to the manuscript as ‘endorsers’. As long as reasonable safeguards are in place to ensure that endorsers are genuine, serious and without undeclared conflicts of interest (e.g. they shouldn’t be recent colleagues at the same institution as the author, be involved in the same collaborative project, or have been a co-author in, say, the last five years), the value of this kind of peer review – author-mediated PPR, if you wish – is unlikely to be any less than that of publisher-mediated PPR. In fact, it’s likely to offer more value, if only because of its transparency and the expected reduction in the cost of publishing. It doesn’t mean, of course, that the peer-endorsers should agree with all of the content of the article they endorse; they merely endorse its publication. Steve Pettifer of the University of Manchester once presented a perfect example of this. He showed a quote from Alan Singleton about a peer reviewer’s report[1]:
"This is a remarkable result – in fact, I
don’t believe it. However, I have examined the paper and can find no fault in
the author’s methods and results. Thus I believe it should be published so that
others may assess it and the conclusions and/or repeat the experiment to see
whether the same results are achieved."
A manuscript that has undergone author-mediated PPR could subsequently be properly published, i.e. put into a few robust, preservation-proof formats, properly encoded in Unicode, uniquely identified and identifiable, time-stamped, citable in any reference format, suitable for human and machine reading, data extraction, reuse, deposit in open repositories, printing, and everything else one might expect of a professionally produced publication, including a facility for post-publication commenting and review. That will cost, of course, but it will be a fraction of the current cost of publication, whether paid for via subscriptions, article processing charges, or subsidies. That would be good for the affordability of open access publishing for minimally funded authors, e.g. in the social sciences and humanities, and for the publication of negative results, which, though very useful, hardly get a chance in the current system.
Comments welcome.
Jan Velterop
Interesting post, Jan. The system you propose is very similar to one that has been in operation at Biology Direct since it launched in 2006. Authors select reviewers from the journal's Editorial Board, who, if they choose to undertake the review, contribute named comments which appear as part of the final publication.
As you suggest, reviewers need not endorse all of the publication (in fact they are encouraged to be as critical as they feel necessary), but rather contribute comments which act as a guide to the literature and increase the transparency of the review process.
The rationale for the scheme is laid out in the launch Editorial from Editors-in-Chief Eugene Koonin, David Lipman and Laura Landweber here: http://www.biologydirect.com/content/1/1/1
More information on the process, and the safeguards currently in place, is available here: http://www.biologydirect.com/about.
Disclosure: I am employed by BioMed Central, which publishes Biology Direct.