My previous post, Peer Review, Holy Cow, was intended to be provocative, of course. The reason I think the matter requires attention, though, is that I see a problem looming: the problem of what I call 'overwhelm'. It looms on several levels. First of all, the capacity of the pool of reviewers able and willing to deal with the growing manuscript flow – several times over, given that many articles 'cascade' down the journal pecking order and need to be peer-reviewed at every stage – is reaching its limits. Secondly, and arguably more importantly, it is becoming increasingly difficult for researchers to read all that they might want to read. In many areas there is simply too much that's relevant.
In 1995 I wrote an article entitled "Keeping the Minutes of Science". I still take that phrase fairly literally. Research articles in journals are an account, for the record, of the research work done and the results achieved. Thinking along those lines (I realise that not everybody agrees with me) leads to a few consequences. 1) Articles are not primarily written for the reader-researcher, but are mainly an obligation for the author-researcher (though the two groups overlap to a very large degree). The adage is 'publish-or-perish', remember? It's not 'read-or-rot'. 2) As a result, it is only proper that payment for publication comes from the author's side. That is most fortunate, because it is also what makes Open Access possible. 3) 'Minutes' are often filed quickly and are not meant to be read widely. Only when there is a problem are they retrieved and perused.
You will have spotted the paradox of, on the one hand, articles not being read widely and, on the other hand, the desirability of Open Access. Well, that is where the 'minutes' analogy falls down, because articles do play a role in conveying the scientific knowledge and insights gained by the authors.
However, that doesn't mean that the information and knowledge bundled up in an article can be conveyed by it being read in the conventional way. This is where the problem of 'overwhelm' starts to bite. There is just too much to read for any normal researcher*. So, more and more, articles need to be 'read' by computers to help researchers ingest the information they contain. That is why Open Access is so important, especially the BOAI-compliant OA that is governed by licences such as CC-BY: it makes large-scale computer-aided reading possible.
The notion of using computers to read and ingest large amounts of research articles means that presentation becomes unimportant. Or rather, important in different ways. Computer-readability and interoperability come first, and visual elements such as layouts are practically irrelevant. In a comment on my Peer Review, Holy Cow post, Scott Epstein makes the point that visual presentation is a value-add that distinguishes publishers from outfits like ArXiv. The publishers' presentations are nice-to-have, of course, but far less 'need-to-have' than they seem to assume. Computers can deal very well with information presented in an ArXiv fashion, provided the computer interoperability is taken care of.
My personal contention is also that, once computers are involved, peer review is largely redundant beyond a basic level that can well be taken care of by an endorsement system. We shouldn't be so scared of potential 'rubbish' being included. Analyses of large amounts of information will most likely expose potential 'rubbish' in a way that makes it recognisable, similar to how 'outliers' are identified in scatter graphs (and ignored, if that seems the right thing to do, in the judgment of the researcher looking at the results).
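As a rough illustration of the outlier analogy – a sketch only, with invented data; the post does not prescribe any particular method – a simple interquartile-range rule is one common way a computer can flag values that sit far outside the bulk of a dataset:

```python
# Illustrative sketch: flag values lying more than k * IQR outside the
# quartiles of a dataset (the classic 'box plot' outlier rule).

def iqr_outliers(values, k=1.5):
    """Return the values lying more than k * IQR beyond the quartiles."""
    xs = sorted(values)
    n = len(xs)
    q1 = xs[n // 4]          # crude first-quartile estimate
    q3 = xs[(3 * n) // 4]    # crude third-quartile estimate
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in values if x < lo or x > hi]

# Invented example: one implausible figure among otherwise similar values.
measurements = [3, 5, 4, 6, 5, 4, 7, 5, 6, 4, 98]
print(iqr_outliers(measurements))  # flags the 98
```

Whether a flagged value is actually 'rubbish' remains, as the post says, a judgment for the researcher looking at the results.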
Scott also mentions 'curation'. Curation is important, but in my experience it is not necessarily – or even often – done by reviewers or journal editors. Scott seems to agree, as his choice of words demonstrates: "journal editors also (or should) curate the content." A system that allows for (semantic) crowd-curation of scientific assertions, which computers could subsequently recognise in any articles in which those assertions are repeated, is likely to be a better use of available resources. It has the added benefit of not relying on a small number of 'experts', but instead drawing on a much wider pool of potential expertise.
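To make the crowd-curation idea a little more concrete, here is a purely hypothetical sketch: a registry of crowd-curated notes keyed by a canonical form of each assertion, matched against article text by simple normalised string matching. All names, assertions, and notes below are invented for illustration; a real system would need far richer semantic matching.

```python
# Hypothetical sketch of crowd-curated assertions being recognised in
# article text. Data and assertions are invented for illustration.
import re

def normalise(text):
    """Lower-case and collapse whitespace, so matching is less brittle."""
    return re.sub(r"\s+", " ", text.lower()).strip()

# Crowd-curated annotations, keyed by a canonical form of each assertion.
curated = {
    normalise("Compound X inhibits enzyme Y"): "disputed: not replicated",
}

def flag_assertions(article_text):
    """Return curation notes for any known assertion repeated in the text."""
    body = normalise(article_text)
    return [(assertion, note) for assertion, note in curated.items()
            if assertion in body]

notes = flag_assertions("We confirm that compound X inhibits enzyme Y in vitro.")
print(notes)  # the disputed assertion is recognised and annotated
```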
I would like to finish with a few quotes from Richard Smith, former Editor of the British Medical Journal and currently a Board Member of the Public Library of Science:
"...peer review is a faith-based rather than evidence-based process, which is hugely ironic when it is at the heart of science."
"...we should scrap prepublication peer review and concentrate on postpublication peer review, which has always been the ‘real’ peer review in that it decides whether a study matters or not. By postpublication peer review I do not mean the few published comments made on papers, but rather the whole ‘market of ideas,’ which has many participants and processes and moves like an economic market to determine the value of a paper. Prepublication peer review simply obstructs this process."
*See "On the Impossibility of Being Expert" by Fraser and Dunstan, BMJ Vol 341, 18-25 December 2010