Wednesday, October 08, 2014

Journals of Nature and Science

Joe Esposito's recent post on the Scholarly Kitchen prompted me to post the following proposal, which I have discussed with various people, but which has no takers yet. But who knows what the future holds...

I called the proposed system JONAS (for 'Journals Of Nature And Science' – a working title, obviously). It is, I think, a new approach to open access publication of peer-reviewed scientific literature. If it isn't, I've missed something (entirely possible).

JONAS is about establishing a publishing system that addresses:
  • Open access
  • Fair and efficient peer review
  • Cost of publishing
  • Speed of publishing
  • Publication of negative/null results

Open Access — The JONAS publishing system focuses on the superb technical publication, in various formats/versions, of peer-reviewed articles for optimal machine and human readability and re-use.

Fair and efficient peer review — Anonymous peer review has problems around transparency, fairness, thoroughness, speed, publisher bias, specious requests for further experiments or data, and possibly more. JONAS is a system using signed, pre-publication peer review, arranged by the author(s) (many publishers ask authors whom to invite to review their papers anyway) and merely verified by the publisher (peer review by endorsement). Reviews would be open, published with the article they endorse, and non-anonymous, under the rules that peer-endorsers must be active researchers and must not be, nor within the past five years have been, at the same institution as, or a co-author of, any of the authors. Such a peer-review-by-endorsement system is likely to be at least as good as, and quite probably better than, the currently widespread ‘black box’ of anonymous peer review. As reviews/endorsements would be signed and non-anonymous, there is very little danger of sub-standard articles being published (no worse than is currently the case anyway), as endorsers/reviewers would not want to put their reputations at risk. The review process between authors and endorsers is likely to be iterative, resulting in improvements to the original manuscripts. “Author-arranged” may also include peer review arranged on behalf of the authors by services specifically set up for that purpose, as long as the reviewers are not anonymous and conform to the JONAS rules. The LIBRE service (currently in prototype) is one example.
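
As a purely illustrative sketch of how a publisher might verify these endorsement rules – the record fields, and the interpretation of the five-year window, are my own assumptions, not a JONAS specification – such a check could start as simply as this:

```python
from datetime import date

def endorser_eligible(endorser, authors, window_years=5, today=None):
    """Check the rules above: an endorser must be an active researcher and
    must not be, nor within the last `window_years` have been, at the same
    institution as, or a co-author of, any of the authors."""
    cutoff = (today or date.today()).year - window_years

    if not endorser["is_active_researcher"]:
        return False
    for author in authors:
        # Shared current affiliation?
        if endorser["affiliations"] & author["affiliations"]:
            return False
        # Shared affiliation left less than `window_years` ago?
        recent = {inst for inst, left in endorser["past_affiliations"]
                  if left >= cutoff}
        if recent & author["affiliations"]:
            return False
        # Co-authorship within the window?
        joint_years = endorser["coauthored_years"].get(author["orcid"], [])
        if any(y >= cutoff for y in joint_years):
            return False
    return True

# Example with made-up records (the ORCID is a placeholder):
author = {"orcid": "0000-0002-XXXX-XXXX", "affiliations": {"University A"}}
endorser = {
    "is_active_researcher": True,
    "affiliations": {"University B"},
    "past_affiliations": {("University C", 2006)},  # (institution, year left)
    "coauthored_years": {},                          # ORCID -> years of joint papers
}
print(endorser_eligible(endorser, [author]))  # True
```

In practice the verification step would presumably draw on ORCID records and publication databases rather than self-reported fields, but the rule set itself is this small.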

Cost of publishing — A system like this can be very cost-effective for authors. The technical costs of proper publishing are but a fraction of the cost usually quoted for organizing and arranging peer review. A first indication is that an amount of the order of £100–150 per article could be sustainable, given sufficient uptake. Tiered charges should be considered, depending on the state of the manuscript when submitted. If the manuscript needs very little work to bring it up to proper publishing standards, or if the author doesn’t want or need those services, the cost could be very low indeed.

Speed of publishing — Since the peer-review-by-endorsement process has already taken place before the article arrives at the publisher, publication can ensue within days, even hours, depending on the state of the manuscript.

Requirements for manuscripts: ORCIDs for authors and reviewers/endorsers; inclusion of (permanent links to) datasets used, underlying data for graphs, a section “details for replicability and reproducibility” with clear and unambiguous identification of materials used, including reagents, software and other non-standard tools and equipment.
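
To make these requirements concrete, here is a minimal sketch of what a submission record covering them might look like; the structure and field names are illustrative assumptions of mine, not a defined JONAS schema, and all identifiers are placeholders:

```python
# Hypothetical submission record; every identifier below is a placeholder.
submission = {
    "title": "An example article",
    "authors":   [{"name": "A. Author",   "orcid": "0000-0002-XXXX-XXXX"}],
    "endorsers": [{"name": "B. Endorser", "orcid": "0000-0003-XXXX-XXXX"}],
    "datasets": [
        # Permanent links to datasets used and to data underlying graphs.
        "https://doi.org/10.XXXX/example-dataset",
    ],
    "replicability_details": {
        # Clear and unambiguous identification of materials used.
        "reagents":  ["..."],
        "software":  ["..."],
        "equipment": ["..."],
    },
    "licence": "CC-BY",
}
```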
 
Input: Properly endorsed articles would be accepted in the form of Word, Pages, (La)TeX, XML, HTML or Markdown files, with Excel or CSV for data, and high-resolution image files (where possible as scalable vector graphics), attached to emails or submitted via a simple upload site.

Output: Articles would be published in XML, HTML, PDF, ODF and ePub formats, as much as possible semantically enriched and aesthetically formatted, plus Excel/CSV for data (tables would be extractable from the PDFs and rendered in Excel with Utopia Documents, software that would be freely supplied).
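
Purely as an illustration of how such multi-format output could be produced with off-the-shelf tools – assuming pandoc is installed, plus a LaTeX engine for the PDF route; this is a sketch, not a description of an actual JONAS toolchain:

```python
import subprocess
from pathlib import Path

def render_formats(source_file, outdir="published"):
    """Convert an accepted manuscript into several output formats.
    A sketch only: real production would add semantic enrichment,
    validation and styling on top of the raw conversion."""
    src, out = Path(source_file), Path(outdir)
    out.mkdir(exist_ok=True)
    targets = {
        "html": [],                # human-readable web version
        "pdf":  [],                # requires a LaTeX engine
        "epub": [],
        "odt":  [],                # ODF
        "xml":  ["--to", "jats"],  # JATS, a common archival XML format
    }
    for ext, extra in targets.items():
        dest = out / f"{src.stem}.{ext}"
        subprocess.run(["pandoc", str(src), "-o", str(dest), *extra], check=True)

# Example: render_formats("endorsed_article.docx")
```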

Commenting and post-publication review (signed comments and reviews only) would be encouraged for all articles, with links to comments provided with each article. Comments may be made on other sites, in which case they would be linked to as well. Anonymous comments would be ignored.

Access Licences: CC-BY or CC0 — DOIs for the articles, and where appropriate for individual elements within articles, would be assigned/arranged by JONAS.

The core of the JONAS system would effectively be to have OA journals with a low-cost structure and superb, highly optimized technical quality of the published articles. The principal difference from other OA journals would be the pre-arranged open peer review ("peer-review-by-endorsement"), organised by the authors themselves according to a set of rules that provides a reasonable level of assurance against reviewer bias (because of its openness and non-anonymity, actually more assurance than is provided by the usual anonymous peer review as widely practiced). Since arranging peer review is one of the major costs of any publisher (mostly staff costs), leaving that part of the publishing process in the hands of researchers and the academic community can make a great difference to the cost of publication. So far, efforts to reduce the cost of publishing have concentrated on technical issues. Changing the mechanism (emphatically not the principle) of peer review offers much greater scope for cost reduction.

JONAS’ job would be to take such peer-endorsed articles and turn them into professionally published and complete (including data and metadata) documents, adhering to all the relevant technical, presentational and unique-identifier standards, in a number of formats, linked and linkable to databases and other relevant information, human- and machine-readable and suitable for widespread usage, for text- and data-mining, for structured analysis (incl. semantic analysis) and further knowledge discovery, and, crucially, for long-term preservation in repositories and archives of any kind.

An added service could be that manuscripts submitted before peer endorsement has been secured would be placed, ‘as is’, on JonasPrePubs, a ‘preprint’ server, at no cost. This could help to secure priority (a kind of ‘prophylactic’ against hijacking of ideas – which would never happen in science, of course, but better safe than sorry, right?).

The JONAS publishing system would also be superbly suited to scientific societies and other groupings that wish to have their own journal. Such a journal could be fully integrated in the JONAS system, provided the manuscripts are submitted fully peer-endorsed or peer-reviewed (whether arranged by the author(s) or by the scientific society in question). The charges per manuscript would, I imagine, be the same for individual authors and for societies wishing to publish their journals in the JONAS system.

The JONAS methodology could, of course, be implemented on various publishing platforms.

Jan Velterop

Monday, September 08, 2014

Does 'Open Access' include reuse?

At the end of 2001, a number of people (me included) came together in Budapest and set out to give a name and a definition to the emerging notion that research results, particularly those obtained with public funds, should be available and usable by anybody, anywhere. There wasn’t an agreed term for that notion – ‘free online scholarship’ (FOS) and ‘free access’ were among the terms relatively frequently used – and in Budapest we settled on the term ‘open access’. The meeting in Budapest resulted in the Budapest Open Access Initiative (BOAI), and in the declaration issued a few months later we explained what we meant by ‘open access’ to the scholarly peer-reviewed research literature:
By "open access" to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

While this definition has a flaw – there is no mention of immediacy in it – it clearly does include the right to reuse.

So why has there been a recantation by one of the original signatories of the BOAI definition (perhaps by more than one, but of that I don’t know, and I doubt it)? And why has the BOAI definition been watered down, even adulterated, by some others? ‘Free access’, ‘gratis access’, ‘public access’, etc. all disregard reuse, a crucial element of the notion of ‘open access’ and of its BOAI definition (as well as of the Bethesda and Berlin Statements on OA – “The author(s) and copyright holder(s) grant(s) to all users a free, irrevocable, worldwide, perpetual right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship.”). The Creative Commons Attribution licence (CC-BY) best captures the intention of these definitions.

What are the motives of those who don’t like CC-BY and the reuse element of the BOAI/Bethesda/Berlin definitions and do what they can to water it all down to access without reuse?

Might these be some of them?
  • Expediency – giving up difficult to reach ideals for potentially easier to reach, though sub-optimal, goals;
  • Appeasement – giving in to established powers and processes;
  • Putting career advancement above the advancement of science;
  • General contrarianism.
Quite possibly a combination of these, and more. Let’s have an open dialogue, including, as John Wilbanks suggested, one “about the ways publishers are exploiting green to undermine OA”.

Comments welcome.

Jan Velterop

Thursday, September 04, 2014

Achieving True Open Access Ain’t Easy

In December of 2001, a number of people who wanted to increase the efficacy and usefulness of scholarly communication, particularly of research results published in the peer-reviewed journal literature, came together in Budapest. A consensus quickly emerged as to what that would mean:
Peer-reviewed journal articles should be freely available on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

We called it “Open Access”, and in February of 2002 the Budapest Open Access Initiative (BOAI) statement was published. It is fair to say that we (I was one of them) probably underestimated the difficulty of reaching the goal we set ourselves. It was – and still is – very difficult.

Shortly after, in December of 2002, the CC-BY (Creative Commons Attribution) licence was publicly released, which captured the letter and spirit of the BOAI notion of Open Access very well. For a while, Open Access and CC-BY were, to all intents and purposes, synonymous.

Apart from stating a goal, we also came up with two strategies to achieve it (later called ‘green’ and ‘gold’, respectively):
  • Self-archiving by the author(s), in open electronic archives or repositories, of manuscript versions of articles (to be) published in traditional subscription journals – later called the ‘green’ road;
  • Publishing ‘born-open-access’ articles in journals set up to provide open access to the formally published version at the point of publication – later called the ‘gold’ road.

The strategies were straightforward, it seemed. That proved to be an illusion. Strategy one, self-archiving (‘green’), was based on the idea that authors’ manuscripts, even after they had been peer-reviewed and accepted by subscription journals, were covered by the authors’ copyright, and that the authors could therefore do with them what they wanted, including posting the manuscripts in open repositories. That was correct, of course, up until the moment the authors transferred the copyright in their articles to the publishers. Yet many publishers (reluctantly) allowed this practice, partly because they had long allowed it in areas such as physics, where a long-standing habit of preprint publication existed (arXiv.org) that didn’t appear to harm their subscriptions, and partly out of the conviction that the open repository landscape would be chaotic, deposit as well as access cumbersome, and repositories would contain all manner of content with all manner of access restrictions mixed in with open access material, giving institutional and corporate users of the journals an incentive to stick with their subscriptions. That situation has changed very little. Although it has gradually become easier to find a freely accessible version of many an article, subscription levels have, on the whole, held up. And freely accessible ‘green’ articles are often not covered by a CC-BY licence and thus not freely reusable in the way the BOAI intended. When copyright has been transferred to the publisher, the author cannot subsequently attach a CC-BY licence to the version deposited in an open repository. Were that possible, and habitually done, ‘green’ might be true Open Access. As it is, ‘green’ articles are free to read (gratis access), but rarely free to reuse.

But strategy two didn’t turn out to be straightforward either. The thought was that the only difficulty to overcome was the necessary cost. Some journals are being kept afloat by subsidies, and many funding agencies allow ‘article processing charges’ (APCs) to be paid out of grants, within reason. So it seemed the cost hurdle could largely be taken, except for unfunded, impecunious authors, to whom many journals offer APC waivers. Open Access, i.e. articles published with a CC-BY licence, would result. That straightforwardness proved an illusion, too. The term Open Access is not an officially standardized one, and various publishers have started to call articles Open Access even though restrictions apply that go beyond CC-BY, such as non-commercial (NC) clauses. Yet they nonetheless require author-side payment of APCs. Some even charge ‘basic’ APCs for restricted access, with APC top-ups for true Open Access CC-BY licences. NC clauses potentially give the publisher the opportunity to exploit the article further on an exclusive basis (e.g. reprints) and realize more income than just the APC. I say ‘potentially’, because the sale of reprints is a commercial activity, forbidden by NC, unless copyright has been transferred to the publisher (in which case commercial exploitation is the publisher’s right) or there is an exclusive licence in place whereby the author-copyright-holder gives the publisher the right to do so. An NC clause means, in countries like Germany for instance, that the article in question cannot be used for educational purposes unless explicit permission is obtained, which makes the hurdle, in those circumstances, practically identical to the “all rights reserved” of plain copyright. The upshot is that ‘gold’ is also not always Open Access in the way the BOAI intended.

Since Open Access has become an ambiguous term, you cannot trust the label to mean what you think it does, and certainly not that it allows you to reuse the article. Only CC-BY does that (and CC-zero, which does away with attribution as well – suitable and appropriate for data).

Where do we go from here?

FFAR, I would hope. For data, the concept of FAIR is being proposed (Findable, Accessible, Interoperable, and Reusable). For journal literature, ‘interoperable’ may not be a useful notion, so I’d like to modify the idea to Findable, Freely Accessible and Reusable.

How? Well, ‘gold’ publication with CC-BY is a good way to achieve it, but there remains the hurdle of APCs. The International Council for Science, ICSU, has recently issued a report, in which they advocate the following goals for Open Access:
  1. free of financial barriers for any researcher to contribute to;
  2. free of financial barriers for any user to access immediately on publication;
  3. made available without restriction on reuse for any purpose, subject to proper attribution;
  4. quality-assured and published in a timely manner; and
  5. archived and made available in perpetuity.

Goals 1 and 2 mean that the costs of goals 4 and 5 need to be carried by parties other than the user or the author. For authors funded by agencies that support Open Access and are willing to bear the APC costs, there is no barrier, but of course, not every author is so funded.

There is no easy way out of this. But it’s not impossible, in principle. However, the deeply ingrained conservatism of the scholarly community, and particularly of scholarly officialdom, is in the way. (You’d think that ‘pushing the envelope’ is endemic to science, but in reality it is applied to knowledge, not to communicating that knowledge.) Imagine the following scenario:
  • The authors arrange for peers they trust to review their articles and to openly endorse them as worthy of publication;
  • The authors then publish their article, properly formatted (I’m sure services would spring up for those who’d rather not do that themselves) and accompanied by the open endorsements, on one of the many free (blog) platforms available, under a CC-BY licence.

Of course, permanency and archiving in perpetuity are not guaranteed, but that used to be the responsibility of libraries in the print era, and they might wish to take on that responsibility again for the electronic literature. Central repositories like arXiv, bioRχiv, PubMedCentral, etc. could do that, too.

I’m sure someone could come up with modifications to this scenario that would make it more practicable, technically robust, and such. But the main hurdle to take is academic officialdom, in particular the Impact Factor counters, who would have to accept this kind of publication for career and funding purposes.

Achieving true Open Access ain’t easy. So much is clear.

Jan Velterop

Friday, March 21, 2014

Proposed open access symbol

I have proposed a new Unicode symbol to denote true open access, applied for instance to scholarly literature, in a similar way to that in which © and ® denote copyright and registered trademarks respectively. The proposed symbol is an encircled lower-case letter a, specifically in a font where the a has a ‘tail’, as in fonts like Arial and Times, for instance, (a), and not as in a font like Century Gothic (without the ‘tail’, as it were).

My proposal should be on the Unicode discussion list (http://www.unicode.org/consortium/distlist-unicode.html), and I am soliciting support, and input from technically-minded as well as legally-minded open access supporters.

This is the symbol I have in mind:

[image: the proposed open access symbol – an encircled lower-case ‘a’ with a ‘tail’]

Jan Velterop

Wednesday, December 11, 2013

Lo-fun and hi-fun

I have recently been talking to some major (and minor) publishers about what they could do with regard to open access, given the increasing demand, even if converting to ‘gold’ open access models is, in their view, not realistic for them. I suggested that they should make human-readable copies of articles freely accessible immediately upon publication. Access to human-readable articles would of course not satisfy everybody, but it would satisfy the ‘green’ OA crowd, if I may assume that Stevan Harnad is their prime spokesperson. He dismisses machine-readability and reuse as distractions from his strategy of ‘green’ open access, and he even supports embargoes, as long as articles are self-archived in institutional repositories, which is his primary goal. Human-readable final published versions, available directly upon publication, would be an improvement on that. It would also be likely to satisfy the occasional reader from the general public who wishes to access a few scientific articles.

How could those publishers possibly agree to this? Well, I told them, they could reconsider their view that there is a fundamental difference between the published version of an article and the final, peer-reviewed and accepted author manuscript (their justification for allowing the author manuscript to be self-archived). There may well be a difference, of course, and there often is, but it is not likely to be a material one in the eyes of most readers. Instead of making much (more than there usually is) of any differences in content, they could distinguish between low-functionality and high-functionality versions of the final published article: the ‘lo-fun’ version just suitable for human reading (the print-on-paper analogue), and the ‘hi-fun’ version suitable for machine-reading and text- and data-mining, endowed with all the enrichment, semantic and otherwise, that the technology of today makes possible. The ‘lo-fun’ version could then be made freely available immediately upon publication, on the assumption that it would not be likely to undermine subscriptions, and the ‘hi-fun’ version could be had on subscription. Librarians would of course not be satisfied with such a ‘solution’.

Although initially greeted with interest, the idea soon hit a stone wall. No one has explicitly said that they would never do this, but the subsequent radio silence made me conclude that among the publishers I talked with the fear might have emerged that a system with immediate open access to a ‘lo-fun’ version, accompanied by a ‘hi-fun’ version paid for by subscriptions, would expose the relatively low publisher-added value, both in terms of people’s perceptions and in terms of what they would be prepared to pay for it. That fear is probably justified, I have to give them that.

There is no doubt that formal publication adds value to scientific articles. The success of the ‘gold’ open access publishers, where authors or their funders are paying good money for the service of formal publication, is testament to that. There must be a difference – of perception at the very least – between formally published material and articles ‘published’ by simply depositing them in an open repository. That added value largely consists of two elements: 1) publisher-mediated pre-publication peer review and 2) technical ‘production’, i.e. making the article sufficiently standardised, correctly coded (e.g. no ß where a β is intended), ‘internet- and archive-proof’, rendered into several user formats, such as PDF, HTML and mobile, aesthetically pleasing where possible, interoperable, search-engine optimised, and so forth. The first element is mostly performed by the scientific community, without payment, and although the publisher organises it, that doesn’t amount to a substantial publisher-added value in the common perception. The second element, on the other hand, is true value added by the publisher, is seen as such by reasonable people, and it is entirely justifiable for a publisher to expect to be paid for it. There are some authors who could do this ‘production’ themselves, but the vast majority make a dog’s dinner of it when they try.
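
As a tiny illustration of the ‘correctly coded’ point: the ß-versus-β pair above is easy to confuse visually, but the two resolve to entirely different Unicode code points, which is exactly the kind of thing technical production has to catch. A check might start as simply as this (a sketch, not any publisher’s actual workflow):

```python
import unicodedata

# The visually similar pair from the example above has very different
# Unicode code points and character names.
for ch in ("ß", "β"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+00DF  LATIN SMALL LETTER SHARP S
# U+03B2  GREEK SMALL LETTER BETA
```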

There is of course a third element in the equation: marketing. Marketing is responsible for brand and quality perception. Quality mainly comes from good authors choosing to submit to a journal. Getting those good authors to do that is in large part a function of marketing. The resulting brand identity, sometimes amounting to prestige, is also an added value that a self-published article, even if peer-reviewed, lacks. But alas, it is not commonly seen to be an important value-add that needs to be paid for.

Having 'lo-fun' and 'hi-fun' versions of articles makes the publishers’ real contribution explicit. That’s the rub, of course.

Back to ‘gold’, I’m afraid. Or rather, not so afraid, as ‘gold’ OA doesn’t have any of the drawbacks of ‘lo-fun’. Fortunately, ‘gold’ is increasingly proving to be a healthily viable and sustainable business model for open access, at least as long as the scientific community sets so much store by publisher-mediated pre-publication peer review (see the previous post for my thoughts on that).

Jan Velterop

Tuesday, November 05, 2013

Essence of academic publishing

Let me start with a bit of context, all of which will be known, understood and widely discussed. The blame for the unaffordability of the ever-increasing amount of scholarly literature, be it because of high subscription prices or article processing fees for ‘gold’ open access, is often laid at the door of the publishers.

The blame, however, should lie with the academic preoccupation with the imperative of publisher-mediated prepublication peer review (PPR).

Of course, publishers – subscription-based ones as well as open access outfits – have a business that depends to a very large degree on being the organisers of PPR, and few of them would like to see the imperative disappear. The ‘need’ – real or perceived – for publisher-mediated PPR in the academic ecosystem is the main raison d’être of most publishers. And it is responsible for most of their costs (personnel costs), even though the reviewing itself is actually carried out by academics, not publishers. The technical costs of publishing are but a fraction of that, at least for electronic publishing (print and its distribution are quite expensive, but should be seen as an optional service, not as part of the essence of academic publishing).

Despite its being the imperative in Academia, publisher-mediated PPR has flaws, to say the least. Among the causes for deep concern are its anonymity and general lack of transparency, its highly variable quality, and the unrealistic expectations of what peer review can possibly deliver in the first place. The increasing number of journal articles being submitted is not making the process of finding appropriate reviewers any easier, either.

Originally, PPR was a perfectly rational approach to ensuring that scarce resources were not spent on the expensive business of printing and distributing paper copies of articles that were indeed not deemed to be worth that expense. Unfortunately, the rather subjective judgment needed for that approach led to unwelcome side effects, such as negative results not being published. In the era of electronic communication, with its very low marginal costs of dissemination, prepublication filtering seems anachronistic. Of course, initial technical costs of publishing each article remain, but the amounts involved are but a fraction of the costs per article of the traditional print-based system, and an even smaller fraction of the average revenues per article many publishers make.

Now, with the publishers’ argument of avoiding excessive costs of publishing largely gone, PPR is often presented as some sort of quality filter, protecting readers against unintentionally spending their valuable time and effort on unworthy literature. Researchers must be a naïve lot, given the protection they seem to need. The upshot of PPR seems to be that anything that is peer reviewed before publication, and does get through the gates, is to be regarded as proper, worthwhile, and relevant material. But is it? Can it be taken as read that everything in peer-reviewed publications is beyond doubt? Should a researcher be reassured by the fact that it has passed a number of filters that purport to keep scientific ‘rubbish’ out?

Of course they should. These filtering mechanisms are there for a reason. They diminish the need for critical thinking. Researchers should just believe what they read in ‘approved’ literature. They shouldn’t just question everything.

Or are these the wrong answers?

Isn’t it time that academics who rely on PPR ‘quality’ filters – and let us hope it’s a minority of them – stopped believing at face value what is presented in the ‘properly peer-reviewed and approved’ literature, and went back to the critical stance that is the hallmark of a true scientist: “why should I believe these results or these assertions?” The fact that an article is peer-reviewed in no way absolves researchers from applying professional skepticism to whatever they are reading. Further review, post-publication, remains necessary. It’s part of the fundamentals of the scientific method.

So, what about this: a system in which authors discuss, in depth and critically, their manuscripts with a few people whom they can identify and accept as their peers, and then ask those people to put their names to the manuscript as ‘endorsers’. As long as some reasonable safeguards are in place to ensure that endorsers are genuine, serious and without undeclared conflicts of interest (e.g. they shouldn’t be recent colleagues at the same institution as the author, or be involved in the same collaborative project, or have been a co-author in, say, the last five years), the value of this kind of peer review – author-mediated PPR, if you wish – is unlikely to be any less than that of publisher-mediated PPR. In fact, it’s likely to offer more value, if only because of its transparency and the expected reduction in the cost of publishing. It doesn’t mean, of course, that the peer-endorsers should agree with all of the content of the articles they endorse. They merely endorse their publication. Steve Pettifer of the University of Manchester once presented a perfect example of this. He showed a quote from Alan Singleton about a peer reviewer’s report [1]:

"This is a remarkable result – in fact, I don’t believe it. However, I have examined the paper and can find no fault in the author’s methods and results. Thus I believe it should be published so that others may assess it and the conclusions and/or repeat the experiment to see whether the same results are achieved."

An author-mediated PPR-ed manuscript could subsequently be properly published, i.e. put in a few robust, preservation-proof formats, properly encoded with Unicode characters, uniquely identified and identifiable, time-stamped, citable in any reference format, suitable for human- and machine-reading, data extraction, reuse, deposit in open repositories, printing, and everything else that one might expect of a professionally produced publication, including a facility for post-publication commenting and review. That will cost, of course, but it will be a fraction of the current costs of publication, be they paid for via subscriptions, article processing charges, or subsidies. Good for the affordability of open access publishing for minimally funded authors, e.g. in the social sciences and humanities, and for the publication of negative results that, though very useful, hardly get a chance in the current system.

Comments welcome.

Jan Velterop


[1] Singleton, A. The Pain of Rejection. Learned Publishing, 24:162–163. doi:10.1087/20110301

Tuesday, February 05, 2013

Transitions, transitions


Although I am generally very skeptical of any form of exceptionalism, political, cultural, academic, or otherwise, I do think that scholarly publishing is quite different from professional and general non-fiction publishing. The difference is the relationship between authors and readers. That relationship is far more of a two-way affair for scholarly literature than for any other form of publishing.

Broad and open dissemination of research results, knowledge, and insights has always been the hallmark of science. When the Elseviers/Elzevirs (no relation to the current company of the same name, which was started by Mr. Robbers [his last name; I can’t help it] a century and a half after the Elsevier family stopped their business), among the first true ‘publishers’, started to publish scholarship, for example the writings of Erasmus, they used the technology of the day to spread knowledge as widely as was then possible.

In those days, publishing meant ‘to make public’. And ‘openness’ was primarily to do with escaping censorship. (Some members of the Elsevier family went as far as to establish a pseudonymous imprint, Pierre Marteau, in order to secure freedom from censorship). But openness in a wider sense — freedom from censorship as well as broad availability — has, together with peer-review, been a constituent part of what is understood by the notions of scholarship and science since the Enlightenment. Indeed, science can be seen as a process of continuous and open review, criticism, and revision, by people who understand the subject matter: ‘peers’.

The practicalities of dissemination in print dictated that funds had to be generated to defray the cost of publishing. And pre-publication peer review emerged as a way to limit the waste of precious paper and its distribution cost by weeding out what wasn’t up to the standards of scientific rigour and therefore not worth the expense needed to publish. The physical nature of books and journals, and of their transportation by stagecoach, train, ship, lorry, and the like, made it completely understandable and acceptable that scientific publications had to be paid for – usually by means of subscriptions. However, scientific information never really was a physical good. It only looked like one, because of the necessary physicality of the information carriers. The essence of science publishing was the service of making public. You paid for the service, though it felt like paying for something tangible.

The new technology of the internet, specifically the development of web browsers (remember Mosaic?), changed the publishing environment fundamentally. The need for carriers that had to be physically transported all but disappeared from the equation. The irresistible possibility of unrestrained openness emerged. But something else happened as well. With the disappearance of physical carriers of information, software, etc., the perception of value changed. The psychology of paying for physical carriers, such as books, journals, CDs and DVDs, is very different from the psychology of paying for intangibles, such as binary strings downloaded from the web, with no other carrier than wire, or optical cable, or even radio waves. When it comes to perceiving value, the human expectation — need, even — of physical, tangible goods in exchange for payment is very strong, though not necessarily rational, especially where we have been used to receiving physical goods in exchange for money for a very long time. That is not to say that we wouldn’t be prepared to value and to pay for intangibles, like services. We do that all the time. But it has to be clear to us what exactly the value of a service is — something we often find more difficult to judge, reportedly, than for physical goods.

This is a conundrum for science publishers. Carrying on with what they are used to, but then presented as a service and not ‘supported’ by physical goods any longer, can look very ‘thin’. Yet it is clear that the assistance publishers provide to the process of science communication is a service par excellence. Mainly to authors ('publish-or-perish') and less so to readers (‘read-or-rot’ isn’t a strong adage). Hence the author-side payment pioneered by open access publishers (Article Processing Charges, or APCs).

Although it would be desirable to make the transition to open access electronic publishing swiftly, the reality of inertia in the ‘system’ dictates that there be a transition period and method. This transition is sought in many different ways: new, born-OA journals that gradually attract more authors; hybrid journals that accept OA articles against author-side payment; ‘green’ mandates that require authors to self-archive a copy of their published articles; unmediated, ‘informal’ publishing such as in arXiv; even publishing on blogs.

What may be an underestimated transition route — and no doubt a controversial one — is a model (a kind of ‘freemium’ model?) that’s gradually changing from restrictive to more and more open, extending the ‘free’, ‘open’ element and reducing the features that have to be paid for by the user. I don’t even think it is recognized as a potential transition model at the moment, but that may mean missing opportunities. Let’s take a look at an example. If you don’t have a subscription, you can’t see the full text. However, where only a short time ago you saw only the title and the abstract, you now see those, plus keywords and the abbreviations used in the article, its outline in some detail, and all the figures with their captions (hint to authors: put as much of the essence of your paper as you can in the captions). All useful information. It is not a great stretch to imagine that the references are added to what non-subscribers can see (indeed, some publishers already do that), and even the important single scientific assertions in an article, possibly in the form of ‘nanopublications’, on the way to eventual complete openness.

Of course, it is not the same as full, BOAI-compliant open access, but in areas where ‘ocular’ access is perhaps less important than the ability to use and recombine factual data found in the literature, it may provide important steps during what may otherwise be quite a protracted transition from toll-access to open access, from a model based on physical product analogies to one based on the provision of services that science needs.

Jan Velterop