Sunday, January 22, 2006

Open access: facts and experiments

In a discussion on the liblicense list, Joe Esposito recently made an interesting comment:

"... the perception that [OA has a great deal of momentum] is causing some publishers to move more aggressively into OA "experiments" than they might otherwise if they had their facts straight."

Though many facts are probably there as a result of experiments, he is specifically referring to recent measurements of the number of articles that are available with OA, which seem to support the notion that there is growing momentum.

I can reassure him. Publishers (at least the ones I regularly talk to) do not 'aggressively' move into experiments. Quite the contrary, in my impression. The 'facts' he is talking about are also rather difficult to establish without experiments, so any confusion about them may be caused precisely because publishers do not move more aggressively into experiments.

What are some of the facts at hand that justify experiments?

1. A very small proportion of the officially published peer reviewed scientific literature is freely available with open access;

2. Much of the officially published peer reviewed scientific literature that is in some way available with OA, is OA only in the unofficial authors' versions;

3. Being peer reviewed by and officially published in a scientific journal is what gives most articles their 'authority';

4. This 'authority' also cleaves to unofficial authors' versions on open repositories, when the bibliographic reference of the official article in the journal is given;

5. The economic viability of traditionally published journals is almost entirely dependent on income from the dissemination function of the official publishing process (i.e. on the subscription income);

6. Unlimited dissemination is precisely what authors can achieve without the official publishing process and without appreciable cost, just by depositing their articles in open repositories.

Based on these facts, one could perhaps envision a set of experiments taking place. Before describing the experiments, though, I am making a few assumptions: a) conditions such as the social imperative 'publish or perish' and the need for peer review remain, and b) desiderata such as the economic self-sustainability of journals and the stability of the system also remain.

Experiments that could be done:

I. Co-existence
Can a journal thrive economically on subscription income if the total content is available with open access? Variables: author's version vs. published version; immediate OA vs. delayed OA; different disciplines.

Given that such experiments measure the willingness of librarians and their superiors to maintain paid subscriptions in the long term, even though all content is freely available with open access, they can potentially take a long time to yield conclusive results, if ever.

II. Article charges
Can a journal thrive economically solely on processing charges per article, paid by or on behalf of the authors, and thus deliver full and immediate open access? Variables: different fee levels; different disciplines.

Such experiments measure the willingness of funders and institutions to redirect the money now spent on subscriptions on behalf of readers to article processing charges paid on behalf of authors, thereby gaining the benefit of open access. At the same time they measure the willingness of institutions to accept a redistribution of the costs of scientific literature in a way that may make these costs higher than they were for research-intensive institutions (more authors) and lower than they were for teaching-intensive ones (more readers).
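The redistribution point can be sketched with a back-of-the-envelope calculation. The figures below are purely hypothetical assumptions of my own, not data from any survey; they only illustrate that under an article-charge model, an institution's costs track its authorship rather than its readership:

```python
# Purely illustrative, assumed numbers: under the subscription model both
# institutions pay roughly the same; under an article-charge model, costs
# follow the number of articles each institution's authors publish.

APC = 1500                    # hypothetical article processing charge per article
SUBSCRIPTION_SPEND = 300_000  # hypothetical current annual subscription spend

institutions = {
    "research-intensive": 400,  # assumed articles published per year
    "teaching-intensive": 40,
}

for name, articles_per_year in institutions.items():
    apc_spend = articles_per_year * APC
    delta = apc_spend - SUBSCRIPTION_SPEND
    print(f"{name}: article charges {apc_spend}, "
          f"subscriptions {SUBSCRIPTION_SPEND}, difference {delta:+}")
```

With these assumed numbers the research-intensive institution would pay more than before and the teaching-intensive one less, which is exactly the shift in cost distribution the experiment would have to measure acceptance of.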

III. Transitions and hybrids
Can a journal thrive economically by giving Academia (via the authors) the choice, per article, of either paying article charges for those to be published with open access, or transferring exclusive rights to the journal so that the cost of articles to be published traditionally can be recovered via subscriptions?

IV. Scale
This is perhaps feasible as a thought experiment. If experiment I, II, or III succeeds on the scale of one or a few journals, can it succeed on the scale of the majority of peer reviewed scientific journals? Thoughts and comments are eagerly awaited.

Jan Velterop