Minutes after I posted 'Rituals', I saw this: What's the point of a tie?
JV
Wednesday, February 22, 2006
Rituals
On the Liblicense list, Heather Morrison addresses 'The Religion of Peer Review' and refers to an article by Alison McCook: Is Peer Review Broken? The Scientist, 20:2 (February 2006), page 26.
To ask if peer-review works is probably asking the wrong question. It's a ritual, not a scientific method. It's a cultural expectation, just as wearing a necktie is in certain circles, and nobody asks whether ties actually work. (They would, as a noose.) And to expect peer-review to act as an almost infallible filter is wholly unrealistic. If it is a filter of sorts, it is one that helps journal editors to maintain their journals' biases. If peer-review were a method of ascertaining only an article's scientific validity, we would neither need, nor have, so many journals. One in every discipline would suffice. But the ritual reaffirms bias. The bias of 'quality', for instance, or of 'relevance' (though one could ask: relevance to what, exactly?). And why not? Just as bio-diversity is a good thing, 'publi-diversity' may be as well.
She also asks, in the same posting, whether there is "scientific proof that current methods [of publishing] will work?", saying that the "...current approach has [...] led to the serials crisis." She has a point in asking about proof: the question is being asked of open access publishing, so why not of traditional publishing? But speaking of rituals, isn't it a ritual, too, to complain about prices increasing faster than library budgets? There is nothing remotely scientific about it. The complaint would have a point if library budgets had broadly stayed in line with research spending. But they haven't. Isn't it an article of faith that the budgets "could not conceivably rise" in line with the production of scientific literature?
Open access publishing, in addition to all the other benefits it has, also keeps the cost of scientific literature in line with research spending. This isn't, of course, proven yet, let alone scientifically. But how would one prove it without doing it in the first place? The proof of this pudding, I'm afraid, can only be in the eating, as the saying goes.
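The arithmetic behind that claim is simple enough to sketch, though. A toy model, with invented numbers (a hypothetical flat publication fee and a hypothetical output rate), shows that if publishing is paid for per article out of research funds, total publishing cost tracks research spending by construction:

```python
# Toy model with invented numbers: if each article carries a fixed
# publication fee paid from the grant that funded the research, total
# publishing cost tracks research spending by construction.
fee_per_article = 1500            # hypothetical flat open access fee
articles_per_million_spent = 2    # hypothetical research output rate

for research_spend_m in (100, 200, 400):    # research budgets, in millions
    articles = research_spend_m * articles_per_million_spent
    cost = articles * fee_per_article
    print(f"spend {research_spend_m}M -> {articles} articles -> "
          f"publishing cost {cost / 1e6:.1f}M "
          f"({cost / (research_spend_m * 1e6):.2%} of spend)")
```

Whether the fee itself stays stable is, of course, exactly the part that only the eating of the pudding can prove.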
Jan Velterop
Tuesday, February 21, 2006
Too many papers, too many journals
The complaint is an age-old one, but it still rears its head regularly. There is a thread on the Liblicense list called 'Does More Mean More?', and David Goodman expresses concern about journal fragmentation on the SPARC OA Forum (SOAF).
Are the complaints and concerns justified? In just about every walk of life there is more information than one can comfortably deal with; the phenomenon is not limited to the academic world. It is pretty much a fact of life. There is not much one can do about the existence of ever more information in science, except perhaps to halt scientific inquiry and research. Few would argue that it would make sense to go down that route. So the question is: how much scientific information should be made available, i.e. published?
I think it should be as much as possible. There is no place for 'quantity control' of information. Perhaps someone can come up with a way to control duplication - that would be useful - but my impression is that true duplication doesn't occur all that often. There are many articles that could broadly be called 'confirmatory', but they do usually introduce a different angle, population, dataset, or other variable. And if they don't, and just verify research carried out by others, that, too, serves a purpose. If anything, not enough information is being published. Think of clinical trials, for instance, or negative results, which aren't published anywhere near often enough, even though their publication could save a lot of research effort, time and money. Often enough, negative results simply are not published because journals won't have them.
But 'information' is not the same as 'number of articles'. We all know about 'salami-slicing', when a given amount of information is spread over a number of articles where putting it in just one article would be perfectly reasonable and possible. This is of course a consequence of the 'publish-or-perish' culture that has taken hold of science.
'Publish-or-perish' may be considered necessary in the scientific 'ego-system' to drive research forward, but it does have some uncomfortable side-effects, of which driving up the cost of maintaining 'the minutes of science' is perhaps even one of the less serious. It also drives a major inefficiency in the system, namely the quest to get associated with the highest possible Impact Factor (IF). I say 'associated with', because having an article in a journal with a high IF doesn't mean that the article in question will have a high number of citations. The IF is an average (many articles receive fewer citations and a few receive many more); it is a historical figure (past performance is no guarantee for the future); and it covers a specific time-window (citation cycles differ considerably between fields). Yet it is 'attached' to an article the minute that article is accepted for publication. Whence the inefficiency? Articles used to be submitted mostly to the journal the author considered most appropriate, but the quest - or ego-systematic necessity, perceived or real - to get associated with the highest possible IF has led to speculative submissions to journals with such an IF. This in turn has led to overburdened peer-reviewers, high rejection rates, time-wasting, an increased risk of ideas being purloined or priorities snatched, and general inefficiencies to do with the consequent 'cascading' of many articles through the system.
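To make that point about averages concrete, here is a minimal sketch, with invented citation counts rather than real journal data, of how the standard two-year IF is computed, and of how far it can sit from what a typical article in the journal actually achieves:

```python
# Invented citation counts for the articles a journal published in the
# two preceding years - the window the standard two-year IF looks at.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]  # one heavily cited outlier

# Two-year IF: citations received this year by items from the previous
# two years, divided by the number of citable items in those years.
impact_factor = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]

print(f"Impact factor (mean): {impact_factor:.1f}")  # 13.8
print(f"Median citations:     {median}")             # 2
# The average is driven by a few highly cited papers; most articles sit
# far below the IF that gets 'attached' to them on acceptance.
```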
As for the idea that there are too many journals, that's a different kettle of red herrings. In the modern world, journals are just 'tags', labels that are attached to articles. These labels stand for something, to be sure, and not just for quality and relevance: many also indicate a school of thought or a regional or national flavour. As such they are an organising mechanism for the literature, a location and stratification method if you wish. Fragmentation is no longer a problem in itself, now that aggregators are available in most areas, with link-outs to the actual articles on the Web; only extreme serendipitous browsing might conceivably suffer. We have to realise, however, that such browsing was only ever possible in journals with the widest scope. (The serendipity offered by journals with a merely wider - as opposed to the widest - scope can also be had by following a few more specialist journals instead of one broader title, and with electronic alerting systems in place that is not a materially greater chore.) Inevitably the likes of Science and Nature are always mentioned, even though they represent a tiny proportion of the total literature, and scaling up their publishing formula to the whole - or even a substantial part - of the literature is simply not possible, and hasn't been for at least the last 50 years or so, due to the sheer weight of published research.
A curious argument is to blame journal speciation on the problem of vanity-publishing, as David Goodman does. Curious, because he is right, although I suspect he might not have the same reasons in mind as I do. In my view, just about all journal publishing is vanity publishing. Or maybe I should say 'career-advancement publishing', forced upon researchers by the tyranny of 'publish-or-perish', impact factors, and all manner of research assessment exercises.
Was Einstein right when he said that "not everything that counts can be counted and not everything that can be counted, counts"?
Open access publishing, paid for out of research funds as a cost of research, does not solve all these issues, but it does allow more information to be published while the cost of publishing stays in line with the research effort. I wonder if the problems of more articles and more journals discussed on the email lists mentioned above would be seen in the same light if the link between the amount of literature generated and its cost were restored.
Jan Velterop
Thursday, February 09, 2006
Does more mean more?
This is a discussion thread on the Liblicense list, and David Goodman just posted an exceptionally good comment. Here it is, in full:
Perhaps the need for publishers to be in the filtering process at all goes back to the days of print journals, which had a fixed number of pages that they could afford to print. There was then an absolute need to select, and an obvious justification for author fees for excess pages. There was also a great temptation to accept too many articles, and many journals had a waiting list, sometimes of more than a year.
Especially after the web developed, such waiting lists very often led to the extensive circulation of what we now call "accepted preprints," to the extent that the actual publication is merely a matter of record, everyone interested having already read the preprint. Having read the preprint, most of us are most unlikely to read the article as well.
Now essentially all science journals are published in both print and electronic form, and this page limitation no longer applies to the electronic version, though there is still a limitation in processing costs. Many publishers are in fact publishing the final electronic version immediately, as Elsevier just announced. Everyone (with a subscription) can now read the final version right away, and the print will appear eventually.
If the electronic version were the only version, and if gold OA were adopted for paying "on behalf of the author", then a publisher could afford to publish everything that met the quality standard of the journal. The quality standard of the journal could be determined in a number of ways.
When I was still a molecular biologist, the most prestigious journal for an article, after Nature, was PNAS, which printed anything sent in by a Member of the Academy (there was also a page charge). One did not want to ask one's friendly Member except for the very best work, and that was the QC.
Members themselves could publish whatever of their own work they pleased, and were given an allowance for page charges. Their having been chosen as Members was the QC. (This is why the eccentric work of some senior scientists was published in PNAS.) The practices have been progressively tightened since then, but page charges remain.
There is little aggregation of content in PNAS, and none at all in Nature or Science, or, within medicine, in JAMA. This too is a possible publisher's function, but not a necessary one. Reading every article that cites one's own is a widely used filter and removes the need for an aggregator. The widespread use of both toll and non-toll A&I services is not journal dependent, and such services in their printed form have had a useful role for centuries.
We should all welcome the current acceptance of change in the publication system - from Peter and from other publishers.
Dr. David Goodman
Associate Professor
Palmer School of Library and Information Science
Long Island University
and formerly
Princeton University Library
Saturday, February 04, 2006
The joys of choice
In the previous post I questioned the validity of ‘number of journals’ as proof of the amount of publishing activity in open access. A follow-up question I have is this: why is all this a priori ‘proof’ necessary in the first place? What ‘proof’ is needed to show that open access articles are accessible to more people? It is in the very concept! What ‘proof’ is needed to demonstrate that paying an amount upfront for the service of publishing is worse, or better, for its economic sustainability than paying for subscriptions? The proof of that particular pudding is simply in the eating.
Any choices between open access and non-open access will be made by those who actually have the choice: authors and their (financial) backers. The latter (the backers) can even impose that choice. Publishers can't – and shouldn't. The only thing to do for publishers – be they societies or independent outfits – is to offer the choice.
Jan Velterop
Sizing up opponents
A slight sense of despondency overcame me when I saw, in a number of recent posts on various discussion fora about open access, that the fallacy of taking the number of journals as a measure of size (of activity, or of the number of articles published in a certain area) is alive and well. The fallacious argument is used by members of the pro-open-access camp as well as by those from anti-open-access circles. The pros say “look how many open access journals there are!” and the antis “look how few open access journals!”, either of them proving or disproving exactly nothing.
The number of articles different journals publish in a given period of time can vary by an enormous amount – a factor of 100 is relatively common. There are plenty of journals that publish 20 or fewer articles a year, and quite a number that publish 2000 or more.
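A toy calculation, with invented round numbers chosen only to illustrate that factor of 100, shows how journal counts and article counts can tell opposite stories:

```python
# Invented, round numbers purely to illustrate the size fallacy.
oa_journals, oa_size = 10, 2000      # few, but large, open access journals
toll_journals, toll_size = 100, 20   # many, but small, toll-access journals

print(f"Journal count: OA {oa_journals} vs toll {toll_journals}")
print(f"Article count: OA {oa_journals * oa_size} "
      f"vs toll {toll_journals * toll_size}")
# Counting journals (10 vs 100) suggests open access is marginal;
# counting articles (20000 vs 2000) suggests exactly the opposite.
```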
Even if journals were more uniform in size, counting open access journals to establish how much peer-reviewed material is available with open access would be flawed. It is with very good reason that the Bethesda Statement says “Open access is a property of individual works, not necessarily journals or publishers.” Some BioMed Central journals have non-open-access articles, and an increasing number of journals will publish open access material (e.g. Springer’s 1,250-odd titles and a growing selection of OUP's and Blackwell’s titles, among others).
The number of articles is a better measure than the number of journals, but what seems more important to me is the number of opportunities that authors have to publish with open access. Those opportunities have grown dramatically over the last year.
Jan Velterop