Saturday, February 14, 2009

Industry-funded research IFfy?

In his Bad Science column in The Guardian of Saturday 14 February, Ben Goldacre drew attention to an article in the British Medical Journal by Tom Jefferson et al. reporting the observation that...
"Publication in prestigious journals is associated with partial or total industry funding, and this association is not explained by study quality or size."
The Impact Factor (IF) of the journals in which research funded by the public sector was published averaged 3.74, while the IF of the journals in which industry-funded research was published averaged 8.78. As Impact Factors go, that is a substantial difference. And, as Jefferson et al. indicate, there was no discernible difference in quality, methodological rigour, sample size, et cetera between the articles in question. Goldacre doesn't have an explanation. He offers the suggestion in his column (admitting it is an "unkind suggestion") that it may have to do with journals' interest in advertisements and reprint orders – which can indeed be massive – from the very same industry that funds the research these journals publish. He doesn't say it, but this could of course mean that the journals accept articles based on industry-funded research, particularly research funded by the pharmaceutical industry, more readily than articles based on publicly funded research.

I don't have an explanation for the phenomenon either, but I doubt that journals accept industry-funded articles more readily than public sector articles. For a start, most publishers do not have in-house Editors-in-Chief who decide what is published and what is not. That doesn't mean the publishers cannot influence those Editors, but it is often already so difficult for them to get Editors to comply with everyday, sensible wishes that I think this would be rather far-fetched. For publishers that do have in-house Editors-in-Chief, such influence may be more easily exerted.

A hypothesis I can imagine, however, is different and less sinister, though it, too, has to do with the massive numbers of reprints disseminated by the pharmaceutical industry. This hypothesis would reverse cause and effect. Might it be that, because of the wide dissemination, availability, and visibility of these reprints, the industry-funded articles are cited more often? After all, we know that articles are cited not only because they are the most appropriate ones, but also simply because they are the appropriate ones known to the author. (Sort of like when you ask a 'randomer' – a word I learnt from my 18-year-old daughter, which I take to mean a random person – for the best restaurant in town: you are likely to get the best restaurant he or she knows, which is not necessarily the best restaurant in town.) If articles based on industry-funded research are cited more often, the journals in which they appear get a higher Impact Factor.
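
To make the arithmetic behind this concrete, here is a toy sketch of my own (not taken from the BMJ article), using the standard two-year Impact Factor definition: citations received in a year to items published in the two preceding years, divided by the number of citable items from those years. The citation counts and the size of the hypothetical reprint-driven boost are invented purely for illustration.

```python
# Toy illustration (not from the BMJ article): how extra citations to a
# journal's articles lift its two-year Impact Factor. All numbers below
# are invented for the sake of the example.

def impact_factor(citations_received, citable_items):
    """Two-year IF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the number of citable items from those years."""
    return citations_received / citable_items

citable_items = 200        # articles the journal published in the two preceding years
baseline_citations = 748   # citations those articles would attract anyway
reprint_boost = 1000       # hypothetical extra citations driven by widely
                           # disseminated, industry-purchased reprints

print(round(impact_factor(baseline_citations, citable_items), 2))                  # 3.74
print(round(impact_factor(baseline_citations + reprint_boost, citable_items), 2))  # 8.74
```

The point is only that the direction of the effect is mechanical: whatever drives citations up, for whatever reason, drives the Impact Factor of the publishing journal up with it.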

If this hypothesis holds water, it would mean that wide availability – along with dissemination, visibility, and of course relevance – is one of the important factors in being cited. In other words, could the results described in the BMJ article constitute evidence that open access could have a similar effect on Impact Factors to that – still hypothetically – caused by the massive numbers of reprints that the pharmaceutical industry purchases and disseminates?

Food for further study, I would think.

Jan Velterop
