The complaint is an age-old one, but it still rears its head regularly. There is a thread on the Liblicense list called 'Does More Mean More?', and David Goodman expresses concern about journal fragmentation on the SPARC OA Forum (SOAF).
Are the complaints and concerns justified? In just about every walk of life there is more information than one can comfortably deal with; the phenomenon is not limited to the academic world. It is pretty much a fact of life. There is not much one can do about the existence of ever more information in science, except perhaps to halt scientific inquiry and research. Few would argue that it would make sense to go down that route. So the question is: how much scientific information should be made available, i.e. published?
I think it should be as much as possible. There is no place for 'quantity control' of information. Perhaps someone could come up with a way to control duplication, which would be useful, but my impression is that true duplication doesn't occur all that often. There are many articles that could broadly be called 'confirmatory', but they usually introduce a different angle, population, dataset, or other variable. And if they don't, and just verify research carried out by others, that, too, serves a purpose. If anything, not enough information is being published. Think about clinical trials, for instance, or negative results, which aren't published anywhere near often enough, even though their publication could save a lot of research effort, time and money. Often enough, negative results simply go unpublished because journals won't have them.
But 'information' is not the same as 'number of articles'. We all know about 'salami-slicing', whereby a given amount of information is published in a number of articles when putting it in just one article would be perfectly reasonable and possible. This is of course a consequence of the 'publish-or-perish' culture that has taken hold of science.
'Publish-or-perish' may be considered necessary in the scientific 'ego-system' to drive research forward, but it does have some uncomfortable side-effects, of which driving up the cost of maintaining 'the minutes of science' is perhaps even one of the less serious. It also drives a major inefficiency in the system, namely the quest to get associated with the highest possible Impact Factor (IF). (I say 'associated with', because having an article in a journal with a high IF doesn't mean that the article in question will have a high number of citations. The IF is an average (many articles have lower citation counts and many have higher ones), the IF is a historical figure (past performance is no guarantee for the future), and the IF covers a specific time-window (citation cycles differ considerably between areas), and yet it is 'attached' to an article the minute it is accepted for publication.) Whence the inefficiency? Well, it used to be the case that articles were mostly submitted to whichever journal the author considered most appropriate, but the quest, or ego-systematic necessity, perceived or real, to get associated with the highest possible IF has led to speculative submissions to journals with such an IF. This in turn has led to overburdened peer-reviewers, high rejection rates, time-wasting, an increased risk of ideas being purloined or priorities snatched, and general inefficiencies to do with the consequent 'cascading' of many articles through the system.
As for the idea that there are too many journals, that's a different kettle of red herrings. In the modern world, journals are just 'tags', 'labels' attached to articles. These labels stand for something, to be sure, and not just for quality and relevance: many also indicate a school of thought or a regional or national flavour. As such they are an organising mechanism for the literature, a location and stratification method if you wish. Fragmentation is no longer a problem in itself, given the availability of aggregators in most areas and link-outs to the actual articles on the Web; only extreme serendipitous browsing might conceivably suffer. We have to realise, however, that this was only ever possible in journals with the widest scope. (The serendipitous browsing facilitated by journals with merely a wider, as opposed to the widest, scope can equally be achieved by following a few more specialist journals instead of one broad title, and that's not a materially greater chore with electronic alerting systems in place.) Inevitably the likes of Science and Nature are always mentioned, even though they represent a tiny proportion of the total literature, and scaling up their publishing formula to the whole, or even a substantial part, of the literature is simply not possible, and hasn't been for at least the last 50 years or so, due to the sheer weight of published research.
A curious argument is to blame journal speciation on the problem of vanity-publishing, as David Goodman does. Curious, because he is right, although I suspect he might not have the same reasons in mind as I do. In my view, just about all journal publishing is vanity publishing. Or maybe I should say 'career-advancement publishing'. Forced upon researchers by the tyranny of 'publish-or-perish', impact factors, and all manner of research assessment exercises.
Was Einstein right when he said that "not everything that counts can be counted and not everything that can be counted, counts"?
Open access publishing, paid for out of research funds as a cost of research, does not solve all these issues, but it does allow more information to be published while the cost of publishing stays in line with the research effort. I wonder if the problems of more articles and journals discussed on the email lists mentioned above would be seen in the same light if the link between the amount of literature generated and its cost were restored.