Dear Mole,
I concur with your article (J. Cell Sci. 122, 1931-1932)... but now I know why we absolutely have to see the DNS in the supplemental section. Your supplementary materials (figures S2 and S5) were simply anecdotal and do not pass muster. The data have not been subjected to rigorous torture by statistics – e.g. the tea-test, the pee-test, some sort of deviation, etc.
Your humble servant,
Lesser Mole
Dear Mole,
Is it déjà vu all over again? The subject of supplemental data and its burden on authors and publishers is a thorny one. But, it seems to me that it relates back to the difference between hypothesis-testing experiments and hypothesis-supporting experiments – the supplemental stuff is always more of the `supporting' and the `minor control' stuff you'd expect to have been done anyway. And aren't the referees who are asking for all of this stuff merely authors in a snit, or bench monkeys on a badly run youth-training scheme (and we've been here before)? IMO, most people tacitly like the notion of the ten extra figures that are often associated with a paper in the sexy journals. It gives them comfort that the results really might be true, but this is a dangerous illusion – at least in some cases (and certainly not all). It also helps authors insist that they reeeeaaaaaally are right – look, there are tons of data to support us!
But the real test isn't the authors' own supporting data. Once-upon-a-time-long-ago, hypotheses weren't really true until someone else had shown similar results, preferably using a different model. And, although they were second, these people would still get to publish their papers somewhere respectable. And another thing: ages ago, editors used to be able to say which revisions were really needed, as opposed to merely expecting all of the reviewers' suggestions to be incorporated. I can see it now: `Dear Messrs Watson and Crick, please could you include some more data in your paper with the DNA stuff from several species...e.g. different coloured ones, those utilising different methods of locomotion, and maybe a monotreme or two?'
Anyway, should we have umpteen supplementary figures? Well, maybe, but surely there's the added burden of yet more for reviewers to read? Also, if our reviewing colleagues have had to provide tons of the supplementary stuff in their papers, they'll feel that everyone else needs to as well. So, it's probably too late to stem the tide. And what about the effects of this on smaller labs and budgets? Yes, it will make it harder for smaller labs and those with shallower pockets. And since you were interested in value-for-money, have you ever been on a site visit where a low cost-per-paper was regarded as anywhere near as important as publication in good or sexy journals? The effect of this kind of `selective pressure' is that it is possible to run a mediocre big-ish lab, but almost impossible to maintain a small-ish mediocre lab – or at least a lot harder... Not really a good result if good value-for-money is the desired outcome!
I'd suggest that you might want to beg or coerce reviewers to re-think their reviews. Being critical is OK, and rigour is essential. However, as long as the data are reliable and the arguments about interpretation are thorough, then the paper is the authors' business and it isn't for the reviewers to take it over. Yes we can? Alas, probably we won't.
Caledonian Caveman
Dear Lesser Mole and CC,
Oh, I do love to get letters! And while the implied rejection by LM is acknowledged (and very funny), I want to assure my colleague that I did indeed perform the pee-test, but fortunately did not include it.
CC raises a useful and important point, which is worth reiterating – what is reviewing really for? Once upon a time, it was meant as a quality-control step to ensure that important controls had not been missed, or to catch a perceived advance that had in fact already been made by others. But that was then. Now, it seems, reviewers (pretty much all reviewers, I'm afraid) feel mandated to provide ideas to make a paper more interesting. Anything they can imagine being done now must be done, or else the paper simply falls flat. No advance is so marvelous that at least something can't make it better. Right?
And yes, there is that other annoying aspect, the one that tacitly suggests that if you can do it some more then maybe it might be true. You (the authors) did an experiment with one control, then with another control. So we (the reviewers) want to see both controls done at the same time. Does it add anything? Of course not, but it makes us feel better.
As you note, and you know I concur, this nonsense has a cost, and not a small one. Publishers complain that supplemental figures cost money (and they may start charging, of course), but that is nothing compared to the cost of doing marginal experiments that only incrementally advance the findings. What, oh what, can we do?
Of course, I have an idea. If we won't change the way we review, then we should push others to change it for us. If editors will not (or simply cannot) put in the time to curate the reviews, then we need to change what reviews actually are. Fortunately for us, this may not be very difficult. These days, most journals require that reviews be entered into managerial websites on the wuh wuh wuh – sites that will not let a review go through without all the right buttons and bits being pushed, and that advise us to `cut and paste' our reviews into boxes set up on the site. This makes it easy to forward the comments to the authors automatically, without even requiring an editor to actually read them (I'm not saying that editors do not read reviews, just that they don't have to). And some editors, I'm afraid, are reduced to being robots that push the buttons to make the system go, with the result that little stands between the authors and the reviewers, except the machines. I'm not saying that this is good or bad, really, just how it is.
But since this is how it is, that means that it can be changed fairly easily, with almost no human intervention or control (except by the programmers, of course, but they couldn't care less whether your paper on glycosylation of toenail proteins is properly reviewed or not). Rather than boxes into which reviewers cut and paste `Comments for Authors' (`these will be forwarded to the authors') and `Comments for Editors' (`these will not be forwarded to the authors') we can set it up a bit differently. I suggest the following changes.
As a reviewer, you will read the paper as usual and generate your notes, and then go to said website on the wuh wuh wuh to input your comments. But, gadzooks! There will be no place to do that. Instead, there will be a simple question accompanied by a pull-down menu. The question is this: `Are the conclusions of this paper of potential interest to the readers of this journal?' The choices are simple: yes or no. No other choices. If you pull down `no', a box will open that will ask for a few sentences as to why not. If `yes', another question will appear: `Are the conclusions of this paper supported by the data provided?' Again, two choices: yes or no. If `yes' is chosen, you will be invited to provide a sentence or two about how nice the paper is, and advised not to make any further comments (but you will – that's coming). If `no' (which will very often be the case), a box will open with a request to outline specifically what needs to be done to support the conclusions and why. Regardless of what answers are provided or boxes filled with cut-and-paste stuff, another box will now open that will state this: `If you have additional comments that you feel would improve the paper, please state them here. Minor changes will be addressed by the authors. Please note, authors will be advised that they are not required to perform additional experiments or major changes that are listed in this section, but may elect to improve their work according to your suggestions'.
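For the programmers among us (Mole describes the form only in words, so take what follows as a toy sketch, not anyone's actual submission system): the branching logic is simple enough to fit in a few lines. Every name in it is invented for illustration.

```python
# A purely hypothetical sketch of the branching review form described
# above. All names (collect_review, ask_yes_no, ask_text) are invented;
# no journal's actual system is implied.

def ask_yes_no(prompt: str) -> bool:
    """Stand-in for the pull-down menu: only 'yes' or 'no' is accepted."""
    answer = ""
    while answer not in ("yes", "no"):
        answer = input(f"{prompt} (yes/no): ").strip().lower()
    return answer == "yes"

def ask_text(prompt: str) -> str:
    """Stand-in for a free-text box that opens on demand."""
    return input(f"{prompt}\n> ").strip()

def collect_review() -> dict:
    review = {}

    # First question: interest. Two choices only, no others.
    review["interesting"] = ask_yes_no(
        "Are the conclusions of this paper of potential interest "
        "to the readers of this journal?")

    if not review["interesting"]:
        review["why_not"] = ask_text("In a few sentences, why not?")
    else:
        # Second question, shown only after a 'yes' on interest.
        review["supported"] = ask_yes_no(
            "Are the conclusions of this paper supported by the data provided?")
        if review["supported"]:
            review["praise"] = ask_text(
                "A sentence or two about how nice the paper is.")
        else:
            review["required_work"] = ask_text(
                "Outline specifically what needs to be done to support "
                "the conclusions, and why.")

    # Final box: opens regardless of the answers above. Authors will be
    # advised that suggestions here are optional, not required.
    review["optional_suggestions"] = ask_text(
        "If you have additional comments that you feel would improve "
        "the paper, please state them here (authors are not required "
        "to act on them).")
    return review

if __name__ == "__main__":
    print(collect_review())
```

The point of the structure, of course, is that the machine, not the reviewer, decides which boxes exist – and there is simply nowhere to paste a list of demanded experiments except into the box the authors are free to ignore.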
I would love to get reviews like this. If my work just isn't interesting, I need to reconsider it. And let's not deceive ourselves – most of our submissions are going to be rejected on this basis. We're going to have to be careful (much more careful than we have been) to ensure that what we propose to publish is, indeed, interesting. But if it is interesting, I will only have to perform the experiments required to ensure that it is probably correct. Joy. And reviewers will have every opportunity to micromanage my research, which, of course, is their joy. But I don't have to do it. And of course, when I say `I', I mean `we'. And when I say `we', I mean the bench monkey (sorry, valued trainee) who just wants to move on and do more interesting experiments.
Will this work? Well, this is only the beginning of the process. We need to communicate these ideas to the decision-makers, the editors, and see if with some tweaking something like this can be tried.
Whaddya think?
Mole