Sorry, I'm all thumbs today. I just dropped my lovely cup of tea. (You know I don't actually drink tea, right? The color is the same but the flavor is a bit more interesting.) I'm with lovely Professor Beaver, in her lovely beaver lodge, which is every bit as cheery as my mole hole. (I say this mostly because Prof. Beaver has very sharp teeth, but it really is nice, and the ‘tea’ is of the Lowland sort I like so much, in case anyone is wondering what to get me for Mole Day. But I digress.) We've been talking about my ideas about reviewing papers, and since we both spend way too much time reviewing papers, revising papers, and acting as editors of papers being reviewed and revised, we both have a lot to say on the matter. The ‘tea’ helps.

For those of you just joining us, here's the basic idea. We all know that the current way in which papers make their way into the literature is something like this: we spend a great deal of time and effort addressing a problem we feel is significant, performing experiments to test ideas and then revising our ideas based on the results, until we think it's time to ‘put it out there and see what the reviewers want’. We submit the paper and wait, usually several weeks or more, and then, just when we've pretty much forgotten what it was about in the first place, we receive an extended shopping list of experiments that should be done to improve the paper. Most of the time these take the form of ‘if you obtain the following results of experiments you have not yet performed, we might let you publish your work’. So we (by which I generally mean ‘our trainees’) buckle down to do the experiments and try as hard as we can to get the requested results. In the best-case scenario, we actually improve the work substantially by doing this, since the experiments actually work, even if many of them are not (in our opinion) really essential to the conclusions of the paper. We spend time and money doing these experiments, though, because our first goal is to publish the work that we feel is important. But I worry that this isn't the right way to do science – we should do experiments to test ideas and explore the consequences of the results, and never to ‘prove that we are correct’.

So I suggested an alternative to the usual decision of ‘accept, minor revision, major revision, reject’. Instead, I propose we force reviewers to decide between only two choices: thumbs up or thumbs down. If the conclusions are not of interest, or if the data do not support the conclusions, we must give a thumbs down, and say why. If the conclusions are of interest and the data support them, we must say, ‘thumbs up!’ and we can then provide a number of suggestions that would improve the paper, and careful authors should consider which, if any, of these they would like to add to make their work better. Authors who receive a ‘thumbs down’ decision are free to revise the work to address the concerns, and can submit the paper to the journal again, but now the paper will be evaluated only on its merits to yield a ‘thumbs up, thumbs down’ decision again.

Beaver has a lot of problems with this idea, and you probably do too. She is concerned that most of the time, there will be a mixture of ‘thumbs up’ and ‘thumbs down’ recommendations. What then?

Good point, Beav. (I call her Beav, because long, long ago that's what they called Jerry Mathers in ‘Leave it to Beaver’. I once sat next to him on an airplane, and I was oddly surprised that he had grown up. He didn't want to talk to me at all. Probably because we were both sitting in coach. But I digress. I do that.) Okay, back to the point. This is exactly why we need editors: by forcing the discussion around only these choices, the editor has a much clearer decision to make, essentially siding with one or the other by evaluating the strengths of the arguments. What the editor and reviewers cannot do, though, is to say: if you perform these experiments (which you have not yet done) and get these results (which you may or may not obtain), we will reconsider your paper. What the editor can say is: you do not have enough evidence to support your conclusion for the following reasons. Or: we will accept your paper, but we strongly urge you to consider performing the following experiments, the results of which will be of interest (regardless of what the results turn out to be).

But Mole, says Beaver, won't the reviewers get frustrated if many of their suggestions are ‘ignored’, i.e. deemed unnecessary by the authors? If this happened on a regular basis, I could imagine that some reviewers would either start giving very basic reviews, or refuse altogether. Many of them might feel that they are being asked to provide their expert opinion, and then it's being disregarded.

But Beav, I say, since all reviewers are also authors, they may enjoy the benefits when they submit their own papers. So maybe their ‘frustration’ will give way to joy?

But Mole, she says, when their papers are routinely rejected, would this then rule out subsequent rounds of peer review? Not necessarily, I say. Let's look at why a paper will get a thumbs down decision. A paper describes a series of molecular interactions in a cancer cell line and concludes that these interactions are responsible for the cancer. The data are strong and the conclusions important, but I give it a thumbs down, because a cell line isn't a cancer – cancers happen in vivo. Now the authors perform a number of experiments and show that these interactions function in an animal model of the cancer and produce another paper. As an editor you can send it to me or to another reviewer – it doesn't matter now, because the only interest is convincing the reader that the conclusion is valid (and interesting).

On the other hand, let's say I feel that the conclusions are important and interesting, and the data support them, but the paper would be much more interesting and important if they performed another experiment. I give it a thumbs up and the authors decide they don't want to do that experiment for this paper. As a result, scientists interested in the work get to see it months before they would have otherwise, and later, in another paper by these or other authors, the experiment eventually emerges. Isn't that better? Isn't our goal to report good science and move the field forward?

So, as I understand it, says Beav, if the editor gave it an overall thumbs up, it would be entirely up to the authors to decide which of the suggested improvements they would make; if it's an overall thumbs down, the authors still have some guidance about how to improve the paper. Since I can't see journals jumping on this bandwagon right away and giving up their ‘power’, as it were (and I mean academic editors and professional editors), to decide what is necessary and what isn't, I guess the key to this whole proposal is that reviewers make it clear that any suggestions they have are just that – suggestions. And then it will be up to editors to let go of the reins a bit. Hmmm.

That's it exactly! I shout, and accidentally knock down a bit of the roof as I stand up (sorry, Beav). The fact is, the editors have a great deal of control in deciding if the conclusions of a paper are sufficiently interesting to have it expertly reviewed, and then again in weighing the accumulated decisions. What I propose is taking away two things: the widespread tendency to act as a conduit for anonymous lists of ‘more stuff to do’, asking the authors to sort it out, and, just as importantly, the ‘deal making’ aspect of ‘give me these results that you don't have and we'll take it’. Science should not be directed by editors or reviewers, but they can make decisions, and they can suggest improvements to papers that are, in principle, accepted.

Look, I say, spilling a bit more ‘tea’ (sorry, Beav). The current system is antithetical to science. How often do we submit a paper that we know is incomplete, even holding some data back, waiting to ‘see what the reviewers will want’? And worse, how often do we skeptically review a paper that has been through this gauntlet and dismiss experiments we consider to have been hastily performed in response to an editorial demand? What's the worst that can happen if we try my approach? Sometimes we'll publish a conclusion that turns out to be wrong – but that happens all the time! What we'll avoid, I hope, is publishing work that is hasty, sloppy or marginal just to satisfy a demand. The process may even work a bit faster, and at least we'll know where we stand when we get a decision from a journal. Isn't that better than spending months (sometimes years) addressing ‘concerns’, only to find out at the end that we have to send it elsewhere? How does that help the dissemination of our findings? (I sit down, but accidentally break the chair and scatter the dishes Beaver has provided for our ‘tea’. Sorry, Beav.)

Well, says Beaver, it bears thinking about anyway. Listen, it's late, and while it's been lovely, I think we should take this up again another time. Perhaps at your place?