I really should have made a list. Okay, I need milk, coffee, mixers (for ‘tea’), something for dinner (not grubs again, hey, I may be an insectivore but I still like hamburgers now and then), um, and what else? I'm sure I can think of something. Yes, I'm at the market–I'm not always working. Okay, most of the time, but there are errands to do, too. As soon as I get back I'll get started on addressing the review of my last paper–not sure if that's work or errand?

Ugh, reviews. You know, the eleven pages of comments on our most recent six-page manuscript? Sure, some of the comments are great ideas, some are just okay ideas, and some seem to come from the Land of Imagination, where ideas simply appear out of nowhere. So much of it feels like a shopping list: the reviewers think, hey, I can think of lots more you could do. You feel my pain, I know you do.

And the trouble is, I know I could argue that the vast majority (or all) of the proposed experiments will not alter the conclusions, regardless of their results, and that the cost and time they will take will not be repaid by a substantially better paper. But I won't, because my trainees will tell me that they'd rather not take the chance that the paper will be rejected after all the work that has gone into it. They'd rather just do the experiments.

So we'll do them, and we'll write a point-by-point response showing that we did every single thing, and ooze that the critiques were so helpful and the reviewers are so appreciated for the valuable time they spent working to improve our paper. And how we so hope that they will agree that it really is so very much improved, thank you. And we'll know that what we've added is, indeed, more – just more, not necessarily better.

Okay, that isn't really correct. It often happens that experiments or changes are suggested by reviewers who are honestly trying to do a good job, and yes, doing these really does improve the paper. The trouble is, we don't really get to choose which things will improve the paper – we just do everything and hope for the best. And of course this takes a lot of time away from the work we feel is truly important, because after all, getting the work published is our job. And of course (of course), there's no guarantee that doing everything asked for will be satisfactory, and we'll have to go through the process again, with another journal, and there will be another shopping list.

This state of affairs has a lot of problems. We spend a lot of time and money on experiments we don't feel are important (and indeed, these often wind up in supplemental figures that hardly anyone looks at). Sure, we can live with that, but there's another problem I'm not sure I can live with. This situation almost begs for inaccuracies, misinterpretations and, I fear, outright misrepresentation. The vast majority of us would never sanction this, let alone do it. But my nightmare scenario is that a graduate student, facing a severe deadline (you cannot submit your thesis by the deadline unless your paper is in press, and if you miss the deadline you will have to pay for another year of registration), and perhaps feeling that this research has reached the point of being irrelevant anyway (I hate my project, I hate my committee, I hate my life) will just come up with the data (inaccurate, misinterpreted, and sometimes, I fear, misrepresented) and hope no one notices. Some of you already believe that this nightmare happens all the time, and not just with students facing deadlines.

But what can we do? How can we break this chain we've forged of fierce reviewing with attendant shopping lists of more and more experiments? Those of us who do a lot of reviewing know that the answer isn't to simply stop: there are many commentaries out there urging us to review as constructive agents working to promote good science. But if we try to do this, to say, hey, this is a really nice paper that should be published as is, our experience tells us that editors will essentially shelve our review in favor of the next one that includes the shopping list. Oh, the editors don't mean to do that, but this is what happens in practice. Indeed, many of my colleagues have told me that if they see such a nice paper, which they would happily accept in the first round, they add in ‘easy’ experiments in order to appear serious to the editor. And editors have told me that when they see a review that is ‘too friendly’, they do, in fact, just assume that it's from a ‘friend’. It seems hopeless.

But hey, this is Mole talking here – of course I have a suggestion. I'm sure that most of you won't like it, but hear me out. What I propose is to do away with the usual approach of giving reviewers the standard options: Accept, Minor Revisions, Major Revisions, Reject. Instead, there should be only two options: Thumbs Up or Thumbs Down. If the conclusions of the paper are uninteresting, or if the data do not support those conclusions, then ‘Thumbs Down’ must be the recommendation. But if the work is deemed to have value, even if it could be substantially improved, then the only option left to the reviewer is ‘Thumbs Up’.

That's not all, though; now comes the fun part. The reviewer then provides the list of all the ways the paper could be improved, but these are now suggestions. Sure, there are those who will happily toss the suggestions aside, seeking to get the work out quickly, even if some of them would very much improve the paper. But most of us, and I know I'm one of them, will carefully evaluate the suggestions, adopt those we agree would, indeed, elevate the value of the work, and at least try to provide this information. That is, we will achieve what those commentaries have cried for: a collaborative, scientific enterprise in which critical review becomes constructive.

But Mole, you say (see? I'm listening!), most of the time we'll get a mix of Thumbs Up and Thumbs Down recommendations! What then? Aha! I say. That's what editors are for. The editors do try to mediate between authors and reviewers, but the problem is that it's always difficult to work through these pages and pages of comments, and simply easier (sometimes the only way) to return everything to the authors to sort through. But now those with the Thumbs Down recommendation will have to explain why this choice should hold sway, and if the argument is compelling, the recommendation must stand. The same can be said for a Thumbs Up recommendation that effectively makes its case. And I propose that we, as reviewers, do not get to say, “I would believe the conclusions if the data were different from what they are.” Yes, the editors can ask us to rebut a Thumbs Down decision, and if we authors do that effectively, we may change it.

But what we will now avoid is that hugely problematic circumstance that says, “Dear Author, we will be happy to reconsider your paper for publication when you can show us specific results of a set of experiments you have not yet performed.” That isn't how science is supposed to work, right? We do experiments to find out what the results are. Sure, it sometimes happens that an experiment that truly tests a conclusion fails, and honest authors then reconsider their conclusions and produce another paper that contains conclusions that stand up to such tests (indeed, I've experienced this myself). But unfortunately, I don't think that happens very much, and for an understandable reason: when we get the ‘wrong’ result of such a test, we can think of more complicated reasons why this happened and why our conclusions were still correct. And then we get into a back and forth with the reviewers that leads to more and more experiments to test the new conclusions.

And let's say that we publish a paper without performing such a test, and it turns out that our conclusions were indeed wrong. What happens then? Think about it–you know what happens: others (or even we) show that the conclusions were wrong and we publish that, and the field moves on, as vigorous as ever. Sure, this produces controversy, but so what? You think that doesn't go on all the time? How often do we discuss a paper in our journal club – one that has been through all of the shopping lists from multiple reviewers – and still doubt the conclusions? Hey, that's science. We should do these things publicly, in full view of the literature, and not behind the scenes with a handful of skeptical reviewers.

I think we should try this out. I think we should begin using the two-choice review process–perhaps regardless of whether the journals adopt it. Accept a paper that has value, and then suggest (but do not demand) ways it can be improved, to be adopted at the discretion of the authors. Or reject a paper that has a central flaw that undermines its value, and explain why the paper cannot be further considered. Authors can write a new paper if new work fixes the problems, but at least they will know what the central problems actually were, rather than having to sort through a list of things that may or may not represent the actual reason the work was not considered acceptable. And when the paper is actually pretty good, it will be accepted regardless of whether any additional experiments work out. That would be really sweet.

Speaking of sweet, I still have to pick out something for dinner. Hmm, here's a nice loaf of bread, some fruit and a ripe cheese (I may be an insectivore, but I don't always eat insects). Thumbs up!