Original artwork by Pete Jeffs - www.peterjeffsart.com

Good afternoon! Buenas tardes, Buon pomeriggio, Guten Tag, Jó napot, Boa tarde, God eftermiddag, Nuq'neH! (Okay, in Klingon this translates to ‘What do you want?’ but it is the traditional response when approached in the afternoon). My gorgeous, warm day has turned frosty cold, just like my heart when I contemplate the delayed response to our recent paper submission.

Last time, we talked about how downright nasty the review process has become. Is it just me? Or are we reviewers much harsher than we once were? Are we angry that some folks (not us) seem to get their papers published while we languish amid the months and years required for revisions? Are we playing the zero-sum game of ‘nobody wins, which is better than letting you win while I lose?’ Or, as I suggested some time ago (see Trouble in Flatland I), is it that we view papers as currency and base the amount of additional work we require as reviewers on the perceived ‘value’ to the authors? Are we, as reviewers, all just having a bad day/month/year/decade (as authors)? Perhaps we are just tired of reviewing so many papers?

There is an ongoing discussion about the idea that de-anonymizing reviewers will fix this problem. I completely agree that if my name is going to be attached to my review, I am absolutely going to be very nice. In fact, I won't criticize anything at all. Really. Many years ago, a close friend, Professor Ocelot, proudly announced to me that he had reviewed our paper that had recently been published in a journal with nice, shiny pages (when journals were actually published on pages). I confess that my immediate response was “Oh, were you the reviewer who made us repeat all of our experiments at a different time of day? Or were you the one who wanted us to spend eight months developing a new assay system?” To be fair, I should have been thankful that we managed to satisfy the reviewers within the year, and in any case, we remained friends. (And I will not relate how often I am confronted at meetings by inebriated trainees who just know that I am the scoundrel – not the word they use – who reviewed their paper harshly, despite any honest protestation by me that it isn't true. I'm sure that when they have subsequently reviewed my work, they are not kind; see ‘zero-sum game’. Oh, I guess I did just relate that). I'm not saying that de-anonymizing reviewers is a bad idea; I'm only saying that I would be very, very hesitant to put my name to an honest critique of any paper about which I am less than enthusiastic.

There is also a discussion about not needing journals or peer review at all. I just did a little analysis. In 2024, according to the Web of Science, there were over 2,860,000 publications, although this includes meeting abstracts and probably many other forms of publication that we do not regard as ‘papers’ (and that don't require peer review). PubMed puts the number at about 190,000 (I'll use that). The average time it takes for a journal to receive reviews is 19.1 days (in 2018, according to the ‘Global State of Peer Review’ by Publons). I don't know if this is still valid; the AI I asked said ‘45–90 days’, but you know, AI. That range does seem closer to reality, but we'll use the 19.1 value to be especially fair. Using the smaller publication count, and assuming that decisions were made immediately (a silly assumption, of course), over 9000 person-years were collectively spent just last year waiting for decisions on submitted papers (although experience, and AI, suggest that it's actually more than twice this number). Even for us, that's a lot of waiting.
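(If you'd like to check my arithmetic – and you should, I'm a biologist – here is the back-of-the-envelope calculation as a little script. The inputs are just the numbers quoted above, and the 45-day figure is simply the low end of the AI's guess, so treat the output as a rough estimate, not a statistic.)

```python
# Back-of-the-envelope estimate of collective waiting time, using the
# figures quoted above (none of these are verified statistics).
papers_per_year = 190_000       # approximate count of papers per year, as quoted
days_waiting_publons = 19.1     # Publons 'Global State of Peer Review' (2018)
days_waiting_ai = 45            # low end of the AI's '45-90 days' guess

def person_years(papers, days_each):
    """Total collective waiting time, in person-years."""
    return papers * days_each / 365.25

print(f"Publons figure: {person_years(papers_per_year, days_waiting_publons):,.0f} person-years")
print(f"AI's low guess: {person_years(papers_per_year, days_waiting_ai):,.0f} person-years")
# Prints roughly 9,900 and 23,400 person-years, respectively.
```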

So, the suggestion is that we dispense with this waiting – just post our papers and let each scientist decide what is worth reading. Maybe we can build better and more useful search engines so that AI can tell us what to read. Or maybe even review the papers for us. Wait – what am I thinking? Just let AI do the reading and then tell us what we want to know. (I know this sounds like a joke, but there are active initiatives being developed to do just that). Currently, AI cannot (yet, if ever) evaluate the extent to which data support a conclusion (something that we allegedly do) and can only tell us if a particular paper's conclusions seem to answer our question. If I do a simple search in my area of interest in Google Scholar, it gives me more than 4 million entries. If I spent only 10 minutes evaluating each of these, it would take me more than 76 years to get through them, and that doesn't include breaks for sleeping or eating (you know, ‘work–life balance’), so I need guidance in my reading. But do I really trust that an AI will give me the guidance I need?
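(The same goes for the reading-time estimate; again, the inputs are simply the numbers quoted above, so this is a sketch, not a measurement.)

```python
# Rough check of the reading-time estimate: 4 million search hits at
# 10 minutes each, with no breaks for sleeping or eating.
search_hits = 4_000_000
minutes_per_paper = 10

years_reading = search_hits * minutes_per_paper / 60 / 24 / 365.25
print(f"{years_reading:.0f} years of non-stop reading")  # roughly 76 years
```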

I suggest that we do need experts to help ensure that what seems to be a conclusion in a paper is actually supported by experimental evidence. Here's a simple example. You find that a particular disease-causing microbe is sensitive to a novel therapeutic agent and conclude that this will cure patients with the disease. That could be important, but an expert reviewer notes that your conclusion is not supported by your data. You have to either (a) test it in patients with the disease, or (b) revise your conclusion (e.g. conclude that it might be useful in treating patients). In response, you provide data showing that your agent effectively kills this microbe in Ahi tuna. The expert notes that Ahi tuna do not exhibit disease in response to this microbe, but agrees that you can conclude that your treatment works in vivo (then the reviewer goes out for sushi because you made him think about Ahi tuna; sorry Professor Tuna). You then post how unfair the system is, and go on to publish your work with your original conclusions in another journal in which reviewers are not nearly so ‘picky’.

That is a best-case scenario. The reviewer was concerned that the conclusion was not supported by the available data. The system is indeed unfair, but not because expert reviewers only evaluate the extent to which data support conclusions. Many (most?) reviewers and nearly all editors demand that the authors ‘make the paper more interesting’. In our example of the agent that kills the microbe, reviewers will likely require that this agent and additional agents be tested against different microbes in a variety of settings (Mahi-Mahi, trout, Chilean seabass), and that the tests be done at different times in the lunar cycle.

I've said this before, but I will say it again. It is not our job as reviewers to demand ‘predicted’ results of experiments that have not been performed. That sort of thing is antithetical to science – biology is complex and does not necessarily fit our ad hoc predictions. As reviewers, this is our job:

1. Evaluate if the conclusions, as stated, are sufficiently interesting to warrant attention. It might be that these conclusions are already accepted by the community, based on strong data by others. Sometimes you may feel that the advance is ‘incremental or confirmatory’, which might reduce the attention that will be given to the findings. Or you might feel that the conclusions are so specific to the system under study that they will garner attention from very few scientists. That doesn't mean that the paper should never be published (indeed, such apparent specificity might open new vistas of interesting biology; see ‘CRISPR’). Ultimately, it is the editor's job to make the call on this, but your advice can be helpful in the decision.

2. Evaluate the extent to which the data in the paper support the conclusions. If they do, then say so and suggest that the paper is ready for the community at large to have access to it. If not, explain why not. Perhaps the results are not robust, controls are missing, or the findings have been misinterpreted, and you can say that. You can suggest experiments, but the authors should be free to figure out how to fix the issues. Or, as in the case alluded to above, you could suggest how the conclusions should be modified.

3. You, as a reviewer, can suggest ways you think the paper can be improved (or ‘made more interesting’), but if you have decided that the authors have indeed supported their conclusions with their data, any experiments you suggest here should be clearly stated as optional. You might desperately want to know what happens if they try something else but, if it will not change the conclusions, you should either do the experiment yourself or wait for a follow-up paper from the authors (or from other scientists who finally get to read the paper). It does often happen that authors will similarly feel that the proposed experiment is indeed interesting and do it, but they should not have to.

In short, as a reviewer, it should be our goal to help authors get things right and publish a paper they can be proud of (“of which they can be proud,” thank you, Mrs. Rosenthal, my sixth-grade grammar teacher. Oh, and sorry, Mrs. Rosenthal, for probably getting your name wrong in other essays. Where was I? Oh, yes, helping). Our goal is not to obstruct, prevent, or destroy; we are here to help. When you write a review, please ask yourself whether, had you received it as the author, you would consider it constructive.

By the way, one more thing might be worth mentioning. If you are the author addressing the reviews you have received, it is very likely that any data or text you provide in response to a criticism should be in the paper. A reviewer who is doing their job fairly (see above) is providing the perspective of someone reading your paper – any critical reader should get to see the same thing you wish to show to the reviewer. (There are exceptions, of course, but this holds in general). We often tend to reduce the review process to a two-way communication (or argument) between the authors and reviewers, as though all that matters is getting the paper through the gauntlet. But if the reviewers are honestly being constructive (see above), then whatever they want to see, I would similarly like to see when I read your paper.

I know, I've written this (more or less) before. But we know that the current state of things, where it is rare that a paper is published during the year in which it was submitted, is untenable, harmful, and incredibly frustrating. As the poet Arthur O'Shaughnessy and the philosopher Willy Wonka said, “We are the music makers, and we are the dreamers of dreams.” We are the reviewers, and it is up to us to fix this.

Or not.

Oh, while I have been talking to you, our lovely, novel, exciting paper was rejected by the editors. Sigh. Moving on, once I've drowned myself in ‘tea’.