I believe in the value of peer review, of having research articles vetted by experts who understand both the techniques involved and the scientific issues, who know what’s been done, what’s left to do, and what’s important, and who can offer that judgment critically, honestly, and constructively. If you were taking a vote on whether to keep peer review a prerequisite for publishing research, I’d be one of those who raised not one hand, but two. When I was a member of the Literature Selection Technical Review Committee, the group that advises the National Library of Medicine regarding what titles to index in MEDLINE (www.nlm.nih.gov/pubs/factsheets/jsel.html), I regularly gave lower scores to those journals that didn’t provide evidence of adequate, unbiased peer review. But I have to ask, on what do I base my beliefs, and what does the evidence support? Are my beliefs themselves grounded in scientific knowledge or are they simply long-standing traditions that I have faithfully adopted?
Peer review as a matter of course, however, is a relatively recent phenomenon. Most of us have read the 1953 publication by Watson and Crick on the structure of DNA (www.nature.com/nature/dna50/archive.html), perhaps the most famous biology paper published in our collective memories. Many of us also know that this paper was not peer-reviewed; Nature didn’t have a system of formal peer review in the 1950s, and papers were reviewed only when it seemed necessary (www.nature.com/nature/history/timeline_1950s.html). John Maddox, emeritus editor of Nature, wrote, “…the Watson and Crick paper was not peer-reviewed by Nature. I have two comments on this. First, the Crick and Watson paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure. Second, it would have been entirely consistent with my predecessor L. J. F. Brimble’s way of working that Bragg’s commendation should have counted as a referee’s approval.” (Maddox, 2003).
I do not claim to have discovered anything even remotely on a par with the structure of DNA. That said, my first experience with publishing my own research surprised me. In 1985, Peter Walter and I submitted a research paper to the Journal of Cell Biology (JCB) (Siegel and Walter, 1985). We sent the paper to his graduate school mentor Günter Blobel, a handling editor of JCB. Two days after my manuscript was received, a fax from Günter arrived, saying something like, “Great science, beautifully written. Accept.” I remember feeling unduly disappointed – where was the peer review I had been promised? Was my paper doomed to obscurity and factual error because it hadn’t been assessed and improved by anonymous experts?
Because my memory is not what it used to be, I recently checked this story out with both Peter and Günter. Peter replied, “Honestly, I do not recall. But your memories ring true.” Günter had more to say: “The paper you mentioned goes back a very long time! But my policy as JCB editor then was to publish very good articles very fast, even to overrule reviewers, or not even ask any reviewers, when I felt that the paper was of high quality.”
There are several causes for concern in this story. By today’s ethical standards, Günter should never have handled my paper, since Peter had only recently left his lab. The paper was not reviewed – it went into the literature exactly as first submitted. Even so, who better to understand the science in my paper than Günter Blobel, upon whose work, along with Peter’s, the research was built? And the results in this paper have stood the test of time. The paper continues to be downloaded monthly and has been cited 136 times, according to the Web of Science [most recently in May 2008 in Cell (Lakkaraju et al., 2008)]. What can we say from this experience? The absence of ‘impartial’ (if such a thing could ever exist) peer review does not necessarily mean a paper is flawed.
The rest of my research papers were peer-reviewed. However, if I look back on my own experience as a researcher, can I say with any conviction that the journal’s peer-review process improved my manuscripts, kept unsound science from being published, or correctly assessed the significance of my work? I would have to say, and I wonder if your experience concurs, that the answer for the most part has been no. My papers have been accepted or rejected on the basis of the perceived significance of the work, but never because of some technical flaw. Not once was I asked to do another experiment, and my revisions were generally returned to the journal within 24 hours of receiving the initial decision letter. One could conclude that I was an unusual scientist, a scholar among scholars, but, while I love the self-flattery, I don’t think that was the case. I had a wonderful mentor and belonged to a vibrant and interactive research team and graduate program, but I don’t believe my experience was unusual for the time and I don’t remember it being significantly different from that of the other members of the lab or of my colleagues in graduate school. And even though the process of publishing was, by today’s standards, incredibly rapid and relatively painless, I was always upset by the reviews (or lack thereof!): the reviewers didn’t understand the paper properly, asked for information that was already there, took seriously other papers in the literature that were flawed, and so on.
When I compare my own experiences with those of researchers today, I am struck by how difficult it seems to publish research. It is ironic that, in an era known for the great speed and availability of information – where we could choose to blog our results rather than submit them to journals – publishing papers seems slower and more painful than ever before. I am not the first to suggest that publishing has become incredibly painful. As Martin Raff and colleagues wrote in a recent Letter to Science: “The stress associated with publishing experimental results – a process that can take as long as obtaining the results in the first place – can drain much of the joy from practicing science.” (Raff et al., 2008).
I have viewed this process from a slightly unusual angle, that of the professional editor, an exactor of pain. What I discovered when I started working for Cell in 1994 was that different fields treat their peers differently: some are incredibly encouraging and supportive, while others are mutually destructive. The reviews provided by the latter often seemed to say: “My last paper was rejected from Cell, and since it was obviously the most important thing in the field, nothing else in the field is appropriate for Cell, either. Q.E.D.” This heightened sense of competition undermines the entire field. In a recent e-mail exchange about this, Günter Blobel wrote, “The competition among scientists is so outrageous these days, that asking for more experiments is mostly used as a tool to slow down a competitor, either to gain time to publish competing results or in general, to not let the competitor get too far ahead.”
Well, that’s Günter’s theory, but there may be other explanations for why reviewers today always ask for more, to the point that a paper isn’t publishable unless it represents a tenure package’s worth of work, and that ‘more mechanism’ is needed until we get to atomic resolution, at which point the reviewers find something else to criticize. Here are some other explanations that spring to mind, several of which were also brought up by Raff et al.:
1. Authors send papers before they are ‘ready’ to be published – that is, without the benefit of critical, honest, and constructive feedback, which would ideally be obtained prior to submission.
2. Reviewers think it’s their job to take the paper to the ‘next level’, no matter how good it is when submitted.
3. Editors are shirking their own jobs as editors. Editors should define the standards of the journal and decide, based on those standards and the comments of the reviewers, what they should publish. Instead, editors often overextend the job description of the reviewers, asking them not only to comment on the strengths, weaknesses and potential impact of a paper, but also to serve as pseudo-editors, turning their impressions into a recommendation for publication in a particular journal.
I think it’s a bit of all these things, only made worse by a heightened sense of competition and the undue importance placed on the journals in which you publish, as highlighted by Raff et al.: “Sadly, career advancement can depend more on where you publish than what you publish. Consequently, authors are so keen to publish in these select journals that they are willing to carry out extra, time-consuming experiments suggested by referees, even when the results could strengthen the conclusions only marginally.” (Raff et al., 2008). I feel compelled to get on a soapbox about this. Scientific publishing has, as its first imperative, publishing, and if we’re working in a system that hinders rather than helps share our results with each other, then something has to change. In the rest of this editorial, I will address the above three possibilities and offer my observations and suggestions; we are also creating a blog (http://db.biologists.com/blog/) to gather your ideas and advice on the topic.
Point 1. Authors send papers before they are ‘ready’ to be published. One explanation for why my papers went through a journal’s peer-review process virtually unchanged is that I had plenty of feedback on my work before submission. In those days, and in my field, we regularly talked about our work long before we submitted it for publication, both locally at lab and faculty meetings, during long walks and over coffee, and more globally at Gordon Conferences and other scientific meetings. In all these venues, critical questions were raised, allowing plenty of time for them to be addressed. We also passed drafts of our papers to our colleagues, both members of our lab as well as people in other labs, for feedback prior to submission.
I spend a lot of time training researchers in the art of writing a biomedical research manuscript and strongly believe in the benefit of feedback. Indeed, when I’ve been able to check, the papers that were most impressive on my first read as an editor had always been critically reviewed by others prior to submission (potential reviewers would often note that they had seen an earlier version of the paper and had been asked to comment). I encourage researchers to seek the honest and critical advice both of people who have a deep understanding of the work, who can function as pre-reviewers, and of those who are interested but not involved, who can function as pre-editors. At the end of this editorial, I append a ‘manuscript writing and feedback guide’, which I often use in manuscript writing workshops to help guide peer feedback.
Point 2. Reviewers think it’s their job to take the paper to the ‘next level’. While this certainly seems to be the case, I’m not exactly sure how we got here, as I have never seen it in any written guidelines for reviewers. Are we modeling the reviews that we, as researchers, have received? Are we applying ‘journal club’ thinking to reviewing? Do we feel that the more critical we are as reviewers, the more likely we are to be asked to join an editorial board? Or are we leveraging the power of reviewing to slow down our competition, as suggested by Blobel?
Reviewing a paper is not about demonstrating your own scientific mettle; if editors didn’t think you knew the field and the approaches within the paper well enough to help them understand its strengths and weaknesses, they wouldn’t have sent you the paper in the first place. At DMM, we ask reviewers to break the habit of automatically asking for more and to concentrate instead on whether the paper, in its current form, has sufficient data to justify the conclusions drawn, and whether those conclusions are likely to have an impact on disease research. If the paper satisfies these requirements, reviewers should provide an evaluation of the paper without recommending additional experiments; if it does not, reviewers are asked to provide suggestions for what could be done to strengthen the paper. Other journals have different expectations; before you write any review, you should read the instructions for reviewers. For example, in the instructions provided at PLoS Biology, reviewers are told: “the reviewer should provide the editors with as much information as possible. A review that clearly outlines reasons both for and against publication is therefore of as much or even more value as one that makes a direct recommendation.” (http://journals.plos.org/plosbiology/reviewer_guidelines.php). For more general guidelines for peer-reviewing, I refer you to the website of the Office of Research Integrity and a presentation prepared at Yale with their support (http://ori.hhs.gov/education/products/yale/preethics.ppt).
Point 3. Editors are shirking their own jobs as editors. A reviewer, someone who has a deep understanding of the field and of the techniques used in the paper, can provide guidance on the strengths and weaknesses of the work and on its potential impact, both to the field and even possibly beyond it. Turning these assessments into a recommendation for whether a particular journal should publish the paper requires both an understanding of, and a commitment to, that journal’s standards for publication. It also requires a sense of other decisions that are being made by the journal at the same time, and the ability to make an unbiased recommendation. That is a lot to ask of a reviewer. I know that when I was simultaneously the Editor of three journals at Cell Press (Cell, Molecular Cell and Developmental Cell), I often found it difficult to keep the standards of each journal in my head and to say with confidence that the decisions I made were always unbiased by the different pressures on the three journals – and that was my full-time job. How can I expect reviewers to hold tens of journals in their minds and offer clear, constructive, and unbiased recommendations?
I believe our best hope for fair and constructive decisions is to relieve reviewers of the responsibility to make recommendations for or against publication and to maintain a separate, much smaller pool of editors who can be dedicated to the journal and to its standards, and who can discuss the decision with full knowledge of other papers being considered by the journal. This would of course require editors to take responsibility for their decisions and not to hide behind the recommendations of anonymous reviewers.
If we ask reviewers to concentrate on the research, I think it will make the review itself more constructive. When we launched PLoS Biology, I noticed that reviewers often wrote that they were unable to make recommendations because they didn’t know what we wanted to publish, so they had to stick to the strengths and weaknesses of the paper, leaving the editors to decide whether to accept it. Interestingly, authors told us that the reviews from PLoS Biology were among the most constructive they had ever received, even if we decided against publishing the paper. I’ve heard this phenomenon echoed by editors at other new journals, leading me to imagine a day when peer review is uncoupled from journal selection, making the process essentially ‘journal blind’.
I believe in the value of peer review. I am not so sure it serves its best purpose after a manuscript has been submitted, when other issues and agendas color what ought to be unbiased feedback. I welcome your views, your stories, and your suggestions (http://db.biologists.com/blog/). Together, we can build a constructive and honest community of scholars that fulfills the promise of peer review by making it integral to the conversation of research, rather than simply a judge over it.
Manuscript writing and feedback guide

Introduction
Identify the question being addressed. Is the background provided relevant to the question? Have the authors explained why the question is important? Is the experimental approach described and justified? Is the answer to the question stated?

Results
Does the organization match that of the figures, and does everything build towards the answer to the question raised in the Introduction? Have the authors used multiple approaches or a single approach? How do the methods compare with established methods for accomplishing the same aim? Are they standard? State of the art? If not, why not? What are the common limitations of the methods chosen, and what has been done to overcome these limitations? Are you satisfied that the experiments lead to the conclusions and the answer? Is it clear why the authors are moving from one experiment to the next? Is it clear whether alternative hypotheses have been tested/excluded? Eliminate speculation and make sure the appropriate amount of space is allotted to each result. Check paragraph organization, leading with research design followed by elaboration.

Methods
Are the methods provided in enough detail to understand what was done?

Discussion
Does it start with the answer to the question being posed in the Introduction? Does it go beyond the results to discuss their impact? Does it acknowledge limitations and conflicts with other data, or open questions? Is speculation limited to a single level? Are there suggestions for how the work might develop and how this result might impact other fields or the more general questions within the field? Read the first paragraph of the Introduction and the last paragraph of the Discussion. Are they conceptually linked?

Abstract
Are all the elements of the abstract (background, results/methods, and conclusions/significance) present? Is the writing clear and understandable?

Title
Is the title declarative and specific? Does it clearly express what is in the paper? Is it free of field-specific jargon?