
Pop quiz: you're a postdoctoral fellow hoping to land your first faculty position, and interviews will start in a month. What is the most effective thing you can do in the next 30 days to increase your chances of getting hired?

Time's up. If you said, “Do more experiments,” then you get no points. Ditto for “Write a paper,” “Apply for a grant,” or “Try to win an important science prize.” If you said, “Practice your talk,” I'd give you credit for a good thought, but I'd hope you'd been working on that for some time already.

Actually, the best thing you could do would be to sit down and read 100 papers, really read them, know them well enough to discuss them, and cover a few different areas.

“Mole!” you cry. “No way can I read 100 papers in a mere 30 days!”

Oh yes you can. That's not even four a day. And think: you'd be chatting with different scientists in the department that has the opening you wish to fill, and each time you sit down with someone who tells you of their work, you'll likely (much more likely than 100 papers ago) have something intelligent to say. “Wow,” they'll enthuse to the Faculty Search Committee, “That one is really interested in the work we all do – maybe we should offer a higher salary?” (Okay, I drifted off into fantasy, sorry.)

But that isn't what I wanted to talk about. I wanted to talk about bad papers, where they come from, and how they get that way.

Last time, if you recall, I was discussing ‘journal club’, that uniquely scientific sport wherein a paper is chosen and we commence to tear it into little itty bits. Sometimes we conclude that it was pretty much terrific, but much more often we throw our hands to the sky and say “Please, can we get the same reviewers for our next paper?”

This journal club thing, it isn't really a club at all. I'm not even sure why we call it a club (which brings up images of secret handshakes and funny hats – okay, sometimes we wear them at ours, but most of the time we just discuss papers). And clubs are supposed to be fun. Journal club is essential for the training of our trainees, for the open discussion of ideas, and for the establishment of a set of standards we can apply to our own lab's research. But nobody likes to do it, right? We have to force ourselves. But then nobody said doing science is all about doing things you want to do – ever write a grant? Our journal club is every Tuesday, over lunch, and it turns into a mildly social gathering – only science types would consider it social, but we get the job done.

Every journal, at every level of desirability, publishes scads of bad papers. Some are just a little bit bad, the odd figure or two that doesn't cut it. Others are just whoppers, just out-and-out wrong. And there is no correlation between a journal's prestige and the chance that a paper will be a howler.

Most of the time, however, papers are mostly good. That's because, most of the time, months or even years of work went into the experiments represented in the first few figures. The first author, at least, devoted a lot of time to this and wouldn't have done so if there didn't seem to be something right about it. And in this mostly good category, we can learn a lot, at least until we get to “Figure 7.”

In my lab's journal club, we have a saying: “Ignore Figure 7.” It might not actually be the seventh figure of the paper, but often it is. This is the experiment that was clearly put in at the request of the reviewers, who wouldn't let the paper go any further if this result weren't obtained. Rather than months or years of research, the data in this figure were generated in a few days, and everyone breathed a sigh of relief that it sort of worked – sort of, once, with a dose of wishful thinking. I do not suggest that it was faked – that would be totally unacceptable – but nobody blames the authors for presenting preliminary results at the request of reviewers and editors who put time limits on the resubmission. If a paper seems mostly good, except for a figure that just seems too good to be true (and, depending on your world view, might not be) or contains wobbly data, that figure is probably Figure 7.

Then there are those unfortunate papers that pay the bills, that validate one's time, even if nobody, especially the authors, actually believes them. The project started well, the student worked hard, and, as the work progressed, the strength of the data simply declined. Not because the student's abilities declined, but rather the opposite: as the student's abilities improved, the experiments showed ever more clearly that the wished-for results were not to be. However (and this is the truly unfortunate part), the time is up and something has to be published, so sadly the work is submitted and every effort is made to ‘get it out’ so that the student can graduate, or the postdoc can move on, or the grant can be resubmitted. Data are chosen that appear to make the case, but the results are simply not robust. There are a lot of these papers.

The trick, for the journal clubbers, is to find which bits we should believe and which bits are there for any of these other reasons. So how do we do that? Everyone who does this successfully has their own methods. Here are mine. You might even call this Mole's Guide to Reading a Paper.

First, if I'm familiar with the area (and that isn't always the case), I read the abstract to get an idea of what these guys want to conclude. Then I go right for the figures. I flip back and forth between the figures and their descriptions in the Results section, just to make sure I understand what they did and in case they've introduced some data into the text. Usually at that point I'm done, because I have enough information to know (a) whether the data support the conclusions (or some other conclusion) and (b) whether I need to go over any of this more carefully. If (b), then I may peruse the Discussion, to see if they have any insights I didn't think of. Easy.

Okay, it isn't that easy. You have to be fully familiar with the field, with the methods, and with the types of data and experiments that would be necessary to reach a given conclusion. You have to think hard about whether or not there are alternative interpretations or missing controls. And you have to decide whether the experiments that were done really prove the points in a manner that allows you to integrate these results and their conclusions into your mindset.

And if I'm not familiar with the field (yes, I do read articles outside my field – lots and lots of them. But you don't have to, because we have enough really good scientists who do genuinely creative work and I don't need more competition), then I will read the Introduction first and the first paragraph of the Discussion, which hopefully restates the conclusions. Then I'll head into the figures and Results. In other words, I might actually have to read the paper.

So how do you get to the point that you're sufficiently versed in subject matter, techniques, and the nature of experimental design to let you go into a paper quickly and efficiently to determine the validity of a set of conclusions? How do you train yourself to discriminate good papers from mostly good papers from bad papers?

Journal club, that's how! And if you really want to learn, then actually read the paper and discuss it with your peers before you go into the club (and do all the secret handshakes and wear funny hats). You'll not only find that you get more out of it, but you'll be able to participate in the discussion, which is always more fun. Okay, journal club is not fun – let's say “less painful.”

If your lab doesn't do journal club, change things. Start. Bring your lunch and choose a paper. Discuss it – if not formally, then informally. Don't miss out on one of the most useful exercises you can do to further yourself as a scientist. There is a huge amount of literature out there, and not all of it is bad.

I bet you lunch that I can find the good stuff first.