I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time… like tears in rain…
And then Batty says, “Time to die” and he dies. In the pouring rain. Oh man. For those of you just joining us, I've been watching Blade Runner, again. And we've been talking about replicants. Not the artificial humans in the movie, but published results that can't be replicated. Replicants. Get it?
There is a growing perception that many, or even most, of the claims made in the literature cannot be replicated. While this isn't a new problem (the Greeks complained about this I think, or they should have – I mean, putting golden crowns in bathtubs isn't something you can try to repeat in your hut), the tried and true approach of letting the community work things out over time just isn't fast enough in our modern, I-need-it-yesterday world. We need to know, up front, what is worth putting our time into and what isn't, so that we can move forward, and now.
Very serious people have been thinking about this seriously. Seriously they have. Does that sound a bit frivolous? Perhaps it should. In general, whenever I've noticed an old problem (such as this) suddenly coming to the fore as urgent, there is a motivation that boils down to cold, hard cash behind it. And that's exactly what's going on here: as our governments embrace austerity (so that we'll have enough money to bail out the ultra-rich the next time they fumble the ball, again) the amount of money available for research is shrinking (or at least not growing). So of course we point out how, well, stupid this is, since modern, first world economies are largely fueled by scientific discoveries. This leads to push-back, noting that, well, no, modern, first world economies are largely fueled by moving money (in the form of electrons) around, and besides, the ‘science’ we do doesn't lead to discoveries anyway, because the findings are mostly replicants.
Maybe I'm being paranoid. (That said, having grown up in the days of hippie-dom, I am reminded that just because you're paranoid doesn't mean that they're not out to get you. Illegitimi non carborundum!) But it may be a useful exercise to examine the quite serious recommendations of these serious folks, as some of them are pretty reasonable. But as we will see, some are not. Let's look at them one by one.
1. We need to train our people better. See, right off the bat I completely agree with this. Once upon a time, some, or even most, of the students who entered a Ph.D. program did not come out the other side as Ph.D.s. And once upon a time this was viewed as a good thing, because there weren't very many jobs for Ph.D. scientists and those who were allowed to continue were those deemed most likely to succeed. But there are a lot of reasons why this practice hasn't continued. One of them is that programs competed for money and a metric was needed to determine which ones were most successful, and of course the simplest metric was the number of entering students who completed the program. So naturally everyone had to succeed. But more than that, the whole reason to have students was to have hands in the lab, and spending time training and testing students is now considered more or less a waste of invaluable bench time. An important reason for this comes next. Personally, I think it is essential to drill into trainees the idea that what they publish has to stand the test of time if they want to ultimately be successful. If they have doubts, the work should not be published. Yeh, I know, good luck with that. So while I agree that we need to train people much more rigorously, the system itself would have to change. This is because:
2. We need to remove the rewards for publishing results regardless of their validity. OMG, I'm agreeing again! That almost never happens. Yes, it would be lovely if we could find a way to do just that: reward work that is validated and moves the field forward, and not just anything that makes its way into the literature. But how do we do this? The approach we have generally taken is to use a metric: How often has a finding been cited? If it is cited a lot, it is important. And if it has been cited a lot, it may even be a ‘landmark’. As we saw last time, though, many ‘landmark’ findings of this sort apparently cannot be replicated – they are replicants – and this invalidates the entire approach. So we should hold off on deciding the validity of a finding until it has been deemed useful, right? And how do we do this? One idea is:
3. We need to make it easier to publish negative results. Holey Moley! Again I agree! This could turn out to be a record. Well, I agree for the most part. There are really two sorts of negative result. The first sort is a controlled negative result. It may not be absolutely, positively, negative, but it is a result that convincingly shows that the conclusions of a study are not validly supported by another study. We really should know about such results. For example, a clinical trial is conducted that shows that a particular drug has no efficacy in a patient population – often a company that conducts such a study feels no obligation to make this finding public (I suppose it could hurt their stock value, or they might be afraid that it would). It would be pretty easy, though, for regulatory agencies to insist that the results of all trials be made public as a condition of filing with the agency in the first place, and this would be invaluable for other researchers (corporate or academic) in furthering their efforts. Or as another example: attempts by another lab to rigorously test the conclusions of a study produce clear-cut results that cast doubt on the validity of a result. Generally, journals find such studies, however rigorous, not very interesting (indeed, the journals that published the original finding generally feel that to publish the counter study would cast them in a bad light, and often demur). Sure, we can say that the journal is obligated to publish solid evidence that another paper's conclusions in the same journal may be wrong, but journals don't actually have to do anything we say. So, in general, negative results are either not published, or are relegated to much lower impact journals. But there really is something we can do about this: cite them.
The more we cite counter-examples in our own papers (even focusing our citation on the one we think is the right one), the more the value of publishing negative results will rise, and such increased citation will increase the demand for such publications. So rather than saying that Bee et al. say that it's so but Wasp et al. say it isn't, we could just say that Wasp et al. tested an idea but found it lacking (why give Bee et al. credit for saying something we think is wrong?). All useful to think about. But as serious minds have noted, we just don't have time to check out every little thing before we proceed to develop the research further. And we certainly cannot hold up promotions and grants and publications (sorry, strike that, reverse it, as Willy Wonka would say) awaiting validation of important results. While it is a lovely idea to untether decisions regarding scientific achievement and its rewards from the status of the publication, we have no such mechanism. So, the pundits suggest:
4. We need to devise ways to punish publication of invalid results. Oh boy. I can see how this might be appealing to some. Currently, the system rewards publication itself, with no penalty for publishing something that is simply wrong. If, on the other hand, publishing something that others cannot replicate carried with it certain penalties (loss of funding, loss of prestige, being forced to wear a big loser ‘L’ on your head), we would make super-sure that what we published was right, right, right. Right? To some extent, many of us already link our personal value as scientists to the notion that what we report is correct, and we'll do anything to show others that we have it right. Whole careers have been built on working to support an idea that takes years to convince the community of its validity. When Darwin reported his observations that supported the concept of natural selection, he was assailed with counter-examples that might discredit the idea. Arguably (and this is an argument made, not by me, but by Stephen Jay Gould), Darwin spent the rest of his career building the case for natural selection as the basis for the diversity of life. So, I put it to you, esteemed reader (sorry, when I think of such things, I go all nineteenth century), should we have cut Darwin off at the first sign that his idea might not be correct? Yeh, that would have been bitchin’. (Okay, back to the twenty-first century – way better.) Clearly, the problem here is this: who gets to decide that something is correct, or not? Which leads to:
5. Take this out of the hands of the scientific community, who routinely publish replicants. Now we're getting seriously serious. Indeed, this has not only been suggested, but pushed – there should be impartial groups that have the mandate to test important findings to determine if they can be replicated. Perhaps these can even be companies that do this for a living. Surprise! At least one such company already exists – and bigger surprise!! – the company's CEO is one of the people pushing for such validation as a requirement for obtaining future funds. How could anything go wrong? I really, really hope that you find this idea as utterly awful as I do. If you don't, let's talk about it more. If you do, well, stick with me anyway, because you already agree and I need all the help I can get.
Fortunately, a proposed plan to require independent validation of all preliminary results as a prerequisite for government support of research died an early death. But the idea persists. And this is, in part, because we just don't know what to do. Hey, this is Mole here. Of course, I have some more ideas. But my reason for wanting to do something is not based on the need to rid the literature of things I don't agree with. It's based on what Batty said. Because if we cannot publish what we have seen, even if someone else hasn't seen the same thing, then something that may be genuinely valuable may simply be gone. Lost in time. Like tears in the rain. And I don't want to see that happen.
Oh, now I've gotten all weepy again. I'm going to go watch Who Framed Roger Rabbit.