Double drat. I thought I'd found the last page that I was missing from this terrific mystery novel I had been reading. I was just about to find out whodunit, and was left hanging. And when I thought I'd found it between two journals in my briefcase, it turned out to be a bill for some car repairs I had done a couple of weeks ago. Triple drat. I hope I paid that bill.
If you're just joining us, we've been talking about the gap between the load of information that floods the scientific literature every week and the glacial pace of scientific advance the public sees (in terms of, for example, new cures). It isn't the first time we've talked about this (see Revolution! Parts I, II and III, for example), but stick with me, there's something else to consider.
I thought that, maybe, since I can't finish my mystery novel anyway, we could start at the very beginning. The whole way we do this ‘solving mysteries and reporting on them’ thing.
I reckon you already have a good idea how this works. We spend a lot of time (and a lot of semolians) working on a problem we think may be worth solving. Through trial and error (a lot of error) we probe the mystery, get some hints, go in the wrong direction, regroup, try another route and other approaches, and slowly we begin to get an idea about what is going on. We frame hypotheses on the basis of our findings and test them, and if our ideas seem to hold up, we reach a point where we say, “Hey, we should share this,” and draft a paper. We circulate it among our colleagues and, at some point, we submit. And wait to see (a) whether anyone thinks it's as interesting as we do and (b) what we might have missed.
But as we all know, we are very far from sharing our findings at this point. Reviewers and editors return to us with lists, each adding to the others', of what we should do to ‘improve’ the paper. Don't get me wrong, a lot of the time the suggestions are really good. But usually? They amount to “Do more.” We know this is coming. Indeed, I bet you've submitted papers with the idea of ‘Let's see what else the reviewers will want.’ We even hold things back, given the simple fact that, if we show them everything, they'll still want more. In fact, even the friendliest reviewer will ask for additional experiments, if only to be taken seriously by the editors. So we do everything, fearing that we will disappoint a reviewer who may not particularly care whether we do it or not, taking much more time (and many more semolians) and ultimately filling up pages of on-line supplemental data that few will see (and fewer still will take seriously).
I've complained about one aspect of this process already (see Thumbs up, thumbs down): the change from “Let's see what happens when we test this idea” to “If you get the following result from an experiment you haven't performed, we'll consider publishing your paper.” I suggest that this is not good science but, rather, a negotiation that – at the very least – encourages quickly done, poorly evaluated experiments and – at worst – encourages selective data presentation (actually, we can imagine much worse, and so can you).
But aside from this, there is a problem with our system that relates to our issue of the disconnect between the volume of published information and the glacial pace of the sort of progress the public is expecting. Because while you are doing months of experiments to determine whether the temperature of the water bath and the day of the week matter, the rest of us in the field are busy doing the same thing for our own revisions. Yes, there are gobs and gobs of information appearing every day, but huge amounts of it are busy work, done while we are spending time and money on the whims of a reviewer – who very likely evaluated the paper quickly, being occupied with other important things, such as trying to do their own research and catching up on ‘Game of Thrones’, and who thought up these additional projects for us while distracted (hey, we're all anxious about the Lannisters). It isn't the best way to go about our science thoughtfully.
But there's much more. Because while we're all involved in working on such things, we're also not disseminating genuinely useful information, the sort of observations that might drive the field if others knew about them. Indeed, some of the questions that the reviewers thought up might occur to others, who will want to know the answers – and the more people who work on them, the faster the important things might actually get done. No, not the “Does this work on Tuesdays?” sort of questions, but those that really concern us in the field. And, more to the point, the more quickly the information gets out there, the more quickly we'll know whether it ‘has legs’ – whether it is, indeed, useful.
I don't think that just getting more information out more quickly is going to solve the problem of generating cures. But it will redirect our most valuable resources to things that might matter.
I don't think that reviewers (most, anyway) make their lists of requests from a position of “they did it to me so I'm doing it to you.” At least, I hope not. It comes from something else. We want our literature to be composed of mystery stories in which the solution comes with all of the loose ends neatly tied up – at least, we want them to read that way. But this isn't the way it works in most cases. Every set of findings, if it is interesting, elicits new questions, and if it is actually important, more than one individual lab should be asking them. Instead, we labor to write the last page of the mystery.
But maybe that isn't the point. Maybe it should be enough to get information out that is solid, addresses an interesting question and provides insights into what the answer might be. Yes, journals can demand a certain level of interest in the conclusion (this is how they get their readership and, indeed, why we read some journals more than others). But maybe we can live without having that last page. Or let someone else write it? We never have the whole story. But we often have a good piece of it.
I know that this isn't likely to happen. I know that we restrict access to publishing (through the lengthy review process and by rounds of rejection and resubmission) because for the business of science, our publications are the route to personal success. It is how we are evaluated for grants, promotions, prizes, honors and the like. It is our currency. It's how we work.
But I think that in all of this we've lost sight of what the goal was supposed to be. What papers are actually for. It isn't in our job description (not mine, anyway) to make what I've found work towards something the public will value. And note that ‘value’ can take many forms (not only ‘cures’). For many of us, it is about the citation, the notch on the gun-barrel. The ‘points’, not the contribution. And our system is set up to enforce that belief.
So here's my question. What if, when we submitted a paper – the first time we submitted it – the work was made publicly available? Not ‘published’ but curated to ensure that, yes, it's a scientific paper, and it is now in the public domain. Others would know, yes (horrors!), but some might actually use the information to push things forward, ages before you'd get the credit in terms of which journal eventually published it. Of course, you wouldn't do this – expose yourself in this way, would you? But what if another lab, working on the same problem, did? Maybe you would have to, just to prove to the community that you, too, were making similar, important progress. Hey, it might even be posted in a place where some journals might ‘shop’ for interesting manuscripts and perhaps convince you to submit your work to them. You would get credit because people would see it. And you would be evaluated by the entire community – now – not only by the couple of reviewers who could find the time to do it.
Is this crazy? Fields outside of biomedical research do this already (physics and mathematics in particular). And within our domain, information is being made publicly available prior to publication, in the form of talks. Indeed, I do this all the time – I view it as a way to let others know that “hey, I'm working on this – talk to me about it and let's see if there's something we can do to push it forward together.”
Let's pretend that this posting of submitted manuscripts were something we did, a way to tell the community that we have work we feel is worthy of submission. We submit our work for review, but then also post it to such an on-line repository. So in addition to reviewers (and editors) seeing it, everyone can. “But Mole!” you say, “my competitors will rush to submit their work, and they might beat me!” Hey, that could happen. But if everyone did this, you'd know. And if they didn't, we'd know that you were first (if that actually matters, really). And maybe, instead, people would let you know that they, too, had been spending a huge amount of time and semolians doing similar work, and maybe everyone could get a bit of credit? More to the point, though, you'd be standing behind the quality of your work, prior to review. (And in a perfect world, which isn't ours of course, the work could even be cited, because, hey, it's in the public domain.)
It would take courage – or, maybe, it might be something that the agencies that fund our endeavors could insist on? Maybe they might feel that the ‘points’ we get for eventually publishing something are less important to the process they are funding than the value of the information? What then? I think I could live with it. At least, when folks ask me what I'm working on, I can point to it and say, “I'm still trying to get this past the reviewers, but have a look because I think it's really nice.”
Okay, I know I'm dreaming. But then again, I may never know the end of the mystery story I've been reading, either. Actually, I don't want to know – I've figured it out myself, and I bet my solution is way better than whatever was on that missing last page…