Hey there! If you are just joining us, we have been talking about Flatland. You know, the flat pages in which we strive to publish (either on paper or, more likely, on a flat screen). And by ‘strive’, I mean slog, labor, plod, toil, sweat, trudge and stumble. One day, we have an idea and a way to test it. A bunch of experiments later, it seems that there might be something to it. So, we develop the idea, do many more experiments, revise, rethink and finally get to the point that we can support an interesting conclusion, and we write the paper. And rewrite it. Do a few more experiments. Put everything together and submit it. And perhaps some journal, somewhere, says yes, we will have it reviewed (yay!). And then we wait, knowing that it is a terrific paper and the reviewers are sure to like it, even love it. They may want a bit more, but still, it's really great. Finally, a few weeks later (hopefully), we're told that we should send it somewhere else, but here are the reviews that dismantled any and every thing we showed. We cry a bit, rail a bit and reformat the paper for another journal, where it might find a home. We wait, and the reviews are not as bad. Sure, we have to do more work. Actually, a lot more work. Months later, we revise the paper, double check the reviewers' comments, write a careful response and send it back. A few weeks later, two reviewers are okay with it, but one wants more. So, a few weeks (months?) later, we send it back. Hopefully we have worn down the reviewer, or at least the editor, and they agree that yes, they will ‘publish’ the paper (usually this means put it online). That is, we will be allowed to let others read what we have done. We pay a few (quite a few) thousands and, sometime later, the paper appears in Flatland. And we say, “Hey, did you see? We finally got that freakin’ paper published”. Everyone is happy. Aren't we?
Not me. Probably not you. There is trouble in Flatland. Last time, we talked about a few reasons this might be so. Reviewers are overwhelmed (good reviewers, even more so). Reviewers are cranky (all of those who review my papers certainly are). And it feels like even when we finally get a paper published, nobody sees it. Or if they do, they don't read it. Papers have become currency. Oh, they always were to some extent; we exchange publications for goods and services in the form of grants, jobs, promotions and even trips to nice meetings (or, like me, meetings held in small colleges on summer holidays, with cold showers and cafeteria food. But hey, at least I got to see a part of the country with more mosquitos than where I live. Sorry, it sounds like I'm complaining – I'm hugely thankful to get to speak at a meeting and share ideas with my wonderful colleagues, who have become good friends, over a late cup of ‘tea’).
This is a funny thing we do, this biomedical research thing (I guess it is what you do too; if not, thanks for reading this instead of doing something much more important!). We obtain grants to give us the money to do research (and get paid enough to have a place to live), then we write up our results and submit them to journals, where reviewers work for free to evaluate our work (and send us back to do more), and then we pay the journals to publish the work if we are lucky enough to have it accepted. We know that the journals make a great deal of money doing this (much, much more than we get to do the work), but we have to publish with them, otherwise we don't get the grants to do it all again. When I say ‘journals’, I mean the companies that own the journals; the people who actually do the work don't make nearly as much as they should.
And each time we do this – submit a paper for evaluation and potential publication – Cerberus, the three-headed dog who sits at the gates of Flatland, says “You may not pass until I say you can pass. And I may never say you can pass. Satisfy me”. Yes, Cerberus is a reviewer. But Cerberus is also us. As the wonderful cartoonist Walt Kelly once said, “We have met the enemy, and he is us”. We are the ones who stand between authors and all of the rest of us who might want to see the paper. And it is we who intone “You shall not pass”. How did we get here? (I know, I'm making you think of Gandalf the Grey, but he was really nice; so no, I'm channeling Cerberus here.)
Last time, I argued (you are certainly free to disagree, but I'm right – just kidding) that as fewer and fewer papers are read, they more closely resemble currency. And as currency, they tend to obey the principles of economics. Simply put, your paper has value that I, as a reviewer, recognize, and I will make you work for that value. Because this is what you (or someone like you) do to me when I submit a paper.
By value, I am not referring to scientific value but rather the ‘market value’ a paper may have in terms of helping us get funding, get jobs, get promoted or get recognized (as in ‘famous’, although being famous as a scientist is less famous than, say, the King of Suede, who advertises on benches around my town. As my dear mother used to say to me, “If you're famous, how come I haven't heard of you?”). Market value is just that, it is ‘what the market will bear’. This, I suggest, is behind the slow-motion collapse of the impact factor (IF) system for journals. Those in a position to decide such things as promotions or job offers also decide which journals they will recognize as worthy of their attention, and therefore the ‘value’ of our papers. Meanwhile, I think that the IF system is being gamed (manipulated); compare IFs of journals across sites (and on the journal sites) and you will find a lot of disparities. You'll also find a number of journals with astonishingly high IFs, given their content. Recently, I sat on a graduate student's committee as she proposed a very questionable line of research based on extremely dubious data she had found in a paper and presented to us. When I suggested that these data were uninterpretable given the very small sample size and methods used, she responded that the journal had an IF in the high 30s (suggesting that regardless of my opinion, it was worth her attention). It was a teachable moment.
Last week, at a meeting (yes, with mosquitos and cold showers, but also a very good meeting), I was speaking with my friend Professor Fisher (sometimes referred to as Fisher Cat, but although of the order Carnivora, he is not of the genus Felis. He does have a rather loud scream, which he finds funny). Prof. Fisher is not only a terrific scientist, he is also editor-in-chief of a well-established international journal in our field. He confirmed what several of us who have worked as editors on journals have also noticed: it has become very, very hard to enlist reviewers. Much more difficult than it was only a few years ago. The entire enterprise of publishing is in some peril of perishing.
You might think that is a good thing. Let's get rid of journals and just post our papers. There are good preprint servers that will take them. I just checked: last year, one million, six hundred and ninety thousand, six hundred and sixty papers were published (according to PubMed). If you spent ten seconds just glancing at each one for eight hours a day, it would take more than two years to go through them. During which time, at least twice that many would be published. No problem. We'll use social media to draw attention to them. Or find them with search engines. Maybe we'll use AI. Sure. That will work. Prof. Fisher and I concluded that we need to find a way to fix the way we review and publish papers, as just posting them online is not a viable solution.
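The back-of-envelope arithmetic above does check out, under the reasonable assumption that our heroic reader only reads on working days; a quick sketch (the PubMed figure is the one quoted in the text, the rest are the stated assumptions):

```python
# Check the reading-time claim: ~1,690,660 papers in a year (PubMed figure
# quoted above), 10 seconds of glancing per paper, 8 hours of reading a day.
papers_per_year = 1_690_660
seconds_per_paper = 10
seconds_per_day = 8 * 60 * 60               # an 8-hour reading day

days_needed = papers_per_year * seconds_per_paper / seconds_per_day
years_5_day_weeks = days_needed / (5 * 52)  # reading only on working days

print(f"{days_needed:.0f} reading days, ~{years_5_day_weeks:.1f} years")
# About 587 eight-hour days, i.e. more than two years of working weeks –
# during which, as noted, twice as many new papers would appear.
```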
So, Prof. Fisher proposed an interesting idea, and after a lot of discussion, here it is. First, let's assume that the larger agencies that support biomedical science want to continue to do so. Let's also assume that the publishers who take our papers agree that solving this problem is worthwhile. We also assume that these publishers are not particularly interested in spending any money themselves, which is probably a safe assumption. In most countries, the larger agencies also have a role in providing salary support for scientists (trainees, technicians and, in my country, me). So, what if there were salary support for reviewers? What if, say, I could apply to my funding agency to be paid to spend one day a week reviewing papers (and we can decide how many papers and reviews this translates into)? To do this, I would have to obtain agreement from at least a few journals that they want me to review papers. Perhaps they will first ask me to review some papers for them to see how I do. Standards can be set. I apply, and yes, I receive salary support for one day a week throughout the year (or whatever I can get). Now, the journals know this and therefore assign me papers, and I have to review them. I have to report back to the agency that I am reviewing papers. If I am late or do a poor job, the journals stop asking me, and I lose the salary support. The journals say “OMG, we have a stable pool of reviewers we can rely on”. The reviewers say “OMG, I am actually getting paid to do this”. And the authors say “OMG, my paper was actually reviewed in a couple of weeks, and the reviews were fair”. And maybe we say “OMG, Fisher, what a good idea”.
We can do the math. It wouldn't be a ridiculous amount of money. Maybe the richer publishers will kick in some (unlikely, but who knows?). Maybe some scientists will say, hey, I would still review for free, which would be fine (they probably have enough to fully support their salaries, but we're not all in that position). But many of us are having trouble obtaining funds to completely support our positions, and having an alternative source of income would be welcome. We could be part-time professional reviewers, contributing to the scientific enterprise without taking time away from what we are paid to do, because, well, we would be paid to do it.
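To give a feel for the math, here is a purely illustrative sketch. Every number in it is a hypothetical assumption of mine (salary, quota, weeks worked), not a figure from anywhere; the point is only that the per-review cost comes out modest:

```python
# Illustrative-only funding math for the paid-reviewer idea; every number
# here is a hypothetical assumption, not a figure from the column.
annual_salary = 100_000      # assumed full-time salary, in whatever currency
fraction_funded = 1 / 5      # one day a week of salary support
reviews_per_week = 3         # assumed quota for that one day
weeks_per_year = 48          # allowing some weeks off

cost_per_reviewer = annual_salary * fraction_funded      # agency's cost/year
reviews_per_year = reviews_per_week * weeks_per_year     # reviews delivered
cost_per_review = cost_per_reviewer / reviews_per_year

print(f"{cost_per_reviewer:,.0f} per reviewer per year, "
      f"{reviews_per_year} reviews, ~{cost_per_review:.0f} per review")
```

Under these made-up numbers, a funded reviewer costs the agency a fifth of a salary and delivers well over a hundred timely reviews a year; scale the assumptions however you like and the cost per review stays in the low hundreds.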
And maybe editors will be able to push back a bit on reviewers and ask us to do the job we are supposed to be doing. What job? This: (1) determine whether the conclusions of the paper are of interest (i.e. advance the field) and (2) determine whether the presented data support the conclusions. That's all. It is not our job to think of lots more experiments to do, demand alternative approaches or draw additional conclusions requiring more validation. We are not supposed to be the three-headed dog who must be satisfied before anyone else can read the paper. We're just supposed to say whether or not the data support the conclusions and, if not, precisely why not.
I know, I know, this is all a fantasy, but maybe it is the start of a discussion. And yes, I also know that the three-headed dog in ‘Harry Potter and the Sorcerer's Stone’ (or if you prefer, ‘the Philosopher's Stone’) was named Fluffy by Hagrid. He was still terrifying. Harry lulled him to sleep with music, as did Orpheus (same dog, different story). I wonder if the journals will let us submit a soundtrack for reviewers?