Circling. Circling again. Landing. Slowly moving my hand. Impact! Goodbye, mosquito; hello, no mosquito bite. Sorry, I was distracted, but I'm sitting outside on a balmy summer day, and one less pesky mosquito will make today even better. If you are just joining us, we have been talking about Flatland. Not the Flatland of the wonderful little novella by Edwin Abbott Abbott (the author with the redundant last name) but rather our Flatland – you know, that two-dimensional space where our papers go out into the world. But E.A.A.'s Flatland is definitely worth a read. It was published in 1884, but I read it sometime after that. Hey, I'm old, but I'm not that old.
Last week, I had an extended discussion with the managing editor of a journal. Not the journal you are currently perusing, but another journal where once upon a time I published several of the papers that helped to launch my career (so I have a fondness for it). But the journal's editors are concerned that it is in trouble. This is something I hear a lot from the editors of quite a few journals. There is trouble in Flatland.
For decades now, journals have been focused on their impact factors (IFs), striving to increase them. (I'll go into more detail about the IF in a bit; we'll get there.) Some journals are seeing their IFs increase, but many more are seeing a marked reduction, and herein, apparently, lies the trouble. The IF of a journal influences whether researchers seek to publish in it, and authors depend on it when their work is evaluated. We all know about this. I am currently working hard on a gigantic grant application (not just me, but it's still a lot of work) that will go to a major government granting agency, and they want to know about the papers that were published in journals with IFs that are (arbitrarily, as near as I can tell) greater than 10. My own Molets worry about the IF of any journal they might get to publish in, because this will influence their future job prospects.
During my discussion, the editor insisted that the solution to the problem was to “publish more reviews”, since (1) these are frequently cited (we'll see why that matters in a little while) and (2) the journal can ask that many of the references in the review be to papers published by the journal (ditto). This last part is particularly unacceptable, and although there are journals that do this, I want to be clear that the one you are reading does not. Publish more reviews, increase the IF: this is a strategy that journals have used for years, but it doesn't seem to be working as well as it used to. Maybe, I suggested, it is because reviews are so prevalent that we just don't read them anymore. (I just checked. According to PubMed, there are four hundred and fifty-five reviews on ‘tropical fish’. When I put in the search terms for the field in which I specialize, I got over seventy-seven thousand hits for reviews. There are a lot of reviews.) If the goal is to improve the journal (and not just the IF), soliciting reviews may not be the answer.
I suggested something different. The journal we were discussing has enormous value as a place to publish solid observations that might prove to be important (and history shows that a great many of these did, indeed, turn out to be important). But when I submit a paper to this journal, reviewers treat it as they do papers in journals with much higher ‘impact’ (i.e. IFs). If my Molet is going to spend a year addressing reviewers' concerns, we will likely end up sending the paper elsewhere (since my Molets are very exercised by journal IF, and I understand that). If, however, the editors of the journal were to restrict revisions to those they judge to be truly required (a missing control, a repeat of an experiment to provide a larger n, or a modification of a conclusion based on the available data), then I would be much more likely to send my paper there.
Long ago, in a university far away, I wasn't able to find papers by searching online. Actually, I had my own lab for ten years before the first online searches became available, and even then, the director of the institute in which I was working was hesitant to give us any internet access that did not involve a telephone modem. Okay, it was a long time ago. In those days I had to rely on three methods to search the literature. The first was Index Medicus, composed of huge volumes in which papers were listed by subject matter. Daily weight training was advisable in order to lift a volume covering three months (today, that would entail listing over fifty thousand citations; I just checked). The second method was a periodical called Current Contents, which listed the tables of contents of all the journals published in the previous month. The third was Citation Index. In this one (again, in massive volumes), you looked up a paper you knew about, and it listed all the more recent papers that had cited it. All of this searching happened in the library. I know many of you have never seen a library. It was a place full of books and journals, where I spent many happy hours each week trying to keep up. I miss the smell of the books (a roommate of mine when I was a Molet tried to make an incense that captured this smell, but the result left something to be desired).
Citation Index was created in 1963 by Eugene Garfield and his Institute for Scientific Information (which subsequently became the Web of Science). I didn't start using it until many years later (as I said, I'm old but not that old). As the numbers of journals and papers increased, Eugene created a metric to decide which journals to include, based on their “influence”. The idea was to count up the number of citations received in a given year to articles that were published in the two preceding years, and then divide this number by the number of articles the journal published in those two years. And so, the IF was born. If a journal published a hundred papers in 2021 and 2022, and there were a thousand citations (in any journal, including the one we're looking at) to these papers last year (2023), then its IF for last year is 10. Of course, if only one of those papers was cited a thousand times, and no other papers were cited at all, the IF would still be 10. In 1998, Eugene admitted that “in the hands of uninformed users, unfortunately, there is the definite potential for abuse”.
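(For the arithmetically inclined, the example above amounts to nothing more than the following; a minimal sketch of the standard two-year calculation, using the illustrative numbers I made up.)

\[
\mathrm{IF}_{2023} \;=\; \frac{\text{citations received in 2023 to articles published in 2021 and 2022}}{\text{articles published in 2021 and 2022}} \;=\; \frac{1000}{100} \;=\; 10
\]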
So, reviews. Reviews have historically been cited more than most papers, or at least some have. The reason for this is simple. Journals limit the number of citations we can include with our papers. We may want to cite all the relevant articles and credit the original authors with their discoveries, but we just don't have the room. So, we find a review that cites them and cite that instead. I suspect that many authors do this without checking whether the review is any good. And journals, wanting to improve their IFs, sought more reviews (and more, and more). My friend, Prof. Seagull, just published his two hundred and twenty-first review (to be fair, I just put the finishing touches on my one hundred and sixtieth, but it isn't a contest. And yes, some of them are with Prof. Seagull). But as I said, this isn't working the way it used to.
IFs are an easily quantifiable metric, and I suspect that this is why we continue to use them. Recognizing the problems with IF, some journals have turned instead to the Altmetric Attention Score. This is a rapid scoring system that monitors all “mentions” of a given paper, including social media posts, mainstream media, policy documents and whatever else they can think of. It is updated much faster than citations or the IF are, but we know that if and when this becomes the way we decide whether a paper (or at some point, a journal) is worthy, along will come strategies to game this system as well. (Please don't let it come to a point where we have to hire ‘influencers’ to promote our papers.)
But what is the real goal here? As I see it, relying on IF to choose a journal to read or a job candidate to hire is just lazy. When we publish a paper, shouldn't we just want people to read it? I regularly tell the Molets that our overriding ambition is for our papers to be presented in journal clubs (but alas, I'm not sure that many labs bother with journal clubs anymore).
Of course, I have some ideas about what to do about all of this, the trouble in Flatland. But right now, I have to get back to that big, hairy grant and continue to share my day with the mosquitos. There's one now. And… Impact!