Oh no! I'm driving late at night, after a long day of ‘air travel’ (hurry, wait, empty all pockets and bags, wait, hurry, wait, wait, ‘enjoy your flight,’ fly, wait, wait, fly, wait, wait, walk, drive) and I'm about to smash my car into a darkened hazard in the road. Time has dilated, and everything has gone from fast to slow, and then from slow to stopped. Which is how I am writing to you in the midst of this time bubble. Rather peaceful. But I know that this is not going to go on forever. The impact is coming.
Which puts me in mind of the whole issue of impact. I mean, the impact that affects all of the efforts in this thing we do, this biomedical research thing. The impact factors of the journals in which we struggle to publish our papers (in?). We complain about it (“It has no meaning!”) and then we extol it (“This candidate has published several high impact papers!”). We see it as a problem (“Scientists are far too concerned with publishing in journals with nice soft pages or shiny, glossy ones.”) and then we do our best to submit our papers to those journals and do what we can to publish in them. Some of this is seen as sour grapes, or, perhaps more aptly, grapes that are out of our reach. (The wonderfully sardonic humorist Ambrose Bierce wrote, “A fox, seeing some sour grapes hanging within an inch of his nose, and being unwilling to admit that there was anything he would not eat, solemnly declared that they were out of his reach.”)
The journal impact factor, simply put, is the average number of times certain types of articles (research papers and reviews count; some other types of articles don't), published within a given time frame, are cited by other papers in a single subsequent year. If a journal has published 100 qualified papers over the past two years, and these have received an aggregate 100 citations this year, the impact factor for the next year is 100/100 = “1.” Generally, the window is a two-year span, updated yearly, but there is also a five-year version. Publishers push their editors to increase the impact factors of the journals they publish, as this directly affects sales of the journal. And we use the impact factors to decide how ‘important’ our publication records are (and we use them to decide how important the publications of others are). We'll get back to that.
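For the arithmetically inclined, the calculation above is nothing more than a division. Here is a minimal sketch in Python, using the Mole's hypothetical 100-papers, 100-citations journal (the function name and numbers are illustrative, not any official Journal Citation Reports code):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Impact factor for year Y: citations received in Y to qualifying
    items published in years Y-1 and Y-2, divided by the number of
    qualifying items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# The example from the text: 100 qualified papers, 100 aggregate citations.
print(impact_factor(100, 100))  # 1.0
```

The same formula over a five-year publication window gives the five-year variant mentioned above.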
So, editors get to work raising the impact factor, by rejecting more papers and by publishing lots of review articles they hope will receive beaucoup citations. And to ensure that reviews will receive many citations, the journals often restrict the number of references we can put in our papers, effectively funneling our references towards such reviews (why cite five papers when you can cite only one?). I just checked; my most cited paper, by far, is a very outdated review article, published over 20 years ago, which received citations this year – I'm not proud of this, it's not a very good review.
Look, we know that the average number of citations a journal receives is not an indication of how important one particular paper is. We love to speculate that if a paper is completely wrong, it will receive a great many citations saying how wrong it is (actually, I don't think this is true; if it's wrong, we tend to just fuggetaboutit. Hey, Mother Mole is from Brooklyn. But, still, we love this idea.) Nevertheless, we actively respond to this metric in many ways, and invent related ones to apply to investigators. Maybe we love lists; there is a list of the ‘Most influential scientists of all time,’ based on the ‘h-factor’: an author's h-factor is h if h of their published papers have each been cited at least h times. As in, ‘Professor Hippo has 50 papers that were cited at least 50 times, so she has an h-factor of 50’. When her 51st paper is cited 51 times, along with one more citation to each of her top 50, her h-factor will go up to 51. How exciting! Guess who is currently #8, near the top of the list? (Before you look it up, guess. Here are some hints: He isn't around anymore; he was Austrian, and he made a pretty good joke about cigars. He used to be #1, but he was usurped by Michel Foucault, who famously taught us that the word is the thing it names. Like, um, ‘most?’). Like all metrics, it doesn't tell us who is really most influential; it's just another score on the score card.
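Professor Hippo's score can be computed mechanically from a list of per-paper citation counts. A small sketch (Professor Hippo and her citation counts are, of course, the Mole's hypothetical):

```python
def h_factor(citation_counts):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

# Professor Hippo: 50 papers, each cited 50 times.
print(h_factor([50] * 50))        # 50
# A 51st paper cited 51 times, plus one extra citation to each of the top 50.
print(h_factor([51] * 50 + [51])) # 51
```

Note the metric's quirk: a single paper cited a million times still contributes an h-factor of only 1, which is exactly why it rewards steady, voluminous citation rather than one blockbuster.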
Eugene Garfield, who invented the impact factor long ago, and founded the Institute for Scientific Information, apparently never intended that it be used to ‘rate’ journals, institutions, or researchers. I've never met him, but my friend, Professor Mink, has spoken to him often, and relayed this impression. Mink says that Eugene feels that this has all gotten out of control.
So why do we genuflect before the powerful impact factor? Actually, it's pretty easy. When we evaluate applicants for a faculty position, we look at their publications, and those with papers in high impact journals (especially first-author or corresponding-author papers, indicating the author's central involvement) get moved to the top of the list. So those who would like such positions do whatever they can to publish in those journals. Sure, we could change that – we could say, “I don't care if this candidate has published so many high impact papers, I'd rather interview the candidate who has published two papers in the Journal of Spurious Results, because I would feel less threatened by this colleague.” Or something. We know that the gauntlet that must be run to squeeze out a high impact paper is really tough, and those who succeed might have the right stuff. It isn't a guarantee, of course – indeed, we have interviewed many such ‘high impact’ candidates without offering them positions – but it does make them competitive. And as long as it does (not only for jobs, but also for grants, awards, honors, etc.) we want to consider hiring them. So even if we don't like it, we venerate impact factor.
Recently, I had an open argument with Professor Otter, a prestigious and much honored member of our institution, who during a faculty meeting announced that our researchers were much too focused on publishing in Crosby, Stills, and Nash (substitute your favorite journals here, or your favorite folk rock trio if you prefer). He pointed out that there was nothing ‘wrong’ with publishing in ‘lower-tiered’ journals. I completely agreed, but noted that as long as my molelets wanted jobs in academia, I would do whatever I could to give them access to the journals of their desires. This is the system we have created, however flawed, but we will not change it by wishing it away. Besides, I argued, I want our papers to be presented in journal clubs, talked about, and read, and publishing in such journals does increase the odds that this will happen. Besides, I like CSN (and usually, Y).
I know what you're thinking (I'm in a magical time bubble and I seem to be able to read your mind). Why not just post all of our findings online and let people read what they want to? Here's what will happen, I think. First, I can't read all of the papers in my fields (I just checked: last year, there were over 70,000 publications in my areas of immediate interest). Second, I don't want to read even 1% of these (and how will I identify those I do want to read?). So, what will I do? I'll wait until there is a metric that tells me which ones I should look at. Hit rate? Download number? Some sort of rating system? Social media? (Please, no.) Maybe I'll just wait to see which ones actually get cited – I should read those. But that means that I'll be way behind the curve, and worse, I'll never read papers in other areas that are important and interesting (and might very well ignite new ideas in my own work).
No, I'm not saying that I only read high-impact journals (really, most of the papers I read are not in the very ‘highest’), but I certainly look at them. I suspect you do, too. Some of these journals attained their status long before impact factor even started. So maybe I'll just stop complaining about how inappropriate impact factor really is. Maybe. But hey, I'm the Mole, you know I can't do that. Oh no, my time bubble seems to be decaying, and I seem to be heading for a crash. I'll be back – I hope!