Hey there, it's Mole, alive and well, thankfully! If you're just joining us, I was about to crash my car, and in the moments before impact, I found that time had seemed to stop. Which gave me time to talk about impact, as in impact factors. And the crash? Well, I messed up my car's fender a bit, but I'm fine. They say it will take a day to fix, and I've already given them my credit card. So, no fancy dinners this week. Maybe I'll save some money and just read this big stack of journals for a while. But first, let's get back to the subject of ranking journals and the papers they publish based on impact factor.

As we discussed, we use the impact factor (the average number of citations received in a given year by the papers a journal published over the preceding two years) as a guide to where to publish our work and how to evaluate the work of others. Sometimes this can be blatant, as when an institution or agency simply totals the impact factors of the journals in which an individual researcher has published and uses this as a guide to promotion and other desirable things. Often it is less obvious, as when we compare candidates for a position and note how many ‘high impact’ journals they have published in. We use impact factors to rate institutions and even nations (at least in terms of a country's scientific ‘productivity’).
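
Just to make the arithmetic concrete, here is a minimal sketch of that calculation. The numbers are entirely made up; the point is only that the IF is a simple average over a two-year window.

```python
# A minimal sketch of the impact factor calculation (hypothetical numbers).
# This year's IF is the number of citations received this year by items the
# journal published in the previous two years, divided by the number of
# citable items it published in those two years.

citations_this_year_to_prior_two_years = 1200   # hypothetical count
citable_items_in_prior_two_years = 400          # hypothetical count

impact_factor = citations_this_year_to_prior_two_years / citable_items_in_prior_two_years
print(f"Impact factor: {impact_factor:.1f}")    # prints: Impact factor: 3.0
```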

There are other ways, of course, to employ metrics in evaluations for job offers or promotions, and while there are lots of things wrong with assigning numbers to people (‘I am Number Two, you are Number Six.’ Sorry, an obscure reference, but if you got it, I'll be seeing you), and there are some good alternatives, that is not what we're talking about here. We're talking about impact factors and how we use (and abuse) them. And we do – to decide what to read, where to publish, and when to hold a party (‘High Impact’ parties).

So, what's wrong with ranking journals (institutions, nations, continents, star systems, etc.) based on impact factor? First, and I think importantly, it penalizes journals that publish a large number of papers. This includes ‘society journals,’ sometimes called ‘organs,’ as in ‘The Journal of Concrete Shoes, the organ of the Society for Human Relations.’ These journals should, ideally, facilitate publication within a field of endeavor, encouraging society members to publish their findings for the betterment of the field rather than forcing authors to make their work ‘more interesting’ before it can be published. (Look, there are some papers that are of specific interest to those in the field, and advance the field, but simply will not appeal to readers from outside that field – this should not be a barrier to publication.) But those ‘organs’ that do publish such papers (and there are several) suffer low impact factors, driving many potential authors away. Why? Because if I am a trainee, I need to at least try to publish in the highest impact journals I can so that I have a shot at future employment. And if I am a principal investigator (lab boss), I need to do the same so that grant reviewers will note that I have consistently published in good-quality journals (which reviewers generally equate with journal impact factors).

Second, journals ‘game’ impact factors, knowing that a prominently cited review article (or, often, a position paper authored by a large ‘Nomenclature Committee’ including nearly everyone with name recognition in a field) will raise the overall impact factor of the journal, and therefore they solicit such papers even when these offer, at best, modest value to investigators. A plethora of ‘high impact’ review journals extends this idea, and we authors often find ourselves writing reviews, not because we think that our fields have reached a point where such a review will actually have functional significance, but because it will be published in such a high impact journal. (I am speaking here from experience, as I am currently writing several of these, despite having said all I have to say, for now, about the particular subject – and I'll keep doing so, to help my trainees get their numbers up. It is a vicious cycle.)

Third, there are many important areas of research that are highly specialized, and because there are not a large number of folks working in these areas, the associated journals do not have high impact. Rare diseases, such as childhood cancers and inborn errors of metabolism, can fall into this category, as can extremely common diseases that affect economically challenged regions. Research in such areas, however vital (and excellent), can penalize these dedicated researchers, who therefore tend to publish in journals deemed ‘low impact.’

Finally, most of us feel excluded from the high impact journals, where editors (keeping their eye on that impact factor) create ever more difficult gauntlets for us to run just to gain their attention, and reviewers (who also feel excluded) throw up increasingly challenging roadblocks to publication (saying, in effect, ‘I could have done these experiments; why should you get a high impact paper without a significant amount of suffering?’). Indeed, when I was the editor-in-chief of a journal (yeh, I did that), I was regularly encouraged by the publisher to ‘reject more papers.’ Impact factor in ranking journals is problematic, which is why we call it ‘IF’ (as in, ‘IF only we had something better’).

So, what can we do about this? Hey, it's Mole here; of course we're going to talk about it. (Stick with me a bit; it will get wonky and probably silly before we get to the real suggestion.) The problem, as I see it, is not that we place a metric on journals to rank them. The alternative, just publishing everything and letting readers sort it out, fails because there are simply too many published papers to read (and we would just come up with other metrics anyway, and they wouldn't be any better). And it isn't really about citations, per se, since a citation is probably the best indication that a paper has actually contributed something worth, at least, a mention in my paper. The problem is how we calculate this metric, with the resulting issues we discussed. Can we come up with a metric that satisfies the economic needs of publishers, without the opportunities for artifice and without artificial constraints?

Here are some of the things I thought about and excluded: giving authority for ranking to an existing scientific body, such as an Academy or an assembled Faculty; ranking by frequency of access (online downloads and subscriptions); no ranking at all (it just can't happen); ancestry (oldest journals win); and the ‘stair test’ (throwing journals down a set of stairs and seeing where they land). Journals have indeed tried to add metrics in an effort to out-maneuver (or at least, ameliorate the effects of) the IF. Some of these do seem to matter (average time from submission to acceptance), and others, not so much (average time from acceptance to publication – really, some of us like having a bit of time before an accepted paper becomes ‘oh, that was published, don't you have anything new?’). The IF rules.

The first problem to address, I think, is the ‘average’ part of IF. We all know that if a billionaire walks into a bar, everyone becomes, on average, very rich (even if she doesn't tip – as my Mother Mole says, “they didn't make their money by giving it away”). One citation-rich paper can buoy the IF of an otherwise pretty ordinary journal, and of course, the journals know that (see above). We have actually already addressed this issue with respect to individuals by applying another metric, the H index (or H factor). The H factor (as we talked about last time) is the largest number of papers, h, published by an individual that have each been cited at least h times. There are lots of problems with the H index when it comes to ranking researchers (as well as with other, more complex metrics that seek to overcome these shortfalls). But for journals, I think it would work. If, in a two-year period, a journal published seven papers that were each cited at least seven times (but not eight papers that were each cited at least eight times), its HI (Hi? High?) would be 7. Publishing large numbers of papers would neither help nor hurt a journal's HI (except, perhaps, that by publishing more papers they might get lucky, which would not be a bad thing?). I think I would rather know the HI for journals than the IF. (I'm not sure it would matter, really, but it would at least reflect how often they publish papers that get noticed, and thereby cited.)
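
To make the proposal concrete, here is a small sketch of how a journal's HI might be computed. The function name and the citation counts are mine, invented for illustration; this is not part of any existing metric.

```python
# A sketch of the proposed journal 'HI': the largest number h such that the
# journal published h papers in the window, each cited at least h times.

def journal_hi(citations_per_paper):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Mole's example: seven papers each cited at least seven times, but not
# eight papers each cited at least eight times, gives an HI of 7.
example_citations = [20, 15, 12, 10, 9, 8, 7, 5, 3, 1]   # invented counts
print(journal_hi(example_citations))  # prints: 7
```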

Next, we want to think about reviews, perspectives, opinions, position papers, etc. These can be valuable, but they should not be the major drivers of the metric. So, rate these separately. State an HI for primary research publications in the journal, and a second one for other types of papers (review journals would not have the first one, and journals that predominantly publish research papers would not feel pressure to publish reviews simply to increase their metrics). Or give an HI for research papers and an IF for everything else in the same journal. This way, I may want to write a review for a journal with a high IF, but I may not want to send my research paper to that same journal if it has a very low HI. This could work, maybe. HIs and IFs and Bears. Oh, my.
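
One possible way to express the split, again only as a sketch: the paper records, 'type' labels and counts below are invented for illustration.

```python
# Split metrics: an HI for primary research papers and an IF-style average
# for reviews and other front matter. All data here are invented.

def journal_hi(citations_per_paper):
    # Same HI helper as in the earlier sketch, compacted.
    return max((h for h, c in enumerate(sorted(citations_per_paper, reverse=True), start=1) if c >= h), default=0)

papers = [
    {"type": "research", "citations": 9},
    {"type": "research", "citations": 7},
    {"type": "research", "citations": 3},
    {"type": "review",   "citations": 40},
    {"type": "review",   "citations": 12},
]

research_cites = [p["citations"] for p in papers if p["type"] == "research"]
review_cites = [p["citations"] for p in papers if p["type"] == "review"]

research_hi = journal_hi(research_cites)            # 3
review_if = sum(review_cites) / len(review_cites)   # 26.0
print(research_hi, review_if)
```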

This is all feeling a bit silly. Metrics feel a bit silly. (I warned you!) All we want is to read papers in journals that feel like good journals, and to publish our papers where they will be noticed (and we'll get ‘points’ for such publications that will help our careers). We want to contribute, and to learn from each other. We want our work to be discussed.

Every lab should, and I hope does, participate in journal clubs. There are many thousands of labs in the world, and therefore, hopefully, thousands of journal clubs. So, what if we made a note of these? What if we had a list of all actual journal clubs, all over the world, and noted which papers they were presenting? In time, we would notice that papers in some journals are discussed more frequently than those in other journals. I know, there aren't enough journal clubs in all fields to cover all papers (no way – as I noted, I can't even look at all the papers in my own fields of study). But I definitely want to know what papers other labs are discussing – at the very least, I'll probably read them too. That is, until someone comes up with a JC metric, and figures out how to ‘game’ it.
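
If anyone ever did build such a tally, the bookkeeping would be trivial. What follows is a purely hypothetical sketch, with an invented log of journal-club presentations, just to show what a ‘JC’ count per journal might look like.

```python
# A hypothetical 'JC' tally: if journal clubs logged the papers they present,
# we could count how often each journal's papers come up. All data invented.

from collections import Counter

# Each entry: (journal, paper_id) presented at some lab's journal club.
journal_club_log = [
    ("Journal A", "paper-1"),
    ("Journal A", "paper-2"),
    ("Journal B", "paper-3"),
    ("Journal A", "paper-1"),   # the same paper discussed by a second lab
]

discussions_per_journal = Counter(journal for journal, _ in journal_club_log)
print(discussions_per_journal.most_common())
# prints: [('Journal A', 3), ('Journal B', 1)]
```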

There already is a sort of forum for this. It is called the Faculty of 1000. Researchers who are invited into it (and there seem to be way more than 1000) recommend papers they've looked at. It only ‘sort of’ works, but perhaps it is on the right track. It needs a bit of revamping: a way to search areas of investigation. It needs to be less clunky, and a bit more interactive. It needs an interface with PubMed. Or perhaps to be a part of PubMed. Or Google Scholar. Or … Wait, my car is ready (I don't even want to look at the bill). I'm going to carefully drive home, and then we can pick this up where we left off.