Original artwork by Pete Jeffs - www.peterjeffsart.com

Hi there! Another beautiful summer day, a bit cooler maybe, and somehow no mosquitoes today! If you're just joining us, we have been talking about impact factors (IFs) and publishing papers, and the ongoing troubles in Flatland – that two-dimensional space in which our research finds readers. Many, maybe most, of us often rely on the IF of a journal to decide if that is the place to try to publish our work, because, well: (1) the perception is that a journal with a higher IF is more likely to be read, and (2) finding and subsequently keeping a job in the academic world seems to depend on publishing papers in journals with a sufficiently high IF. What counts as ‘sufficiently high’ depends on the institution. At my own, they only count a paper towards any sort of promotion or salary raise if it is in a journal with an IF of 10. Does this sound arbitrary?

If you are working in industry, none of this might matter. Then again, my friend Dr Silverback runs a robust discovery science program in a major pharmaceutical company, and he tells me that continuing to maintain his research very much depends on the IF of the journals in which he publishes. So maybe it does matter?

We know that the IF is flawed. Eugene Garfield, credited with creating the IF to identify journals with “influence”, later lamented that the system has “the definite potential for abuse”.

Journal editors scramble to increase or at least maintain their IFs, lest they disappoint their publishers, and they use every trick they can come up with to make this so. Last time, I talked about using review articles to increase the IF (because reviews tend to be cited more than primary research papers, although this might be on the decline).

An even sneakier trick is to organize papers from a ‘nomenclature committee’, composed of large numbers of authors in a field (although the paper is usually written by just one), in hopes that by establishing labels for things, they will receive a citation every time one of those labels is used. Finally, some of the more glamorous journals (the ones with nice, soft pages that are actually printed) publish editorial material – articles that gather citations but for some reason do not count in the denominator. They're essentially clocking up ‘free’ citations. The IF goes up and up, regardless of any change in the quality of the actual research papers.

Another approach I'm starting to see in some journals is a journal h-factor. This is different from the IF and, until recently, was only applied to individual investigators. The idea is that you find the largest number, h, such that h of your papers have each been cited at least h times, and h becomes your h-factor. If you have published, say, ten papers, and six of them were cited at least six times (but not seven cited at least seven times), then your h-factor is 6. If you've published 50 papers but, again, only six of them have been cited six or more times (for example, two papers were cited 50 times, two were cited 33 times, two were cited six times and the other 44 were cited fewer than six times), then your h-factor is still 6 (“You are Number Six”). I could go on about how little I like this reduction of our achievements to a number (“I am not a number, I am a free man!” [laughter]. Sorry, I was channeling Patrick McGoohan in The Prisoner).
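
For the quantitatively inclined, here is a minimal sketch, in Python and entirely illustrative, of how an h-factor could be computed from a list of per-paper citation counts. The numbers are the made-up ones from the 50-paper example above, not real data.

```python
def h_index(citations):
    """Largest h such that h papers have each been cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The 50-paper example from above: two papers cited 50 times, two cited 33 times,
# two cited 6 times and the remaining 44 cited fewer than 6 times (say, once each).
citations = [50, 50, 33, 33, 6, 6] + [1] * 44
print(h_index(citations))  # 6 -- "You are Number Six"
```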

With journals, the h-factor has some advantages over the IF. If The Journal of Stuff (J. Stuff) published 100 papers in the past two years, and last year one of these papers received 1000 citations while no other paper received any citations, then the IF of the journal is 10 (1000 citations divided by 100 papers). In contrast, the h-factor of the journal is 1. Similarly, if another journal, The Journal of Interesting Stuff (J. Int. Stuff), published 100 papers in the past two years, and last year 20 of these papers received 20 citations each and another 30 received five citations each, then the IF is 5.5 (550 total citations divided by 100 papers), while the h-factor is 20. If we are only going to rely on a single number to decide where to submit our paper, those who follow the IF would have to opt for J. Stuff and not J. Int. Stuff. I suspect that my Molets would prefer the latter.
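
To see that arithmetic side by side, here is a small, purely hypothetical sketch that computes both numbers for the two imaginary journals above. The journal names and citation counts come from the example, and the real IF calculation of course involves document-type bookkeeping that this ignores.

```python
def impact_factor(total_citations_last_year, papers_past_two_years):
    """Citations received last year to the journal's papers from the past two years,
    divided by the number of papers published in those two years."""
    return total_citations_last_year / papers_past_two_years

def journal_h_factor(per_paper_citations):
    """Largest h such that h papers have each been cited at least h times."""
    counts = sorted(per_paper_citations, reverse=True)
    return max((rank for rank, c in enumerate(counts, start=1) if c >= rank), default=0)

# J. Stuff: 100 papers, one cited 1000 times, the other 99 not cited at all.
j_stuff = [1000] + [0] * 99
# J. Int. Stuff: 100 papers, 20 cited 20 times each, 30 cited 5 times each, 50 not cited.
j_int_stuff = [20] * 20 + [5] * 30 + [0] * 50

print(impact_factor(sum(j_stuff), len(j_stuff)), journal_h_factor(j_stuff))              # 10.0 1
print(impact_factor(sum(j_int_stuff), len(j_int_stuff)), journal_h_factor(j_int_stuff))  # 5.5 20
```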

If we really must put a number on journals' websites (and it appears we do), using h-factor rather than IF has another advantage. If two journals both received the same number of citations last year but one of them published twice as many papers, they could both have the same h-factor but the one with more papers will have a lower IF (by half). Journals that serve an organization (e.g. The Journal of the Society of Beekeepers) tend to publish many more papers than other journals, and thus lose out in the IF competition, even if they publish a great many highly impactful papers.

Indeed, it is often the case that a discovery that turns out to be extremely important, even paradigm shifting, is rejected from the journals with very high IFs and ends up being published in a ‘more specialized’ journal. This might be because the authors don't know ‘how it all works’, or are not prepared to spend the next year or two revising the paper based on the opinion of a reviewer (only to be ‘scooped’ by another group willing to get the work out in a journal with a lower IF).

Once upon a time, journals were not available in electronic form. They came by mail, printed on paper. If we found out about a paper but did not have access to the printed journal, we would mail a small postcard to the author and request a ‘reprint’, similarly printed on paper, which they would mail back to us. Authors had to buy these from the publishers, and our filing cabinets (for those who have never seen one, this is a metal construction meant to hold paper) were often overflowing with unmailed reprints of our papers (which we would usually gift to as many visitors as we could, who would then put them into their filing cabinets). This was a long time ago. There are still printed journals and even reprints (although I don't know the last time I purchased a set from a publisher). I do like to sit outside with a stack of printed journals, reading papers that I would never have identified in an online search, but again, I'm pretty old. And I have to admit, most of the papers I read are on a screen. I do not advocate going back to the ‘getting journals and reprints in the mail’ approach. I do, however, advocate for reading papers, however one accesses them.

There is another problem with IFs and journals. I'm not sure how, but it seems that there are a lot of journals that publish rather flawed papers (I say “flawed” because I've read them and evaluated the presented data and the conclusions that supposedly derive from these data) that nevertheless have attained stunningly high IFs. Explaining how the system could be gamed to that extent would send me down a rabbit hole of conspiracy theory, and I do not intend to go there. The alternative is that, yes, the IFs of these journals are high but disconnected from their value to the advancement of science (since the publication of marginal results and overdrawn conclusions does not help anyone plan their next research exploration). But why should we assume that such value should correlate with IF? I have heard many times that a paper that is known to be wrong will be cited as often as one that is thought to be correct, but I doubt that this is true. What is apparently true is that papers with short titles are cited more often, and that having lots of searchable keywords tends to increase citations. But none of this explains why journals with objectively poor papers sometimes have very high IFs. (I'm not talking about those cases where we identify scientific issues with the conclusions published in a journal we consider ‘good’. I'm talking about journals that publish lots of such papers that nevertheless have a very high IF.)

But none of this is the point. We know that IFs are problematic, but we continue to rely on them when we consider the productivity of a researcher who is seeking a job or a promotion. Ay, there's the rub. (Hamlet was talking about much weightier issues when he made this quip, but I digress. “Really, Mole? We were waiting for a digression”. Yeh, yeh, I know, but we're doing Flatland, not Shakespeare.)

If we cannot rely on a journal's IF to tell us whether or not a particular paper in that journal is worthy of consideration for a job offer, promotion or salary increase (or any of the other things for which we use papers as currency), then where are we? Maybe it would be helpful to consult our friend A Square at the moment he was visited by A Sphere, who somehow changed (from A Square's perspective) from a point to lines of different sizes, something seemingly impossible in Flatland: “Monster,” Square shrieked, “be thou juggler, enchanter, dream, or devil, no more will I endure thy mockeries… Either this is madness or it is Hell”. Sphere calmly replied, “It is neither… it is Knowledge”.

Maybe we need to step into a third dimension from Flatland, the dimension of knowledge (or Knowledge, if you prefer). Here are some ways (I'm Mole, so of course there's a list!).

1. Our institutions can stop using journal IFs to evaluate a researcher's papers.

This isn't especially hard. Applicants for a job or a promotion, or researchers reporting on their progress over the past year, can provide a sampling of their ‘greatest hits’. One or two (or maybe three) papers on which they would like to be evaluated. Those who make the decisions might then read the papers (or ask someone whose opinion they trust to do so) and make their evaluations based on the content of the papers, rather than where they were published. To force the issue, we might ask the researcher to provide a copy of each paper without identifying the journal. I know, it is easy to find them (and those who evaluate the papers will likely do so), but perhaps, in time, it will become more about what a researcher published rather than where.

2. Journals and their publishers can find another way to rate their content against other journals in a field.

I mentioned h-factors (not that I think this will catch on, really), but I think we can do better. Perhaps, each year (or more likely, every few years), a large sampling of established scientists could be asked to rank the journals in a particular field, based on how often they find papers they consider of value? Ideally, this would not be done by the journal itself, but rather by an independent agency (those who currently spend time assigning IFs, perhaps. Maybe?). Personally, I would more highly value knowing those journals in a field that are well regarded by experts than I would a flawed numeric. (Yes, it might still be a number, such as the percent of experts who regard the journal as providing valuable information.) I know, as long as journals continue to report their IFs, authors and institutions will continue to use them. But if a few brave journals stopped doing so (and instead stated on their websites that they were regarded in the top x percent of journals in the field according to independent assessment), other journals that are struggling with IF might follow.

3. Recognize that a citation is not a vote of confidence in a paper.

This is at the core of one of the problems with the IF, the h-factor and every other citation metric out there. Papers are very often cited not because they are particularly good, but because they cover what the authors need for a statement in their papers. If others have cited them frequently, they may well come up first in a literature search, and thus be used again and again. One of my own most frequently cited papers is an old (and I have to say, outdated and generally bad) review, published over 25 years ago. Last year, it was cited 133 times. It should never be cited again, but it will be; this year it has been cited 76 times so far (and I am not proud of this). The fact that this paper has been cited more than 12,000 times is absolutely not an indicator of its quality. And I wrote it, so I can say.

4. Focus on research programs, not individual papers.

This is harder, but important. Prof. Spiny Mouse (of the soft skin) has published many papers on the digestive processes of slime molds, not only providing insights into her own, relatively small field but also elucidating more general principles. The papers are published in highly ‘specialized’ journals with low IFs. Prof. Chipmunk has published several papers on a range of diverse subjects, no two in the same area, but in journals with a high IF. When asked about his program, he states that he likes to work on different things (basically, whatever his trainees come up with). They are both contributing to the advancement of the scientific enterprise. Do we promote Prof. Chipmunk over Prof. Spiny Mouse? Or do we look more closely at their programs and the bodies of work? I don't know the answer – it would depend on a number of factors. Prof. Chipmunk is extremely creative, thinking ‘outside the box’, and maybe that is his program. But Prof. Spiny Mouse continues to make important progress that is influencing fields well outside her own. See? We are looking at their programs, not their individual papers, and in doing so, we are doing both of them a service. I know: what about an applicant for a new position who has published a couple of very nice papers, each in a different area, resulting from a diverse training background? Easy – ask them to outline the program they envision. (I have found this to be very informative. Sometimes they outline a potentially very exciting program of research, and sometimes they can only plan to follow up on a single finding; either has a big impact on my recommendations going forward.)

5. Think about what your goals are.

Journals: is it really only about numbers of submissions and citations? Authors: is the goal to have your paper read (hopefully) or to just use it as currency for your career? Editors: are you looking for papers that, in your experience, are interesting, or are you more concerned about whether they will be cited? How can you make it less onerous for authors to publish their solid research findings without chasing a myriad of reviewers’ comments? If you can do this, you may well find that authors are happier to submit their papers to you. And publishers, I know that you worry about income and the ‘bottom line’, but really, why are you in the business of publishing scientific journals? There are journals that have been valued for many, many decades (even centuries); will you let a metric influence your editors about what papers they should publish?

There is trouble in Flatland, and we aren't going to fix it by chasing numbers.