The unintended impact of impact factors



Dr. Mickey Marks of UPenn stopped by PSPG yesterday to discuss the San Francisco Declaration on Research Assessment (DORA), which calls for new ways of assessing the value of scientific contributions. The metric in question is Thomson Reuters’ Impact Factor (IF), which was developed in the 1970s to help libraries decide which journals to curate. Since then IF has taken on an inflated level of importance that can even influence promotion and hiring decisions. But can a single number really summarize the value of a scientific publication?
IF is calculated by taking the citations that a journal’s articles from the previous two years receive in a given year and dividing by the number of citable articles the journal published in those two years. One reason Dr. Marks became involved in DORA is that he is co-editor of a journal whose IF had been steadily dropping over the last few years, a trend shared by numerous other cell biology journals. This led many in the field to question whether IF is really accurate and useful. As you might imagine, there are many factors that can skew IF one way or another: for example, in some fields papers are slower to catch on and might not start accumulating citations until well after the two-year window over which IF is calculated. Journal editors can also game the system by reducing the number of “citable” articles they publish: citable articles must be a certain length, so a journal that publishes many short articles can shrink the denominator and inflate its IF.

So how reliable is the IF system? Are journals with a high IF really presenting the best science? A few years ago the editors of one journal, Infection and Immunity, set out to address that very question, and the answer may (or may not) surprise you: they found a strong correlation between IF and retractions (see graph).

[Graph: journal impact factor vs. retractions. Source: Infect. Immun., October 2011, vol. 79, no. 10, 3855–3859]
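For concreteness, here is a minimal sketch of the two-year IF arithmetic described above. The function name and every number are invented for illustration; real impact factors are computed by the database owner from its own citation records.

```python
# A minimal sketch of the two-year impact factor arithmetic described above.
# All names and numbers here are hypothetical.

def two_year_impact_factor(citations_to_recent_articles, citable_articles):
    """Citations received this year by articles the journal published in the
    previous two years, divided by the citable articles from those two years."""
    return citations_to_recent_articles / citable_articles

# Hypothetical journal: 2,400 citations this year to the articles it published
# over the previous two years, of which 800 counted as "citable" items.
print(two_year_impact_factor(2400, 800))   # 3.0

# Publishing more short, "non-citable" pieces shrinks the denominator and
# inflates the score, as described above:
print(two_year_impact_factor(2400, 600))   # 4.0
```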
Why are these high-impact journals forced to retract at such a high rate? It might be because their editors are looking for sexy science (because that’s what sells) and may be willing to overlook sloppy research conduct to print an exciting story. Another reason may be that researchers are under extreme pressure to publish in these journals and are willing to omit inconsistent data and let mistakes, and even misconduct, slip through to keep the story neat. And this brings us to the real problem with IF: in some circles, individual researchers’ scientific contributions are judged almost entirely by which journals they publish in. Scientists learn very early in their careers that if you want a faculty job you need to publish in Science, Nature, and Cell, because it is faster and easier to draw conclusions from a journal’s reputation than to actually read an applicant’s publications.
There are a few alternatives to IF, including Eigenfactor and SCImago, which are also citation-based but weight citations by the influence of the citing journal, and PageRank-style metrics that rank journals much the way Google ranks web pages. These alternatives generally produce rankings similar to IF, however. The real issue isn’t the rankings themselves but how we as scientists use them. If the system is going to change, it will have to start with us: scientists must decide together to de-emphasize impact factors and publication rankings when making decisions about promotions, hiring, and grants.
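To make the idea behind those network-based alternatives a bit more concrete, here is a toy sketch of a PageRank-style calculation, in which a citation from an influential journal counts for more than one from an obscure journal. The journals, citation counts, and parameters are entirely made up, and this is only an illustration of the general approach, not the actual Eigenfactor or SCImago algorithm.

```python
# Toy PageRank-style "journal influence" calculation. The citation matrix and
# damping value are invented for illustration; real metrics add many
# refinements (self-citation handling, normalization by article count, etc.).

def influence_scores(cites, damping=0.85, iterations=100):
    """cites[i][j] = citations that journal i gives to journal j."""
    n = len(cites)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1.0 - damping) / n] * n
        for i, row in enumerate(cites):
            total_out = sum(row)
            if total_out == 0:
                continue  # journal cites nothing, so it passes on no influence
            for j, c in enumerate(row):
                # Each journal passes its influence to the journals it cites,
                # in proportion to how often it cites them.
                new[j] += damping * scores[i] * c / total_out
        scores = new
    return scores

# Three made-up journals, A, B, and C:
toy_citations = [
    [0, 10, 2],   # A cites B 10 times and C twice
    [8, 0, 1],    # B cites A 8 times and C once
    [5, 3, 0],    # C cites A 5 times and B 3 times
]
print(influence_scores(toy_citations))
```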

Nicole Aiello
PSPG Communications
