The unintended impact of impact factors
Dr. Mickey Marks of UPenn stopped by PSPG yesterday to discuss the San Francisco Declaration on Research Assessment (DORA), which calls for new metrics to determine the value of scientific contributions. The system in question is Thomson Reuters' Impact Factor (IF), which was developed in the 1970s to help libraries decide which journals to curate. Since then, IF has taken on an inflated level of importance and can even influence hiring and promotion decisions. But can a single number really summarize the value of a scientific publication?
IF is calculated by taking the number of citations a journal's articles received in a given year to items published in the previous two years, and dividing that by the number of citable articles the journal published over those two years.
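As a toy illustration, with every number invented purely for the example, the arithmetic looks like this in a few lines of Python:

    # Toy two-year impact factor calculation; all numbers below are made up.
    citations_in_2013 = 800   # citations received in 2013 to articles published in 2011-2012
    citable_articles = 200    # citable articles the journal published in 2011-2012

    impact_factor_2013 = citations_in_2013 / citable_articles
    print(impact_factor_2013)  # 4.0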
One reason Dr. Marks became involved in DORA is that he is co-editor of a journal whose IF had been steadily dropping over the last few years, a trend experienced by numerous other cell biology journals. This led many in the field to question whether IF is really accurate and useful. As you might imagine, there are many factors that can skew IF one way or another: for example, in some fields papers are slower to catch on and might not start accumulating citations until well past the two-year window over which IF is calculated. Journal editors can also game the system by reducing the number of "citable" articles they publish: citable articles must be a certain length, so a journal that publishes many short articles can shrink the denominator and inflate its IF.
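Continuing the made-up numbers from the sketch above, the effect of shrinking the denominator is easy to see: citations to the short items still count in the numerator, but the items themselves drop out of the denominator.

    # Reclassify 50 of the 200 items as "non-citable" front matter (invented numbers).
    citations_in_2013 = 800             # numerator is unchanged
    citable_articles = 200 - 50         # denominator shrinks

    inflated_if_2013 = citations_in_2013 / citable_articles
    print(round(inflated_if_2013, 2))   # 5.33, up from 4.0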
So how reliable is the IF system? Are journals with a high IF really presenting the best science? A few years ago the editors at one journal (Infection and Immunity) set out to address that very question, and the answer may (or may not) surprise you. The editors found a strong correlation between IF and retractions (see graph).
[Graph source: Infect. Immun., October 2011, vol. 79, no. 10, pp. 3855-3859]
There are a few alternatives to IF, including Eigenfactor and SCImago, which are similar to IF but weight citations by the influence of the citing journals, and metrics built on Google's PageRank algorithm, which ranks journals by analyzing the network of citations among them. However, these alternatives generally produce rankings similar to IF.
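To give a flavor of how a PageRank-style metric differs from a simple citation count, here is a minimal Python sketch on an invented three-journal citation network. The journal names, citation counts, and damping value are all made up for illustration; Eigenfactor's and SCImago's actual algorithms differ in their details.

    import numpy as np

    # Invented citation network: citations[i][j] = citations from journal i to journal j.
    journals = ["Journal A", "Journal B", "Journal C"]
    citations = np.array([
        [0, 30, 10],
        [20, 0, 5],
        [50, 40, 0],
    ], dtype=float)

    # Row-normalize so each journal's outgoing citations become weights.
    transition = citations / citations.sum(axis=1, keepdims=True)

    # Standard PageRank power iteration: a citation from a highly ranked
    # journal contributes more than one from a lowly ranked journal.
    damping = 0.85
    n = len(journals)
    scores = np.full(n, 1.0 / n)
    for _ in range(100):
        scores = (1 - damping) / n + damping * scores @ transition

    for name, score in sorted(zip(journals, scores), key=lambda pair: -pair[1]):
        print(f"{name}: {score:.3f}")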
The real issue isn't the rankings themselves but how we as scientists use them. If the system is going to change, it will have to start with us. Scientists must decide together to de-emphasize impact factors and publication rankings when making decisions about promotions, hiring, and grants.
Nicole Aiello
PSPG Communications