We can do better than the Impact Factor

In an Open Access Q&A earlier this week, Peter Suber made the case that the Impact Factor is not a good way to assess research quality, particularly in the context of tenure and promotion decisions. He argued that those in the discipline should instead actually read the articles.
The San Francisco Declaration on Research Assessment includes a good explanation of the Impact Factor's flaws and recommends abandoning it as a measure of scholarship quality:

The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].

….

A number of themes run through these recommendations:

  • The need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;

  • The need to assess research on its own merits rather than on the basis of the journal in which the research is published; and

  • The need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
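The first deficiency listed above, skewed citation distributions, is easy to illustrate. The sketch below uses invented citation counts for a hypothetical journal; the point is simply that a journal-level mean (which is roughly what the Impact Factor tracks) can sit far above what a typical article in that journal actually receives.

```python
import statistics

# Hypothetical citation counts for 20 articles in one journal over the
# two-year Impact Factor window. These numbers are invented purely to
# show a typically skewed distribution: a few highly cited papers and
# many with few or no citations.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6, 8, 10, 25, 60, 140]

mean_citations = statistics.mean(citations)      # roughly what the IF reflects
median_citations = statistics.median(citations)  # what a "typical" article gets

print(f"mean (IF-like):  {mean_citations:.1f}")   # ~13.9
print(f"median article:  {median_citations:.1f}")  # 3.0
```

With numbers like these, the journal-level average is more than four times the median article's citation count, which is why judging an individual paper by its journal's Impact Factor is so unreliable.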


Better Ways to Evaluate Research

This recent SPARC Europe briefing paper tackles the problems with current methods of evaluating research (including the impact factor and h-index) and proposes some future directions:

The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.

It is no exaggeration to say that improving assessment is literally the most important challenge facing academia. Everything else follows from it. As shown later in this paper, it is possible to improve on the present state of the art.
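One of the metrics the briefing paper critiques, the h-index, is at least easy to define: a researcher has an h-index of h if h of their papers have each been cited at least h times. A minimal sketch of the computation, using invented citation counts, shows how quickly it flattens out once a researcher's most-cited papers are counted.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h papers
    have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: 8 papers with these citation counts.
print(h_index([25, 12, 9, 7, 5, 3, 1, 0]))  # -> 5
```

Note that the three most-cited papers here could gain hundreds more citations without moving the h-index at all, which is one of the distortions the paper discusses.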

Megajournals and Open Access

An article in yesterday’s Chronicle of Higher Education analyzes the megajournal PLOS ONE and, along the way, discusses the gathering momentum of the OA movement and related issues such as impact factors and predatory publishers.

As an Open-Access Megajournal Cedes Some Ground, a Movement Gathers Steam

In short, PLOS ONE — now consistently publishing around 30,000 articles a year — has attracted much more company in its mission to build huge stocks of freely available scientific research. “Since PLOS ONE’s tremendous success, everyone and their grandmother has created a megajournal,” said David J. Solomon, an emeritus professor of medicine at Michigan State University who studies open-access economics.

After years of traditional journals battling the open-access movement, said another analyst, Jevin D. West, an assistant professor of information studies at the University of Washington, “look at all the major publishers — they’re all playing now.”

Wikipedia as “bootlegger”

In a new article, Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science, the authors analyze Wikipedia citations for the presence of high-impact journal articles and open access articles. Their conclusion:

We found that across languages, a journal’s academic status (impact factor) routinely predicts its appearance on Wikipedia. We also demonstrated, for the first time, that a journal’s accessibility (open access policy) generally increases probability of referencing on Wikipedia as well, albeit less consistently than its impact factor. The odds that an open access journal is referenced on the English Wikipedia are about 47% higher compared to closed access, paywall journals. Moreover, of closed access journals, those with high impact factors are also significantly more likely to appear in the English Wikipedia. Therefore, editors of the English Wikipedia act as “bootleggers” of high quality science by distilling and distributing otherwise closed access knowledge to a broad public audience, free of charge. Moreover, the English Wikipedia, as a platform, acts as an “amplifier” for the (already freely available) open access literature by preferentially broadcasting its findings to millions. There is a trade-off between academic status and accessibility evident on Wikipedias around the world.
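The “about 47% higher” figure is an odds ratio of roughly 1.47, not a 47-point jump in probability. The sketch below converts an assumed baseline probability into the corresponding open-access probability under that odds ratio; the 40% baseline is invented purely for illustration and does not come from the study.

```python
# Quoted finding: odds of being referenced on English Wikipedia are
# ~47% higher for open access journals, i.e. an odds ratio of ~1.47.
odds_ratio = 1.47
baseline_prob = 0.40  # assumed P(referenced | closed access), for illustration only

baseline_odds = baseline_prob / (1 - baseline_prob)
oa_odds = baseline_odds * odds_ratio
oa_prob = oa_odds / (1 + oa_odds)

print(f"closed access: {baseline_prob:.0%} chance of being referenced")
print(f"open access:   {oa_prob:.0%} under the same assumed baseline")  # ~49%
```

In other words, a 47% increase in odds translates into a more modest shift in probability, though the direction of the effect is what matters for the authors' “amplifier” argument.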
