In an Open Access Q&A earlier this week, Peter Suber made the case that the Impact Factor is not a good way to assess research quality, particularly in the context of tenure and promotion decision-making. He argued that those in the discipline should actually read the articles instead.
The San Francisco Declaration on Research Assessment (DORA) includes a good explanation of the Impact Factor's flaws and recommends abandoning it as a measure of scholarship quality.
The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].
A number of themes run through these recommendations:
The need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
The need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
The need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
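Deficiency (A) above, skewed citation distributions, is easy to demonstrate. The two-year Journal Impact Factor is just an arithmetic mean: citations received in year Y to items published in years Y−1 and Y−2, divided by the number of citable items from those two years. A minimal sketch with made-up citation counts (not real journal data) shows how a single blockbuster paper can drag the mean far above what a typical article in the journal achieves:

```python
# Toy illustration with hypothetical citation counts (not real data).
# The two-year Impact Factor is the mean: citations in year Y to items
# published in Y-1 and Y-2, divided by the number of citable items.

def impact_factor(citations_to_recent_items, citable_items):
    """Mean citations per citable item over the two-year window."""
    return citations_to_recent_items / citable_items

# Hypothetical journal: one blockbuster paper, nine barely-cited ones.
citations_per_paper = [180, 3, 2, 2, 1, 1, 1, 0, 0, 0]

jif = impact_factor(sum(citations_per_paper), len(citations_per_paper))
median = sorted(citations_per_paper)[len(citations_per_paper) // 2]

print(f"Impact Factor (mean): {jif}")    # 19.0
print(f"Median citations:     {median}")  # 1 -- most papers sit far below the IF
```

Here the journal's Impact Factor of 19.0 describes almost none of its articles; the median paper earned a single citation. This is exactly why judging an individual article by its journal's IF is misleading.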
This recent SPARC Europe briefing paper tackles the problems with current methods of evaluating research (including the impact factor and h-index) and proposes some future directions:
The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.
It is no exaggeration to say that improving assessment is literally the most important challenge facing academia. Everything else follows from it. As shown later in this paper, it is possible to improve on the present state of the art.
The “Periodicals Price Survey” for 2014 has just been published in Library Journal. It reports that while the general economic climate is positive, “if the broad figures are closely scrutinized, public funding and spending in libraries have not yet recovered to 2008 levels adjusted for inflation or population growth.” As usual, the report provides a diverse range of analyses, e.g. Average 2014 Price for Scientific Disciplines; Average 2014 Price Per Title by Country; Average 2014 Price for Online Journals in the ISI Indexes; Periodical Prices for University and College Libraries, etc. The report forecasts that not much will change price-wise in 2015: “The 2014 6% average price increase is expected to remain stagnant for 2015, hovering in the 6% to 7% range. That 6% seems to be a level of inflation that is neither too hot for libraries nor too cold for publishers, so for the time being, 6% is a safe bet. However, it is only April, and a lot could change before 2015 pricing is finalized.”
This year’s “Periodicals Price Survey” also has a particularly interesting analysis, “Measuring the Value of Journals”. The authors write that price is not the sole factor determining value. Increasingly, altmetrics are being utilized to assess the impact of journals. The report explores “the relationship between prices and metrics used to assess journals like Impact Factor, Eigenfactor, and the Article Influence Score.” The authors also analyze the relationships between the cost of periodicals and the number of citations.
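One crude way to relate subscription price to impact, in the spirit of the survey's price-versus-citations analysis, is a cost-per-citation figure. A minimal sketch, using entirely hypothetical journal names, prices, and citation counts (none of these are from the report):

```python
# Hypothetical journals with made-up prices and citation counts
# (illustrative only; not figures from the Periodicals Price Survey).
journals = {
    "Journal A": {"price_usd": 4500, "citations": 9000},
    "Journal B": {"price_usd": 1200, "citations": 1500},
    "Journal C": {"price_usd": 300,  "citations": 900},
}

# Cost per citation: subscription price divided by citations received.
# A lower figure suggests more "impact per dollar" under this crude metric.
for name, data in sorted(journals.items(),
                         key=lambda kv: kv[1]["price_usd"] / kv[1]["citations"]):
    cpc = data["price_usd"] / data["citations"]
    print(f"{name}: ${cpc:.2f} per citation")
```

On these invented numbers, the cheapest journal per citation is not the cheapest journal outright, which is the point of comparing price against usage metrics rather than price alone.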
“Today’s journals are still the best scholarly communication system possible using 17th century technology.”
Jason Priem, altmetrics innovator and creator of such tools as ImpactStory, gives persuasive reasons to “decouple” the journal.
He notes that we have not allowed the web to revolutionize scholarly communication and that “online journals are essentially paper journals delivered by faster horses.”
In addition to using altmetrics as a broader and more meaningful measure of impact, journals could be decoupled. Instead of every journal providing all services itself, separately and redundantly, authors could pick and choose among various (decoupled) providers of the four major journal functions: dissemination, certification, archiving, and registration. For instance, scholarly societies might provide peer review services, and institutional or subject-based repositories might provide archiving and registration, while the author might choose to do their own marketing through tweets, blogs, and scholarly contacts.
Jason’s own description of this new publishing model is more eloquent. His in-depth article has been published in Frontiers in Computational Neuroscience.
Kevin Smith has a recent blog post on these ideas that is helpful as well.