In an Open Access Q&A earlier this week, Peter Suber made the case that the Impact Factor is not a good way to assess research quality, particularly in the context of tenure and promotion decision-making. He argued that those in the discipline should actually read the articles.
The San Francisco Declaration on Research Assessment includes a good explanation of the Impact Factor's flaws and recommends abandoning it as a measure of scholarship quality:
The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or "gamed") by editorial policy; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].
A number of themes run through the declaration's recommendations:
The need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
The need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
The need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).