BASIC FACTS YOU NEED TO KNOW ABOUT THE IMPACT FACTOR OF A JOURNAL OR ANY ACADEMIC PAPER
The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations received by articles published in that journal during the preceding two years, as indexed by Clarivate's Web of Science. As a journal-level metric, it is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factor values are regarded as more important, or as carrying more prestige in their respective fields, than those with lower values. While frequently used by universities and funding bodies to decide on promotions and research proposals, it has increasingly come under attack for distorting good scientific practice.
History
The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI) in Philadelphia. Impact factors began to be calculated yearly starting from 1975 for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992 and became known as Thomson ISI. In 2016, Thomson Reuters spun off and sold its intellectual property and science business, including ISI, to Onex Corporation and Baring Private Equity Asia, which founded a new corporation, Clarivate, now the publisher of the JCR.
Calculation
In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years and the total number of "citable items" published in that journal during the two preceding years:
\text{IF}_y = \frac{\text{Citations}_y}{\text{Publications}_{y-1} + \text{Publications}_{y-2}}
For example, Nature had an impact factor of 41.577 in 2017:

\text{IF}_{2017} = \frac{\text{Citations}_{2017}}{\text{Publications}_{2016} + \text{Publications}_{2015}} = \frac{74090}{880 + 902} = 41.577
This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. Note that 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.
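The arithmetic can be checked directly. The following is a minimal sketch in Python (the function name is illustrative, not part of any Clarivate tool), using the Nature figures from the example above:

def two_year_impact_factor(citations_y, publications_y1, publications_y2):
    # Citations received in year y to items from the two preceding years,
    # divided by the number of citable items published in those years.
    return citations_y / (publications_y1 + publications_y2)

# Nature, 2017: 74,090 citations to the 880 items of 2016
# and the 902 items of 2015.
print(round(two_year_impact_factor(74090, 880, 902), 3))  # prints 41.577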
The value of the impact factor depends on how "citations" and "publications" are defined; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows. "Publications" are items classed as "article", "review" or "proceedings paper" in the Web of Science (WoS) database; other items such as editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, defined and calculated by ISI, that cannot be verified by external users.
New journals, which are indexed from their first published issue, receive an impact factor after two years of indexing; in this case, the citations to, and the number of articles published in, the year prior to volume 1 are known zero values. Journals that are indexed starting with a volume other than the first will not receive an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with fewer than two years of indexing, based on partial citation data. The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, which affects the count. The impact factor relates to a specific time period, and it can be calculated for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years.
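In the notation of the two-year formula above, the five-year impact factor for year y can be written as:

\text{IF}^{\text{5-year}}_y = \frac{\text{Citations}_y}{\text{Publications}_{y-1} + \text{Publications}_{y-2} + \cdots + \text{Publications}_{y-5}}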
Use
While originally devised as a tool to help university librarians decide which journals to purchase, the impact factor soon came to be used as a measure for judging academic success. This use of impact factors was summarised by Hoeffel in 1998:
Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty....In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.
As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Eugene Garfield, the inventor of the JIF, agreed with Hoeffel but warned about its "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal". Despite this warning, the use of the JIF has evolved, and it now plays a key role in the assessment of individual researchers, their job applications and their funding proposals. In 2005, The Journal of Cell Biology noted that:
Impact factor data ... have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses.
More targeted research has begun to provide firm evidence of how deeply the impact factor is embedded within formal and informal research assessment processes. A 2019 review studied how often the JIF featured in documents related to the review, promotion and tenure of scientists in US and Canadian universities. It concluded that 40% of universities focused on academic research specifically mentioned the JIF in such review, promotion and tenure processes. And a 2017 study of how researchers in the life sciences behave described "everyday decision-making practices as highly governed by pressures to publish in high-impact journals". The deeply embedded nature of such indicators affects not only research assessment but also the more fundamental question of what research is undertaken at all: "Given the current ways of evaluation and valuing research, risky, lengthy, and unorthodox projects rarely take center stage."
Criticism
Numerous critiques have been made of the use of impact factors, in terms both of their statistical validity and of their implications for how science is carried out and assessed. A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that are not normally distributed, and suggested that it would be more appropriate to present the median of these data. There is also a more general debate on the validity of the impact factor as a measure of journal importance, and on the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behaviour of scholars, editors and other stakeholders. Still others argue more generally that the emphasis on impact factors results from the negative influence of neoliberal politics on academia. These more politicised arguments demand not just the replacement of the impact factor with more sophisticated metrics but also discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education.
Inapplicability of impact factor to individuals and between-discipline differences
It has been stated that impact factors and citation analysis in general are affected by field-dependent factors that invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The percentage of total citations occurring in the first two years after publication also varies widely among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. Thus impact factors cannot be used to compare journals across disciplines.
Impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. In 2004, the Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. Other studies have repeatedly stated that the impact factor is a metric for journals and should not be used to assess individual researchers or institutions.