The Use of JCR and JPI in Measuring
Short- and Long-Term Journal Impact

Presented by
Eugene Garfield
Chairman Emeritus, ISI
Publisher, The Scientist
3501 Market Street
Philadelphia, PA 19104

Tel. 215-243-2205
Fax 215-387-1266
email: garfield@codex.cis.upenn.edu
Home Page: www.EugeneGarfield.org

Presented at
Council of Science Editors Annual Meeting
May 9, 2000

I first mentioned the idea of an impact factor in 1955 [1]. At that time it did not occur to me that impact would one day become the subject of widespread controversy. Like nuclear energy, the impact factor has become a mixed blessing. It has been used constructively to select the best journals for Current Contents® and the Science Citation Index®, and for library collections. However, it has been misused in many situations, especially in the evaluation of individual researchers.

In the early 1960s, Irving H. Sher and I created the journal impact factor to help select journals for the Science Citation Index (SCI®). It was obvious that a core group of large, highly cited journals needed to be covered in the SCI. However, we also recognized that certain small journals would not be selected if we depended solely on total citation counts [2]. We needed a simple method for comparing journals regardless of their size, and the journal impact factor was the result.

However, the term "impact factor" has gradually evolved, especially in Europe, to mean both journal and author impact. This ambiguity often causes problems. It is one thing to use impact factors to compare journals and quite another to use them to compare individual authors. Journal impact factors generally involve relatively large populations of articles and citations. Individual authors, on average, produce much smaller numbers of articles.

Everything I will tell you today has been stated repeatedly over the past 30 years. A recent tutorial appeared in the Canadian Medical Association Journal [3].

A journal's impact factor is based on two elements: (a) the numerator, which is the number of citations in the current year to any items published in the journal in the previous two years, and (b) the denominator, which is the number of substantive articles (source items) published in those same two years. The impact factor could equally have been based solely on the previous year's articles, which would give even greater emphasis to current research. Alternatively, a less current impact factor could be based on three or more previous years.
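To make the arithmetic concrete, the calculation reduces to a single ratio. The sketch below expresses it in Python; the journal and its numbers are invented for illustration, not drawn from JCR.

    # A minimal sketch of the two-year impact factor calculation.
    # The journal and its figures are hypothetical.
    def impact_factor(citations_current_year, source_items_prev_two_years):
        """Citations received in the current year by items the journal
        published in the previous two years, divided by the substantive
        (source) items it published in those same two years."""
        return citations_current_year / source_items_prev_two_years

    # Hypothetical journal: 420 citations in 2000 to its 1998-99 items,
    # of which 150 were counted as source articles.
    print(impact_factor(420, 150))  # 2.8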

All citation studies can be normalized to take into account time variables such as half-life, as well as discipline and citation density. The citation density (references cited per source article) is significantly lower for mathematics than for the life sciences. The half-life (the number of cited years that account for 50% of the current year's citations) would be longer in physiology than in molecular biology. The impact factors currently reported each year by the Institute for Scientific Information in Journal Citation Reports® (JCR®) may not provide a complete enough picture for slower moving fields with longer half-lives. However, annual JCR data can be cumulated. Regardless, when journals are studied within disciplines, the rankings based on 1-, 7- or 15-year impact factors do not differ significantly, as I reported for 200 journals in The Scientist [4,5]. When journals were studied across fields, the ranking for physiology journals improved significantly as the number of years increased, but the rankings within the group did not change significantly.
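The half-life idea can be sketched in a few lines as well; the citation distributions below are invented, and the function illustrates the concept rather than ISI's exact procedure.

    # A rough sketch of cited half-life: how many publication years,
    # counting back from the current year, are needed to account for
    # 50% of the citations the journal received this year.
    def cited_half_life(citations_by_cited_year):
        """citations_by_cited_year[0] holds this year's citations to
        items published this year, [1] to items one year old, etc."""
        total = sum(citations_by_cited_year)
        running = 0
        for years_back, count in enumerate(citations_by_cited_year, start=1):
            running += count
            if running >= total / 2:
                return years_back
        return len(citations_by_cited_year)

    # A fast-moving field concentrates citations in recent years...
    print(cited_half_life([300, 250, 120, 60, 30, 20, 10]))            # 2
    # ...while a slower-moving field spreads them over many more years.
    print(cited_half_life([60, 80, 90, 90, 85, 80, 75, 70, 65, 60]))   # 5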

Hansen and Henriksen [6] reported "good agreement between the journal impact factor and the overall [cumulative] citation frequency of papers on clinical physiology and nuclear medicine." However, clinical editors, especially of foreign-language journals, are not pleased with impact evaluations, since the international research and clinical literature is dominated by English. Local clinical journals are by definition less relevant for most researchers and are cited less frequently. Nevertheless, they are of great interest to drug firms for marketing reasons.

Journal Citation Reports tacitly implies that editorial items in journals as diverse as Science, Nature, JAMA, CMAJ, BMJ, and The Lancet can be neatly categorized. Such journals publish large numbers of items that are neither traditional substantive research nor review articles. These items (e.g., letters, news stories and editorials) are not included in JCR's calculation of impact. Yet we all know that they are cited, especially in the most recent year. However, the JCR numerator includes citations to all items published in these journals. The assignment of article codes is based on human judgment: a news story might be perceived as a substantive article, and a research letter might not be. Furthermore, no effort is made to differentiate clinical from laboratory studies or, for that matter, practice-based from research material.

There is a widespread but mistaken belief that the size of the scientific community that a journal serves affects the journal's impact factor. While larger journals receive more citations, those citations must be shared among an equally larger number of published articles. Many articles in large fields are not well cited, whereas articles in small fields may have unusual impact. Therefore, the key determinants of impact are not the number of authors or articles in the field, but rather the mean number of citations per article (density) and the half-life or immediacy of citations. This distinction was explained many years ago in an essay on "Garfield's constant" [7].

The size of a field, however, will determine the number of "super-cited" papers. Some are theoretical; others are methodology papers. Thousands of methodology papers never achieve citation distinction. In fact, citations to the super-cited papers rarely affect the short-term impact factors reported in JCR, but they do have a significant effect when one calculates long-term impact factors. Some analysts censor such papers from the data, since their inclusion may distort the results inordinately.

The time required to review manuscripts may also affect impact. If reviewing and publication are delayed, an article's references may no longer be current when it finally appears, and citations falling outside the two-year window are not included in the JCR impact calculation. Even the appearance of articles on the same subject in the same issue of a journal may have an effect. Opthof [8] recently showed how journal impact performance varies from issue to issue.

For greater precision, it is preferable to conduct item-by-item journal audits so that any differences in impact for different types of editorial material can be taken into account [9]. As stated earlier, for a small number of journals a bias may be introduced by including in the numerator citations to items that are not part of the denominator of source articles. However, most journals publish primarily substantive research or review articles, so such statistical discrepancies are significant only in rare cases. The JCR data have come under some criticism for this reason, among others [10].

Most editorial discrepancies are eliminated altogether in another database called the ISI Journal Performance Indicators (JPI) (http://www.isinet.com/isi/products/rsg/products/jpi). This annual compilation now covers citations from 1981 to 1998. Since the JPI database links each source item directly to its citations, the impact calculations are more precise. Using JPI you can also obtain cumulative impact measures for longer periods. For example, the cumulative impact for CMAJ articles published in 1981 is 9.04, derived by dividing the number of citations those articles received between 1981 and 1998 (2024) by the number of articles published that year (224). Using JPI data, I was able to calculate 7- and 15-year impact factors for the 200 high-impact journals mentioned earlier [4,5].
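Expressed as a short calculation, using the figures just quoted (the variable names are mine, for illustration):

    # Cumulative impact of CMAJ's 1981 articles, from the figures above.
    articles_1981 = 224            # source articles CMAJ published in 1981
    citations_1981_to_1998 = 2024  # citations those articles received, 1981-1998

    cumulative_impact = citations_1981_to_1998 / articles_1981
    print(round(cumulative_impact, 2))  # 9.04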

In addition to helping libraries decide which journals to purchase, journal impact factors are also used by authors to decide where to submit their articles. As a general rule, the journals with high impact factors are among the most prestigious today. The perception of prestige is a murky subject. Some would equate prestige with high impact. However, some librarians argue that the numerator in the impact-factor calculation is in itself even more relevant: Bensman [11] stated that this 2-year citation count is a better guide to journal significance and cost-effectiveness than is the impact factor.

Journal impact can also be useful in comparing expected and actual citation frequency. Thus, when ISI prepares a "Personal Citation Report," it provides data on the expected citation impact not only for a particular journal but also for a particular year, because impact changes from year to year. For historical comparisons, a 1955 article cited 250 times might be considered a "citation classic," whereas the threshold for a 1975 article might be 400 citations, and for a 1995 article, 1000. These are somewhat arbitrary thresholds. When we solicited author commentaries on Citation Classics, we often chose the most-cited papers for a given journal, which might be the only journal in its field.

The use of journal impact factors instead of actual article citation counts for evaluating authors is probably the most controversial issue. Granting and other policy agencies often wish to bypass the work involved in obtaining actual citation counts for individual articles and authors. Arguably, recently published articles may not have had enough time to be cited, so it is tempting to use the journal impact factor as a surrogate, virtual count. Presumably the journal's impact and the mere acceptance of the paper for publication are implied indicators of prestige and expected subsequent citation. Typically, when the author's bibliography is examined, the journal's impact factor is substituted for the actual citation count of each article. Thus, using the impact factor to weight the influence of a paper amounts to a prediction.

While it is true that the average paper is not cited for two or three years, a significant percentage are cited quite rapidly. Indeed, there is a myth in citation analysis that recent papers cannot be evaluated, but many papers achieve rapid impact: their citation frequency in the first six to eighteen months indicates that they are putative Citation Classics. This pattern of immediacy has enabled ISI to identify "hot papers" in its bimonthly publication Science Watch®. However, full confirmation of high impact is generally obtained two years later. By the time The Scientist interviews the authors of such "hot papers," the field has usually moved on to another key phase of development. A series of such hot papers may be a predictor of Nobel-class recognition.

Of the many conflicting opinions about impact factors, I believe that Hoeffel [12] expressed the situation succinctly:

Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. These journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.

References:
1. Garfield E. "Citation indexes to science: a new dimension in documentation through the association of ideas," Science 122:108-11 (1955).
http://garfield.library.upenn.edu/essays/v6p468y1983.pdf

2. Brodman E. "Choosing physiology journals," Bulletin of the Medical Library Association 32:479 (1944).

3. Garfield E. "Journal impact factor: a brief review," Canadian Medical Association Journal 161(8):979-80 (1999).
http://www.cma.ca/cmaj/vol-161/issue-8/0979.htm

4. Garfield E. "Long-term vs. short-term journal impact: does it matter?" The Scientist 12(3):10-2 (1998).
http://www.the-scientist.library.upenn.edu/yr1998/feb/research_980202.html

5. Garfield E. "Long-term vs. short-term journal impact (part II)," The Scientist 12(14):12-3 (1998).
http://www.the-scientist.library.upenn.edu/yr1998/july/research_980706.html

6. Hansen HB, Henriksen JH. "How well does journal 'impact' work in the assessment of papers on clinical physiology and nuclear medicine?" Clinical Physiology 17(4):409-18 (1997).

7. Garfield E. "Is the ratio between number of citations and publications cited a true constant?" Current Contents No. 6 (Feb 9, 1976). Reprinted in Essays of an Information Scientist, Vol. 2, p. 419-21. Philadelphia: ISI Press.
http://garfield.library.upenn.edu/essays/v2p419y1974-76.pdf

8. Opthof T. "Submission, acceptance rate, rapid review system and impact factor," Cardiovascular Research 41(1):1-4 (1999).

9. Garfield E. "Which medical journals have the greatest impact?" Annals of Internal Medicine 105(2):313-20 (1986).
http://garfield.library.upenn.edu/essays/v10p007y1987.pdf

10. Van Leeuwen TN, Moed HF, Reedijk J. "JACS still topping Angewandte Chemie: beware of erroneous impact factors," Chemical Intelligencer 3:32-6 (1997).

11. Bensman SJ. "Scientific and technical serials holdings optimization in an inefficient market: a LSU serials redesign project exercise," Library Resources & Technical Services 42(3):147-242 (1998).

12. Hoeffel C. "Journal impact factors [letter]," Allergy 53(12):1225 (1998).