Journal of the American Society for Information Science
41(3):229-230, 1990

Response to the Panel on Evaluation of Scientific Information and the Impact of New Information Technology

Eugene Garfield

Institute for Scientific Information

Philadelphia, PA 19104





It is difficult in five minutes to deal with the many different subjects in Dr. Philip Abelson's talk. There are at least 25 different areas I could comment on separately. On Ben Lipetz's paper, I think it would be worth calling to your attention a 1987 paper by Manfred Kochen on how we acknowledge our intellectual debts (Kochen, 1987).

First, Dr. Abelson mentions that many articles are never cited. Indeed, from 1955 through 1987, the Science Citation Index included 11.5 million source items of all types: articles, reviews, letters to the editor, short communications, technical notes, and meeting abstracts. Of these, 56% were never cited at all over this 32-year period. However, there really are very few data on citedness. We need to know a lot more about which types of items do or do not get cited, and which are not expected to be cited. For example, journals such as the Review of Scientific Instruments may have very large numbers of uncited papers because such papers are there for the record only and may require no future discussion. Also, Dr. Abelson says that the publish-or-perish syndrome has led to the increasing phenomenon of "least publishable units": fragmentary and essentially duplicate research articles. As these increase, the number of opportunities for not being cited expands. I could give a rather long discourse on uncitedness; it would make a great dissertation topic for students to consider.

Dr. Abelson uses uncitedness to support his point that many journals are not read. I don't think you can draw that conclusion. If you ask readers, you may find that they read a large number of less cited journals as well as journals like Science. Also, anyone in this room who scans Science regularly is happy to find one or two papers to read in each issue. If you read every article in Science, you would never do anything else. But since there are so many readers of journals such as Science and Nature, we assume that all of the material in the big, multidisciplinary journals does get read by someone. That is not necessarily the case. In fact, ISI data show that not every article in these journals is cited.

Dr. Abelson also indicated that the lay press seems to read only the top five journals. I think we can provide evidence to the contrary. Whatever may have been true in the past may be changing as a result of data that ISI provides on what we call "hot" articles. In our newspaper, The Scientist, and in Current Contents we regularly publish studies of highly cited articles. I think that scientists need to do more to help journalists determine what is significant in their specialties. As an example, chemistry is not particularly well covered in the top five or fifteen journals. Neither are ecology, marine biology, and a number of other subjects that are, or should be, of considerable interest to the press.

Apparently, it takes about 14 weeks to review and accept an article for Science. Just as interesting from the author's point of view is how long it takes for a paper to be rejected. About 40% of manuscripts are rejected by editorial board members within two weeks. Of the 60% that are sent out for peer review, two-thirds are eventually rejected over the next three months. I don't know how this compares with other leading journals, but I think that a two- to three-month wait can seem intolerable to some authors, especially when added to the further delay in sending the rejected manuscript to another journal.
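As a rough consistency check (a sketch using only the approximate percentages just quoted, not any additional data), these figures imply an overall acceptance rate of

\[
P(\text{accept}) \approx 0.60 \times \left(1 - \tfrac{2}{3}\right) = 0.20,
\]

or about one paper in five, which is broadly in line with the acceptance rate of roughly 18% discussed below.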

Dr. Abelson said that in reviewing more than 100,000 manuscripts over his long career, "relatively few excellent [ones] were rejected." This is a statement we have to think about a little more seriously. Given enough time, I could probably provide evidence of hundreds of articles that were rejected by Science and other important journals but went on to have great citation records. How could it be otherwise? If only 18% of submissions are accepted by Science, and if most submissions are of good quality, then a large number of quality papers are rejected. Out of this large number of good papers, it is inevitable that many will be published elsewhere and some will become citation classics.

On the question of reasonable rates of publication, I think it is unwise and unreasonable to think that scientists in any field, especially biomedicine, should be limited to two or three papers per year. Such an injunction would restrict the best scientists. We know that a relatively small group of scientists accounts for a large percentage of both publications and citations. It is not unusual for elite scientists to have published hundreds of papers in their lifetimes. In fact, I remember going to a party celebrating Carl Djerassi's 1,000th paper. The number of people who have published over 500 papers is substantial, although small in relation to the total population. In a study we did over 25 years ago, we showed that Nobel scientists published 17 times more cited papers than the average scientist in the 1961 SCI file and were cited 30 times more often (Sher & Garfield, 1966). Also, in a study of the 1,000 most-cited scientists, we found that the average author published 121 cited papers during the 14-year period from 1965 to 1978, or about nine papers per year (Garfield, 1983). For these reasons, it is not a good or practical idea even to try to limit the number of annual publications per author. This is a separate issue from that of asking individuals to name their best five papers when up for faculty review. I think that such exercises are less significant for the best researchers. The question of how to test for "quality" is another topic that deserves discussion.

In another part of his talk, Dr. Abelson criticized commercial publishers for greatly increasing the number of journals and, as a consequence, allowing virtually every manuscript to get published eventually. Surely he is aware that the American Chemical Society, the American Institute of Physics, the American Physical Society, and other nonprofit publishers also create new specialty journals. Are they to be condemned for this? As I recently said in an editorial in The Scientist, it is nonsense to complain about "too many" new journals, just as it would be to complain about too many new scientific discoveries (Garfield, 1988). Twigging is the inexorable result of realignments in the organization of disciplines. That has been true for decades, and nothing is going to change it, unless we close down the shop. Some of the most successful private journals are the result of the inertia of scientific societies and their unwillingness or inability to respond to the demands and pressures of scientists in almost every field. The idea of trying to suppress the number of journals is futile.

I don't want to confuse this issue with the very real problem faced by libraries whose budgets are strained by the enormous number of journals available today. But why should libraries feel, to quote Dr. Abelson, "loathe to discontinue subscriptions" started five or ten years ago? There is no law that says every library has to have a complete set of everything. All libraries, including the Library of Congress, have to be selective. Even Science is not held by every library.

If a publisher can find 300-400 libraries or individuals willing to buy a new journal, that may be sufficient to support an emerging specialty until it matures. When the journal becomes widely accepted, comprehensive libraries may then have to consider whether to acquire it. By waiting, they also avoid purchasing journals that die long before they mature.

Finally, I would like to make a few comments about the Darsee case, because no one has really investigated the actual impact of his work. I certainly do not have the resources to do this on a large scale. Bruce Dan, an editor at the American Medical Association, reported humorously that he got 29 feet of printout when he searched the SCI online for papers that cited Darsee. What does that tell us? No effort was made to find out what these citing authors had actually done with the work. How many attempted replication? How many were misled by the work and wasted time and resources? How many incorporated Darsee's data into their studies? The fact is, scientific research is a cumulative process in which there is a lot of room for error, deliberate or unwitting. I believe that a close examination of the Darsee case would illustrate what is generally considered to be true: that the trivial, uninteresting, and erroneous in science is ultimately ignored, for all intents and purposes.

The last point I will make has to do with the philosophical thrust of Dr. Abelson's article, with which I agree in general. The increasing inability to replicate experiments, especially in the physical sciences but also in biomedicine, is not necessarily an accident. Scientists work in increasingly larger teams, and the competition for grants, awards, and priority in discovery is intensifying. Even the attempt to reproduce experimental results would put individual researchers behind the rest of the pack. The redundancy, or reproducibility as Dr. Abelson calls it, that existed in science before is now absent. Professor V. V. Nalimov of Moscow State University and philosophers of science such as Karl Popper might, in fact, point out that this aspect of "Big Science" indicates the possible limits of scientific inquiry. As Claude Shannon demonstrated years ago in his work on communication theory, without redundancy there is increasing error in transmission. I think that is what we're seeing today in scientific communication.

References

Garfield, E. (1983). The 1,000 contemporary scientists most-cited 1965-1978. Part 1. Essays of an Information Scientist, Vol. 5. Philadelphia: ISI Press.

Garfield, E. (1988, March 7). Too many journals? Nonsense! The Scientist, 2(5), 11.

Kochen, M. (1987). How well do we acknowledge intellectual debts? Journal of Documentation, 43, 54-64. (Reprinted in: Garfield, E. (1989, June 19). Manfred Kochen: In memory of an information scientist. Current Contents, 32(21), 3-14.)

Sher, I. H., & Garfield, E. (1966). New tools for improving and evaluating the effectiveness of research. In M. C. Yovits, D. M. Gilford, R. H. Wilcox, E. Staveley, & H. D. Lerner (Eds.), Proceedings of the conference sponsored by the Office of Naval Research, July 27-29, 1965 (pp. 135-146). New York: Gordon & Breach. (Reprinted in: Garfield, E. (1983). Essays of an Information Scientist, Vol. 6. Philadelphia: ISI Press.)