Scientific Research, 1(8), pp. 4-6, August 1966.


LETTERS



More on the 'dark side'...


Dear Sir:

In the June 1966 issue of SCIENTIFIC RESEARCH there is an unsigned article on “The dark side of automated research.” Whoever wrote it was smart to keep his name off of it. For an article whose topic is “mediocrity” this one would qualify, but in some respects it deserves a much lower rating. I hope that the kind of scholarship exhibited by this article is not typical of what we can expect in the future from this new venture.

In the first place, I think the writer twists other people's comments to mean what he chooses them to mean. For example, he cites Derek Price's book, but does not give a specific page number, so that it is (for practical purposes) not possible to verify the statement. As far as I can remember, Price made no such statement, but rather showed quantitative criteria for measuring creativity in the past, present and future. There is mediocrity now, and there was mediocrity in the past. There is much more very good work and much more very bad work. There is no evidence that either grows more rapidly than the other.

On page 29, column 1 he states that “there will be increasing premium assigned by this index on increasing mediocrity.” The index presumably refers to a citation index count, and once again I cannot tell what he is referring to when he “cites” reference #4, B. V. Dean's “Operations Research in R&D.” I'd like to see the exact quote he has in mind. Without referring to Dean, however, I can state that your writer's statement is absolute poppycock and reveals him to be a man who has a flair for the English language but hasn't the slightest conception of what he is talking about.

If anything is true, then it is fair to state that citation indexing will eliminate more mediocrity in the literature than any other device we have so far developed. The most significant works of science are invariably highly cited. By this I mean that works that are universally recognized as important are highly cited works. This is not to say that a very important work, as yet unrecognized by the scientific community, may not go unnoticed and uncited. But there is absolutely no evidence that a mediocre work is frequently cited. Your writer makes the assumption, or attributes to someone else the assumption, that “mediocre papers will tend to cite other mediocre papers more frequently.”

Hasn't he ever heard that the mediocre guy is the one who regularly cites Einstein and other eminent authorities to establish his own bona fides?

If you would like an example of the potential value of citation measurements, then consider the work of Frank Rosenblatt, cited in the same article as a taste of things to come. Remember that this is, relatively, a highly cited work, and though it comes as no surprise to me, this determination could be made “automatically.” Much as we may deplore it, there are no other reliable measures available for such evaluation in an era of Big Science.

Your editors would be wise to examine the Science Citation Index to determine what the scientific community considers to be significant. I should hope that this would not prevent them from occasionally doing some intuitive thinking that would reveal an important contribution consigned to oblivion by his (the contributor's) peers.

EUGENE GARFIELD
Director
Institute for Scientific Information
Philadelphia, Pennsylvania
 
 

...and an SR rebuttal


Derek Price (“Little Science, Big Science,” Columbia Univ. Press) does indeed discuss criteria for measuring the creativity of scientists in the past, present and future—and he arrives at conclusions which Dr. Garfield seems to have overlooked.

The entire thrust of Price's stimulating book is based on the derivation of a “natural” law reflecting the distribution of scientific ability within the population of scientists. This law turns out to be a simple inverse-square law (p. 43) if the number of scientists (ordinate) is plotted against the measure of ability (abscissa). From this law Price concludes (p. 46) that if, for example, there are 100 scientists, half of all the scientific productivity will have been due to only the ten most notable scientists. Perhaps more serious is the result (also on p. 46) that the total number of scientists goes up as the square of the number of good ones, and that the total number of scientists doubles every 10 years but the number of noteworthy scientists only every 20 years!
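For concreteness, the arithmetic behind those figures can be sketched as follows. The notation and the intermediate sums below are ours, and only a heuristic approximation; the inverse-square form and the ten-of-one-hundred figure are Price's. Let $a(n)$ denote the number of scientists who each produce $n$ papers, with $A$ a constant fixed by the size of the field. Price's curve is the inverse-square form

\[
  a(n) \;\approx\; \frac{A}{n^{2}} \qquad \text{(p. 43)}.
\]

The scientists producing at least $n$ papers then number about $\sum_{k \ge n} A/k^{2} \approx A/n$, while their combined output is about $\sum_{k=n}^{n_{\max}} (A/k^{2})\,k \approx A\,\ln(n_{\max}/n)$, so output piles up in a thin upper tail. Price's summary of this concentration (p. 46) is that roughly $\sqrt{N}$ of $N$ scientists account for half the total output; with $N = 100$, that is the ten most notable scientists.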

SR's reading of these statements is that (to quote from SR's article), “Price's analysis shows that the impetus to turn the flywheel of science comes from a very few individuals.” This does not “twist” Price's observations. If anything, the more serious implications of the growth rates mentioned were ignored; they would only have strengthened SR's case.

From this point we went on to equate the majority of researchers to the counter-balance of this flywheel. Finally, for want of any truly objective measure of mediocrity, we equated the counter-balance of the flywheel, which serves simply to maintain rotation, to mediocrity.

It is not clear what Dr. Garfield is objecting to in his anxiety about reference 4. No claim was made that this reference supports or agrees with our fear of the possibility of a “mediocrity explosion” brought about by automation. It was cited only as a specific point of departure, an example from typical efforts to measure scientific value by means of literature citation counts.

It is hoped that this puts to rest Dr. Garfield's concerns for the “scholarship” of SR. The remainder of his letter seems to us a matter of honest difference of opinion and an excellent subject for debate. We invite comments from our readers.—Ed.