BOOK AND SOFTWARE REVIEWS


Cronin, Blaise and Sugimoto, Cassidy R. (Eds.). Beyond bibliometrics: harnessing multidimensional indicators of scholarly impact. Cambridge, MA: MIT Press, 2014. viii, [2], 466 p. ISBN 978-0-262-52551-0


As the editors of this collection note in the Preface, the term bibliometrics has a rather antique air about it, given that its etymology refers to the book, while the techniques of bibliometrics are generally applied to journals or journal papers and, increasingly, to Websites, social media messages, and much more. However, I think they are right to stick with the term since, as they also note, the alternatives, such as webometrics, have similar restrictions. The collection of papers presented here has the aim of promoting:

critical reflexivity within the field and [fostering] a more enlightened appreciation of the pros and cons of metrics-based assessment among relevant communities of practice.

Many collections of this kind, often put together after advertising a 'call for chapters', appear to have little in the way of a guiding intelligence behind them, ending up as a mish-mash of vaguely related pronouncements on some subject, such as digital libraries, academic library management, knowledge organization or whatever. Happily, this is not one of those collections: I would guess that the editors really have been editing and exercising quite rigorous control over what was ultimately produced for publication. The authors are well-known practitioners in the field of bibliometrics, or in areas related to the history and philosophy of the idea, and the chapters are clearly written and well-presented. In addition, and what might be considered a small point, the editors acknowledge the work of two assistants who checked the bibliographies. In an age when researchers appear to be largely incompetent in such matters, never having been subject to the rigorous discipline of the cataloguing rules, this is a blessing for anyone reading the book who wants to find the sources listed.

The book is divided into five parts: history, critiques, methods and tools, alternative metrics, and perspectives. Of these, I was most interested in the history, the critiques and the alternative metrics. Blaise Cronin's opening chapter moves quickly from the foundation of the scientific journal to the modern diversity of the application of bibliometric techniques, concluding that:

Pandora's box has been opened and the challenge will be to harness the proliferation of alternative metrics intelligently to the assessment of scholarly impact while simultaneously guarding against abuses—be they inadvertent, opportunistic, or engineered—of the system, whether by scholars themselves or those who mandate and superintend the ever more complex processes of performance evaluation that have become an inescapable feature of the higher education landscape.

to which we can only add, 'Amen', although I suspect that the impulse of bureaucracy is towards what gives it the answers it wants, rather than soundly-based data open to multiple interpretations.

Cronin's chapter serves as an introduction to the book as a whole, and a more thorough analysis of the history of bibliometrics is presented in Chapter 2, by Nicola De Bellis, who, quite properly, treats it as an outcome of the emergence of statistics as a scientific discipline. De Bellis, however, is well aware of the limits of statistical analysis and of its application in attempts to measure the unmeasurable. After reviewing the early examples of bibliometric studies of scholarly production and Eugene Garfield's development of the citation index, and its extension to cyberspace, he concludes that:

Fully embracing the ambiguities inherent in the history of science-metrics, then, might just be a humble first step toward next-generation research evaluation reflecting the consensus of a wide range of scholars and interested parties, not just bibliometricians.

That conclusion leads nicely into what I found to be one of the most interesting chapters in the Critiques section, Criteria for evaluating indicators by Yves Gingras. Gingras identifies three 'necessary criteria' for evaluating indicators: adequacy for the object measured; sensitivity to the 'intrinsic inertia' of the object measured; and 'homogeneity of the dimensions' of the indicator. He goes on to show that both the Shanghai ranking of universities and the h-index fail to satisfy these criteria. As he says:

it is surprising that so many university presidents and managers lose all critical sense and take such rankings at face value. Only a psycho-sociological analysis of high-ranking administrators could potentially explain the appeal of a grading system that has no scientific basis. (p. 119).

Quite! Gingras goes on to suggest that university administrators should learn how to evaluate bibliometric indicators before trying to use them in making policy decisions (and the same may be said for national research evaluation bodies) and, key point, that the policy decisions made on these indicators affect people and that, ethically, it is necessary to ensure their validity in terms of the three criteria.

The section on Methods and tools consists of seven chapters, each of which has something important to say; covering all of them in a review, however, is not possible, so I shall comment on just one: A network approach to scholarly evaluation, by West and Vilhena, which proposes a network measure, the eigenvalue, as an alternative to the journal impact factor. The argument is that the eigenvalue takes into account the number of outgoing citations from a journal, on the basis that the more citations a journal gives out, the less each one is 'worth'. Whatever the value of this approach, it seems to me that it still does not take into account the size and specialisation of the audience for a journal; indeed, no measure that I know of takes this into account. This is, to my mind, a severe limitation of the journal impact factor: for example, in the 'Information science and library science' category of the Journal Citation Reports, the Journal of Informetrics has a five-year impact factor of 3.609, while JASIST has an impact factor of 2.381; but the Journal of Informetrics has a relatively small audience of researchers in that specialised field, while JASIST is what might be called a 'general purpose' journal covering the entire field of information science. To think that the impact factor should indicate the 'quality' of either of these journals relative to the other is, clearly, ridiculous. Institutions, however, are using the impact factor to make these comparisons; academic staff are being told to publish in certain journals and not in others, because some have a higher impact factor than others. Whether the eigenvalue measure would make much difference, I'm not in a position to say.
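The intuition behind such network measures can be illustrated with a small sketch: each journal's outgoing citations are normalised so that a citation from a journal that cites heavily counts for less, and the leading eigenvector of the resulting matrix then gives each journal's influence score. The journal names and citation counts below are invented for illustration, and this is only a toy version of the general idea, not the specific method of West and Vilhena's chapter.

```python
journals = ["A", "B", "C"]
# citations[i][j] = citations from journal j to journal i (no self-citations);
# invented numbers, for illustration only
citations = [
    [0, 5, 1],
    [3, 0, 1],
    [2, 5, 0],
]

n = len(journals)

# Column-normalise: divide each journal's outgoing citations by its total,
# so every journal distributes one unit of influence among those it cites.
col_totals = [sum(citations[i][j] for i in range(n)) for j in range(n)]
M = [[citations[i][j] / col_totals[j] for j in range(n)] for i in range(n)]

# Power iteration: repeatedly apply M to an even starting vector; the
# result converges to the leading eigenvector, i.e., the influence scores.
scores = [1.0 / n] * n
for _ in range(100):
    scores = [sum(M[i][j] * scores[j] for j in range(n)) for i in range(n)]
    total = sum(scores)
    scores = [s / total for s in scores]

for name, score in zip(journals, scores):
    print(f"{name}: {score:.3f}")
```

Note that journal B ends up with the highest score even though A receives more raw citations from B than B does from A: B's citations come from journals that cite less widely, so each counts for more. That is precisely the weighting that distinguishes this family of measures from a simple citation count.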

I've been interested in the notion of alternative metrics (or altmetrics) for some time, feeling that the standard citation approach did little to unveil the 'impact' of research more generally. In May 2002, I posted a message on the JESSE discussion list on the potential use of search engine outputs to assess the impact of research. This generated a fair amount of discussion, which was cited by what must be one of the earliest pieces of research on the subject, by Vaughan and Shaw (2003). Since then the field has burgeoned, and this book contains six chapters under the heading of Alternative metrics. Alternative metrics, based generally on search engine output, social media interaction, collaborative bookmarking systems, and so on, are now used for a wide range of impact assessment purposes. Jason Priem discusses the field in general; Kousha and Thelwall explore a variety of Web impact measures for research evaluation; Bar-Ilan and colleagues explore blogs and reference managers (such as Mendeley) as sources of metrics; Haustein looks at Readership metrics, which include downloads, social bookmarking and social tagging. I'm not sure that 'readership' is the right word to use here: these measures are, perhaps, more in the nature of 'interest' indicators, since much is downloaded and never read, just as much used to be photocopied and never read. The same applies to bookmarking—much may be bookmarked and never read. An indication of the level of interest in a paper or a topic is still useful, however. Hook considers a very different, and highly specialised, application of altmetrics in Evaluating the work of judges, noting, in passing, that citation indexing originated in law; finally, Sugimoto discusses Academic genealogy, defined as 'the quantitative study of intellectual heritage operationalised through chains of students and their advisors'.
Sugimoto suggests that the concept may be of 'limited scholarly value' but that some varieties of academic genealogy may help in exploring disciplinary histories and interdisciplinarity. I wonder, however, how far down the family tree one must go before the influence of the 'father or mother' scientist becomes so attenuated as to be almost non-existent.

The final two chapters, under the section heading Perspectives, present the publisher's perspective on bibliometrics (Kamalski et al.) and its use in science policy (Lane et al.).

In all, the editors have produced an excellent review of the current state of bibliometric research, with pointers to its future, and I imagine that every bibliometrician will want it on his or her desk. Almost inevitably, however, the focus is on research evaluation, and there are questions to be asked about the relevance of bibliometrics and alternative metrics for the evaluation of the impact of research more widely. Taking up the point that Gingras makes on how evaluation affects people, can we really say that a researcher who makes little impact in his or her research field but has a major influence on the development of policy in some area of professional practice is worth less? Until research funders and university administrators consider impact in this more general sense, policy is likely to be flawed.

References

Vaughan, L. & Shaw, D. (2003). Bibliographic and Web citations: what is the difference? Journal of the American Society for Information Science and Technology, 54(14), 1313-1322.

Wilson, T.D. (2002, May 6). Web citation. [Online forum comment]. Retrieved from http://listserv.utk.edu/cgi-bin/wa?A2=ind0205&L=jesse&T=0&F=&S=&P=720

Professor Tom Wilson
Editor-in-Chief
August, 2014