Tsakonas, Giannis and Papatheodorou, Christos. (Eds.). Evaluation of digital libraries: an insight into useful applications and methods. Oxford: Chandos Publishing, 2010. xxiv, 275 p. ISBN 978-1-84334-484-1. £49.50/$85.00

The editors of this volume have brought together an interesting set of contributions on the subject of evaluating digital libraries, with the emphasis chiefly upon user evaluation, although Saracevic, in his introduction to the collection, points out that 'evaluation' means more than this, drawing on the long-established division into efficiency, effectiveness and cost-effectiveness. Curiously, he omits cost-efficiency; although, on second thoughts, this is not so difficult to understand. Cost-efficiency is a competitive concept: given the choice of having your house painted to the same quality standard and in the same time, you will generally choose the lower-priced house painter. In the digital library world, however, the holders of digital resources (e.g., the ACM Digital Library, or the publishers of scientific journals, such as Elsevier) generally have a monopoly position with regard to those resources: there is no competition for access to the same resource. In the case of smaller digital collections, such as the local history collections in public libraries, again, there is no competition, so notions of cost-efficiency are held to be of less significance than ensuring that one does the appropriate job at the lowest cost - cost-effectiveness.

Following the introduction, the remaining twelve chapters are divided into four sections. Part 1 - To whom it may concern, has two chapters: first, Franklin, Kyrillidou and Plum write on the collection of relevant data from local and external sources; then Khoo, McArthur and Zia explore the evaluation of the National Science Digital Library from the perspective of its funding agency, the National Science Foundation, and draw some conclusions for the evaluation of digital libraries in general.

Part 2 - What to place under the evaluation lens, offers four chapters on usability (Jeng) - a brief and rather basic account; users and evaluation (Garoufallou, Siatri and Hartley), which reviews approaches to the study of information behaviour and some of the literature on user evaluation of specific digital libraries; an infrastructure for performance evaluation (Agosti and Ferro), which is chiefly about the DIRECT system for evaluating the information access components of digital libraries; and the use of log analysis for evaluating user behaviour (Nicholas). The last two chapters are probably the most directly useful for anyone considering evaluation.

Part 3 - Behind the evaluation curtain, has three chapters dealing respectively with the interaction of design and evaluation (Blandford and Bainbridge); outcome assessment (Tsakonas and Papatheodorou); and what digital library service quality may look like (Kyrillidou, Cook and Lincoln). The last of these explores a number of approaches to quality assessment, including the adaptation of LibQUAL+ to DigiQUAL, and discusses the conduct of surveys to obtain user perceptions.

Finally, Part 4 - How to conduct an evaluation activity (why not simply, How to conduct an evaluation?), takes up the methods element of the last paper of the previous part in three chapters: using a logic model (Khoo and Giersch), which seems to be a new name for the old idea of evaluating inputs, activities and outcomes; using qualitative methods (Monopoli), which is simply a more or less standard account of the qualitative methods that can be used to obtain users' perceptions; and using quantitative methods - this one seems to me rather misleading, since it employs chi-squared measures in the analysis of the data. That test assumes random sampling from the population and sufficiently large expected frequencies, and neither assumption seems to hold in the study explored here. It is a little dangerous, I think, to advocate statistical methods without being fully aware of the circumstances under which they may be appropriately used.

As with all compilations of this kind, it is a mixed bag: some of the papers are very basic, some are more advanced. Overall, however, the librarian interested in exploring how to carry out the evaluation of the library's digital offerings will find here a guide to asking the right kinds of questions and to advancing further.

Professor Tom Wilson
August, 2010