Monitoring and Evaluating Information-Related Research

Professor Tom Wilson, Department of Information Studies, University of Sheffield

This is a late draft of a paper delivered at a European Conference on funding research in the library and information studies field organized by the British Library R & D Department in December 1994.


All agencies that provide research support are concerned about monitoring and evaluating the research programmes they fund. Most use very simple means of monitoring: if the project finishes when intended, and if no additional funds are sought for its completion, it is assumed that the project management has been effective; if research papers or conference papers result from the research, the project is assumed to have made an impact, either on the field of practice to which the research relates, or on the research community generally.

Given that all organizational resources are scarce, this kind of strategy is probably all that can be achieved and, indeed, some organizations do not even go so far as to monitor outputs in the form of publication, or in any other form.

This paper sets out the considerations that an agency may need to take into account when considering a more structured programme of monitoring and evaluation.

Information-related research

Information-related research takes a variety of forms, in a variety of sub-disciplines, meaning that methods of monitoring and evaluation may need to be adapted to the aims and methods of the research project or programme. Thus, some research consists of developing retrieval algorithms, where the general project design and methodological effectiveness may be the principal characteristics to be monitored and which, indeed, form key criteria for the award of a grant in the first instance. On the other hand, more applied research, adopting social science research methods, which may involve survey research, interviewing, focus groups and the like, depends again on methodological effectiveness but also on key issues such as gaining entry to the organizations or the population of the survey. Finally, participative modes of research, such as action research, are undertaken on rather different premises, and the extent to which participation is gained, or the effectiveness with which change is implemented, are the key criteria.

Research activities to be monitored

Project management

The responsibility for managing a project rests firmly with the Project Head, Director, or whatever the post may be called. However, the funding agency must be satisfied that the project is being managed in an effective manner and that the money is being well spent. To this end, some projects have Advisory or Steering Committees to satisfy the funders on these issues. Such committees are, of course, expensive and use up resources that might otherwise be spent on the research process, and it is unlikely that any agency would want to control very small projects of less than £100,000 in this way.

There are four aspects of project management on which funders need assurance: the effectiveness of the methodology proposed and used in the project; financial management; staffing; and the deliverables.

Methodological effectiveness

Given the points made above about the nature of information-related research, it is evident that no single methodological position (in the sense of the underlying philosophy of research) and, hence, no single "toolkit" of methods will be appropriate in all circumstances. Methodology and methods must be keyed to the research objectives, and this aspect of project management usually finds its initial expression in the research proposal. If it doesn't, the funding agency is likely, at best, to ask why it isn't there or, at worst, to reject the proposal. However, a research project is a social phenomenon - circumstances can alter the nature of the research in all kinds of ways and methods may need to change as a result - and the project manager, therefore, must be alert to the need to adapt in the light of problems experienced with the intended set of methods.

Financial management

Although the Project Head will need to monitor spending fairly closely, responsibility for financial management in the general sense will usually lie with the Finance Department of the academic institution or other organization in which the work is being undertaken. The funding agency may call for quarterly or half-yearly accounts, and may have rules for the release of resources at specific times (such as being dependent upon the delivery of an interim report), and, clearly, if financial difficulties are experienced, the funding agency needs to know as soon as possible.


Staffing

Under this heading, all the funding agency needs to know is that the research organization's normal appointment procedures have been followed. Most academic institutions have fairly strict rules about advertising posts and about the constitution of interviewing panels, and these generally satisfy external funding agencies. However, when I was first appointed to a research post at Sheffield, one member of the interviewing panel was a representative of the funding agency.


Deliverables

It is through the submission of reports that a funding agency can monitor research activity most effectively. For small projects lasting only a few months, a final report is probably all that is needed. However, if a project lasts for, say, three years, then a funder is likely to ask for at least annual reports - possibly two interim reports and a final report - or, if the work is phased, a report at the end of each phase. Some deliverables, however, may be products in the shape of software or, for example, prototype machines.

Research evaluation

Before we can evaluate research activity we need a model of the process of diffusion. Too often, diffusion is confused with dissemination - but diffusion is a much more complex process, and the successful diffusion of research findings, particularly if we intend to have an impact on a field of practice, requires much more effort than is implied by the term dissemination, connected as it is to publication.

For example, a study by Martyn & Cronin (1983), which looked at the impact of library and information research, paid more attention to dissemination than to utilization and appeared to follow a very simplified model of the process - one that can be represented as:

Figure 1: the simplistic model of the impact of research

Earlier, Maguire (1980) had noted that, to gain acceptance, research needed a "product champion" in the research community, and a "gatekeeper" in the practice community, to ensure effective flow between the two domains. She also recognized the importance of education in the process of diffusion noting: "It is important that those who teach are either involved in, or else very much aware of ongoing research, and that they put this into their teaching; the recipients of this are, after all, the practitioners of tomorrow." As noted later, this perspective was embodied in our own research on impact.

A proper theoretical framework for evaluation, however, requires that attention be given to the sociological literature, where the study of research diffusion and utilization has attracted a number of investigators. Among these, Weiss (1979) is notable for her work on research utilization. Weiss points out that the process is much more complex than the "simplistic science" model suggests. Many factors enter the research utilization process: research knowledge is only one form of knowledge available to the practitioner; it may enter the decision-making process by many routes and may be filtered in various ways along those routes; frequently it may bear only tangentially on the needs at hand; and often political, economic and even ideological issues may hinder or help its application. This more complex model is at least partially expressed in the following diagram:

Figure 2: a complex model of research impact

This model presents a much richer picture, expressed in a somewhat different way from that of Weiss but embodying, implicitly, the same issues. It suggests, for example, that someone has to accept the proposition that the results of a particular research programme or project are relevant to practice and that, to a degree, his or her opinions and attitudes have to undergo change before the person believes that it is worthwhile to begin negotiation to bring about a change of view within the organization. This latter process may involve all kinds of political activity and actual implementation will depend on resources, the depth of opinion change within the organization and so on. Even then, not all change which is implemented is actually adopted - adoption requires the absorption of the new processes, activities, practices, or whatever, into the everyday life of the organization.

Productivity: outputs

When we turn to research productivity, however, we are on firmer ground, since we can assess quite readily certain key aspects of dissemination efforts that will affect the extent to which the new ideas, etc., can be diffused throughout the intended community of interest.


Targeting

The key variable in productivity is not the volume or variety of output, but the extent to which the appropriate audiences have been targeted by the researcher; thus, in research that relates to a particular occupational group, such as teachers, business-people, planners, social workers, or whatever, efforts must be made to direct diffusion efforts at those groups, rather than expecting them to respond to efforts directed at the information professional. The "monitor", therefore, needs to ask: "Are the dissemination/diffusion activities properly targeted?"


Variety of output

Given effective targeting, variety in output is virtually inevitable. For example, if we want to reach a research community, academic journals and scientific conferences are an appropriate means for delivering research results. These forms, however, are completely ineffective if the research aims to affect a practice community (e.g., business-people, social workers, teachers, etc.). These potential users can only be reached by producing publications for practice journals, by running training courses, or by speaking at conferences, workshops, etc., organized for that community. The monitor of research outputs, therefore, ought to be asking: "What efforts are made to reach the intended audiences through all possible means?"


Volume of output

The volume of outputs from a research programme ought to be readily determinable, since most researchers, whether they are in academia or in consultancy organizations, are anxious to keep an account of their "products" so that they can be cited in support of further research grant proposals. It is simply a matter of counting.
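Counting outputs of this kind amounts to tallying each recorded product by type. A minimal sketch of such a tally (the records and categories here are entirely hypothetical illustrations, not data from any real project):

```python
from collections import Counter

# Hypothetical record of a project's outputs: (title, type)
outputs = [
    ("Interim report, year 1", "report"),
    ("Final report", "report"),
    ("Journal article on findings", "journal article"),
    ("Conference paper", "conference paper"),
    ("Practitioner workshop handout", "training material"),
]

# Tally the volume of output by type
volume = Counter(kind for _, kind in outputs)

for kind, count in sorted(volume.items()):
    print(f"{kind}: {count}")
```

A tally broken down by type, rather than a single total, also serves the targeting question above: it shows at a glance whether the outputs are skewed towards the scholarly literature or reach the practice community as well.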


Impact

Unfortunately, when we turn our attention to a consideration of the impact of information research, we find ourselves in difficulty again. The commonest method of assessing impact is to use citation data; however, the use of such data assumes the simplistic model of research impact set out in Figure 1 above, whereas, for information research, the more complex model set out in Figure 2 is more appropriate. This more complex model necessitates the use of other methods of data collection and analysis which are, necessarily, much more costly and time-consuming than the analysis of readily available citation data.

Consequently, citation studies can be used only for assessing the impact of basic, scholarly research and not for practitioner-related research because, in general, practitioners do not read the research literature and do not write papers in which they cite academic research.

Surveys are more useful for assessing productivity than impact, but they can be used. There are difficulties, however, in the choice of respondents: whom does one survey? That is, can the intended audience of the research be identified? Even if it can, a low response rate could be expected - and, then, what would the numbers mean?

Qualitative research, by which I mean here mainly lengthy, largely unstructured interviewing, has more potential in determining research impact but faces the same problem as surveys - whom to interview? In the case study work described below, we contacted researchers on the projects who had moved on, students involved in some way who had moved into jobs, people who bought research reports, and potential readers of published papers.

In short, the complex model of research impact requires a multi-method approach if we are to gain a proper understanding of the ways in which research findings can have an impact on policy and practice.

Case study of the impact of information research

Finally, I would like to describe a research project (funded, appropriately, by the BLR&D Department), which used a variety of techniques to explore the impact of three research programmes undertaken over a number of years at Sheffield.

The programmes related to research on generic chemical structure retrieval (Prof. Lynch); computerised IR techniques (Dr., now Prof. Willett) and information-seeking behaviour (myself). We used a multi-modal approach, employing interviews with former researchers and former Ph.D. students and others involved in the research; citation studies; analysis of report purchase records at the BL; and questionnaire surveys. This approach was based on the assumption that a rich picture of the nature of research impact could be obtained only in this way.

In a brief report of this kind, the flavour of the research and of the nature of the impact of research can be indicated by quoting one or two of the people interviewed (Craghill & Wilson, 1987). First, one of the researchers on the INISS project, who had moved into a new role in a different organization, commented:

This was an example of a researcher moving into practice and taking what was learnt on the project with him. Subsequently, the services he offered were also widely publicized and, hence, there was a second order effect of the research.

In another case, the student who had carried out the pilot for the INISS project as her dissertation topic moved into a post of research officer in a social services department. She commented:

Here, the research methods themselves were the subject of the learning transfer. Again, that researcher went on to use the same kinds of methods in a number of other investigations, some in social services departments, others based in academic research units and, again, the "multiplier effect" can be observed.

In another phase of the case study, a questionnaire was issued to a sample of fifty-nine social work or social administration departments in universities. Thirty-five responses were received. The questionnaire included a list of six INISS publications; these six publications received a combined total of forty-eight indications that lecturers were aware of them - forty-four of these were to publications in the social work literature, while only four referred to the information science literature. This demonstrated clearly the importance of ensuring that the appropriate channels of communication are used when the work is of importance to a field of practice other than librarianship or information work.
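The arithmetic behind these figures can be set out directly; a short sketch using only the counts reported above:

```python
# Figures reported for the INISS questionnaire phase
departments_surveyed = 59
responses = 35
awareness_total = 48    # indications of awareness across six publications
via_social_work = 44    # awareness via the social work literature
via_info_science = 4    # awareness via the information science literature

# The two channels account for all reported indications
assert via_social_work + via_info_science == awareness_total

response_rate = responses / departments_surveyed
practice_share = via_social_work / awareness_total

print(f"Response rate: {response_rate:.0%}")                  # ~59%
print(f"Awareness via practice literature: {practice_share:.0%}")  # ~92%
```

The striking figure is the last one: over nine-tenths of the recorded awareness came through the practice community's own literature rather than through information science channels.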

As a result of the case study, we can say that the knowledge diffusion process may (and should) involve multiple channels of diffusion, as shown in outline in Figure 3. Publication, in particular, must not be restricted to the scholarly literature of the field, but must involve conference presentations, newsletters, and the professional press of the field of intended impact. Training may be used in two ways: by involving students in the research process (and the undertaking of information research in academic institutions makes this more likely) and by delivering training programmes based on the research. For example, the principal investigator on the INISS project (David Streatfield) and I ran training courses at the National Institute for Social Work for about six years, during and after the project. Finally, direct participation - by the researchers who move to other jobs, the students who do likewise, and the organizations in which the research is carried out - can also increase the probability of effective diffusion of the methods and results of research.


Conclusion

Monitoring research activity is a relatively straightforward task but, like all efforts to gain such information, has a cost associated with it. Any research funding organization can ensure that it knows about the level of activity its funds support, and a good deal of the monitoring naturally devolves to the organization granted the funds.

When we turn to evaluation, however, the situation is more complex. The case studies we carried out at Sheffield demonstrated quite conclusively the complexity of research diffusion and utilization and gave lessons for evaluation. This complexity is such that a simple-minded approach to evaluation (which is, of course, the cheapest approach) is almost completely pointless and, if an organization is serious about measuring the impact of the research it supports, it is necessary to undertake fairly expensive qualitative research of the kind described above.


This page, and all contents, are Copyright (C) 1995 by Professor T.D. Wilson, Department of Information Studies, University of Sheffield
