vol. 18 no. 2, June, 2013

Evaluating the performance of information retrieval systems using test collections

Paul Clough
Information School, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield, S10 2TN
Mark Sanderson
School of Computer Science and Information Technology, RMIT University, GPO Box 2476, Melbourne 3001, Victoria, Australia

Introduction. Evaluation is highly important for designing, developing and maintaining effective information retrieval or search systems as it allows the measurement of how successfully an information retrieval system meets its goal of helping users fulfil their information needs. But what does it mean to be successful? It might refer to whether an information retrieval system retrieves relevant (compared with non-relevant) documents; how quickly results are returned; how well the system supports users' interactions; whether users are satisfied with the results; how easily users can use the system; whether the system helps users carry out their tasks and fulfil their information needs; whether the system impacts on the wider environment; or how reliable the system is. Evaluation of information retrieval systems has been actively researched for over 50 years and continues to be an area of discussion and controversy.
Test collections. In this paper we discuss system-oriented evaluation that focuses on measuring system effectiveness: how well an information retrieval system can separate relevant from non-relevant documents for a given user query. We discuss the construction and use of standardised benchmarks - test collections - for evaluating information retrieval systems.
Research directions. The paper also describes current and future research directions for test collection-based evaluation, including efficient gathering of relevance assessments, the relationship between system effectiveness and user utility, and evaluation across user sessions.
Conclusions. This paper describes test collections which have been widely used in information retrieval evaluation and provide an approach for measuring system effectiveness.

Introduction to information retrieval evaluation

To evaluate means to "ascertain the value or amount of something or to appraise it" (Kiewitt 1979: 3) and involves identifying suitable success criteria that can be measured in some way. Evaluation is important for designing, developing and maintaining effective information retrieval (or search) systems as it enables the success of an information retrieval system to be quantified and measured. This can involve evaluating characteristics of the information retrieval system itself, such as its retrieval effectiveness, or assessing consumers' acceptance of, or satisfaction with, the system (Taube 1953). How to conduct information retrieval system evaluation has been an active area of research for the past 50 years and the subject of much discussion and debate (Saracevic 1995, Robertson 2008, Harman 2011). This is due, in part, to the need to incorporate users and user interaction into evaluation studies and to the relationship between the results of laboratory-based and operational tests (Robertson and Hancock-Beaulieu 1992).

Traditionally in information retrieval there has been a strong focus on measuring system effectiveness: the ability of an information retrieval system to discriminate between documents that are relevant or not relevant for a given user query. This focus on the system has, in part, been influenced by the focus of the information retrieval community on the development of retrieval algorithms, together with the organization of large information retrieval evaluation events, such as the Text REtrieval Conference (or TREC) in the USA. Such events have also focused on measuring system effectiveness in a controlled experimental setting (Robertson 2008, Voorhees and Harman 2005). Efficiency of an information retrieval system has also been assessed, e.g. measuring how long the system takes to return results or memory/disk space required to store the index. Measuring the effectiveness and efficiency of an information retrieval system has commonly been conducted in a laboratory setting, with little involvement of end users and focused on assessing the performance of the underlying search algorithms; therefore, this is commonly referred to as system-oriented evaluation.

Robertson and Hancock-Beaulieu (1992) suggest that the scope of 'system' in information retrieval has slowly broadened to include more elements of the retrieval context, such as the user or the user's environment, which must be included in the evaluation of information retrieval systems. Similar remarks have been made by Saracevic (1995) and Ingwersen and Järvelin (2005). Therefore, instead of focusing on just the system (i.e., its inputs and outputs), a more user-oriented approach can be taken. This may take into account the user, the user's context and situation, and their interactions with an information retrieval system, perhaps in a real-life operational environment (Borlund 2009). This includes, for example, assessing the usability of the search interface or measuring aspects of the user's information searching behaviour (e.g., a user's satisfaction with the search results or the number of items viewed/saved). Taking into account the wider context of search is part of interactive information retrieval evaluation (Kelly 2009).

In practice it is common to utilize various evaluation approaches throughout the development of an information retrieval system: from using test collections to develop, contrast and optimize search algorithms; to conducting laboratory-based user experiments to improve the design of the user interface; to evaluation carried out in situ as the information retrieval system is used in a practical setting. Järvelin (2011) contrasts system- and user-oriented approaches to information retrieval evaluation in more controlled and artificial settings with the evaluation of information retrieval systems in operational real-life settings, where it is possible to employ both user- and system-oriented approaches. The rest of this paper focuses on system-oriented approaches to information retrieval evaluation using test collections, also referred to as the Cranfield approach.

Evaluation of information retrieval systems using test collections

For decades the primary approach to evaluation has been system-oriented, focusing on assessing how well a system can find documents of interest given a specification of the user's information need. One of the most widely used methodologies for conducting repeatable experiments in a controlled, laboratory-based setting is test collection-based evaluation (Robertson 2008, Sanderson 2010, Järvelin 2011, Harman 2011). This approach to evaluation has its origins in experiments conducted at Cranfield library in the UK, which ran between 1958 and 1966, and is often referred to as the Cranfield approach or methodology (Cleverdon 1991). The aim of the Cranfield experiments was to create "a laboratory type situation where, freed as far as possible from the combination of operational variables, the performance of index languages could be considered in isolation" (Cleverdon 1967). This approach established the need for a common document collection, query tasks and ground truths to evaluate different indexing strategies under controlled conditions, abstracted away from the operational environment.

The Cranfield approach to information retrieval evaluation uses test collections: re-usable and standardised resources that can be used to evaluate information retrieval systems with respect to system effectiveness. The main components of an information retrieval test collection are the document collection, topics and relevance assessments. These, together with evaluation measures, simulate the users of a search system in an operational setting and enable the effectiveness of an information retrieval system to be quantified. Evaluating information retrieval systems in this manner enables the comparison of different search algorithms and allows the effects of altering algorithm parameters to be systematically observed and quantified. Although proposed in the 1960s, this approach was popularised through the NIST-funded TREC series of large-scale evaluation campaigns that began in 1992, and subsequently other large-scale evaluation campaigns (e.g., CLEF, NTCIR and FIRE), which have stimulated significant developments in information retrieval over the past 20 years (Voorhees and Harman 2005).

The most common way of using the Cranfield approach is to compare various retrieval strategies or systems, which is referred to as comparative evaluation. In this case the focus is on the relative performance between systems, rather than absolute scores of system effectiveness. To evaluate using the Cranfield approach typically requires these stages: (1) select different retrieval strategies or systems to compare; (2) use these to produce ranked lists of documents (often called runs) for each query (often called topics); (3) compute the effectiveness of each strategy for every query in the test collection as a function of relevant documents retrieved; (4) average the scores over all queries to compute overall effectiveness of the strategy or system; (5) use the scores to rank the strategies/systems relative to each other. In addition, statistical tests may be used to determine whether the differences between effectiveness scores for strategies/systems and their rankings are significant. This is necessary if one wants to determine the 'best' approach. In the TREC-style version of the Cranfield approach, there is a further stage required prior to (2) above, whereby the runs for each query are used to create a pool of documents that are judged for relevance, often by domain experts. This produces a list of relevant documents (often called qrels) for each query that is required in computing system effectiveness with relevance-based measures (e.g., precision and recall).
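As an illustrative sketch (the runs and relevance judgments below are invented, and precision at a fixed cut-off stands in for whichever effectiveness measure is chosen), stages (2)-(5) might look as follows in Python:

```python
def precision_at_k(ranked_docs, relevant, k):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    return sum(1 for doc in ranked_docs[:k] if doc in relevant) / k

# qrels: for each topic, the set of documents judged relevant
qrels = {"q1": {"d1", "d3"}, "q2": {"d2"}}

# runs: each system's ranked result list per topic (stage (2))
runs = {
    "systemA": {"q1": ["d1", "d3", "d9"], "q2": ["d2", "d7", "d8"]},
    "systemB": {"q1": ["d9", "d1", "d4"], "q2": ["d5", "d6", "d2"]},
}

# Stages (3)-(4): score every topic, then average over all topics
mean_scores = {
    system: sum(precision_at_k(run[q], qrels[q], k=3) for q in qrels) / len(qrels)
    for system, run in runs.items()
}

# Stage (5): rank the systems relative to each other by mean score
ranking = sorted(mean_scores, key=mean_scores.get, reverse=True)
```

Here systemA (mean score 0.5) ranks above systemB (0.33); in a real comparison a statistical test would then be applied to the per-topic scores before declaring one system 'best'.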

Test collection-based evaluation is highly popular as a method for developing retrieval strategies. Benchmarks can be used by multiple researchers to evaluate in a standardised manner and with the same experimental set-up, thereby enabling the comparison of results. In addition, user-oriented evaluation, although highly beneficial, is costly and complex and often difficult to replicate. It is this stability and standardization that makes the test collection so attractive. However, there are a number of limitations to test collection-based evaluation due to its abstraction from reality (Ingwersen and Järvelin 2005, pp. 6-9). Test collection experiments make a number of assumptions: that the relevance of documents is independent of each other; that all documents are equally important; that the user's information need remains static; that a single set of judgments for a query is representative of the user population; and that the lists of relevant documents for each query are exhaustive (Voorhees 2002).

Building test collections

A test collection usually consists of a document collection, a set of topics that describe users' information needs and a set of relevance judgments indicating which documents in the collection are relevant to each topic. When constructing a test collection there are typically a number of practical issues that must be addressed (Sanderson and Braschler 2009). By modifying the components of a test collection and the evaluation measures used, different retrieval problems and domains can be simulated. The original and most common problem modelled is ad hoc retrieval: the situation in which an information retrieval system is presented with a previously unseen query. However, test collection-based evaluations have also been carried out on tasks including question answering, information filtering, text summarization, topic detection and tracking, and image and video retrieval.

Document collection

Information retrieval systems index documents that are retrieved in response to users' queries. A test collection must contain a static set of documents that should reflect the kinds of documents likely to be found in the operational setting or domain. This might involve digital library collections or sets of Web pages; texts or multimedia items (e.g., images and videos). The notion of a static document collection is important as it ensures that results can be reproduced upon re-use of the test collection. Specific questions that might be considered when gathering documents include:


Topics

Information retrieval systems are evaluated for how well they answer users' search requests. In the case of ad hoc retrieval, the test collection must contain a set of statements that describe typical users' information needs. These might be expressed as queries that are submitted to an information retrieval system, questions or longer written descriptions. For example, TREC uses the notion of a topic, which typically consists of three fields: title, description and narrative. The title field represents a typical set of keywords a user might issue for a given topic. The description field provides a longer statement, normally a sentence, of the information need. The narrative field describes in more detail the user's information need and what they are attempting to find. This often includes a description of what constitutes relevant (and non-relevant) documents and may be used by people judging relevance (if not the person who generated the topic). Practical issues that often arise during creation of topics include:

Relevance assessments

For each topic in the test collection, a set of relevance judgments must be created indicating which documents in the collection are relevant to that topic. The notion of relevance used in the Cranfield approach is commonly interpreted as topical relevance: whether a document contains information on the same topic as the query. In addition, relevance is assumed to be consistent across assessors and static across judgments. However, this is a narrow view of relevance, which has been shown to be subjective, situational and multi-dimensional (Schamber 1994). Some have speculated that this variability would affect the accuracy of measurements of retrieval effectiveness. A series of experiments was conducted to test this hypothesis (Cleverdon 1970, Voorhees 1998), with results showing that despite there being marked differences in the documents that different assessors judged as relevant or non-relevant, the differences did not substantially affect the relative ordering of the information retrieval systems measured using the different assessments.

Relevance judgments can be binary (relevant or not relevant) or use graded relevance judgments, e.g. highly relevant, partially relevant or non-relevant. The use of binary versus graded relevance judgments is important as it has implications for which evaluation measures can be used to evaluate the information retrieval systems. Commonly the assessors will be the people who originally created the topics, but this is not always the case. For example, relevance assessments might be gathered in a distributed way using multiple assessors and crowdsourcing. It must, however, be recognised that domain expertise can affect the quality of relevance assessments obtained (Bailey et al. 2008, Kinney et al. 2008).

There are various ways of gathering the relevance assessments. For example, in TREC the common approach is to gather the top n results from the different information retrieval systems under test for each topic and aggregate the results into a single list for judging (called pooling). This assumes that the result lists of different information retrieval systems are diverse and therefore will bring relevant documents into the pool. The relevance assessors then go through the pool and make relevance judgments on each document, which can then be used to compute system effectiveness. Documents which are not judged are often categorised as not relevant. An issue with pooling is the completeness of relevance assessments. Ideally for each topic one should find all relevant documents in the document collection; however, pooling may only find a subset. Approaches to help overcome this include adding the result lists from manually conducted searches to the pool of documents for assessment, or supplementing the sets of relevance judgments with additional relevant documents discovered during further manual searching. Generating complete sets of relevance judgments helps to ensure that when evaluating future systems, improvements in results can be detected. Generating relevance assessments is often highly time-consuming and labour-intensive. This often leads to a bottleneck in the creation of test collections. Various techniques have been proposed to make the process of relevance assessment more efficient. Practical issues that often arise during relevance assessment include:
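The pooling procedure described above can be sketched in a few lines; the runs and pool depth below are invented for illustration (TREC has typically pooled to depth 100):

```python
def build_pool(runs, topic, depth):
    """TREC-style pooling: the union of the top-`depth` results from each
    system's run for a topic. Only pooled documents are judged; unjudged
    documents are typically treated as not relevant during scoring."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[topic][:depth])
    return pool

# Hypothetical runs from two systems for one topic
runs = {
    "systemA": {"q1": ["d1", "d2", "d3", "d4"]},
    "systemB": {"q1": ["d3", "d5", "d1", "d6"]},
}

pool = build_pool(runs, "q1", depth=2)
# Assessors would judge d1, d2, d3 and d5; the unpooled d4 and d6
# would be categorised as not relevant when computing effectiveness.
```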

Assessing system effectiveness: evaluation measures

Evaluation measures provide a way of quantifying retrieval effectiveness (Manning et al. 2008, Croft et al. 2009). Together, the test collection and evaluation measure provide a simulation of the user of an information retrieval system. For example, in the case of ad hoc retrieval the user is modelled as submitting a single query and being presented with a ranked list of results. One assumes that the user then starts at the top of the ranked list and works their way down examining each document in turn for relevance. This, of course, is an estimation of how users behave; in practice they are often far less predictable. There are also further complications that must be considered. For example, research has shown that users are more likely to select documents higher up in the ranking (rank bias); measures typically assume that no connection exists between retrieved documents (independence assumption); and particularly in the case of Web search, one must decide how to deal with duplicate documents: should they be counted as relevant or ignored as they have been previously seen?

Set-based measures

Two simple measures developed early on were precision and recall. These are set-based measures: documents in the ranking are treated as unique and the ordering of results is ignored. Precision measures the fraction of retrieved documents that are relevant; recall measures the fraction of relevant documents that are retrieved. Precision and recall hold an approximate inverse relationship: higher precision is often coupled with lower recall. However, this is not always the case as it has been shown that precision is affected by the retrieval of non-relevant documents; recall is not. Compared to other evaluation measures, precision is simple to compute because one only considers the set of retrieved documents (as long as relevance can be judged). However, to compute recall requires comparing the set of retrieved documents with the entire collection, which is impossible in many cases (e.g., for Web search). In this situation techniques, such as pooling, are used.

Often preference is given to either precision or recall. For example, in Web search the focus is typically on obtaining high precision by finding as many relevant documents as possible in the top n results. However, there are certain domains, such as patent search, where the focus is on finding all relevant documents through an exhaustive search. Alternative recall-oriented measures can then be employed (Magdy and Jones 2010). Scores for precision and recall are often combined into a single measure to allow the comparison of information retrieval systems. Example measures include the E and F measures (van Rijsbergen 1979).
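These set-based measures are straightforward to compute when retrieved and relevant documents are treated as sets; the sketch below also includes van Rijsbergen's F measure (the harmonic mean of precision and recall when beta = 1; the E measure is simply 1 - F):

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def f_measure(p, r, beta=1.0):
    """van Rijsbergen's F measure; beta weights recall relative to precision."""
    if p == 0 and r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Invented example: four documents retrieved, three relevant in total
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d3", "d5"}
p, r = precision(retrieved, relevant), recall(retrieved, relevant)  # 0.5 and 2/3
```

The inverse relationship mentioned above is visible here: retrieving more documents can only maintain or raise recall, while precision typically falls as non-relevant documents enter the retrieved set.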

Rank-based measures

More commonly used measures are based on evaluating ranked retrieval results, where importance is placed not only on retrieving the maximum number of relevant documents, but also on returning relevant documents higher in the ranked list. A common way to evaluate ranked output is to compute precision at various levels of recall (e.g., 0.0, 0.1, 0.2, ... 1.0), or to compute precision at the rank position of each relevant document and average the scores (referred to as average precision). This can be computed across multiple queries by taking the arithmetic mean of the average precision values for individual topics. This single-figure measure of precision across relevant documents and multiple queries is referred to as mean average precision (or MAP). A further common measure is precision at a fixed rank position, for example precision at rank 10 (P10 or P@10). Because the number of relevant documents can influence the P@10 score, an alternative measure called R-precision can be used: precision is measured at the rank position Rq, the total number of relevant documents for query q.
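Average precision and its mean over topics (MAP) can be sketched as follows, using invented document identifiers:

```python
def average_precision(ranked, relevant):
    """Precision at the rank of each retrieved relevant document, averaged
    over the total number of relevant documents for the topic."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs_by_query, qrels):
    """MAP: the arithmetic mean of average precision over all topics."""
    return sum(average_precision(r, qrels[q])
               for q, r in runs_by_query.items()) / len(runs_by_query)

# Relevant documents d1 and d3 retrieved at ranks 1 and 3:
ap = average_precision(["d1", "d2", "d3"], {"d1", "d3"})  # (1/1 + 2/3) / 2 = 5/6

runs_by_query = {"q1": ["d1", "d2", "d3"], "q2": ["d4", "d5"]}
qrels = {"q1": {"d1", "d3"}, "q2": {"d5"}}
map_score = mean_average_precision(runs_by_query, qrels)
```

Note that relevant documents never retrieved still count in the denominator, so incomplete rankings are penalised.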

More recently, measures based on non-binary (or graded) relevance judgments have been utilised, such as discounted cumulative gain (Järvelin and Kekäläinen 2002). In such measures, each document is given a score indicating relevance (e.g., relevant=2; partially-relevant=1; non-relevant=0). Discounted cumulative gain computes a value for the number of relevant documents retrieved that includes a discount function to progressively reduce the importance of relevant documents found further down the ranked results list. This simulates the assumption that users prefer relevant documents higher in the ranked list. The measure also makes the assumption that highly relevant documents are more useful than partially relevant documents, which in turn are more useful than non-relevant documents. The score can be normalised to provide a value in the range 0 to 1, known as normalised DCG (nDCG). The measure can be averaged across multiple topics similar to computing mean average precision, and it has also been extended to compute the value of retrieved results across multiple queries in a session, referred to as normalised session discounted cumulative gain or nsDCG (Järvelin et al. 2008).
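A sketch of DCG and nDCG follows. The log2(rank + 1) discount used here is the most common variant; Järvelin and Kekäläinen's original formulation parameterises the base of the discount:

```python
import math

def dcg(gains):
    """Discounted cumulative gain over a ranked list of graded relevance
    scores, with a log2 rank discount reducing the value of documents
    found further down the list."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

def ndcg(gains):
    """DCG normalised by the DCG of an ideal (best possible) ordering,
    yielding a value in the range 0 to 1."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Graded judgments per rank: relevant=2, partially relevant=1, non-relevant=0
swapped = ndcg([1, 2, 0])  # a partially relevant document ranked above a relevant one
perfect = ndcg([2, 1, 0])  # the ideal ordering scores 1.0
```

Swapping the top two documents lowers nDCG (to about 0.86 here), reflecting the assumption that highly relevant documents are most useful near the top of the ranking.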

Other measures

Additional measures have been developed to evaluate different information retrieval problems. For example, to measure the success of search tasks where just one relevant document is required (known-item search), measures such as mean reciprocal rank (MRR) can be used. Another problem requiring bespoke evaluation measures is the assessment of the variability or diversity of results. In these cases one might want to evaluate the different aspects (or sub-topics) of a query, and therefore measures such as S-recall (Zhai et al. 2003) were created. This measure promotes systems that return different relevant documents rather than documents all from the same (or similar) topic or aspect.
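Mean reciprocal rank is simple to compute; the topics and judgments below are invented for illustration:

```python
def reciprocal_rank(ranked, relevant):
    """1/rank of the first relevant document retrieved; 0 if none is found."""
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(runs_by_query, qrels):
    """MRR: the reciprocal rank averaged over all topics."""
    return sum(reciprocal_rank(r, qrels[q])
               for q, r in runs_by_query.items()) / len(runs_by_query)

qrels = {"q1": {"d1"}, "q2": {"d2"}}
runs_by_query = {"q1": ["d1", "d5"], "q2": ["d3", "d2"]}
mrr = mean_reciprocal_rank(runs_by_query, qrels)  # (1/1 + 1/2) / 2 = 0.75
```

Only the first relevant document matters, which matches the known-item scenario where the user stops searching as soon as the sought item is found.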

In practice it is important to select an evaluation measure that is suitable for the given task; for example, if the problem is known-item search then the mean reciprocal rank would be appropriate; for an ad hoc search task then mean average precision or averaged normalised discounted cumulative gain would be applicable.

Comparing results

It is common to find in practice that retrieved results are compared against each other (comparative evaluation), with the runs with the highest scores deemed the 'best'. In this case an absolute score of retrieval performance is of less importance than scores relative to each other. One might compare sets of results produced by running the same system multiple times with varying parameter settings, or take single runs from multiple systems and compare them to determine the best search system. Typically evaluation measures are computed across multiple queries (or topics) and averaged to produce a final score. When comparing systems, significance testing should be used to determine whether one system is actually better than another, rather than the difference being due to chance; the Wilcoxon signed-rank test is commonly used for this purpose, while Kendall's tau is commonly used to measure the agreement between rankings of systems (Sanderson 2010: 63-73).
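The Wilcoxon signed-rank test is normally taken from a statistics package; as a self-contained illustration of significance testing on paired per-topic scores, the simpler two-sided sign test can be computed directly (the scores below are invented):

```python
import math

def sign_test(scores_a, scores_b):
    """Two-sided paired sign test over per-topic effectiveness scores.
    Returns the exact p-value under the null hypothesis that neither
    system tends to outscore the other; tied topics are dropped."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n = wins + losses
    if n == 0:
        return 1.0
    k = min(wins, losses)
    # Probability of a result at least this one-sided under Binomial(n, 0.5)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical per-topic average precision scores for two systems, six topics
a = [0.52, 0.40, 0.61, 0.33, 0.48, 0.55]
b = [0.41, 0.38, 0.50, 0.30, 0.42, 0.47]
p_value = sign_test(a, b)
```

Here system A outscores system B on every topic, giving p = 0.03125, so the difference would be judged significant at the conventional 0.05 level; the sign test only uses win/loss directions, which is why tests using magnitudes, such as the Wilcoxon test, are usually preferred.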

Current and future research directions

Laboratory-based information retrieval evaluation using test collections is a popular, and useful, approach to evaluation. However, information retrieval researchers have recognised the need to update the original Cranfield approach to allow the evaluation of new information retrieval systems that deal with varying search problems and domains (Kamps et al. 2009). Research is ongoing to tackle a range of issues in information retrieval evaluation using test collections, and we discuss three examples in which we have been personally involved: gathering relevance assessments efficiently, comparing system effectiveness and user utility, and evaluating information retrieval systems over sessions. Further avenues of research include the use of simulations (Azzopardi et al. 2010) and the development of new information retrieval evaluation measures (Yilmaz et al. 2010, Smucker and Clarke 2012).

Gathering relevance assessments efficiently

One area receiving recent attention from the information retrieval research community is how to create test collections efficiently. Given the bottleneck in gathering relevance judgments, 'low-cost evaluation' techniques have been proposed. These include approaches based on focusing assessor effort on runs from particular systems or topics that are likely to contain more relevant documents (Zobel 1998), sampling documents from the pool (Aslam et al. 2006), supplementing pools with relevant documents found by manually searching the document collection with an information retrieval system, known as interactive search and judge or ISJ (Cormack et al. 1998), and simulating queries and relevance assessments based on users' queries and clicks in search logs (Zhang and Kamps 2010).

One particular approach that has received interest recently has been the use of crowdsourcing: the act of taking a job traditionally performed by a designated person and outsourcing it to an undefined, generally large, group of people in the form of an open call. Amazon Mechanical Turk (AMT) is one such example of a crowdsourcing platform. This system has around 200,000 workers from many countries who perform human intelligence tasks. Recent research has demonstrated that crowdsourcing is feasible for gathering relevance assessments (Alonso and Mizzaro 2009, Kazai 2011, Carvalho et al. 2011). However, previous studies have also demonstrated that domain expertise can have an impact on the quality and reliability of relevance judgments (Bailey et al. 2008, Kinney et al. 2008), which is particularly problematic when using crowdsourcing in specialised domains (Clough et al. 2012).

System effectiveness versus user utility

The effectiveness of information retrieval systems is typically measured based on the number of relevant items found. The test collection and measure aim to simulate users in an operational setting with the assumption that if System A were to score higher than System B on a test collection then users would prefer or be more satisfied with System A over System B. This is important because if this were not the case then test collection-based evaluation would be limited: improvements in system effectiveness in the laboratory would not necessarily translate to improvements in an operational setting and thereby benefit the user.

Several studies have been undertaken to investigate the link between system effectiveness and user preference or satisfaction. Results from these studies have differed widely: some have shown that an increase in system effectiveness did not produce detectable gains for the end user in practice (Hersh et al. 2000); others have demonstrated that when two systems with large relative differences in system effectiveness are compared, users' preferences for system output correlate strongly with the effectiveness scores (Al-Maskari et al. 2008, Sanderson et al. 2010). Ingwersen and Järvelin (2005) assert that the real issue in information retrieval system design is not whether precision and recall go up, but rather whether the system helps users perform search tasks more effectively. There are still many unanswered questions about the link between system- and user-oriented measures of search effectiveness and satisfaction.

Evaluation across user sessions

Test collection-based evaluation to date has focused on evaluating information retrieval systems in a single query-response manner. However, in practice users typically reformulate their queries in response to results from the information retrieval system or as their information need alters during a search episode (Bates 1989; Jansen et al. 2009). This requires re-thinking information retrieval evaluation to compute system success over multiple query-response interactions (Järvelin 2009, Keskustalo et al. 2009, Kanoulas et al. 2011). For example, traditional evaluation measures, such as precision and recall, are not valid for multiple queries; therefore proposals have been made to extend existing measures. For example, Järvelin et al. (2008) extend the normalised discounted cumulative gain (nDCG) measure to consider multi-query sessions, producing normalised session discounted cumulative gain (nsDCG); Kanoulas et al. (2011) generalize traditional evaluation measures, such as average precision, to multi-query session evaluation. A further aspect that must be considered is whether or not to include duplicate documents when evaluating multiple query-responses.

An example of a large-scale evaluation effort to evaluate information retrieval systems over multi-query sessions is the TREC Session Track, which has run since 2010. The TREC organisers have provided the resources and evaluation measures to assess whether systems with prior knowledge of a user's search behaviour can provide more effective search results for subsequent queries (Kanoulas et al. 2012). The test collection for session evaluation consists of a document collection, topics, and relevance assessments. However, the test collection for sessions also contains information from real user sessions, such as retrieved results, the items clicked on by users and the length of time spent viewing results and selected items. Results have broadly shown that information retrieval results can be improved for a given query with users' interaction data and that the more data used the higher the performance obtained.


Conclusions

Like conducting any information retrieval experiment, the use of a test collection-based approach must be planned and decisions made with respect to the experimental design (Tague-Sutcliffe 1996). The goals of the evaluation must be defined; a suitable test collection must be selected from those already in existence, or must be created specifically for the retrieval problem being addressed; different information retrieval systems or techniques must be developed or chosen for testing and comparison; and evaluation measures and statistical tests must be selected for evaluating the performance of the information retrieval system or for comparing whether one version of the system is better than another. These decisions are important, as they will affect the quality of the benchmark and impact on the accuracy and usefulness of results.

Over the years, test collection-based evaluation has been highly influential in shaping the core components of modern information retrieval systems, including Web search. This has been particularly visible in the context of TREC and related studies, which have not only provided the necessary benchmarks to compare different approaches to retrieval, but also provided a community and forum in which evaluation can be discussed and promoted. The Cranfield approach to information retrieval evaluation has remained popular for more than fifty years, although it is important to remember the original purpose of test collection experimentation for information retrieval: to develop and optimize algorithms for locating and ranking a set of documents about the same topic as a given query. Indexing, matching and ranking documents with respect to queries are the basic requirements of an information retrieval algorithm, which can be tested in a laboratory using test collections. There are widely recognised weaknesses to the Cranfield approach that will provide the research community with many avenues for further research in coming years.

About the authors

Paul Clough is a Senior Lecturer in the Information School at the University of Sheffield. He received a B.Eng. (hons) degree in Computer Science from the University of York in 1998 and a Ph.D. from the University of Sheffield in 2002. His research interests mainly revolve around developing technologies to assist people with accessing and managing information. Paul can be contacted at p.d.clough@sheffield.ac.uk

Mark Sanderson is a Professor in the School of Computer Science and Information Technology at RMIT University in Melbourne, Australia. He received his Bachelor's and Doctorate degrees in Computer Science at the University of Glasgow, Scotland. He can be contacted at mark.sanderson@rmit.edu.au

  • Al-Maskari, A., Sanderson, M., Clough, P. & Airio, E. (2008). The good and the bad system: does the test collection predict users' effectiveness? In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. (pp. 59-66). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://ir.shef.ac.uk/cloughie/papers/fp440-almaskari.pdf (Archived by WebCite® at http://www.webcitation.org/6H0NCnbHd)
  • Alonso, O. & Mizzaro, S. (2009). Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment. In Shlomo Geva, Jaap Kamps, Carol Peters, Tetsuya Sakai, Andrew Trotman & Ellen Voorhees, (Eds.). Proceedings of the SIGIR 2009 Workshop on the Future of information retrieval Evaluation. (pp. 15-16). Amsterdam: IR Publications. Retrieved 5 May, 2013 from http://staff.science.uva.nl/%7Ekamps/ireval/papers/paper_22.pdf (Archived by WebCite® at http://www.webcitation.org/6H1mV6taR)
  • Aslam, J.A., Pavlu, V. & Yilmaz, E. (2006). A statistical method for system evaluation using incomplete judgments. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, (pp. 541-548). New York, NY: ACM Press
  • Azzopardi, L., Järvelin, K., Kamps, J., & Smucker, M.D. (2010). Report on the SIGIR 2010 workshop on the simulation of interaction. SIGIR Forum, 44(2), 35-47. Retrieved 5 May, 2013 from http://www.sigir.org/forum/2010D/sigirwksp/2010d_sigirforum_azzopardi.pdf (Archived by WebCite® at http://www.webcitation.org/6H1miejTW)
  • Bailey, P., Craswell, N., Soboroff, I., Thomas, P., de Vries, A.P. & Yilmaz, E. (2008). Relevance assessment: are judges exchangeable and does it matter? In Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua, Mun-Kew Leong (Eds.). Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Singapore, July 20-24, 2008. (pp. 667-674). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://es.csiro.au/pubs/bailey_sigir08.pdf (Archived by WebCite® at http://www.webcitation.org/6GOBCHiqK)
  • Bates, M.J. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review, 13(5), 407-424. Retrieved 5 May, 2013 from http://pages.gseis.ucla.edu/faculty/bates/berrypicking.html (Archived by WebCite® at http://www.webcitation.org/6H1mrhwxF)
  • Borlund, P. (2009). User-centred evaluation of information retrieval systems. In A. Göker & J. Davies (Eds.). Information retrieval: searching in the 21st century. Chichester, UK: John Wiley & Sons
  • Carterette, B., Pavlu, V., Kanoulas, E., Aslam, J.A. & Allan, J. (2008). Evaluation over thousands of queries. In Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua, Mun-Kew Leong (Eds.). Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Singapore, July 20-24, 2008. (pp. 651-658). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://ir.cis.udel.edu/~carteret/papers/sigir08.pdf (Archived by WebCite® at http://www.webcitation.org/6H1n04iqc)
  • Carvalho, V. R., Lease, M. & Yilmaz, E. (2011). Crowdsourcing for search evaluation. SIGIR Forum, 44(2), 17-22. Retrieved 5 May, 2013 from http://www.sigir.org/forum/2010D/sigirwksp/2010d_sigirforum_carvalho.pdf (Archived by WebCite® at http://www.webcitation.org/6H1n8yVN1)
  • Cleverdon, C.W. (1967). The Cranfield tests on index language devices. Aslib Proceedings, 19(6), 173-192
  • Cleverdon, C. W. (1970). The effect of variations in relevance assessments in comparative experimental tests of index languages. Cranfield, UK: Cranfield Institute of Technology. (Cranfield Library Report, No. 3)
  • Cleverdon, C.W. (1991). The significance of the Cranfield tests on index languages. In Abraham Bookstein, Yves Chiaramella, Gerard Salton, Vijay V. Raghavan (Eds.). Proceedings of 14th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Chicago, IL, USA October 13 - 16, 1991. (pp. 3-12). New York, NY: ACM Press
  • Clough, P., Sanderson, M., Tang, J., Gollins, T. & Warner, A. (2012). Examining the limits of crowdsourcing for relevance assessment. IEEE Internet Computing, in press. Retrieved 5 May, 2013 from http://www.seg.rmit.edu.au/mark/publications/my_papers/IEEE-IC-2012.pdf (Archived by WebCite® at http://www.webcitation.org/6H1nE0LNX)
  • Cormack, G.V., Palmer, C.R. & Clarke, C.L.A. (1998). Efficient construction of large test collections. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia. (pp. 282-289). New York, NY: ACM Press.
  • Croft, B., Metzler, D. & Strohman, T. (2009). Search engines: information retrieval in practice. Boston, MA: Pearson Education
  • Harman, D. (2011). Information retrieval evaluation. San Raphael, CA: Morgan & Claypool Publishers. (Synthesis Lectures on Information Concepts, Retrieval, and Services, Volume 3, No. 2).
  • Hersh, W., Turpin, A., Price, S., Chan, B., Kramer, D., Sacherek, L. & Olson, D. (2000). Do batch and user evaluations give the same results? In Nicholas J. Belkin, Peter Ingwersen, Mun-Kew Leong (Eds.). Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, July 24-28, 2000, Athens, Greece. (pp. 17-24). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://skynet.ohsu.edu/~hersh/sigir-00-batcheval.pdf (Archived by WebCite® at http://www.webcitation.org/6H1nNaYid)
  • Ingwersen, P. & Järvelin, K. (2005). The turn: integration of information seeking and retrieval in context. New York, NY: Springer-Verlag
  • Jansen, B.J., Booth, D.L., & Spink, A. (2009). Patterns of query reformulation during web searching. Journal of the American Society for Information Science and Technology, 60(7), 1358-1371. Retrieved 5 May, 2013 from http://faculty.ist.psu.edu/jjansen/academic/pubs/jansen_patterns_query_reformulation.pdf (Archived by WebCite® at http://www.webcitation.org/6H1nU1bRD)
  • Järvelin, K. & Kekäläinen, J. (2002). Cumulated gain-based evaluation of information retrieval techniques. ACM Transactions on Information Systems, 20(4), 422-446
  • Järvelin, K., Price, S.L., Delcambre, L.M.L. & Nielsen, M.L. (2008). Discounted cumulated gain based evaluation of multiple-query IR sessions. In Proceedings of the 30th European Conference on Advances in Information Retrieval. (pp. 4-15). Berlin: Springer. (Lecture Notes in Computer Science, 4956).
  • Järvelin, K. (2009). Explaining user performance in information retrieval: challenges to IR evaluation. In Proceedings of the 2nd International Conference on Theory of Information Retrieval: Advances in Information Retrieval Theory. (pp. 289-296). Berlin: Springer.
  • Järvelin, K. (2011). Evaluation. In Ruthven, I. and Kelly, D. (eds.), Interactive information seeking, behaviour and retrieval. (pp. 113-138). London, UK: Facet Publishing.
  • Kamps, J., Geva, S., Peters, C., Sakai, T., Trotman, A. & Voorhees, E. (2009). Report on the SIGIR 2009 workshop on the future of information retrieval evaluation. SIGIR Forum, 43(2), 13-23. Retrieved 5 May, 2013 from http://www.sigir.org/forum/2009D/sigirwksp/2009d_sigirforum_kamps.pdf (Archived by WebCite® at http://www.webcitation.org/6H1nmVq7g)
  • Kanoulas, E., Carterette, B., Clough, P. & Sanderson, M. (2011). Evaluating multi-query sessions. In Wei-Ying Ma, Jian-Yun Nie, Ricardo A. Baeza-Yates, Tat-Seng Chua, W. Bruce Croft (Eds.). Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, Beijing, China July 24 - 28, 2011. (pp. 1053-1062). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://ir.cis.udel.edu/~carteret/papers/sigir11b.pdf (Archived by WebCite® at http://www.webcitation.org/6H1nvBDqm)
  • Kanoulas, E., Carterette, B., Hall, M., Clough, P. & Sanderson, M. (2012). Session track 2012 overview. In Proceedings of the 21st Text REtrieval Conference. Washington, DC: National Institute of Standards and Technology
  • Kazai, G. (2011). In search of quality in crowdsourcing for search engine evaluation. In Proceedings of the 33rd European Conference on Advances in Information Retrieval. (pp. 165-176). Berlin: Springer.
  • Kelly, D. (2009). Methods for evaluating interactive information retrieval systems with users. Foundations and Trends in Information Retrieval, 3(1-2), 1-224. Retrieved 5 May, 2013 from http://www.ils.unc.edu/~dianek/FnTIR-Press-Kelly.pdf (Archived by WebCite® at http://www.webcitation.org/6H1o2TKSU)
  • Keskustalo, H., Järvelin, K., Pirkola, A., Sharma, T. & Lykke, M. (2009). Test collection-based information retrieval evaluation needs extension toward sessions: a case of extremely short queries. In Proceedings of the 5th Asia Information Retrieval Symposium on Information Retrieval Technology. (pp. 63-74). Berlin: Springer. Retrieved 5 May, 2013 from http://www.sis.uta.fi/infim/julkaisut/fire/2009/HK-SessEval-AIRS09-prep.pdf (Archived by WebCite® at http://www.webcitation.org/6H1o7fRB8)
  • Kiewitt, E.L. (1979). Evaluating information retrieval systems: the PROBE program. Westport, CT: Greenwood Press
  • Kinney, K. A., Huffman, S.B. & Zhai, J. (2008). How evaluator domain expertise affects search result relevance judgments. In Proceedings of the 17th ACM Conference on Information and Knowledge Management. (pp. 591-598). New York, NY: ACM Press.
  • Magdy, W., & Jones, G. (2010). PRES: a score metric for evaluating recall-oriented information retrieval applications. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva, Switzerland July 19 - 23, 2010. (pp. 611-618). New York, NY: ACM Press.
  • Manning, C.D., Raghavan, P. & Schütze, H. (2008). Introduction to information retrieval. Cambridge: Cambridge University Press. Retrieved 5 May, 2013 from http://nlp.stanford.edu/IR-book/ (Archived by WebCite® at http://www.webcitation.org/6H1okAxEy)
  • Mizzaro, S. & Robertson, S. (2007). HITS hits TREC: exploring IR evaluation results with network analysis. In Wessel Kraaij, Arjen P. de Vries, Charles L. A. Clarke, Norbert Fuhr, Noriko Kando (Eds.). Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, July 23-27, 2007. (pp. 479-486). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://www.soi.city.ac.uk/~ser/papers/MizzaroRobertsonSIGIR07.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oEKke1)
  • Robertson, S. (2008). On the history of evaluation in IR. Journal of Information Science, 34(4), 439-456
  • Robertson, S.E. & Hancock-Beaulieu, M. (1992). On the evaluation of information retrieval systems. Information Processing and Management, 28(4), 457-466
  • Sanderson, M. (2010). Test collection based evaluation of information retrieval systems. Foundations and Trends in Information Retrieval, 4(4), 247-375. Retrieved 5 May, 2013 from http://www.seg.rmit.edu.au/mark/publications/my_papers/FnTIR.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oISFSP)
  • Sanderson, M. & Braschler, M. (2009). Best practices for test collection creation and information retrieval system evaluation. Pisa, Italy: TrebleCLEF. (TrebleCLEF technical report). Retrieved 5 May, 2013 from http://www.seg.rmit.edu.au/mark/publications/my_papers/T-CLEF-test-collection-report.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oLiTdt)
  • Sanderson, M., Paramita, M., Clough, P. & Kanoulas, E. (2010). Do user preferences and evaluation measures line up? In Fabio Crestani, Stéphane Marchand-Maillet, Hsin-Hsi Chen, Efthimis N. Efthimiadis, Jacques Savoy (Eds.). Proceedings of the 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva, Switzerland, July 19-23, 2010. (pp. 555-562). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://ir.shef.ac.uk/cloughie/papers/p555-sanderson.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oSVAAf)
  • Saracevic, T. (1995). Evaluation of evaluation in information retrieval. In Edward A. Fox, Peter Ingwersen, Raya Fidel (Eds.). Proceedings of 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA, July 9-13, 1995. (pp. 138-146). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://comminfo.rutgers.edu/~muresan/IR/Docs/Articles/sigirSaracevic1995.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oVU1EI)
  • Schamber, L. (1994). Relevance and information behavior. Annual Review of Information Science and Technology, 29, 3-48
  • Smucker, M.D. & Clarke, C.L.A. (2012). Time-based calibration of effectiveness measures. In William R. Hersh, Jamie Callan, Yoelle Maarek, Mark Sanderson (Eds.). Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, Portland, OR, USA. (pp. 95-104). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://www.mansci.uwaterloo.ca/~msmucker/publications/smucker-clarke-sigir2012.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oa1knA)
  • Tague-Sutcliffe, J.M. (1996). Some perspectives on the evaluation of information retrieval systems. Journal of the American Society for Information Science, 47(1), 1-3. Retrieved 5 May, 2013 from http://comminfo.rutgers.edu/~muresan/IR/Docs/Articles/jasisTagueSutcliffe1996.pdf (Archived by WebCite® at http://www.webcitation.org/6H1oe6O3B)
  • Taube, M. (1956). Cost as the measure of efficiency of storage and retrieval systems. In Mortimer Taube, Studies in Coordinate Indexing Volume 3. (pp. 18-33). Washington, DC: Documentation Inc.
  • Turpin, A. & Hersh, W. (2001). Why batch and user evaluations do not give the same results. In W. Bruce Croft, David J. Harper, Donald H. Kraft, Justin Zobel (Eds.). Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, September 9-13, 2001, New Orleans, Louisiana, USA. (pp. 225-231). New York, NY: ACM Press.
  • van Rijsbergen, C.J. (1979). Information retrieval. (2nd ed.) London: Butterworths. Retrieved 5 May, 2013 from http://www.dcs.gla.ac.uk/Keith/Preface.html (Archived by WebCite® at http://www.webcitation.org/6H1ool2uv)
  • Voorhees, E. M. (1998). Variations in relevance judgments and the measurement of retrieval effectiveness. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia. (pp. 315-323). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://nlp.cs.swarthmore.edu/~richardw/papers/vorhees1999-variations.pdf (Archived by WebCite® at http://www.webcitation.org/6H1otD9Jl)
  • Voorhees, E.M. (2000). Variations in relevance judgments and the measurement of retrieval effectiveness. Information Processing & Management, 36(5), 697-716
  • Voorhees, E.M. (2002). The philosophy of information retrieval evaluation. In Evaluation of cross-language information retrieval systems: 2nd workshop of the Cross-Language Evaluation Forum (CLEF 2001). (pp. 355-370). Berlin: Springer. (Lecture Notes in Computer Science 2406).
  • Voorhees, E.M. & Harman, D.K. (2005). TREC: experiments and evaluation in information retrieval. Cambridge, MA: MIT Press.
  • Yilmaz, E., Shokouhi, M., Craswell, N. & Robertson, S. (2010). Expected browsing utility for web search evaluation. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management. (pp. 1561-1564). New York, NY: ACM Press.
  • Zhai, C., Cohen, W.W. & Lafferty, J. (2003). Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, July 28 - August 1, 2003, Toronto, Canada. (pp. 10-17). New York, NY: ACM Press.
  • Zhang, J. & Kamps, J. (2010). A search log-based approach to evaluation. In Proceedings of the 14th European Conference on Research and Advanced Technology for Digital Libraries. (pp. 248-260). Berlin: Springer. (Lecture Notes in Computer Science, 6273)
  • Zobel, J. (1998). How reliable are the results of large-scale information retrieval experiments? In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia. (pp. 307-314). New York, NY: ACM Press. Retrieved 5 May, 2013 from http://goanna.cs.rmit.edu.au/~jz/fulltext/sigir98.pdf (Archived by WebCite® at http://www.webcitation.org/6H1p6aqJh)
How to cite this paper

Clough, P. & Sanderson, M. (2013). Evaluating the performance of information retrieval systems using test collections. Information Research, 18(2), paper 582. [Available at http://InformationR.net/ir/18-2/paper582.html]
