Building a Common Framework for IIR Evaluation

Hall, Mark and Toms, Elaine (2013) Building a Common Framework for IIR Evaluation. CLEF 2013 - Information Access Evaluation meets Multilinguality, Multimodality, and Visualization, 23-26 September 2013, Valencia, Spain.

PDF: halltoms2013.pdf - Accepted Version (344kB)

Abstract

Cranfield-style evaluations standardised Information Retrieval (IR) evaluation practices, enabling the creation of programmes such as TREC, CLEF, and INEX, and long-term comparability of IR systems. However, the methodology does not translate well into the Interactive IR (IIR) domain, where the inclusion of the user in the search process and the repeated interaction between user and system create more variability than Cranfield-style evaluations can support. As a result, IIR evaluations of various systems have tended to be non-comparable, not because the systems vary, but because the methodologies used are non-comparable. In this paper we describe a standardised IIR evaluation framework that ensures IIR evaluations can share a standardised baseline methodology, in much the same way that TREC, CLEF, and INEX imposed a process on IR evaluation. The framework provides a common baseline, derived by integrating existing, validated evaluation measures, that enables inter-study comparison, but is also flexible enough to support most kinds of IIR studies. This is achieved through the use of a "pluggable" system, into which any web-based IIR interface can be embedded. The framework has been implemented, and the software will be made available to reduce the resource commitment required for IIR studies.
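The abstract does not specify how the pluggable embedding works, so the following is only a minimal sketch of the general idea, assuming a hypothetical Flask-based evaluation shell, invented /study and /log routes, and an illustrative JSON-lines log format; none of these names or choices come from the paper.

    # Sketch: a shared evaluation shell that embeds an arbitrary web-based
    # IIR interface and records interactions in one common log format.
    # Route names, registry, and log schema are assumptions for illustration.
    import json
    import time
    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    # Hypothetical registry: each study plugs in its own interface by URL.
    PLUGGED_IN_INTERFACES = {
        "baseline": "http://localhost:8080/search",  # assumed example URL
    }

    SHELL_PAGE = """
    <!doctype html>
    <title>IIR Evaluation Shell</title>
    <!-- The embedded interface runs unmodified inside the shared shell. -->
    <iframe src="{{ interface_url }}" width="100%" height="600"></iframe>
    """

    @app.route("/study/<interface_name>")
    def run_study(interface_name):
        """Serve the shared shell with the chosen interface embedded."""
        url = PLUGGED_IN_INTERFACES[interface_name]
        return render_template_string(SHELL_PAGE, interface_url=url)

    @app.route("/log", methods=["POST"])
    def log_event():
        """Append one interaction event (query, click, ...) to a shared
        JSON-lines log, timestamped by the shell itself."""
        event = request.get_json()
        event["timestamp"] = time.time()
        with open("interaction_log.jsonl", "a") as fh:
            fh.write(json.dumps(event) + "\n")
        return {"status": "ok"}

    if __name__ == "__main__":
        app.run(port=5000)

The design point this sketch tries to capture is that logging lives in the shared shell rather than in each embedded interface, which is what would give different IIR studies a directly comparable interaction record.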

Item Type: Conference or Workshop Item (Paper)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computing and Information Systems
Date Deposited: 22 Oct 2013 10:11
URI: http://repository.edgehill.ac.uk/id/eprint/5735
