DOI: 10.1145/860435.860510
Article

Automatic ranking of retrieval systems in imperfect environments

Published: 28 July 2003

ABSTRACT

The empirical investigation of the effectiveness of information retrieval (IR) systems requires a test collection, a set of query topics, and a set of relevance judgments made by human assessors for each query. Previous experiments show that differences in human relevance assessments do not affect the relative performance of retrieval systems. Based on this observation, we propose and evaluate a new approach to replace the human relevance judgments by an automatic method. Ranking of retrieval systems with our methodology correlates positively and significantly with that of human-based evaluations. In the experiments, we assume a Web-like imperfect environment: the indexing information for all documents is available for ranking, but some documents may not be available for retrieval. Such conditions can be due to document deletions or network problems. Our method of simulating imperfect environments can be used for Web search engine assessment and in estimating the effects of network conditions (e.g., network unreliability) on IR system performance.
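
The abstract does not give implementation details, so the following is a minimal, hypothetical Python sketch of the two ideas it describes: ranking retrieval systems without human judgments by sampling pseudo-relevant documents from the pooled system results (broadly in the spirit of Soboroff et al. [3]), and simulating a Web-like imperfect environment by making a random fraction of documents unavailable before scoring. The run format, pool depth, sample rate, deletion rate, and the use of precision@k and Kendall's tau are illustrative assumptions, not the authors' method.

    # Hypothetical sketch only -- not the paper's actual method.
    import random
    from itertools import combinations

    def pseudo_qrels(runs, depth=100, sample_rate=0.1, seed=0):
        """Assumed pooling scheme: per query, sample a fraction of the
        top-`depth` pool of all systems' results as pseudo-relevant."""
        rng = random.Random(seed)
        queries = {q for run in runs.values() for q in run}
        qrels = {}
        for q in queries:
            pool = set()
            for run in runs.values():
                pool.update(run.get(q, [])[:depth])
            if not pool:
                qrels[q] = set()
                continue
            k = max(1, int(sample_rate * len(pool)))
            qrels[q] = set(rng.sample(sorted(pool), k))
        return qrels

    def mean_precision_at_k(run, qrels, k=10):
        """Score one system against the pseudo judgments (illustrative measure)."""
        scores = [sum(1 for d in run.get(q, [])[:k] if d in rel) / k
                  for q, rel in qrels.items()]
        return sum(scores) / len(scores) if scores else 0.0

    def kendall_tau(rank_a, rank_b):
        """Kendall's tau between two orderings of the same set of systems."""
        pos_a = {s: i for i, s in enumerate(rank_a)}
        pos_b = {s: i for i, s in enumerate(rank_b)}
        concordant = discordant = 0
        for s1, s2 in combinations(rank_a, 2):
            sign = (pos_a[s1] - pos_a[s2]) * (pos_b[s1] - pos_b[s2])
            concordant += sign > 0
            discordant += sign < 0
        pairs = len(rank_a) * (len(rank_a) - 1) / 2
        return (concordant - discordant) / pairs

    def delete_documents(runs, deletion_rate=0.2, seed=0):
        """Simulate an imperfect environment: a random fraction of documents
        becomes unavailable (e.g., deleted pages or network failures)."""
        rng = random.Random(seed)
        all_docs = sorted({d for run in runs.values()
                           for docs in run.values() for d in docs})
        missing = set(rng.sample(all_docs, int(deletion_rate * len(all_docs))))
        return {sys: {q: [d for d in docs if d not in missing]
                      for q, docs in run.items()}
                for sys, run in runs.items()}

    # Toy usage with made-up runs; real experiments would use TREC-style run files.
    runs = {"sysA": {"q1": ["d1", "d2", "d3", "d4"]},
            "sysB": {"q1": ["d3", "d5", "d1", "d6"]}}
    degraded = delete_documents(runs, deletion_rate=0.25)
    qrels = pseudo_qrels(degraded, depth=4, sample_rate=0.5)
    auto_ranking = sorted(degraded,
                          key=lambda s: mean_precision_at_k(degraded[s], qrels, k=4),
                          reverse=True)
    human_ranking = ["sysA", "sysB"]  # stand-in for a ranking from human judgments
    print(auto_ranking, kendall_tau(auto_ranking, human_ranking))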

References

  1. Chowdhury, A., Soboroff, I. Automatic evaluation of World Wide Web search services. In Proceedings of the 2002 ACM SIGIR Conference, 421-422.
  2. Voorhees, E.M., Harman, D. Overview of the Fifth Text REtrieval Conference (TREC-5). In E.M. Voorhees and D.K. Harman, editors, The Fifth Text REtrieval Conference, NIST Special Publication 500-238. National Institute of Standards and Technology, Gaithersburg, MD, November 1996.
  3. Soboroff, I., Nicholas, C., Cahan, P. Ranking retrieval systems without relevance judgments. In Proceedings of the 2001 ACM SIGIR Conference, 66-73.

Published in

SIGIR '03: Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
July 2003
490 pages
ISBN: 1581136463
DOI: 10.1145/860435

        Copyright © 2003 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States




        Acceptance Rates

SIGIR '03 Paper Acceptance Rate: 46 of 266 submissions, 17%
Overall Acceptance Rate: 792 of 3,983 submissions, 20%
