DOI: 10.1145/1835449.1835650

Poster

Learning to select rankers

Published: 19 July 2010

Abstract

Combining evidence from multiple retrieval models has been widely studied in the context of distributed search, metasearch, and rank fusion. Much of the prior work has focused on combining the retrieval scores (or the rankings) assigned by different retrieval models or ranking algorithms. In this work, we focus on the problem of choosing between retrieval models using performance estimation. We propose modeling the differences in retrieval performance directly, using rank-time features (features that are available to the ranking algorithms) and the retrieval scores assigned by the ranking algorithms. Our experimental results show that when choosing between two rankers, our approach yields significant improvements over the best individual ranker.
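The abstract's idea of selecting between two rankers per query can be illustrated with a small sketch. This is not the authors' actual method; it is a minimal, hypothetical setup that assumes we have, for each training query, some aggregated rank-time features, the document scores produced by two rankers A and B, and a binary label recording which ranker achieved higher effectiveness (e.g., NDCG) on that query. A classifier trained on these signals then routes each new query to the predicted-better ranker.

```python
# Hypothetical sketch: per-query ranker selection as binary classification.
# All feature names and data here are illustrative, not from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def query_features(ranktime_feats, scores_a, scores_b, k=10):
    """Build one feature vector per query: aggregated rank-time features
    plus summary statistics of each ranker's top-k retrieval scores."""
    top_a = np.sort(scores_a)[::-1][:k]
    top_b = np.sort(scores_b)[::-1][:k]
    return np.concatenate([
        ranktime_feats,                          # e.g., mean BM25, query length
        [top_a.mean(), top_a.std(), top_a[0]],   # summary of ranker A's scores
        [top_b.mean(), top_b.std(), top_b[0]],   # summary of ranker B's scores
    ])

# Toy training data: 4 queries, 3 rank-time features each,
# 50 document scores per ranker per query.
rng = np.random.default_rng(0)
X = np.stack([
    query_features(rng.normal(size=3), rng.normal(size=50), rng.normal(size=50))
    for _ in range(4)
])
y = np.array([0, 1, 0, 1])  # 1 = ranker B beat ranker A on this query

selector = LogisticRegression().fit(X, y)

def select_ranker(ranktime_feats, scores_a, scores_b):
    """Route a new query to the ranker the classifier predicts will do better."""
    x = query_features(ranktime_feats, scores_a, scores_b).reshape(1, -1)
    return "B" if selector.predict(x)[0] == 1 else "A"
```

The key design point matching the abstract is that the selector sees only rank-time information (features and retrieval scores available before relevance judgments), so the same routing can be applied to unseen queries at query time.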




Published In

SIGIR '10: Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
July 2010, 944 pages
ISBN: 9781450301534
DOI: 10.1145/1835449

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. combining searches
    2. learning to rank
    3. metasearch

    Qualifiers

    • Poster

    Conference

    SIGIR '10

Acceptance Rates

SIGIR '10 paper acceptance rate: 87 of 520 submissions (17%)
Overall acceptance rate: 792 of 3,983 submissions (20%)


Cited By

• (2023) A selective approach to stemming for minimizing the risk of failure in information retrieval systems. PeerJ Computer Science, 9:e1175. DOI: 10.7717/peerj-cs.1175. Online publication date: 10-Jan-2023
• (2023) Data Fusion Performance Prophecy: A Random Forest Revelation. Information Integration and Web Intelligence, pages 192-200. DOI: 10.1007/978-3-031-48316-5_20. Online publication date: 22-Nov-2023
• (2020) On the Evaluation of Data Fusion for Information Retrieval. Proceedings of the 12th Annual Meeting of the Forum for Information Retrieval Evaluation, pages 54-57. DOI: 10.1145/3441501.3441506. Online publication date: 16-Dec-2020
• (2019) Information Needs, Queries, and Query Performance Prediction. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 395-404. DOI: 10.1145/3331184.3331253. Online publication date: 18-Jul-2019
• (2018) Fusion in Information Retrieval. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1383-1386. DOI: 10.1145/3209978.3210186. Online publication date: 27-Jun-2018
• (2018) Query Performance Prediction Focused on Summarized Letor Features. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1177-1180. DOI: 10.1145/3209978.3210121. Online publication date: 27-Jun-2018
• (2018) Selective Cluster Presentation on the Search Results Page. ACM Transactions on Information Systems, 36(3):1-42. DOI: 10.1145/3158672. Online publication date: 28-Feb-2018
• (2018) A selective approach to index term weighting for robust information retrieval based on the frequency distributions of query terms. Information Retrieval Journal, 22(6):543-569. DOI: 10.1007/s10791-018-9347-9. Online publication date: 13-Dec-2018
• (2017) Tasks, Queries, and Rankers in Pre-Retrieval Performance Prediction. Proceedings of the 22nd Australasian Document Computing Symposium, pages 1-4. DOI: 10.1145/3166072.3166079. Online publication date: 7-Dec-2017
• (2016) A Probabilistic Fusion Framework. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 1463-1472. DOI: 10.1145/2983323.2983739. Online publication date: 24-Oct-2016
