DOI: 10.1145/1835449.1835526

Collecting high quality overlapping labels at low cost

Published: 19 July 2010

Abstract

This paper studies the quality of human labels used to train search engines' rankers. Our specific focus is the performance improvement obtained by using overlapping relevance labels, that is, by collecting multiple human judgments for each training sample. The paper explores whether, when, and for which samples one should obtain overlapping training labels, as well as how many labels per sample are needed. The proposed selective labeling scheme collects additional labels only for a subset of training samples, specifically for those that are labeled relevant by a judge. Our experiments show that this labeling scheme improves the NDCG of two Web search rankers on several real-world test sets, with a low labeling overhead of around 1.4 labels per sample. It also outperforms several alternative ways of using overlapping labels, such as simple k-overlap, majority vote, and taking the highest label. Finally, the paper presents a study of how many overlapping labels are needed to obtain the best improvement in retrieval accuracy.
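The selective labeling idea described in the abstract can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the `judge` callback, the "relevant" trigger condition, the `k_extra=2` overlap count, and the majority-vote aggregation are assumptions chosen for the example (the paper compares several aggregation strategies).

```python
def selective_labels(samples, get_label, k_extra=2):
    """Selective labeling sketch: only samples whose first judgment
    is 'relevant' receive extra overlapping labels; all labels for a
    sample are then aggregated (here, by majority vote)."""
    labeled = {}
    total_labels = 0
    for s in samples:
        labels = [get_label(s)]          # every sample gets one label
        total_labels += 1
        if labels[0] == "relevant":      # trigger: collect extra labels
            labels += [get_label(s) for _ in range(k_extra)]
            total_labels += k_extra
        # aggregate overlapping labels by majority vote
        labeled[s] = max(set(labels), key=labels.count)
    # return final labels and the average labeling overhead per sample
    return labeled, total_labels / len(samples)
```

Because the extra labels are only collected when the trigger fires, the average overhead stays well below full k-overlap labeling; with mostly non-relevant samples it approaches the roughly 1.4 labels per sample reported in the abstract.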



    Published In

SIGIR '10: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval
    July 2010
    944 pages
    ISBN:9781450301534
    DOI:10.1145/1835449
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. learning to rank
    2. overlapping labels
    3. relevance labels

    Qualifiers

    • Research-article

    Conference

    SIGIR '10

    Acceptance Rates

SIGIR '10 Paper Acceptance Rate: 87 of 520 submissions (17%)
Overall Acceptance Rate: 792 of 3,983 submissions (20%)


    Cited By

• (2024) Improving searcher struggle detection via the reversal theory. Discover Computing 27:1. DOI: 10.1007/s10791-024-09492-z. 19 Dec 2024.
• (2023) The impact of inconsistent human annotations on AI driven clinical decision making. npj Digital Medicine 6:1. DOI: 10.1038/s41746-023-00773-3. 21 Feb 2023.
• (2017) User behavior modeling for better Web search ranking. Frontiers of Computer Science 11:6 (923-936). DOI: 10.1007/s11704-017-6518-6. 1 Dec 2017.
• (2017) A crowdsourced "Who wants to be a millionaire?" player. Concurrency and Computation: Practice and Experience 33:8. DOI: 10.1002/cpe.4168. 15 May 2017.
• (2016) Time-Aware Click Model. ACM Transactions on Information Systems 35:3 (1-24). DOI: 10.1145/2988230. 15 Dec 2016.
• (2016) Individual Judgments Versus Consensus. ACM Transactions on the Web 10:1 (1-21). DOI: 10.1145/2834122. 9 Jan 2016.
• (2016) Combining crowd consensus and user trustworthiness for managing collective tasks. Future Generation Computer Systems 54:C (378-388). DOI: 10.1016/j.future.2015.04.015. 1 Jan 2016.
• (2015) Incorporating Non-sequential Behavior into Click Models. Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (283-292). DOI: 10.1145/2766462.2767712. 9 Aug 2015.
• (2015) Which noise affects algorithm robustness for learning to rank. Information Retrieval Journal 18:3 (215-245). DOI: 10.1007/s10791-015-9253-3. 28 Apr 2015.
• (2014) Aggregation of Crowdsourced Labels Based on Worker History. Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14) (1-11). DOI: 10.1145/2611040.2611074. 2 Jun 2014.
