DOI: 10.1145/1148170.1148249

Elicitation of term relevance feedback: an investigation of term source and context

Published: 06 August 2006

Abstract

Term relevance feedback has a long history in information retrieval. However, research on interactive term relevance feedback has yielded mixed results. In this paper, we investigate several aspects of eliciting term relevance feedback: the display of document surrogates, the technique for identifying or selecting terms, and the source of expansion terms. We conduct a between-subjects experiment (n=61) of three term relevance feedback interfaces using the 2005 TREC HARD collection, and evaluate each interface with respect to query length and retrieval performance. Results demonstrate that queries created with each experimental interface significantly outperformed the corresponding baseline queries, even though there were no performance differences between interface conditions. Results also demonstrate that pseudo-relevance feedback runs outperformed both baseline and experimental runs as assessed by recall-oriented measures, but that user-generated terms improved precision.
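The paper contrasts expansion terms elicited from users with pseudo-relevance feedback, in which the system simply assumes the top-ranked documents are relevant and mines them for expansion terms. As background, the following is a minimal Python sketch of that selection step; the function prf_expansion_terms and its tf-idf-style scoring heuristic are illustrative assumptions, not the authors' implementation or the TREC HARD track configuration.

from collections import Counter
import math

def prf_expansion_terms(query_terms, ranked_docs, k=10, m=5):
    # Pseudo-relevance assumption: treat the top-k retrieved documents
    # as relevant without asking the user.
    top_docs = ranked_docs[:k]

    # Frequency of each term within the assumed-relevant set.
    tf = Counter(term for doc in top_docs for term in doc)

    # Document frequency over the whole ranked list, used to down-weight
    # terms that are common everywhere (an idf-style heuristic; this exact
    # weighting is an assumption, not the paper's method).
    n_docs = len(ranked_docs)
    df = Counter()
    for doc in ranked_docs:
        df.update(set(doc))

    def score(term):
        return tf[term] * math.log(n_docs / df[term])

    # Rank candidate terms not already in the query; return the top m.
    candidates = [t for t in tf if t not in query_terms]
    return sorted(candidates, key=score, reverse=True)[:m]

# Toy example: expand a two-term query from three tokenized documents.
docs = [["hard", "track", "retrieval", "feedback"],
        ["feedback", "expansion", "retrieval"],
        ["weather", "sports"]]
print(prf_expansion_terms({"retrieval", "feedback"}, docs, k=2, m=2))

Interactive term relevance feedback replaces only the last step: rather than appending the top-scored candidates automatically, an interface displays them (with or without document context) and lets the searcher choose, which is the elicitation step this study varies.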




      Published In

SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
August 2006
768 pages
ISBN: 1595933697
DOI: 10.1145/1148170

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 06 August 2006


      Author Tags

      1. elicitation of feedback
      2. familiarity
      3. query expansion
      4. query length
      5. relevance feedback interfaces
      6. term context
      7. user feedback

      Qualifiers

      • Article

      Conference

SIGIR '06: The 29th Annual International SIGIR Conference
August 6-11, 2006
Seattle, Washington, USA

      Acceptance Rates

Overall acceptance rate: 792 of 3,983 submissions (20%)


