DOI: 10.1145/3240323.3241622
tutorial

Mixed methods for evaluating user satisfaction

Published: 27 September 2018

ABSTRACT

Evaluation is a fundamental part of a recommender system. Evaluation typically takes one of three forms: (1) smaller lab studies with real users; (2) batch tests with offline collections, judgements, and measures; (3) large-scale controlled experiments (e.g., A/B tests) looking at implicit feedback. But it is rare for the first to inform and influence the latter two; in particular, implicit feedback metrics often have to be continuously revised and updated as their underlying assumptions are found to be poorly supported.
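To make form (3) concrete, below is a minimal sketch of how such an experiment is often analyzed: a pooled two-proportion z-test comparing a binary implicit signal (here, clicks) between two variants. This is one common textbook approach, not anything prescribed by the tutorial, and all counts are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_ztest(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-test on a binary implicit signal (e.g. clicks)."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # pool the two samples to estimate the standard error under the null
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# hypothetical A/B counts: variant B shows a higher click-through rate
z, p_value = ab_ztest(480, 10_000, 560, 10_000)
```

A statistically significant click-through difference is only as meaningful as the assumption that clicks track satisfaction, which is precisely the assumption the abstract notes often turns out to be poorly supported.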

Mixed methods research enables practitioners to develop robust evaluation metrics by combining the strengths of qualitative and quantitative approaches. In this tutorial, we will show how qualitative research on user behavior provides insight into the relationship between implicit signals and satisfaction. These insights can inform and augment quantitative modeling and analysis for online and offline metrics and evaluation.
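One way this qualitative-to-quantitative bridge can look in practice is a simple calibration check: correlating an implicit signal (here, dwell time) against explicit satisfaction labels gathered in a small lab study. The data, signal choice, and threshold below are hypothetical illustrations, not results from the tutorial.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between an implicit signal and explicit labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical lab-study data: dwell time in seconds per session, and a
# satisfaction label (1 = satisfied, 0 = not) from the same participants
dwell = [5, 12, 45, 60, 8, 90, 30, 3]
satisfied = [0, 0, 1, 1, 0, 1, 1, 0]

r = pearson(dwell, satisfied)
```

A strong correlation would support using dwell time as a satisfaction proxy in large-scale online metrics; a weak one would flag the implicit signal for the kind of revision the abstract describes.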


Published in

RecSys '18: Proceedings of the 12th ACM Conference on Recommender Systems
September 2018, 600 pages
ISBN: 9781450359016
DOI: 10.1145/3240323

          Copyright © 2018 Owner/Author

          Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

RecSys '18 paper acceptance rate: 32 of 181 submissions (18%). Overall acceptance rate: 254 of 1,295 submissions (20%).

