DOI: 10.1145/1864708.1864759
poster

Do clicks measure recommendation relevancy?: an empirical user study

Published: 26 September 2010

ABSTRACT

Evaluation has been an important subject since the early days of recommender systems. In online tests, the click-through rate (CTR) is often adopted as the metric. However, a higher CTR on recommended items does not imply higher relevance between items, since factors such as item popularity or item serendipity may influence users' click behavior. We argue that the relevance of recommendations is also desirable in many real applications; here, relevant means relevant in a human-perceptible way. Relevant recommendations not only increase users' trust in the system but are also extremely useful for the vast number of anonymous users, whose recommendations may be made based only on the current item. In this paper, we empirically examine the relation between the relevance of recommendations and the corresponding CTR for a few representative ItemCF algorithms, using online data from the TV show/movie website Hulu. Experiments show that algorithms with a higher overall CTR may not produce more relevant recommendations. Thus, CTR may not be the optimal metric for the online evaluation of recommender systems when producing relevant recommendations is important.
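
The study contrasts an online behavioral metric, CTR, with human-perceived relevance of item-based collaborative filtering ("ItemCF") output. Below is a minimal sketch of the two quantities being compared, assuming implicit watch logs; all function names, data, and numbers are hypothetical illustrations, not code or results from the paper.

```python
import math
from collections import defaultdict


def item_cosine_similarity(user_items):
    """Item-item cosine similarity over implicit feedback (watch sets).

    user_items: dict mapping user id -> set of items the user watched.
    Returns a dict mapping (item_a, item_b) -> similarity in [0, 1].
    """
    co_counts = defaultdict(int)    # users who watched both a and b
    item_counts = defaultdict(int)  # users who watched each item
    for items in user_items.values():
        for a in items:
            item_counts[a] += 1
            for b in items:
                if a != b:
                    co_counts[(a, b)] += 1
    return {
        (a, b): n / math.sqrt(item_counts[a] * item_counts[b])
        for (a, b), n in co_counts.items()
    }


def ctr(impressions, clicks):
    """Click-through rate: fraction of recommendation impressions clicked."""
    return clicks / impressions if impressions else 0.0


# Hypothetical watch logs feeding an ItemCF similarity table.
watched = {
    "u1": {"House", "Lost", "Glee"},
    "u2": {"House", "Lost"},
    "u3": {"House", "Glee"},
}
sims = item_cosine_similarity(watched)
print(f"sim(House, Lost) = {sims[('House', 'Lost')]:.3f}")  # 2/sqrt(3*2) ~ 0.816

# Hypothetical online logs: algorithm -> (impressions, clicks).
online_logs = {"ItemCF-A": (10_000, 820), "ItemCF-B": (10_000, 610)}
for algo, (imp, clk) in online_logs.items():
    print(f"{algo}: CTR = {ctr(imp, clk):.3f}")
# The paper's point: ItemCF-A's higher CTR need not mean users judge its
# recommendations more relevant; popularity or serendipity can drive clicks.
```

Cosine over co-watch counts stands in here for whichever similarity the deployed ItemCF variants actually used; the contrast the paper studies is between such an offline similarity notion and the CTR observed in online logs.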


Published in

        RecSys '10: Proceedings of the fourth ACM conference on Recommender systems
        September 2010
        402 pages
        ISBN: 9781605589060
        DOI: 10.1145/1864708

        Copyright © 2010 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States




        Acceptance Rates

        Overall Acceptance Rate: 254 of 1,295 submissions, 20%

