DOI: 10.1145/2959100.2959150
Research Article
Public Access

Gaze Prediction for Recommender Systems

Published: 07 September 2016

ABSTRACT

As users browse a recommender system, they systematically consider or skip over much of the displayed content. It seems obvious that these eye gaze patterns contain a rich signal concerning these users' preferences. However, because eye tracking data is not available to most recommender systems, these signals are not widely incorporated into personalization models. In this work, we show that it is possible to predict gaze by combining easily-collected user browsing data with eye tracking data from a small number of users in a grid-based recommender interface. Our technique is able to leverage a small amount of eye tracking data to infer gaze patterns for other users. We evaluate our prediction models in MovieLens -- an online movie recommender system. Our results show that incorporating eye tracking data from a small number of users significantly boosts accuracy as compared with only using browsing data, even though the eye-tracked users are different from the testing users (e.g. AUC=0.823 vs. 0.693 in predicting whether a user will fixate on an item). We also demonstrate that Hidden Markov Models (HMMs) can be applied in this setting; they are better than linear models in predicting fixation probability and capturing the interface regularity through Bayesian inference (AUC=0.823 vs. 0.757).
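
The paper's own code is not part of this page. As a rough illustration of the kind of inference the abstract describes, the sketch below filters a two-state fixate/skip HMM over a row-major scan of a grid of recommended items and compares its fixation-prediction AUC against a position-only baseline. Everything in it is an assumption made for illustration: the 5x8 grid, the start/transition/emission probabilities, the synthetic browsing events, and the linearly decaying position prior. It is not the authors' model, only a minimal, self-contained example of HMM-style Bayesian inference for fixation probability.

# Hypothetical sketch (not the authors' code): posterior fixation probability over a
# grid of recommended items via a 2-state HMM, compared against a position-only
# baseline using AUC. All parameter values and the synthetic data are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Grid of items scanned in row-major order; hidden state: 0 = skip, 1 = fixate.
N_ITEMS = 5 * 8                   # assumed 5-column x 8-row layout
start = np.array([0.2, 0.8])      # assumed: users usually fixate the first item
trans = np.array([[0.7, 0.3],     # assumed P(next state | current state)
                  [0.4, 0.6]])
# Emission: probability of an observable browsing event (e.g. hover / click / dwell)
# at an item given the hidden fixation state. Assumed values.
emit = np.array([[0.95, 0.05],    # skip   -> event rarely observed
                 [0.60, 0.40]])   # fixate -> event observed more often

def simulate_user():
    """Sample a hidden fixation sequence and the browsing events it emits."""
    states, obs = [], []
    s = rng.choice(2, p=start)
    for _ in range(N_ITEMS):
        states.append(s)
        obs.append(rng.choice(2, p=emit[s]))
        s = rng.choice(2, p=trans[s])
    return np.array(states), np.array(obs)

def forward_posterior(obs):
    """Forward filtering: P(state_t = fixate | obs_1..t) at each grid position."""
    alpha = start * emit[:, obs[0]]
    alpha /= alpha.sum()
    post = [alpha[1]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        alpha /= alpha.sum()
        post.append(alpha[1])
    return np.array(post)

y_true, y_hmm, y_pos = [], [], []
pos_prior = np.linspace(0.8, 0.2, N_ITEMS)  # crude "earlier positions get more gaze" baseline
for _ in range(200):
    states, obs = simulate_user()
    y_true.extend(states)
    y_hmm.extend(forward_posterior(obs))
    y_pos.extend(pos_prior)

print("HMM posterior AUC:  %.3f" % roc_auc_score(y_true, y_hmm))
print("Position-only AUC:  %.3f" % roc_auc_score(y_true, y_pos))

On the synthetic data the HMM posterior, which fuses the positional prior with the observed browsing events, ranks fixated items above skipped ones more reliably than the position-only score, mirroring the browsing-data-only vs. combined-model comparison reported in the abstract (though the numbers here are artifacts of the invented parameters, not the paper's results).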


Supplemental Material

p131.mp4 (MP4, 1.2 GB)


Published in

RecSys '16: Proceedings of the 10th ACM Conference on Recommender Systems
September 2016, 490 pages
ISBN: 9781450340359
DOI: 10.1145/2959100

Copyright © 2016 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

RecSys '16 Paper Acceptance Rate: 29 of 159 submissions, 18%
Overall Acceptance Rate: 254 of 1,295 submissions, 20%
