DOI: 10.1145/3123266.3123445

research-article

The Role of Visual Attention in Sentiment Prediction

Published: 19 October 2017

ABSTRACT

Automated assessment of visual sentiment has many applications, such as monitoring social media and facilitating online advertising. In current research on automated visual sentiment assessment, images are mainly input and processed as a whole. However, human attention is biased, and a focal region with high acuity can disproportionately influence visual sentiment. To investigate how attention influences visual sentiment, we conducted experiments that reveal critical insights into human perception. We discover that negative sentiments are elicited by the focal region without a notable influence of contextual information, whereas positive sentiments are influenced by both focal and contextual information. Building on these insights, we create new deep convolutional neural networks for sentiment prediction that have additional channels devoted to encoding focal information. On two benchmark datasets, the proposed models demonstrate superior performance compared with the state-of-the-art methods. Extensive visualizations and statistical analyses indicate that the focal channels are more effective on images with focal objects, especially for images that also elicit negative sentiments.
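The page does not include the paper's model specification, so as a hedged illustration only: one common way to give a network "additional channels devoted to encoding focal information" is to append a normalized attention (saliency) map as a fourth input channel alongside RGB before the convolutional stack. The function name, shapes, and normalization below are assumptions for the sketch, not the authors' architecture.

```python
import numpy as np

def add_focal_channel(image, focal_mask):
    """Append a focal-attention map as an extra input channel.

    image:      (H, W, 3) RGB array.
    focal_mask: (H, W) attention/saliency map (arbitrary scale).
    Returns a (H, W, 4) array whose last channel is the mask
    rescaled to [0, 1], ready to feed to a CNN's first conv layer.
    (Hypothetical construction; the paper's actual encoding may differ.)
    """
    # Min-max normalize the mask; epsilon guards against a constant map.
    mask = (focal_mask - focal_mask.min()) / (np.ptp(focal_mask) + 1e-8)
    # Stack the mask behind the color channels.
    return np.concatenate([image, mask[..., None]], axis=-1)

# Toy usage with random data standing in for an image and its saliency map.
img = np.random.rand(8, 8, 3)
sal = np.random.rand(8, 8)
x = add_focal_channel(img, sal)
print(x.shape)  # (8, 8, 4)
```

A first convolution with four input channels (instead of three) then lets the network weight focal evidence separately from context, which matches the abstract's finding that focal and contextual information contribute differently to negative versus positive sentiment.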


Published in

MM '17: Proceedings of the 25th ACM International Conference on Multimedia
October 2017
2028 pages
ISBN: 9781450349062
DOI: 10.1145/3123266

Copyright © 2017 ACM

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Acceptance Rates

MM '17 Paper Acceptance Rate: 189 of 684 submissions, 28%. Overall Acceptance Rate: 995 of 4,171 submissions, 24%.
