DOI: 10.1145/3018661.3018665

Research Article | Public Access

Joint Deep Modeling of Users and Items Using Reviews for Recommendation

Published: 2 February 2017

ABSTRACT

Reviews written by users contain a large amount of information. Most current recommender systems ignore this source of information, even though it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model that learns item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in their last layers. One network learns user behaviors from the reviews written by the user, and the other learns item properties from the reviews written for the item. A shared layer introduced on top couples the two networks, enabling the latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.
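The architecture described above — two parallel text-processing towers whose outputs meet in a factorization-machine-style shared layer — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: all hyperparameters, parameter names, and the toy vocabulary are hypothetical, the weights are randomly initialized rather than trained end-to-end, and the real model uses learned word embeddings and gradient training. The sketch only shows the data flow: embed a review, apply a 1D convolution with ReLU and max-over-time pooling, project to latent factors, then combine user and item factors with a second-order FM term.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, FILTERS, WINDOW, LATENT, FM_K = 16, 8, 3, 4, 5  # toy sizes, not the paper's

# hypothetical toy vocabulary and randomly initialized word embeddings
vocab = {w: i for i, w in enumerate(
    "great battery life poor screen fast shipping broke quickly".split())}
E = rng.normal(scale=0.1, size=(len(vocab), EMB))

def tower(review_words, W, b, P):
    """One DeepCoNN-style tower: embed -> 1D conv -> ReLU -> max-pool -> project."""
    idx = [vocab[w] for w in review_words if w in vocab]
    X = E[idx]                                           # (T, EMB)
    feats = np.stack([                                   # valid conv over word windows
        np.maximum(0, X[t:t + WINDOW].ravel() @ W + b)   # ReLU activation
        for t in range(len(idx) - WINDOW + 1)])          # (T-WINDOW+1, FILTERS)
    pooled = feats.max(axis=0)                           # max-over-time pooling
    return pooled @ P                                    # latent factors (LATENT,)

# separate (untrained) parameters for the user tower and the item tower
params = {name: (rng.normal(scale=0.1, size=(WINDOW * EMB, FILTERS)),
                 np.zeros(FILTERS),
                 rng.normal(scale=0.1, size=(FILTERS, LATENT)))
          for name in ("user", "item")}

def predict(user_review, item_review, w0=3.5):
    """Shared layer: factorization machine over the concatenated factors."""
    z = np.concatenate([tower(user_review, *params["user"]),
                        tower(item_review, *params["item"])])  # (2*LATENT,)
    w = rng.normal(scale=0.1, size=z.shape)                    # linear weights
    V = rng.normal(scale=0.1, size=(z.size, FM_K))             # pairwise factors
    # second-order FM term via the standard O(k*n) identity
    pair = 0.5 * (((z @ V) ** 2) - ((z ** 2) @ (V ** 2))).sum()
    return w0 + w @ z + pair

rating = predict("great battery life fast shipping".split(),
                 "poor screen broke quickly".split())
print(round(float(rating), 3))
```

With random weights the prediction stays near the global-bias term `w0`; in the actual model every parameter shown here would be learned by minimizing a rating-prediction loss over observed (user, item, rating) triples.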


Published in:

WSDM '17: Proceedings of the Tenth ACM International Conference on Web Search and Data Mining
February 2017, 868 pages
ISBN: 9781450346757
DOI: 10.1145/3018661
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

WSDM '17 paper acceptance rate: 80 of 505 submissions (16%). Overall acceptance rate: 498 of 2,863 submissions (17%).
