DOI: 10.1145/3298689.3347031
Research article · Public Access

Adversarial attacks on an oblivious recommender

Published: 10 September 2019

ABSTRACT

Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players: an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which target the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or, more generally, the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity between the real user rating/interaction distribution and the adversarial fake user distribution. To cope with the challenge that the adversary has no access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the classic and widely used low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine-learned attacks.
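The mechanics sketched in the abstract — an oblivious low-rank (matrix-factorization) recommender retrained on data that includes injected fake profiles, and an adversary that can only query the retrained model's outputs and therefore estimates gradients with two-point zero-order queries — can be illustrated with a toy example. This is not the paper's algorithm: the tiny trainer, all dimensions, learning rates, and variable names below are hypothetical; only the general matrix-factorization setup and the two-point zero-order estimator follow the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: ratings on a 1-5 scale for n_users x n_items, rank-k model.
n_users, n_items, k = 20, 15, 3
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)

def train_low_rank(R, k, iters=50, lr=0.01, reg=0.1):
    """Fit factors U, V by gradient steps on squared error (toy trainer)."""
    rl = np.random.default_rng(1)
    U = 0.1 * rl.standard_normal((R.shape[0], k))
    V = 0.1 * rl.standard_normal((R.shape[1], k))
    for _ in range(iters):
        E = R - U @ V.T                    # residual matrix
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

def attack_objective(fake_rows, R, target_item, k):
    """Adversary's loss: negative mean predicted score of a target item for
    the real users, after the oblivious recommender retrains on real + fake
    profiles. The adversary only observes this value, never its gradient."""
    R_poisoned = np.vstack([R, fake_rows])
    U, V = train_low_rank(R_poisoned, k)
    preds = U[: R.shape[0]] @ V.T          # predictions for real users only
    return -preds[:, target_item].mean()   # push the target item's score up

def zero_order_gradient(f, x, mu=0.5, n_samples=8, rng=rng):
    """Two-point zero-order gradient estimate from function queries alone."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Optimize a single fake user profile by zero-order gradient descent.
fake = rng.uniform(1, 5, size=(1, n_items))
f = lambda x: attack_objective(x, R, target_item=0, k=k)
before = f(fake)
for _ in range(10):
    fake -= 0.5 * zero_order_gradient(f, fake)
    fake = np.clip(fake, 1.0, 5.0)  # keep the profile on the 1-5 scale
after = f(fake)
print(before, after)
```

The clipping step is a crude stand-in for the abstract's unnoticeability constraint, which in the paper is phrased as keeping the fake user distribution close to the real one; a faithful implementation would replace it with a distributional penalty.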


Published in
RecSys '19: Proceedings of the 13th ACM Conference on Recommender Systems
September 2019, 635 pages
ISBN: 9781450362436
DOI: 10.1145/3298689

      Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates
RecSys '19 paper acceptance rate: 36 of 189 submissions (19%). Overall acceptance rate: 254 of 1,295 submissions (20%).
