DOI: 10.1145/3359992.3366638

Walling up Backdoors in Intrusion Detection Systems

Published: 09 December 2019

ABSTRACT

Interest in poisoning attacks and backdoors recently resurfaced for Deep Learning (DL) applications. Several successful defense mechanisms have recently been proposed for Convolutional Neural Networks (CNNs), for example in the context of autonomous driving. We show that visualization approaches can aid in identifying a backdoor independently of the classifier used. Surprisingly, we find that common defense mechanisms fail utterly to remove backdoors in DL for Intrusion Detection Systems (IDSs). Finally, we devise pruning-based approaches to remove backdoors for Decision Trees (DTs) and Random Forests (RFs) and demonstrate their effectiveness on two different network security datasets.
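As a rough illustration of the kind of visualization the abstract refers to, the sketch below computes partial dependence curves for a fitted classifier and flags features whose averaged response swings sharply across their range, which is one way a backdoor trigger feature might stand out. Everything specific here is an assumption made for illustration: the synthetic data, the scikit-learn random forest standing in for the model under investigation, and the swing threshold are not taken from the paper, and accumulated local effects (ALE) plots would require a separate library.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import partial_dependence

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 10))           # placeholder flow features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder attack/benign labels

    # Stand-in for the (possibly backdoored) model under investigation.
    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

    # For each feature, measure how far the averaged model response moves across
    # the feature's range; a large swing on a feature that domain knowledge says
    # should be irrelevant is worth plotting and inspecting by eye.
    for feature in range(X.shape[1]):
        result = partial_dependence(clf, X, features=[feature], kind="average")
        curve = result["average"]                      # shape: (n_outputs, n_grid_points)
        swing = float(curve.max() - curve.min())
        flag = "  <-- inspect" if swing > 0.2 else ""  # arbitrary illustrative threshold
        print(f"feature {feature}: partial-dependence swing = {swing:.3f}{flag}")

A feature that should carry no signal but shows a large, abrupt swing is a natural candidate for closer manual inspection, regardless of whether the classifier behind it is an MLP, a DT, or an RF.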


          Reviews

          Shyamkumar T Iyer

          Intrusion detection systems (IDSs) detect anomalies in a network. However, backdoors inserted for profit, political gain, or other reasons can be hard to detect and may introduce security vulnerabilities. IDSs are commonly built with multilayer perceptrons (MLPs), and detecting backdoors in such systems remains an open problem. The authors' work rests on two methods: (1) an analysis of backdoors using visual tools and techniques; and (2) an analysis of defenses such as pruning and fine-tuning. They argue that, while these methods have been studied for convolutional neural networks (CNNs), comparable research on detecting backdoors in traditional MLPs, decision trees (DTs), and random forests (RFs) is lacking. For their experimental setup, the authors build backdoored RF and MLP models using PyTorch. They visualize the models with partial dependence plots (PDPs) and accumulated local effects (ALE) plots, looking for regions where the behavior of the model under investigation (MuI) is unexplainable or counterintuitive. They also study the effect of pruning and fine-tuning on their models using validation and test datasets. Based on their experiments, the authors recommend PDP and ALE plots for analyzing questionable decisions, and show how to detect unnecessary features that hint at a backdoor. They also show that the pruning and fine-tuning techniques popular for CNNs are ineffective at removing backdoors from their experimental MLP models. However, their experiments show that DTs and RFs can benefit from a validation set: used as a pruning mechanism, it can reduce backdoor efficacy without reducing the classifier's detection effectiveness. In summary, this paper provides good commentary on backdoors and highlights the need to devise new ways to detect them.
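To make the reviewer's closing point concrete, here is a minimal sketch of pruning a decision tree against a clean, trusted validation set. It uses scikit-learn's cost-complexity pruning as a stand-in for the paper's pruning-based defense, with synthetic placeholder data; the authors' actual procedure may differ.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))           # placeholder flow features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder attack/benign labels

    # The training split stands in for (possibly poisoned) training traffic;
    # the validation split stands in for a smaller set of trusted, clean traffic.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Candidate pruning strengths derived from the training set.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

    best_alpha, best_acc = 0.0, -1.0
    for alpha in path.ccp_alphas:
        tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
        acc = accuracy_score(y_val, tree.predict(X_val))
        # ">=" keeps the most aggressive pruning among ties, favoring smaller trees.
        if acc >= best_acc:
            best_alpha, best_acc = alpha, acc

    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=best_alpha).fit(X_train, y_train)
    print(f"chosen ccp_alpha={best_alpha:.4f}, clean-validation accuracy={best_acc:.3f}")

The intuition is that backdoor behavior tends to live in small, specialized subtrees that contribute little on clean data, so choosing the strongest pruning level that preserves clean-validation accuracy tends to cut it away while keeping detection performance intact.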

          • Published in

            Big-DAMA '19: Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks
            December 2019
            53 pages
            ISBN: 9781450369992
            DOI: 10.1145/3359992

            Copyright © 2019 ACM


            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 9 December 2019


            Qualifiers

            • research-article
            • Research
            • Refereed limited

            Acceptance Rates

            Big-DAMA '19 Paper Acceptance Rate: 7 of 11 submissions, 64%. Overall Acceptance Rate: 7 of 11 submissions, 64%.
