
Beyond the Attack Surface: Assessing Security Risk with Random Walks on Call Graphs

Published: 28 October 2016

Abstract

When reasoning about software security, researchers and practitioners use the phrase "attack surface" as a metaphor for risk: enumerate and minimize the ways attackers can break in, the metaphor says, and the system is better protected. But software systems are much more complicated than their surfaces. We propose function- and file-level attack surface metrics, proximity and risky walk, that enable fine-grained risk assessment. Our risky walk metric is highly configurable: we use PageRank on a probability-weighted call graph to simulate attacker behavior in finding or exploiting a vulnerability. We provide evidence-based guidance for deploying these metrics, including an extensive parameter tuning study. We conducted an empirical study on two large open source projects, FFmpeg and Wireshark, to investigate the potential correlation between our metrics and historical post-release vulnerabilities. We found our metrics to be statistically significantly associated with vulnerable functions and files, with small-to-large Cohen's d effect sizes. Our prediction model achieved an increase of 36% (in FFmpeg) and 27% (in Wireshark) in the average F2-measure over a base model built with SLOC and coupling metrics. It also outperformed comparable models from prior literature with notable improvements: a 58% reduction in false negative rate, an 81% reduction in false positive rate, and a 548% increase in F2-measure. These metrics advance vulnerability prevention by (a) being flexible in terms of granularity, (b) performing better than prior vulnerability prediction models, and (c) being tunable so that practitioners can tailor the metrics to their products and better assess security risk.
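The risky walk metric described above combines two standard ingredients: a call graph and a restart-biased (personalized) PageRank, where the random walker restarts at attack-surface entry points. The sketch below is an illustration of that general technique, not the paper's implementation; the function names, edge weights, damping factor, and entry-point set are all invented for the example.

```python
# Illustrative "risky walk": personalized PageRank over a weighted call graph.
# Toy call graph: caller -> {callee: weight}. Weights model the relative
# probability that an attacker's exploration follows each call edge.
call_graph = {
    "parse_input":  {"decode_frame": 0.7, "log_event": 0.3},
    "decode_frame": {"alloc_buffer": 1.0},
    "alloc_buffer": {},
    "log_event":    {},
}

# Attack-surface entry points: the walker restarts only at these nodes.
entry_points = {"parse_input"}

def risky_walk(graph, entries, damping=0.85, iters=100):
    nodes = list(graph)
    # Personalization vector: restart mass concentrated on entry points.
    restart = {n: (1.0 / len(entries) if n in entries else 0.0) for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Weighted inflow from every caller of n.
            inflow = 0.0
            for m in nodes:
                out = graph[m]
                total = sum(out.values())
                if n in out and total > 0:
                    inflow += rank[m] * out[n] / total
            new[n] = (1.0 - damping) * restart[n] + damping * inflow
        rank = new
    return rank

scores = risky_walk(call_graph, entry_points)
ranked = sorted(scores, key=scores.get, reverse=True)
```

In this sketch the edge weights and the damping factor play the role of the tunable parameters the abstract mentions: changing how restart mass is assigned to entry points, or how call edges are weighted, changes which functions the walk flags as risky. (Dangling functions leak rank mass here, so scores are compared by ordering rather than treated as probabilities.)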



Published In

SPRO '16: Proceedings of the 2016 ACM Workshop on Software PROtection
October 2016
100 pages
ISBN:9781450345767
DOI:10.1145/2995306
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. attack surface
  2. metric
  3. page rank
  4. risk
  5. vulnerability

Qualifiers

  • Research-article

Funding Sources

  • National Security Agency

Conference

CCS'16

Acceptance Rates

SPRO '16 Paper Acceptance Rate: 8 of 14 submissions, 57%
Overall Acceptance Rate: 8 of 14 submissions, 57%

Cited By

  • (2024) Attack Surface Measurement: A Weird Machines Perspective. Proceedings of the 2024 European Interdisciplinary Cybersecurity Conference, pages 90-94. DOI: 10.1145/3655693.3655705. Online publication date: 5-Jun-2024.
  • (2024) Call Graph Delta Analysis and Security Vulnerability Assessment with Static Analysis. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), pages 2412-2417. DOI: 10.1109/COMPSAC61105.2024.00387. Online publication date: 2-Jul-2024.
  • (2024) A Survey on Secure Refactoring. SN Computer Science, 5(7). DOI: 10.1007/s42979-024-03325-y. Online publication date: 12-Oct-2024.
  • (2022) Autonomous Driving Security: State of the Art and Challenges. IEEE Internet of Things Journal, 9(10):7572-7595. DOI: 10.1109/JIOT.2021.3130054. Online publication date: 15-May-2022.
  • (2021) A hierarchical model for quantifying software security based on static analysis alerts and software metrics. Software Quality Journal. DOI: 10.1007/s11219-021-09555-0. Online publication date: 15-May-2021.
  • (2020) Enhanced Bug Prediction in JavaScript Programs with Hybrid Call-Graph Based Invocation Metrics. Technologies, 9(1):3. DOI: 10.3390/technologies9010003. Online publication date: 30-Dec-2020.
  • (2020) Technical debt as an indicator of software security risk: a machine learning approach for software development enterprises. Enterprise Information Systems, 16(5). DOI: 10.1080/17517575.2020.1824017. Online publication date: 24-Sep-2020.
  • (2019) Data-driven insights from vulnerability discovery metrics. Proceedings of the Joint 4th International Workshop on Rapid Continuous Software Engineering and 1st International Workshop on Data-Driven Decisions, Experimentation and Evolution, pages 1-7. DOI: 10.1109/RCoSE/DDrEE.2019.00008. Online publication date: 27-May-2019.
  • (2018) Assisted discovery of software vulnerabilities. Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings, pages 464-467. DOI: 10.1145/3183440.3183453. Online publication date: 27-May-2018.
  • (2018) BP: Profiling Vulnerabilities on the Attack Surface. 2018 IEEE Cybersecurity Development (SecDev), pages 110-119. DOI: 10.1109/SecDev.2018.00022. Online publication date: Sep-2018.
