ABSTRACT
Recent years have seen a trend towards quantitative security assessment and the use of empirical methods to analyze or predict vulnerable components. Many papers have focused on vulnerability discovery models based upon either public vulnerability databases (e.g., CVE, NVD) or vendor ones (e.g., MFSA); some combine both kinds of database. Most of these works address a knowledge problem: can we understand the empirical causes of vulnerabilities? Can we predict them? Yet if the data sources do not fully capture the phenomenon we want to predict, our predictor might be optimal with respect to the data we have but unsatisfactory in practice.
In our work, we focus on a more fundamental question: the quality of vulnerability databases. We provide an analytical comparison of different security metric papers and their respective data sources. We also show, based on experimental data for Mozilla Firefox, how using different data sources might lead to completely different results.
- O. Alhazmi and Y. Malaiya. Modeling the vulnerability discovery process. In Proc. of ISSRE'05, pages 129--138, 2005.
- O. Alhazmi and Y. Malaiya. Application of vulnerability discovery models to major operating systems. IEEE Trans. on Reliab., 57(1):14--22, 2008.
- O. Alhazmi, Y. Malaiya, and I. Ray. Measuring, analyzing and predicting security vulnerabilities in software systems. Comp. & Sec., 26(3):219--228, 2007.
- R. Anderson. Security in open versus closed systems -- the dance of Boltzmann, Coase and Moore. In Proc. of Open Source Soft.: Economics, Law and Policy, 2002.
- C. Catal and B. Diri. A systematic review of software fault prediction studies. Expert Sys. with App., 36(4):7346--7354, 2009.
- I. Chowdhury and M. Zulkernine. Using complexity, coupling, and cohesion metrics as early predictors of vulnerabilities. J. of Soft. Arch., 2010.
- S. Frei, T. Duebendorfer, and B. Plattner. Firefox (in)security update dynamics exposed. ACM SIGCOMM Comp. Comm. Rev., 39(1):16--22, 2009.
- M. Gegick, P. Rotella, and L. Williams. Toward non-security failures as a predictor of security faults and failures. Eng. Secure Soft. and Sys., 5429:135--149, 2009.
- M. Gegick, P. Rotella, and L. A. Williams. Predicting attack-prone components. In Proc. of IEEE ICST'09, pages 181--190, 2009.
- L. A. Gordon and M. P. Loeb. Managing Cybersecurity Resources: A Cost-Benefit Analysis. McGraw-Hill, 2006.
- A. Jaquith. Security Metrics: Replacing Fear, Uncertainty, and Doubt. Addison-Wesley Professional, 2007.
- Y. Jiang, B. Cukic, T. Menzies, and N. Bartlow. Comparing design and code metrics for software quality prediction. In Proc. of PROMISE'08, pages 11--18. ACM, 2008.
- P. Manadhata, J. Wing, M. Flynn, and M. McQueen. Measuring the attack surfaces of two FTP daemons. In Proc. of QoP'06, 2006.
- A. Meneely and L. Williams. Secure open source collaboration: An empirical study of Linus' law. In Proc. of CCS'09, 2009.
- T. Menzies, J. Greenwald, and A. Frank. Data mining static code attributes to learn defect predictors. TSE, 33(1):2--13, 2007.
- N. Nagappan and T. Ball. Use of relative code churn measures to predict system defect density. In Proc. of ICSE'05, pages 284--292, 2005.
- S. Neuhaus, T. Zimmermann, C. Holler, and A. Zeller. Predicting vulnerable software components. In Proc. of CCS'07, pages 529--540, October 2007.
- H. M. Olague, S. Gholston, and S. Quattlebaum. Empirical validation of three software metrics suites to predict fault-proneness of object-oriented classes developed using highly iterative or agile software development processes. TSE, 33(6):402--419, 2007.
- A. Ozment. The likelihood of vulnerability rediscovery and the social utility of vulnerability hunting. In Proc. of the 4th Annual Workshop on Economics and Inform. Sec. (WEIS'05), 2005.
- A. Ozment. Software security growth modeling: Examining vulnerabilities with reliability growth models. In Proc. of QoP'06, 2006.
- A. Ozment and S. E. Schechter. Milk or wine: Does software security improve with age? In Proc. of USENIX'06, 2006.
- E. Rescorla. Is finding security holes a good idea? IEEE Sec. and Privacy, 3(1):14--19, 2005.
- Y. Shin and L. Williams. An empirical model to predict security vulnerabilities using code complexity metrics. In Proc. of ESEM'08, 2008.
- Y. Shin and L. Williams. Is complexity really the enemy of software security? In Proc. of QoP'08, pages 47--50, 2008.
- J. Sliwerski, T. Zimmermann, and A. Zeller. When do changes induce fixes? In Proc. of the 2nd Int. Working Conf. on Mining Soft. Repo. (MSR'05), pages 24--28, May 2005.
- H. Zhang and X. Zhang. Comments on "Data mining static code attributes to learn defect predictors". TSE, 33(9):635--637, 2007.
- H. Zhang, X. Zhang, and M. Gu. Predicting defective software components from code complexity measures. In Proc. of PRDC'07, pages 93--96, 2007.
- T. Zimmermann and N. Nagappan. Predicting defects with program dependencies. In Proc. of ESEM'09, 2009.
- T. Zimmermann, R. Premraj, and A. Zeller. Predicting defects for Eclipse. In Proc. of PROMISE'07, page 9. IEEE Computer Society, 2007.
- T. Zimmermann and P. Weißgerber. Preprocessing CVS data for fine-grained analysis. In Proc. of the 1st Int. Working Conf. on Mining Soft. Repo. (MSR'04), pages 2--6, May 2004.
Index Terms
- Which is the right source for vulnerability studies?: an empirical analysis on Mozilla Firefox