Research article
DOI: 10.1145/1853919.1853925

Which is the right source for vulnerability studies?: an empirical analysis on Mozilla Firefox

Published: 15 September 2010

ABSTRACT

Recent years have seen a trend towards quantitative security assessment and the use of empirical methods to analyze or predict vulnerable components. Many papers have focused on vulnerability discovery models built on either public vulnerability databases (e.g., CVE, NVD) or vendor-specific ones (e.g., MFSA); some combine the two. Most of these works address a knowledge problem: can we understand the empirical causes of vulnerabilities? Can we predict them? Yet if the data sources do not completely capture the phenomenon we want to predict, our predictor might be optimal with respect to the data we have but unsatisfactory in practice.
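For background on the models the abstract refers to: a commonly used vulnerability discovery model in this literature is the Alhazmi-Malaiya logistic model, which describes the cumulative number of vulnerabilities reported by time t as Omega(t) = B / (B*C*exp(-A*B*t) + 1), where B estimates the eventual total and A and C are shape parameters. The following sketch only illustrates how such a model could be fitted to dated database entries; the monthly counts are synthetic, not real Firefox data.

# Illustrative sketch: fitting the Alhazmi-Malaiya logistic (AML)
# vulnerability discovery model to cumulative vulnerability counts.
# The counts below are synthetic, not taken from any real database.
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    # Omega(t) = B / (B*C*exp(-A*B*t) + 1); B is the estimated total
    # number of vulnerabilities, A and C control the curve's shape.
    return B / (B * C * np.exp(-A * B * t) + 1)

t = np.arange(1, 25)  # months since release (hypothetical timeline)
cumulative = np.cumsum(np.random.default_rng(0).poisson(3, size=t.size))

p0 = [0.001, 2.0 * cumulative[-1], 1.0]  # rough starting guesses
(A, B, C), _ = curve_fit(aml, t, cumulative, p0=p0, maxfev=10000)
print(f"Fitted AML parameters: A={A:.4g}, B={B:.4g}, C={C:.4g}")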

In our work, we focus on a more fundamental question: the quality of the vulnerability databases themselves. We provide an analytical comparison of several security-metrics papers and their respective data sources. We also show, based on experimental data for Mozilla Firefox, how using different data sources can lead to completely different results.
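To make the data-source question concrete, here is a minimal sketch of one way such a comparison could be set up: counting, per Firefox release, how many vulnerabilities appear in a vendor feed (MFSA) versus a public database (NVD), and how many overlap. The input files, column names, and the use of CVE identifiers as the join key are assumptions made for illustration; the paper does not prescribe this exact procedure.

# Hypothetical sketch: cross-referencing per-release vulnerability
# counts from a vendor source (MFSA) and a public one (NVD).
# File names and columns ('release', 'cve') are assumptions.
import csv
from collections import defaultdict

def load(path):
    # Map each release to the set of CVE ids attributed to it.
    by_release = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_release[row["release"]].add(row["cve"])
    return by_release

mfsa = load("mfsa.csv")  # hypothetical export of Mozilla advisories
nvd = load("nvd.csv")    # hypothetical export of NVD Firefox entries

for release in sorted(mfsa.keys() | nvd.keys()):
    m, n = mfsa[release], nvd[release]
    print(f"{release}: MFSA={len(m)} NVD={len(n)} overlap={len(m & n)}")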



Published in

MetriSec '10: Proceedings of the 6th International Workshop on Security Measurements and Metrics
September 2010
78 pages
ISBN: 9781450303408
DOI: 10.1145/1853919

Copyright © 2010 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery
New York, NY, United States

