DOI: 10.1145/2483760.2483787 · ISSTA Conference Proceedings · Research article

Semi-valid input coverage for fuzz testing

Published: 15 July 2013

ABSTRACT

We define semi-valid input coverage (SVCov), the first coverage criterion for fuzz testing. Our criterion is applicable whenever the valid inputs can be defined by a finite set of constraints. SVCov measures to what extent the tests cover the domain of semi-valid inputs, where an input is semi-valid if and only if it satisfies all the constraints but one.

We demonstrate SVCov's practical value in a case study on fuzz testing the Internet Key Exchange protocol (IKE). Our study shows that it is feasible to precisely define and efficiently measure SVCov. Moreover, SVCov provides essential information for improving the effectiveness of fuzz testing and enhancing fuzz-testing tools and libraries. In particular, by increasing coverage under SVCov, we have discovered a previously unknown vulnerability in a mature IKE implementation.
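As a rough illustration of the criterion as defined above (a hypothetical sketch, not the paper's tooling), suppose the valid inputs are characterized by a finite set of named constraint predicates. An input is semi-valid iff it violates exactly one constraint, and SVCov can be read as the fraction of constraints that some semi-valid test singly violates. All names below are illustrative:

```python
# Hypothetical sketch of semi-valid input coverage (SVCov); the names and
# the toy message format are illustrative, not taken from the paper.

def violated(constraints, value):
    """Return the set of constraint names that `value` fails."""
    return {name for name, check in constraints.items() if not check(value)}

def svcov(constraints, inputs):
    """Fraction of constraints covered by at least one semi-valid input.

    An input is semi-valid iff it violates exactly one constraint; that
    constraint is then counted as covered.
    """
    covered = set()
    for value in inputs:
        bad = violated(constraints, value)
        if len(bad) == 1:  # semi-valid: all constraints but one hold
            covered |= bad
    return len(covered) / len(constraints)

# Toy message format: a dict with a length field and a version field.
constraints = {
    "length-matches": lambda m: m["length"] == len(m["payload"]),
    "version-is-1":   lambda m: m["version"] == 1,
}

tests = [
    {"length": 3,  "payload": "abc", "version": 1},  # valid: nothing violated
    {"length": 99, "payload": "abc", "version": 1},  # semi-valid: bad length only
    {"length": 99, "payload": "abc", "version": 7},  # invalid: two violations
]
print(svcov(constraints, tests))  # 0.5: only "length-matches" singly violated
```

Under this reading, the fully valid and the doubly invalid inputs contribute nothing to coverage; only the second test, which breaks exactly one constraint, covers its violated constraint.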


Published in

ISSTA 2013: Proceedings of the 2013 International Symposium on Software Testing and Analysis
July 2013, 381 pages
ISBN: 9781450321594
DOI: 10.1145/2483760
Copyright © 2013 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 16 of 69 submissions (23%)
