DOI: 10.1145/2786805.2786844
research-article

Turning programs against each other: high coverage fuzz-testing using binary-code mutation and dynamic slicing

Published: 30 August 2015

ABSTRACT

Mutation-based fuzzing is a popular and widely employed black-box testing technique for finding security and robustness bugs in software. It owes much of its success to its simplicity; a well-formed seed input is mutated, e.g. through random bit-flipping, to produce test inputs. While reducing the need for human effort, and enabling security testing even of closed-source programs with undocumented input formats, the simplicity of mutation-based fuzzing comes at the cost of poor code coverage. Often millions of iterations are needed, and the results are highly dependent on configuration parameters and the choice of seed inputs. In this paper we propose a novel method for automated generation of high-coverage test cases for robustness testing. Our method is based on the observation that, even for closed-source programs with proprietary input formats, an implementation that can generate well-formed inputs to the program is typically available. By systematically mutating the program code of such generating programs, we leverage information about the input format encoded in the generating program to produce high-coverage test inputs, capable of reaching deep states in the program under test. Our method works entirely at the machine-code level, enabling use-cases similar to traditional black-box fuzzing. We have implemented the method in our tool MutaGen, and evaluated it on 7 popular Linux programs. We found that, for most programs, our method improves code coverage by one order of magnitude or more, compared to two well-known mutation-based fuzzers. We also found a total of 8 unique bugs.
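
As background for the baseline technique the abstract contrasts against, the sketch below illustrates conventional mutation-based fuzzing: a well-formed seed input is mutated through random bit-flipping to produce test inputs. This is only an illustrative Python sketch of that generic idea, not of the MutaGen approach itself; the file names, flip probability, and helper function are assumptions made for the example.

    import random

    def mutate_seed(seed: bytes, flip_prob: float = 0.01) -> bytes:
        """Return a test input made by randomly flipping bits in a well-formed seed."""
        data = bytearray(seed)
        for i in range(len(data)):
            for bit in range(8):
                if random.random() < flip_prob:
                    data[i] ^= 1 << bit  # flip a single bit of this byte
        return bytes(data)

    # Illustrative usage: derive a few fuzzed variants from one (hypothetical) seed file.
    if __name__ == "__main__":
        with open("seed.input", "rb") as f:
            seed = f.read()
        for n in range(5):
            with open(f"testcase_{n}.input", "wb") as out:
                out.write(mutate_seed(seed))

Because the mutations are applied blindly to the bytes of the seed, most variants violate the input format early and exercise only shallow parsing code, which is the coverage limitation the paper's program-mutation approach targets.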


Published in:
ESEC/FSE 2015: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering
August 2015, 1068 pages
ISBN: 9781450336758
DOI: 10.1145/2786805
Copyright © 2015 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates
Overall Acceptance Rate: 112 of 543 submissions, 21%
