Research article
DOI: 10.1145/2642937.2642986

Automated unit test generation for classes with environment dependencies

Published: 15 September 2014

ABSTRACT

Automated test generation for object-oriented software typically consists of producing sequences of calls aiming at high code coverage. In practice, the success of this process can be inhibited when classes interact with their environment, such as the file system, the network, or user interactions. This leads to two major problems. First, code that depends on the environment sometimes cannot be fully covered simply by generating sequences of calls to a class under test, for example when the execution of a branch depends on the contents of a file. Second, even if environment-dependent code can be covered, the resulting tests may be unstable: they pass when first generated, but may fail when executed in a different environment. For example, tests on classes that use the system time may have failing assertions if the tests are executed at a different time than when they were generated.
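To make these two problems concrete, consider the following hypothetical class (our own illustration, not code from the paper). Both methods depend on the environment rather than on their parameters, so no sequence of calls by itself can control which branch executes, and any assertion on timeOfDayGreeting() captured at generation time may fail later:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.LocalTime;

    public class GreetingService {

        // Branch depends on the file system: a test generator cannot
        // reach the "true" branch unless greeting.txt happens to exist
        // with suitable contents in the test execution directory.
        public String readGreeting() throws IOException {
            Path path = Paths.get("greeting.txt");
            if (Files.exists(path)) {
                return new String(Files.readAllBytes(path));
            }
            return "hello";
        }

        // Result depends on the system clock: a test generated in the
        // morning would assert "Good morning" and fail in the afternoon.
        public String timeOfDayGreeting() {
            return LocalTime.now().getHour() < 12 ? "Good morning"
                                                  : "Good afternoon";
        }
    }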

In this paper, we apply bytecode instrumentation to automatically separate code from its environmental dependencies, and we extend the EvoSuite Java test generation tool so that it can explicitly set the state of the environment as part of the sequences of calls it generates. Using a prototype implementation, which handles a wide range of environmental interactions such as the file system, console input, and many non-deterministic functions of the Java virtual machine (JVM), we performed experiments on 100 Java projects randomly selected from SourceForge (the SF100 corpus). The results show significantly improved code coverage, in some cases on the order of +80% to +90%. Furthermore, our techniques reduce the number of unstable tests by more than 50%.
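As a rough sketch of the underlying idea (the MockClock class and the rewriting shown are our own simplification, not EvoSuite's actual runtime API): the bytecode instrumentation redirects environment-facing calls, such as System.currentTimeMillis(), to a controllable substitute, whose state the test generator can then set as an ordinary statement in the generated sequence:

    // Hypothetical controllable replacement for System.currentTimeMillis().
    final class MockClock {
        private static long currentTime = 0L;

        private MockClock() {}

        // Called from the generated test to set the environment state.
        static void setCurrentTimeMillis(long millis) {
            currentTime = millis;
        }

        // Called by instrumented code instead of System.currentTimeMillis().
        static long currentTimeMillis() {
            return currentTime;
        }
    }

    // Conceptually, the instrumentation replaces the bytecode instruction
    //     INVOKESTATIC java/lang/System.currentTimeMillis()J
    // with
    //     INVOKESTATIC MockClock.currentTimeMillis()J
    // so that a generated test can fix the environment state first
    // (assuming calls like LocalTime.now() are also routed through the mock):
    //
    //     MockClock.setCurrentTimeMillis(28_800_000L);   // 08:00 UTC
    //     GreetingService service = new GreetingService();
    //     String greeting = service.timeOfDayGreeting(); // now deterministic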


Published in

ASE '14: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering
September 2014, 934 pages
ISBN: 9781450330138
DOI: 10.1145/2642937

Copyright © 2014 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ASE '14 paper acceptance rate: 82 of 337 submissions (24%). Overall acceptance rate: 82 of 337 submissions (24%).
