
Test coverage and impact analysis for detecting refactoring faults: a study on the extract method refactoring

Published: 13 April 2015 · DOI: 10.1145/2695664.2695811

ABSTRACT

Refactoring validation by automated testing is a common practice in agile development processes. However, this practice can be misleading when the test suite is not adequate; in particular, refactoring faults can be subtle and difficult to detect. While coverage analysis is a standard way to evaluate a test suite's fault detection capability, coverage and fault detection usually show low correlation. In this paper, we present an exploratory study on coverage of refactoring-impacted code, aimed at identifying shortcomings of test suites, focusing on the Extract Method refactoring. We consider three open-source projects and their test suites. The results show that, in most cases, the lack of test cases calling the method changed by the refactoring increases the chance of missing faults. Likewise, a high proportion of the test cases that do not cover the callers of that method fail to reveal the fault. Additional analysis of branch coverage on the test cases exercising impacted elements shows a higher chance of detecting a fault when branch coverage is also high. It seems reasonable to conclude that combining impact analysis with branch coverage could be highly effective in detecting faults introduced by Extract Method.
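To make the failure mode concrete, here is a minimal hypothetical sketch in Java (the class, method names, and values are illustrative, not taken from the paper or the studied projects) of an Extract Method edit that introduces a boundary fault which a suite with low coverage of the impacted method and its branches would miss.

```java
public class ExtractMethodFaultSketch {

    // Original method, before refactoring:
    //   static double total(double[] prices, double discount) {
    //       double sum = 0;
    //       for (double p : prices) sum += p;
    //       if (sum > 100) sum -= discount;  // discount only strictly above 100
    //       return sum;
    //   }

    // After Extract Method, with a fault slipped into the extracted code:
    static double total(double[] prices, double discount) {
        double sum = 0;
        for (double p : prices) sum += p;
        return applyDiscount(sum, discount);
    }

    // Extracted method: the guard was mistyped as >=, silently changing
    // behavior only when sum is exactly 100.
    static double applyDiscount(double sum, double discount) {
        return sum >= 100 ? sum - discount : sum;
    }

    public static void main(String[] args) {
        // Tests that call total() but never drive sum to the 100 boundary
        // pass both before and after the edit, so the fault goes unnoticed.
        System.out.println(total(new double[]{30, 40}, 5));  // 70.0, as before
        System.out.println(total(new double[]{80, 40}, 5));  // 115.0, as before
        // Only a branch-boundary input on the impacted method reveals it:
        System.out.println(total(new double[]{60, 40}, 5));  // 95.0, was 100.0
    }
}
```

The sketch mirrors the study's finding: only a test that exercises the refactored method (or its callers) and also covers the changed branch exposes the fault, which is why impact analysis combined with branch coverage is more revealing than overall coverage alone.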


Published in

SAC '15: Proceedings of the 30th Annual ACM Symposium on Applied Computing
April 2015, 2418 pages
ISBN: 9781450331968
DOI: 10.1145/2695664
Copyright © 2015 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article

Acceptance Rates

SAC '15 paper acceptance rate: 291 of 1,211 submissions, 24%. Overall acceptance rate: 1,650 of 6,669 submissions, 25%.
