DOI: 10.1145/1390630.1390639 (ISSTA conference proceedings, research article)

Using sensitivity analysis to create simplified economic models for regression testing

Published: 20 July 2008

ABSTRACT

Software engineering methodologies are subject to complex cost-benefit tradeoffs. Economic models can help practitioners and researchers assess methodologies relative to these tradeoffs. Effective economic models, however, can be established only through an iterative process of refinement involving analytical and empirical methods. Sensitivity analysis provides one such method. By identifying the factors that matter most to a model, sensitivity analysis can help simplify that model; it can also identify factors that must be measured with care, leading to guidelines for better test strategy definition and application. In prior work we presented the first comprehensive economic model for the regression testing process, which captures both the cost and benefit factors relevant to that process while supporting evaluation of regression testing techniques across entire system lifetimes. In this work we use sensitivity analysis to examine our model analytically and to assess the factors that are most important to it. Based on the results of that analysis, we propose two new models of increasing simplicity. We assess these models empirically, using data obtained by applying regression testing techniques to several non-trivial software systems. Our results show that one of the simplified models assesses the relationships between techniques in the same way as the full model.
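The core idea behind using sensitivity analysis for model simplification, ranking a model's factors by how strongly a perturbation in each one swings the model's output, then treating low-impact factors as candidates for removal, can be sketched with a simple one-at-a-time (OAT) analysis. The benefit function and factor names below are hypothetical stand-ins for illustration only; they are not the paper's lifetime cost-benefit model, which involves many more factors and supports more rigorous global sensitivity methods.

```python
# One-at-a-time (OAT) sensitivity sketch for a toy regression-testing
# cost-benefit model. All factor names and the benefit function are
# hypothetical assumptions, not the model from the paper.

def benefit(f):
    # Toy cost-benefit: value of faults caught by regression testing,
    # minus the costs of executing and analyzing the test suite.
    return (f["faults_detected"] * f["cost_per_fault"]
            - f["test_execution_cost"]
            - f["analysis_cost"])

def oat_sensitivity(model, baseline, delta=0.10):
    """Perturb each factor by +/- delta (relative) while holding the
    others fixed, and rank factors by the resulting output swing."""
    swings = {}
    for name, value in baseline.items():
        lo = dict(baseline, **{name: value * (1 - delta)})
        hi = dict(baseline, **{name: value * (1 + delta)})
        swings[name] = abs(model(hi) - model(lo))
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

baseline = {
    "faults_detected": 12.0,       # expected faults caught per release
    "cost_per_fault": 500.0,       # cost of a fault escaping to the field
    "test_execution_cost": 800.0,  # machine/engineer time to run the suite
    "analysis_cost": 300.0,        # time to inspect test results
}

for factor, swing in oat_sensitivity(benefit, baseline):
    print(f"{factor}: {swing:.1f}")
```

In this toy instance the two fault-related factors dominate the output swing, so a simplified model could drop or coarsely estimate the low-impact cost terms; the dominant factors are the ones that must be measured with care.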


Published in
ISSTA '08: Proceedings of the 2008 International Symposium on Software Testing and Analysis
July 2008, 324 pages
ISBN: 9781605580500
DOI: 10.1145/1390630
Copyright © 2008 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

Overall acceptance rate: 58 of 213 submissions (27%)
