Building an infrastructure to support experimentation with software testing techniques

Published: 01 September 2004

Abstract

Experimentation is necessary to advance research on software testing, but such experimentation cannot proceed without infrastructure to support it. To address this problem, we have been designing and constructing infrastructure to support controlled experimentation with software testing techniques. This position paper describes our efforts and the challenges we have faced in creating that infrastructure and making it available.

