DOI: 10.1145/2591708.2591757
Research article

Do student programmers all tend to write the same software tests?

Published: 21 June 2014

ABSTRACT

While many educators have added software testing practices to their programming assignments, assessing the effectiveness of student-written tests using statement coverage or branch coverage has limitations. Although researchers have begun investigating alternative approaches to assessing student-written tests, this paper reports on an investigation of the quality of student-written tests in terms of the number of authentic, human-written defects those tests can detect. An experiment was conducted using 101 programs written for a CS2 data structures assignment in which students implemented a queue two ways, using both an array-based and a link-based representation. Students were required to write their own software tests and were graded in part on the branch coverage they achieved. Using techniques from prior work, we were able to approximate the number of bugs present in the collection of student solutions and identify which of these were detected by each student-written test suite. The results indicate that, while students achieved an average branch coverage of 95.4% on their own solutions, their test suites were only able to detect an average of 13.6% of the faults present in the entire program population. Further, there was a high degree of similarity among 90% of the student test suites. Analysis of the suites suggests that students were following naïve, "happy path" testing: writing basic test cases covering mainstream expected behavior rather than writing tests designed to detect hidden bugs. These results suggest that educators should strive to reinforce test design techniques intended to find bugs, rather than simply confirming that features work as expected.
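To illustrate the distinction the abstract draws between "happy path" tests and tests designed to find bugs, the following JUnit sketch contrasts the two styles for a queue assignment like the one studied. This is only an illustrative assumption, not code from the study: the class name QueueTests is hypothetical, and java.util.ArrayDeque stands in for a student-written queue so the example is runnable as given.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.util.ArrayDeque;
    import java.util.Queue;

    import org.junit.Test;

    public class QueueTests {

        // In the study, the object under test would be the student's own
        // array-based or link-based queue; java.util.ArrayDeque is used
        // here only as a runnable stand-in.
        private Queue<Integer> makeQueue() {
            return new ArrayDeque<>();
        }

        // "Happy path" style: exercises only mainstream expected behavior.
        // Tests like this can still reach high branch coverage.
        @Test
        public void enqueueThenDequeueReturnsSameElement() {
            Queue<Integer> q = makeQueue();
            q.add(1);
            assertEquals(Integer.valueOf(1), q.remove());
            assertTrue(q.isEmpty());
        }

        // Defect-targeting style: probes a boundary where array-based
        // implementations commonly hide bugs -- index wrap-around after
        // the backing array fills and is partially drained.
        @Test
        public void fifoOrderSurvivesFillDrainAndRefill() {
            Queue<Integer> q = makeQueue();
            for (int i = 0; i < 10; i++) {      // fill past a typical initial capacity
                q.add(i);
            }
            for (int i = 0; i < 5; i++) {       // partially drain from the front
                assertEquals(Integer.valueOf(i), q.remove());
            }
            for (int i = 10; i < 15; i++) {     // refill so internal indices must wrap
                q.add(i);
            }
            for (int i = 5; i < 15; i++) {      // FIFO order must survive the wrap
                assertEquals(Integer.valueOf(i), q.remove());
            }
            assertTrue(q.isEmpty());
        }
    }

A happy-path suite like the first test can cover most branches of a straightforward implementation, while the second test targets a specific class of off-by-one and index-arithmetic defects that coverage alone does not expose.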

References

  1. K. Aaltonen, P. Ihantola, and O. Seppälä. 2010. Mutation analysis vs. code coverage in automated assessment of students' testing skills. In Proc. ACM Int'l Conf. on Object Oriented Prog. Sys. Languages and Applications Companion (SPLASH '10). ACM Press, pp. 153--160.
  2. S.H. Edwards. 2003. Improving student performance by evaluating how well students test their own programs. Journal on Educational Resources in Computing, 3(3): Article 1.
  3. S.H. Edwards. 2004. Using software testing to move students from trial-and-error to reflection-in-action. In Proc. 35th SIGCSE Tech. Symp. Computer Science Education. ACM Press, pp. 26--30.
  4. S.H. Edwards and M.A. Pérez-Quiñones. 2007. Experiences using test-driven development with an automated grader. J. Comput. Small Coll., 22(3): 44--50, January 2007.
  5. S.H. Edwards, Z. Shams, M. Cogswell, and R.C. Senkbeil. 2012. Running students' software tests against each others' code: New life for an old "gimmick". In Proc. 43rd ACM Tech. Symp. Computer Science Education. ACM Press, pp. 221--226.
  6. M.H. Goldwasser. 2002. A gimmick to integrate software testing throughout the curriculum. In Proc. 33rd SIGCSE Tech. Symp. Computer Science Education. ACM Press, pp. 271--275.
  7. D. Jackson and M. Usher. 1997. Grading student programs using ASSYST. In Proc. 28th SIGCSE Tech. Symp. Computer Science Education. ACM Press, pp. 335--339.
  8. E.L. Jones. 2000. Software testing in the computer science curriculum: a holistic approach. In Proc. Australasian Computing Education Conf. ACM Press, pp. 153--157.
  9. E.L. Jones. 2001. Integrating testing into the curriculum: arsenic in small doses. In Proc. 32nd SIGCSE Tech. Symp. Computer Science Education. ACM Press, pp. 337--341.
  10. G.J. Myers, C. Sandler, and T. Badgett. 2011. The Art of Software Testing, 3rd Ed. Wiley.
  11. Z. Shams and S.H. Edwards. 2013. Toward practical mutation analysis for evaluating the quality of student-written software tests. In Proc. 9th Ann. Int'l ACM Conf. Int'l Computing Education Research. ACM Press, pp. 53--58.
  12. J. Spacco and W. Pugh. 2006. Helping students appreciate test-driven development (TDD). In Companion to the 21st ACM SIGPLAN Symp. Object-Oriented Prog. Sys., Languages, and Applications. ACM Press, pp. 907--913.

• Published in

  ITiCSE '14: Proceedings of the 2014 conference on Innovation & technology in computer science education
  June 2014, 378 pages
  ISBN: 9781450328333
  DOI: 10.1145/2591708

          Copyright © 2014 ACM


  Publisher: Association for Computing Machinery, New York, NY, United States



  Acceptance Rates

  ITiCSE '14 paper acceptance rate: 36 of 164 submissions, 22%. Overall acceptance rate: 552 of 1,613 submissions, 34%.
