DOI: 10.1145/3291279.3339416

Executable Examples for Programming Problem Comprehension

Published: 30 July 2019

Abstract

Flawed problem comprehension leads students to produce flawed implementations. However, testing alone is inadequate for checking comprehension: if a student develops both their tests and their implementation under the same misunderstanding, running the tests against the implementation will not reveal the issue. As a solution, some pedagogies encourage the creation of input-output examples independent of testing, but they seldom provide students with any mechanism to check that their examples are correct and thorough.
We propose a mechanism that provides students with instant feedback on their examples, independent of their implementation progress. We assess the impact of such an interface in an introductory programming course and find several positive impacts, some more neutral outcomes, and no identified negative effects.
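The mechanism the abstract describes can be sketched in miniature: run a student's input-output examples against a known-correct implementation (to check that the examples are correct) and against deliberately buggy variants (to check that they are thorough). The problem, function names, and scoring below are hypothetical illustrations of this idea, not taken from the paper, whose tool operates on Pyret programs.

```python
# Hypothetical sketch of example assessment for a "median of three" problem.
# Examples are validated against a correct implementation and scored for
# thoroughness by how many buggy variants they reject.

def correct(a, b, c):
    # Median of three numbers.
    return sorted([a, b, c])[1]

def buggy_returns_max(a, b, c):
    return max(a, b, c)

def buggy_returns_min(a, b, c):
    return min(a, b, c)

def assess(examples, reference, mutants):
    """examples: list of ((args), expected_output) pairs."""
    # An example is correct if the reference implementation agrees with it.
    valid = all(reference(*args) == want for args, want in examples)
    # An example suite is thorough to the extent that it catches buggy
    # variants, i.e. some example disagrees with each variant's output.
    caught = sum(
        any(m(*args) != want for args, want in examples) for m in mutants
    )
    return valid, caught

examples = [((1, 2, 3), 2), ((5, 5, 1), 5)]
valid, caught = assess(examples, correct, [buggy_returns_max, buggy_returns_min])
# valid is True; both buggy variants disagree with ((1, 2, 3), 2), so caught is 2.
```

Crucially, this feedback requires no student implementation at all, which is what lets it check comprehension before coding begins.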




Published In

ICER '19: Proceedings of the 2019 ACM Conference on International Computing Education Research
July 2019
375 pages
ISBN:9781450361859
DOI:10.1145/3291279

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. automated assessment
  2. examplar
  3. examples
  4. testing

Qualifiers

  • Research-article

Conference

ICER '19

Acceptance Rates

ICER '19 Paper Acceptance Rate: 28 of 137 submissions (20%)
Overall Acceptance Rate: 189 of 803 submissions (24%)



Article Metrics

  • Downloads (last 12 months): 64
  • Downloads (last 6 weeks): 8
Reflects downloads up to 20 Feb 2025


Cited By

View all
  • (2024) Refute Questions for Concrete, Cluttered Specifications. Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 2, 539-540. DOI: 10.1145/3632621.3671427. Online publication date: 12-Aug-2024.
  • (2024) Probeable Problems for Beginner-level Programming-with-AI Contests. Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1, 166-176. DOI: 10.1145/3632620.3671108. Online publication date: 12-Aug-2024.
  • (2024) Regulatory Strategies for Novice Programming Students. Computer Supported Education, 136-159. DOI: 10.1007/978-3-031-53656-4_7. Online publication date: 15-Feb-2024.
  • (2023) ProofBuddy: A Proof Assistant for Learning and Monitoring. Electronic Proceedings in Theoretical Computer Science 382, 1-21. DOI: 10.4204/EPTCS.382.1. Online publication date: 14-Aug-2023.
  • (2023) A Model of How Students Engineer Test Cases With Feedback. ACM Transactions on Computing Education 24, 1, 1-31. DOI: 10.1145/3628604. Online publication date: 20-Oct-2023.
  • (2023) Creating Thorough Tests for AI-Generated Code is Hard. Proceedings of the 16th Annual ACM India Compute Conference, 108-111. DOI: 10.1145/3627217.3627238. Online publication date: 9-Dec-2023.
  • (2023) Bug-eecha 2.0: An Educational Game for CS1 Students and Instructors. Proceedings of the 16th Annual ACM India Compute Conference, 61-65. DOI: 10.1145/3627217.3627236. Online publication date: 9-Dec-2023.
  • (2023) GuardRails: Automated Suggestions for Clarifying Ambiguous Purpose Statements. Proceedings of the 16th Annual ACM India Compute Conference, 55-60. DOI: 10.1145/3627217.3627234. Online publication date: 9-Dec-2023.
  • (2023) Evaluating the Quality of LLM-Generated Explanations for Logical Errors in CS1 Student Programs. Proceedings of the 16th Annual ACM India Compute Conference, 49-54. DOI: 10.1145/3627217.3627233. Online publication date: 9-Dec-2023.
  • (2023) A Bug's New Life: Creating Refute Questions from Filtered CS1 Student Code Snapshots. Proceedings of the ACM Conference on Global Computing Education Vol 1, 7-14. DOI: 10.1145/3576882.3617916. Online publication date: 5-Dec-2023.
