DOI: 10.1145/3291279.3339407 (research-article)

Exploring the Value of Student Self-Evaluation in Introductory Programming

Published: 30 July 2019

ABSTRACT

Programming teachers have a strong need for easy-to-use instruments that provide reliable and pedagogically useful insights into student learning, yet few validated tools exist for rapidly assessing students' understanding of basic programming knowledge. Concept inventories and the SCS1 questionnaire can offer great benefits; this article explores the additional value that may be gained from relatively simple self-evaluation metrics. We apply a lightweight self-evaluation instrument (SEI) in an introductory programming course and compare the results to existing performance measures, such as examination grades and the SCS1. We find that the SEI correlates with a program-writing examination about as strongly as the SCS1 does, although both instruments correlate only moderately with the examination and with each other. Furthermore, students are much more likely to voluntarily answer the lightweight SEI than the SCS1. Overall, our results suggest that both the SEI and other instruments need substantial improvement, and we outline future work towards that end.

Published in: ICER '19: Proceedings of the 2019 ACM Conference on International Computing Education Research. July 2019. 375 pages. ISBN: 9781450361859. DOI: 10.1145/3291279. Copyright © 2019 ACM.


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance rates: ICER '19 paper acceptance rate: 28 of 137 submissions (20%). Overall acceptance rate: 189 of 803 submissions (24%).
