ABSTRACT
Do students recognize the relationship between self-sufficient problem solving and exam performance? We explore this question using log data and survey results collected over three semesters from 465 students, who were split into cohorts by final exam performance. Specifically, we consider three metrics: time on task, question difficulty, and self-efficacy ratings.
Our results show that median time on task differs by less than 16% between the Low- and High-performing cohorts. However, as question difficulty increased, the two cohorts spent that time very differently: when working through practice tool exercises, the High cohort regularly attempted to solve problems without assistance, whereas the Low cohort frequently requested hints during both initial and subsequent attempts. Overall, when re-attempting a question they had previously answered incorrectly, slightly over 20% of the Low cohort completed the question without using hints, compared with roughly 50% of the High cohort. Most strikingly, as the semester progressed, the average increase in confidence to solve a similar question after viewing hints was greatest for students in the Low cohort. It appears that students in the Low cohort, who went on to fail the final exam, believed that viewing solutions to problems, rather than solving the problems on their own, adequately prepared them to solve similar problems without assistance in the future.
Study Habits, Exam Performance, and Confidence: How Do Workflow Practices and Self-Efficacy Ratings Align?