DOI: 10.1145/3059009.3059056 · ITiCSE Conference Proceedings · Research article

Study Habits, Exam Performance, and Confidence: How Do Workflow Practices and Self-Efficacy Ratings Align?

Published: 28 June 2017

ABSTRACT

Do students recognize the relationship between self-sufficient problem solving and exam performance? We explore this question using log data and survey results collected over three semesters from 465 students, who were split into cohorts based on final exam performance. Specifically, we consider three metrics: time on task, question difficulty, and self-efficacy ratings.
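The cohort comparison underlying these metrics can be sketched as a small analysis script. This is a minimal illustration with invented records and an assumed pass mark for splitting cohorts; the study's actual data, field names, and thresholds are not shown here.

```python
from statistics import median

# Hypothetical log records: (student_id, final_exam_score, minutes_on_task).
# All names and values below are illustrative assumptions.
records = [
    ("s1", 42, 310), ("s2", 48, 295), ("s3", 88, 330),
    ("s4", 91, 290), ("s5", 35, 305), ("s6", 79, 340),
]

PASS_MARK = 50  # assumed cut-off separating Low and High cohorts

low = [mins for _, score, mins in records if score < PASS_MARK]
high = [mins for _, score, mins in records if score >= PASS_MARK]

# Relative gap between the cohorts' median time on task.
gap = abs(median(high) - median(low)) / median(low)
print(f"Low median: {median(low)} min, High median: {median(high)} min, gap: {gap:.0%}")
```

With these invented records the medians differ by only a few percent, mirroring the paper's observation that raw time on task alone separates the cohorts poorly.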

Our results show that, on average, median values for time on task in the Low and High performing cohorts are within 16% of each other. However, increased question difficulty revealed very different patterns of time use: when working through practice tool exercises, the High cohort regularly attempted to solve problems without assistance, whereas the Low cohort frequently requested hints during both initial and subsequent attempts. Overall, when re-attempting a question that was previously answered incorrectly, slightly over 20% of the Low cohort completed the question without using hints, whereas roughly 50% of the High cohort did so. Most strikingly, as the semester progressed, the average increase in confidence to solve a similar question after viewing hints was greatest for students in the Low cohort. It appears that students in the Low cohort, who went on to fail the final exam, believed that viewing solutions to problems, rather than solving the problems on their own, adequately prepared them to solve similar problems without assistance in the future.
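The hint-free re-attempt measure reported above could be computed from interaction logs along these lines. This is a hypothetical sketch: the record format and outcome values are invented to mirror the reported ~20% (Low) and ~50% (High) figures, and are not taken from the study's data.

```python
# Hypothetical attempt log: for each cohort, one boolean per re-attempt of a
# previously failed question, True if it was completed without viewing hints.
reattempts = {
    "Low":  [False, False, True, False, False],        # 1 of 5 hint-free
    "High": [True, False, True, False, True, False],   # 3 of 6 hint-free
}

def hint_free_rate(outcomes):
    """Fraction of re-attempts completed without requesting any hints."""
    return sum(outcomes) / len(outcomes)

for cohort, outcomes in reattempts.items():
    print(f"{cohort}: {hint_free_rate(outcomes):.0%} hint-free re-attempts")
```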


Published in

ITiCSE '17: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education
June 2017, 412 pages
ISBN: 9781450347044
DOI: 10.1145/3059009

      Copyright © 2017 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ITiCSE '17 paper acceptance rate: 56 of 175 submissions (32%). Overall acceptance rate: 552 of 1,613 submissions (34%).
