Generation of problems, answers, grade, and feedback---case study of a fully automated tutor

Published: 01 September 2005

Abstract

Researchers and educators have been developing tutors to help students learn by solving problems. The tutors vary in their ability to generate problems, generate answers, grade student answers, and provide feedback. At one end of the spectrum are tutors that depend on hand-coded problems, answers, and feedback. These tutors can be expected to be pedagogically effective, since all the problem-solving content is carefully hand-crafted by a teacher. However, their repertoire is limited. At the other end of the spectrum are tutors that can automatically generate problems, answers, and feedback. They have an unlimited repertoire, but it is not clear that they are effective in helping students learn. Most extant tutors lie somewhere along this spectrum.

In this article we examine the feasibility of developing a tutor that can automatically generate problems, generate answers, grade student answers, and provide feedback. We investigate whether such a tutor can help students learn. For our study, we considered a tutor for our Programming Languages course, which covers static and dynamic scope (i.e., static scope of variables and procedures, dynamic scope of variables, and static and dynamic referencing environment of procedures in the context of a language that permits nested procedure definitions). The tutor generates simple and complex problems on each of these five topics, solves the problems, grades the students' answers, and provides feedback about incorrect and missed answers. Our evaluation over two semesters shows that the feedback provided by the tutor helps improve student learning.
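To illustrate the distinction the tutor's problems are built around, the following is a minimal sketch (not taken from the paper) contrasting static and dynamic scope. Under static scope, a free variable in a procedure is resolved in the environment where the procedure was defined; under dynamic scope, in the environment of its most recent caller. The frame names and the modeled program are hypothetical, chosen only for illustration.

```python
def lookup(name, chain):
    """Resolve a name by walking a chain of {name: value} frames, innermost first."""
    for frame in chain:
        if name in frame:
            return frame[name]
    raise NameError(name)

# Program being modeled (pseudocode):
#   procedure main:  x = 1;  call outer()
#   procedure outer: x = 2;  call p()
#   procedure p:     print x     <- 'x' is free in p; p is defined inside main

main_frame  = {"x": 1}   # environment where p is defined
outer_frame = {"x": 2}   # environment of p's caller

static_chain  = [main_frame]               # definition environment of p
dynamic_chain = [outer_frame, main_frame]  # call stack when p executes

print(lookup("x", static_chain))   # static scope:  1
print(lookup("x", dynamic_chain))  # dynamic scope: 2
```

The same `lookup` routine serves both regimes; only the chain of frames handed to it differs, which is exactly the distinction a student must apply when tracing problems of this kind.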

