
A Visual Approach towards Knowledge Engineering and Understanding How Students Learn in Complex Environments

Published: 12 April 2017

Abstract

Exploratory learning environments, such as virtual labs, support divergent learning pathways. However, due to their complexity, building computational models of learning is challenging, as it is difficult to identify features that (i) are informative with respect to common learning strategies, (ii) abstract similar actions beyond surface differences, and (iii) differentiate groups of learners. In this paper, we present a visualization tool that addresses these challenges by facilitating a novel analytic approach to aid in the knowledge engineering process, focusing on five main capabilities: raising data-driven hypotheses, visualizing behavior over time, easily grouping related actions, contrasting learners' behaviors on these actions, and comparing the behaviors of groups of learners. We apply this analytic approach to better understand how students work with a popular interactive physics virtual lab. By splitting learners by learning gains, we found that productive learners performed more active testing and adapted more quickly to the task at hand by focusing on more relevant testing instruments. Implications for online virtual labs and a broader class of complex learning environments are discussed throughout.
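The group comparison described above hinges on splitting learners by learning gains. As a minimal sketch of what such a split could look like (the normalized-gain formula, the median cut-point, and all names below are illustrative assumptions, not the paper's exact method):

```python
# Sketch: splitting learners into higher- and lower-gain groups before
# contrasting their behaviors. The gain measure and median split are
# illustrative assumptions, not taken from the paper.
from statistics import median


def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """One common learning-gain measure: fraction of possible improvement achieved."""
    if max_score == pre:
        return 0.0  # no room to improve; treat gain as zero
    return (post - pre) / (max_score - pre)


def split_by_gain(scores):
    """scores: list of (learner_id, pre, post) tuples.
    Returns (high_gain_ids, low_gain_ids), split at the median gain."""
    gains = {lid: normalized_gain(pre, post) for lid, pre, post in scores}
    cut = median(gains.values())
    high = [lid for lid, g in gains.items() if g >= cut]
    low = [lid for lid, g in gains.items() if g < cut]
    return high, low


# Example: learner "a" gains 40/60 ≈ 0.67, "b" 10/50 = 0.20, "c" 60/70 ≈ 0.86
high, low = split_by_gain([("a", 40, 80), ("b", 50, 60), ("c", 30, 90)])
```

Once the two groups are formed, behaviors on grouped actions (e.g., time spent on particular testing instruments) can be contrasted between them, which is the comparison the visualization tool supports.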


Cited By

  • (2021) "The output capacity of college testing center study." Proceedings of the Scientific Conference on Railway Transport and Engineering (RTE 2021), article 100010. DOI: 10.1063/5.0064427.
  • (2018) "Students, systems, and interactions." Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 1-10. DOI: 10.1145/3231644.3231662. Published: 26 June 2018.


      Published In

L@S '17: Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale
April 2017, 352 pages
ISBN: 9781450344500
DOI: 10.1145/3051457

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. educational data mining
      2. exploratory data analysis
      3. exploratory learning environments
      4. interactive virtual labs
      5. learning analytics
      6. learning strategies
      7. temporal data
      8. visual analytics

      Qualifiers

      • Research-article

Conference

L@S 2017: Fourth (2017) ACM Conference on Learning @ Scale
April 20-21, 2017
Cambridge, Massachusetts, USA

      Acceptance Rates

      L@S '17 Paper Acceptance Rate 14 of 105 submissions, 13%;
      Overall Acceptance Rate 117 of 440 submissions, 27%

