
Explaining learning performance using response-time, self-regulation and satisfaction from content: an fsQCA approach

Published: 7 March 2018

ABSTRACT

This study combines students' response time on correctly and incorrectly answered items, their self-regulation, and their satisfaction from content in order to explain high versus medium/low learning performance. To this end, it proposes a conceptual model together with a set of research propositions. The approach was evaluated in an empirical study with 452 students. Fuzzy-set qualitative comparative analysis (fsQCA) revealed five configurations of the examined factors that explain students' high performance, as well as five additional patterns that interpret students' medium/low performance. These findings advance our understanding of the relations between actual usage and latent behavioral factors, as well as of their combined effect on students' test scores. Limitations and potential implications of the findings are also discussed.
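To make the fsQCA terminology above concrete, the sketch below illustrates the two core steps of such an analysis: calibrating raw measures (e.g., satisfaction, self-regulation, response time, test score) into fuzzy-set memberships, and scoring a candidate configuration of conditions against the outcome via consistency and coverage. All variable names, anchor values, and data here are hypothetical; this is a minimal illustration of the general fsQCA procedure, not the authors' actual analysis or instruments.

```python
# Minimal fsQCA-style sketch: direct calibration plus consistency/coverage
# of one candidate configuration. All anchors and data are hypothetical.
import numpy as np

def calibrate(raw, full_non, crossover, full_in):
    """Ragin-style direct calibration: map raw scores onto fuzzy-set
    memberships using three qualitative anchors (~0.05, 0.50, ~0.95)."""
    raw = np.asarray(raw, dtype=float)
    upper = 3.0 / (full_in - crossover)   # log-odds per unit above the crossover
    lower = 3.0 / (crossover - full_non)  # log-odds per unit below the crossover
    log_odds = np.where(raw >= crossover,
                        (raw - crossover) * upper,
                        (raw - crossover) * lower)
    return 1.0 / (1.0 + np.exp(-log_odds))  # logistic transform into [0, 1]

def consistency_coverage(condition, outcome):
    """Sufficiency consistency and raw coverage of a configuration."""
    overlap = np.minimum(condition, outcome).sum()
    return overlap / condition.sum(), overlap / outcome.sum()

# Hypothetical data and anchors, purely for illustration.
rng = np.random.default_rng(7)
satisfaction     = calibrate(rng.uniform(1, 7, 100), 2, 4, 6)      # 7-point scale
self_regulation  = calibrate(rng.uniform(1, 7, 100), 2, 4, 6)
slow_response    = calibrate(rng.uniform(5, 60, 100), 10, 30, 50)  # seconds per item
fast_response    = 1.0 - slow_response                             # fuzzy negation
high_performance = calibrate(rng.uniform(0, 100, 100), 40, 60, 85) # test score

# One candidate configuration: satisfied AND self-regulated AND fast (fuzzy AND = min).
config = np.minimum.reduce([satisfaction, self_regulation, fast_response])
cons, cov = consistency_coverage(config, high_performance)
print(f"consistency = {cons:.2f}, raw coverage = {cov:.2f}")
```

In a full fsQCA, all combinations of the conditions are evaluated this way in a truth table and then logically minimized; only configurations whose consistency exceeds a threshold (commonly around 0.75-0.80) are retained as explanations of the outcome.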



• Published in

  LAK '18: Proceedings of the 8th International Conference on Learning Analytics and Knowledge
  March 2018, 489 pages
  ISBN: 9781450364003
  DOI: 10.1145/3170358

      Copyright © 2018 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 7 March 2018


      Qualifiers

      • research-article

      Acceptance Rates

LAK '18 paper acceptance rate: 35 of 115 submissions, 30%. Overall acceptance rate: 236 of 782 submissions, 30%.
