ABSTRACT
This study combines students' response time on correctly and incorrectly answered items, their self-regulation, and their satisfaction with the content in order to explain high versus medium/low learning performance. To this end, it proposes a conceptual model together with a set of research propositions. To evaluate the approach, an empirical study with 452 students was conducted. Fuzzy-set qualitative comparative analysis (fsQCA) revealed five configurations of the examined factors that explain students' high performance, as well as five additional patterns that explain students' medium/low performance. These findings advance our understanding of the relations between actual usage and latent behavioral factors, and of their combined effect on students' test scores. Limitations and potential implications of the findings are also discussed.
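As background on the method named in the abstract: fsQCA first calibrates each raw measure (e.g. response time, satisfaction, test score) into a fuzzy-set membership score in [0, 1] before searching for configurations. A minimal sketch of Ragin's direct calibration method is shown below; the function name and the three anchor values are illustrative assumptions, not the anchors used in the study.

```python
import math

def calibrate(x, full_non, crossover, full_mem):
    """Direct-method fuzzy-set calibration (Ragin, 2008).

    Maps a raw score x to a set-membership degree in [0, 1] using three
    qualitative anchors: full non-membership (log-odds -3), the crossover
    point (log-odds 0), and full membership (log-odds +3).
    """
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_mem - crossover)
    else:
        log_odds = -3.0 * (crossover - x) / (crossover - full_non)
    # Convert log-odds back to a membership degree via the logistic function.
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

# Illustrative anchors for a 0-100 test score: 20 = fully out,
# 50 = crossover, 80 = fully in the set of "high performers".
print(round(calibrate(50, 20, 50, 80), 3))  # crossover -> 0.5
print(round(calibrate(80, 20, 50, 80), 3))  # full-membership anchor -> ~0.953
print(round(calibrate(20, 20, 50, 80), 3))  # full non-membership -> ~0.047
```

After calibration, fsQCA compares set memberships across cases to identify the sufficient combinations of conditions ("recipes") for the outcome, which is how configurations such as the ten reported in the abstract are derived.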