How Flexible Is Your Data? A Comparative Analysis of Scoring Methodologies across Learning Platforms in the Context of Group Differentiation
DOI: https://doi.org/10.18608/jla.2017.42.9

Keywords: Data flexibility, partial credit, group differentiation, resampling, ASSISTments, Cognitive Tutor

Abstract
Data is flexible in that it is molded not only by the features and variables available to a researcher for analysis and interpretation, but also by how those features and variables are recorded and processed prior to evaluation. “Big Data” from online learning platforms and intelligent tutoring systems is no different. The work presented herein questions the quality and flexibility of data from two popular learning platforms, comparing binary measures of problem-level accuracy, the scoring method typically used to inform learner analytics, with partial credit scoring, a more robust, real-world methodology. This work extends previous research by examining how the manipulation of scoring methodology can alter outcomes when testing hypotheses, or specifically, when looking for significant differences between groups of students. Datasets from ASSISTments and Cognitive Tutor are used to assess the implications of data availability and manipulation within twelve mathematics skills. A resampling approach is used to determine the size of equivalent samples of high- and low-performing students required to reliably differentiate performance under each scoring methodology. Results suggest that in eleven of twelve observed skills, partial credit offers more efficient group differentiation, increasing analytic power and reducing Type II error. Alternative applications of this approach and implications for the Learning Analytics community are discussed.
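For illustration, the resampling procedure described above might be sketched as follows. This is a minimal Python reconstruction, not the authors' implementation (which, per the references, was carried out in MATLAB); the function name min_reliable_n, the use of a two-sample t-test, alpha = 0.05, and the 80% reliability criterion are all assumptions introduced here for the example.

import numpy as np
from scipy import stats

def min_reliable_n(high, low, n_resamples=1000, alpha=0.05, threshold=0.80, seed=0):
    """Return the smallest per-group sample size at which at least
    `threshold` of resampled high- vs. low-performer comparisons
    reach significance (hypothetical helper; assumptions noted above)."""
    rng = np.random.default_rng(seed)
    high, low = np.asarray(high, float), np.asarray(low, float)
    for n in range(2, min(len(high), len(low)) + 1):
        hits = 0
        for _ in range(n_resamples):
            a = rng.choice(high, size=n, replace=False)  # resample high performers
            b = rng.choice(low, size=n, replace=False)   # resample low performers
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        if hits / n_resamples >= threshold:
            return n  # groups reliably differentiated at this sample size
    return None  # never reliably differentiated within the available data

Running such a procedure twice on the same students, once with binary scores (0/1 per problem) and once with partial-credit scores (continuous in [0, 1]), yields the two sample-size requirements being compared; the abstract reports that the partial-credit requirement was smaller for eleven of the twelve skills.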
References
Anderson, J.R., Corbett, A.T., Koedinger, K.R., & Pelletier, R. (1995). Cognitive Tutor: Lessons learned. The Journal of the Learning Sciences. 4 (2): 167–207.
Carnegie Learning. (2016). Cognitive Tutor Software. Carnegie Learning, Inc. Retrieved from https://www.carnegielearning.com/learning-solutions/software/cognitive-tutor/
Corbett, A.T. & Anderson, J.R. (1995). Knowledge Tracing: Modeling the Acquisition of Procedural Knowledge. User Modeling and User-Adapted Interaction. 4: 253–278.
Desmarais, M.C. & Baker, R.S.J.d. (2011). A Review of Recent Advances in Learner and Skill Modeling in Intelligent Learning Environments. User Modeling and User-Adapted Interaction. 22 (1-2): 9-38.
Heffernan, N. & Heffernan, C. (2014). The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching. International Journal of AIED. 24 (4): 470-497.
KDD Cup. (2010). Rules of the KDD Cup 2010: Educational Data Mining Challenge. PSLC DataShop. Retrieved from https://pslcdatashop.web.cmu.edu/KDDCup/rules.jsp
MATLAB version R2013a (2013). Natick, Massachusetts: The MathWorks, Inc. Accessible at www.mathworks.com
National Governors Association Center for Best Practices (NGACBP) & Council of Chief State School Officers (CCSSO). (2010). Common Core State Standards. Washington, DC: Authors.
Ostrow, K., Donnelly, C., & Heffernan, N. (2015). Optimizing Partial Credit Algorithms to Predict Student Performance. In Santos, et al. (eds.) Proc of the 8th Int Conf on EDM. 404-407.
Ostrow, K., Donnelly, C., Adjei, S., & Heffernan, N. (2015). Improving Student Modeling Through Partial Credit and Problem Difficulty. In Russell, et al. (eds.) Proc of the 2nd ACM Conf on L@S. 11-20.
Ostrow, K. & Heffernan, C. (2016). The ASSISTments TestBed Resource Guide. Retrieved on 7/28/16 from www.assistmentstestbed.org
Ostrow, K., Heffernan, N., Heffernan, C., & Peterson, Z. (2015). Blocking vs. Interleaving: Examining Single-Session Effects within Middle School Math Homework. In Conati, et al. (eds.) Proc of the 17th Int Conf on AIED. 338–347.
Pardos, Z.A. & Heffernan, N.T. (2010). Modeling Individualization in a Bayesian Networks Implementation of Knowledge Tracing. In Bra, et al. (eds.) Proc of the 18th Int Conf on UMAP. 255-266.
Pardos, Z.A., & Heffernan, N.T. (2011). KT-IDEM: Introducing Item Difficulty to the Knowledge Tracing Model. In Konstan et al. (eds.), UMAP. 6787: 243-254.
Ritter, S., Anderson, J.R., Koedinger, K.R., & Corbett, A. (2007). Cognitive Tutor: Applied research in mathematics education. Psychonomic Bulletin & Review. 14 (2): 249-255.
Stamper, J.C., Koedinger, K.R., Baker, R.S.J.d., Skogsholm, A., Leber, B., Demi, S., Yu, S., & Spencer, D. (2011). DataShop: A Data Repository and Analysis Service for the Learning Science Community. In Biswas et al. (eds.) Proc of the 15th Int Conf on AIED.
VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., & Wintersgill, M. (2005). The Andes Physics Tutoring System: Lessons Learned. International Journal of AIED. 15 (3): 147-204.
Wang, Y. (2016). Data and Code for “Partial Credit Revisited: Enhancing the Efficiency and Reliability of Group Differentiation at Scale.” Accessed from http://tiny.cc/JLA_ShapeOfLearning
Wang, Y. & Heffernan, N.T. (2011). The “Assistance” Model: Leveraging How Many Hints and Attempts a Student Needs. The 24th International FLAIRS Conference.
Wang, Y. & Heffernan, N. (2013). Extending Knowledge Tracing to Allow Partial Credit: Using Continuous versus Binary Nodes. In Yacef et al. (eds.), AIED. 7926: 181-188.
Wang, Y., Ostrow, K., Beck, J., & Heffernan, N. (2016). Enhancing the Efficiency and Reliability of Group Differentiation through Partial Credit. In Gasevic, et al. (eds.) Proc of the 6th Int Conf on LAK. 454-458.
Yudelson, M.V., Koedinger, K.R., & Gordon, G.J. (2013). Individualized Bayesian Knowledge Tracing Models. In Lane, et al. (eds.) Proc of the 16th Int Conf on AIED. 171-180.
License
Copyright (c) 2017 Journal of Learning Analytics
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.