Design Analytics for Mobile Learning

Scaling up the Classification of Learning Designs Based on Cognitive and Contextual Elements

Authors

Gerti Pishtari, Luis P. Prieto, María Jesús Rodríguez-Triana, and Roberto Martinez-Maldonado

DOI:

https://doi.org/10.18608/jla.2022.7551

Keywords:

supervised machine learning, learning design, learning analytics, mobile learning, contextual learning, analytics for learning design, research paper

Abstract

This research was triggered by the need, identified in the literature, for large-scale studies of the kinds of designs that teachers create for mobile learning (m-learning). Such studies require the analysis of large datasets of learning designs. The common approach when analyzing designs has been to classify them manually, following high-level, pedagogically guided coding strategies, which demands extensive work. The first goal of this paper is therefore to explore the use of supervised machine learning (SML) to automatically classify the textual content of m-learning designs according to pedagogically relevant categories, such as the cognitive level demanded of students to carry out specific designed tasks, the phases of inquiry learning represented in the designs, or the role that the situated environment plays in the designs. Because not all SML models are transparent, yet researchers often need to understand their behaviour, the second goal of this paper is to consider the trade-off between model performance and interpretability in the context of design analytics for m-learning. To achieve these goals, we compiled a dataset of designs deployed through two tools, Avastusrada and Smartzoos. With this dataset, we trained and compared different models and feature extraction techniques. To address the second goal, we further optimized and compared the best-performing and most interpretable algorithms (EstBERT and Logistic Regression) in an illustrative case. We found that SML can reliably classify designs, with accuracy > 0.86 and Cohen's kappa > 0.69.
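For readers who want a concrete starting point, the following is a minimal sketch of the kind of classical pipeline compared in the paper: TF-IDF features feeding a Logistic Regression classifier, evaluated with the metrics reported above (accuracy and Cohen's kappa). The file name, column names, and hyperparameters are hypothetical placeholders, not the authors' actual configuration; the EstBERT variant would instead fine-tune a pretrained Estonian BERT model on the same labelled texts.

```python
# Minimal sketch (not the authors' implementation): classify learning-design task
# texts with TF-IDF + Logistic Regression and report accuracy and Cohen's kappa.
# The dataset file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("designs.csv")                          # one row per designed task
texts, labels = df["task_text"], df["cognitive_level"]   # e.g., Bloom-style labels

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),        # word/bigram features; settings illustrative
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, pred))
```

One reason to keep such a linear baseline in the comparison is interpretability: its fitted coefficients can be inspected per class and feature, whereas transformer models such as EstBERT typically require post hoc explanation techniques.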

References

Bulathwela, S., Pérez-Ortiz, M., Lipani, A., Yilmaz, E., & Shawe-Taylor, J. (2020). Predicting engagement in video lectures. In A. N. Rafferty, J. Whitehill, C. Romero, & V. Cavalli-Sforza (Eds.), Proceedings of the 13th International Conference on Educational Data Mining (EDM 2020), 10–13 July 2020, online (pp. 50–60). https://educationaldatamining.org/files/conferences/EDM2020/papers/paper_62.pdf

Chen, F., & Cui, Y. (2020). Utilizing student time series behaviour in learning management systems for early prediction of course performance. Journal of Learning Analytics, 7(2), 1–17. https://doi.org/10.18608/jla.2020.72.1

Conati, C., Porayska-Pomsta, K., & Mavrikis, M. (2018). AI in education needs interpretable machine learning: Lessons from open learner modelling. In B. Kim, K. R. Varshney, & A. Weller (Eds.), Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), 14 July 2018, Stockholm, Sweden (pp. 21–27). https://doi.org/10.48550/arXiv.1807.00154

Dalziel, J., Conole, G., Wills, S., Walker, S., Bennett, S., Dobozy, E., Cameron, L., Badilescu-Buga, E., & Bower, M. (2016). The Larnaca Declaration on Learning Design. Journal of Interactive Media in Education, (1). https://doi.org/10.5334/jime.407

De Jong, T., Gillet, D., Rodríguez-Triana, M. J., Hovardas, T., Dikke, D., Doran, R., Dziabenko, O., Koslowsky, J., Korventausta, M., Law, E., Pedaste, M., Tasiopoulou, E., Vidal, G., & Zacharia, Z. C. (2021). Understanding teacher design practices for digital inquiry-based science learning: The case of Go-Lab. Educational Technology Research and Development, 1–28. https://doi.org/10.1007/s11423-020-09904-z

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://doi.org/10.48550/arXiv.1810.04805

Eagan, B. R., Rogers, B., Serlin, R., Ruis, A. R., Arastoopour Irgens, G., & Shaffer, D. W. (2017). Can we rely on IRR? Testing the assumptions of inter-rater reliability. Proceedings of the 12th International Conference on Computer Supported Collaborative Learning (CSCL 2017), 18–22 June 2017, Philadelphia, PA, USA. https://repository.isls.org/handle/1/275

Ferguson, R. (2012). Learning analytics: Drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5–6), 304–317. https://doi.org/10.1504/IJTEL.2012.051816

Gervet, T., Koedinger, K., Schneider, J., & Mitchell, T. (2020). When is deep learning the best approach to knowledge tracing? Journal of Educational Data Mining, 12(3), 31–54. https://doi.org/10.5281/zenodo.4143614

Hernández-Leo, D., Asensio-Pérez, J. I., Derntl, M., Prieto, L. P., & Chacón, J. (2014). ILDE: Community environment for conceptualizing, authoring and deploying learning activities. In C. Rensing, S. de Freitas, T. Ley, & P. J. Muñoz-Merino (Eds.), European Conference on Technology Enhanced Learning (EC-TEL 2014), 16–19 September 2014, Graz, Austria (pp. 490–493). Springer. https://doi.org/10.1007/978-3-319-11200-8_48

Hernández-Leo, D., Martinez-Maldonado, R., Pardo, A., Muñoz-Cristóbal, J. A., & Rodríguez-Triana, M. J. (2019). Analytics for learning design: A layered framework and tools. British Journal of Educational Technology, 50(1), 139–152. https://doi.org/10.1111/bjet.12645

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

Macfadyen, L. P., Lockyer, L., & Rienties, B. (2020). Learning design and learning analytics: Snapshot 2020. Journal of Learning Analytics, 7(3), 6–12. https://doi.org/10.18608/jla.2020.73.2

McDonald, N., Schoenebeck, S., & Forte, A. (2019). Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–23. https://doi.org/10.1145/3359174

Melero, J., Hernández-Leo, D., Sun, J., Santos, P., & Blat, J. (2015). How was the activity? A visualization support for a case of location-based learning design. British Journal of Educational Technology, 46(2), 317–329. https://doi.org/10.1111/bjet.12238

Mettis, K., & Väljataga, T. (2021). Designing learning experiences for outdoor hybrid learning spaces. British Journal of Educational Technology, 52(1), 498–513. https://doi.org/10.1111/bjet.13034

Mihaescu, M. C., & Popescu, P. S. (2021). Review on publicly available datasets for educational data mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(3), e1403. https://doi.org/10.1002/widm.1403

Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., & Gao, J. (2021). Deep learning–based text classification: A comprehensive review. ACM Computing Surveys (CSUR), 54(3), 1–40. https://doi.org/10.1145/3439726

Muñoz-Cristóbal, J. A., Hernández-Leo, D., Carvalho, L., Martinez-Maldonado, R., Thompson, K., Wardak, D., & Goodyear, P. (2018). 4FAD: A framework for mapping the evolution of artefacts in the learning design process. Australasian Journal of Educational Technology, 34(2). https://doi.org/10.14742/ajet.3706

Muñoz-Cristóbal, J. A., Rodríguez-Triana, M. J., Gallego-Lema, V., Arribas-Cubero, H. F., Asensio-Pérez, J. I., & Martínez-Monés, A. (2018). Monitoring for awareness and reflection in ubiquitous learning environments. International Journal of Human–Computer Interaction, 34(2), 146–165. https://doi.org/10.1080/10447318.2017.1331536

Pedaste, M., Mäeots, M., Siiman, L. A., De Jong, T., Van Riesen, S. A., Kamp, E. T., Manoli, C. C., Zacharia, Z. C., & Tsourlidaki, E. (2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review, 14, 47–61. https://doi.org/10.1016/j.edurev.2015.02.003

Pérez-Sanagustín, M., Martínez, A., & Delgado-Kloos, C. (2013). Etiquetar: Tagging learning experiences. In D. Hernández-Leo, T. Ley, R. Klamma, & A. Harrer (Eds.), European Conference on Technology Enhanced Learning (EC-TEL 2013), 17–21 September 2013, Paphos, Cyprus (pp. 573–576). Springer. https://doi.org/10.1007/978-3-642-40814-4_61

Persico, D., & Pozzi, F. (2015). Informing learning design with learning analytics to improve teacher inquiry. British Journal of Educational Technology, 46(2), 230–248. https://doi.org/10.1111/bjet.12207

Pishtari, G., Rodríguez-Triana, M. J., Sarmiento-Márquez, E. M., Pérez-Sanagustín, M., Ruiz-Calleja, A., Santos, P., Prieto, L. P., Serrano-Iglesias, S., & Väljataga, T. (2020). Learning design and learning analytics in mobile and ubiquitous learning: A systematic review. British Journal of Educational Technology, 51(4), 1078–1100. https://doi.org/10.1111/bjet.12944

Pishtari, G., & Rodríguez-Triana, M. J. (2022). An analysis of mobile learning tools in terms of pedagogical affordances and support to the learning activity life cycle. In E. Gil, Y. Mor, Y. Dimitriadis, & C. Köppe (Eds.), Hybrid learning spaces (pp. 167–183). Springer. https://doi.org/10.1007/978-3-030-88520-5_10

Pishtari, G., Rodríguez-Triana, M. J., & Väljataga, T. (2021). A multi-stakeholder perspective of analytics for learning design in location-based learning. International Journal of Mobile and Blended Learning (IJMBL), 13(1), 1–17. https://doi.org/10.4018/IJMBL.2021010101

Pishtari, G., Väljataga, T., Tammets, P., Savitski, P., Rodríguez-Triana, M. J., & Ley, T. (2017). Smartzoos: Modular open educational resources for location-based games. In É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, & M. Pérez-Sanagustín (Eds.), European Conference on Technology Enhanced Learning (EC-TEL 2017), 12–15 September 2017, Tallinn, Estonia (pp. 513–516). Springer. https://doi.org/10.1007/978-3-319-66610-5_52

Pozzi, F., Asensio-Pérez, J. I., & Persico, D. (2016). The case for multiple representations in the learning design life cycle. In B. Gros, Kinshuk, & M. Maina (Eds.), The future of ubiquitous learning (pp. 171–196). Springer. https://doi.org/10.1007/978-3-662-47724-3_10

Prieto, L. P., Pishtari, G., Rodríguez-Triana, M. J., & Eagan, B. (2021). Comparing natural language processing approaches to scale up the automated coding of diaries in single-case learning analytics. In A. R. Ruis & S. B. Lee (Eds.), Second International Conference on Quantitative Ethnography: Conference Proceedings Supplement (ICQE 2020), 1–3 February 2021, online (pp. 39–42). https://www.qesoc.org/images/pdf/ICQE20_Proceedings_Supplement_Final_web.pdf

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), 13–17 August 2016, San Francisco, CA, USA (pp. 1135–1144). ACM. https://doi.org/10.1145/2939672.2939778

Rodríguez-Triana, M. J., Prieto, L. P., & Pishtari, G. (2021). What do learning designs show about pedagogical adoption? An analysis approach and a case study on inquiry-based learning. In T. D. Laet, R. Klemke, C. Alario-Hoyos, I. Hilliger, & A. Ortega-Arranz (Eds.), European Conference on Technology Enhanced Learning (EC-TEL 2021), 20–24 September 2021, Bolzano, Italy (pp. 275–288). Springer. https://doi.org/10.1007/978-3-030-86436-1_21

Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.5a8a3a3d

Santos, P., Pérez-Sanagustín, M., Hernández-Leo, D., & Blat, J. (2011). QuesTInSitu: From tests to routes for assessment in situ activities. Computers & Education, 57(4), 2517–2534. https://doi.org/10.1016/j.compedu.2011.06.020

Shaffer, D. W. (2017). Quantitative ethnography. Cathcart Press. https://www.quantitativeethnography.org/

Sharples, M. (2015). Making sense of context for mobile learning. In J. Traxler & A. Kukulska-Hulme (Eds.), Mobile learning: The next generation (pp. 140–153). Taylor and Francis. https://doi.org/10.4324/9780203076095-9

Silva, V. A., Bittencourt, I. I., & Maldonado, J. C. (2019). Automatic question classifiers: A systematic review. IEEE Transactions on Learning Technologies, 12(4), 485–502. https://doi.org/10.1109/TLT.2018.2878447

Tanvir, H., Kittask, C., & Sirts, K. (2021). EstBERT: A pretrained language-specific BERT for Estonian. arXiv preprint arXiv:2011.04784. https://doi.org/10.48550/arXiv.2011.04784

Toetenel, L., & Rienties, B. (2016). Analysing 157 learning designs using learning analytic approaches as a means to evaluate the impact of pedagogical decision making. British Journal of Educational Technology, 47(5), 981–992. https://doi.org/10.1111/bjet.12423

Viera, A. J., & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37(5), 360–363. https://www.stfm.org/familymedicine/vol37issue5/Viera360

Xu, X., Wang, J., Peng, H., & Wu, R. (2019). Prediction of academic performance associated with internet usage behaviors using machine learning algorithms. Computers in Human Behavior, 98, 166–173. https://doi.org/10.1016/j.chb.2019.04.015

Published

2022-08-31

How to Cite

Pishtari, G., Prieto, L. P., Rodríguez-Triana, M. J., & Martinez-Maldonado, R. (2022). Design Analytics for Mobile Learning: Scaling up the Classification of Learning Designs Based on Cognitive and Contextual Elements. Journal of Learning Analytics, 9(2), 236-252. https://doi.org/10.18608/jla.2022.7551

Issue

Vol. 9 No. 2 (2022)

Section

Research Papers
