Journal of Learning Analytics
https://learning-analytics.info/index.php/JLA
The Journal of Learning Analytics is a peer-reviewed, open-access journal disseminating the highest-quality research in the field (Journal Impact Factor 3.9). The journal is published three times a year and is the official publication of the Society for Learning Analytics Research (SoLAR) (http://solaresearch.org/). With an international Editorial Board (https://learning-analytics.info/index.php/JLA/about/editorialTeam) comprising leading scholars, it is the first journal dedicated to research into the challenges of collecting, analysing, and reporting data with the specific intent to improve learning. "Learning" is broadly defined across a range of contexts, including informal learning on the internet, formal academic study in institutions (primary/secondary/tertiary), and workplace learning.

The journal seeks to connect learning analytics researchers, developers, and practitioners who share a common interest in using data traces to better understand and improve learning through the creation and implementation of new tools and techniques, and through the study of the transformations they engender. The interdisciplinary focus of the journal recognizes that computational, pedagogical, institutional, policy, and social perspectives must be brought into dialogue with each other to ensure that interventions and organizational systems serve the needs of all stakeholders. Each of these communities brings a valuable lens for ongoing input, evaluation, and critique of the conceptual, technical, and practical advances of the field.

The Journal of Learning Analytics welcomes papers that either describe original research or review the state of the art in a particular area. The journal also welcomes practice-focused papers that detail learning analytics applications in real-world settings, provided that they offer innovative insights for advancing the field. Other accepted paper types include Data Reports, Tool Reports, Open Peer Commentary, and Book Reviews. See the journal's Focus and Scope (https://learning-analytics.info/index.php/JLA/focusandscope) for details.

Manuscripts can be submitted to the Journal of Learning Analytics at any time. Only manuscripts for special sections must be submitted by the date defined in the call for papers of the special section; special section calls can be found in the Announcements section (https://learning-analytics.info/index.php/JLA/announcement).

The Journal of Learning Analytics provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge. The journal does not charge authors submission or processing fees; costs are covered by the Society for Learning Analytics Research.

Publisher: SoLAR | ISSN: 1929-7750

Edulyze: Learning Analytics for Real-World Classrooms at Scale
https://learning-analytics.info/index.php/JLA/article/view/8367
<p><span dir="ltr" role="presentation">Classroom sensing systems can capture data on teacher-student behaviours and interactions at a scale far greater </span><span dir="ltr" role="presentation">than human observers can. These data, translated to multi-modal analytics, can provide meaningful insights to </span><span dir="ltr" role="presentation">educational stakeholders. However, complex data can be difficult to make sense of. In addition, analyses done on </span><span dir="ltr" role="presentation">these data are often limited by the organization of the underlying sensing system, and translating sensing data into </span><span dir="ltr" role="presentation">meaningful insights often requires custom analyses across different modalities. We present Edulyze, an analytics </span><span dir="ltr" role="presentation">engine that processes complex, multi-modal sensing data and translates them into a unified schema that is agnostic </span><span dir="ltr" role="presentation">to the underlying sensing system or classroom configuration. We evaluate Edulyze’s performance by integrating </span><span dir="ltr" role="presentation">three sensing systems (Edusense, ClassGaze, and Moodoo) and then present data analyses of five case studies of </span><span dir="ltr" role="presentation">relevant pedagogical research questions across these sensing systems. We demonstrate how Edulyze’s flexibility </span><span dir="ltr" role="presentation">and customizability allow us to answer a broad range of research questions made possible by Edulyze’s translation </span><span dir="ltr" role="presentation">of a breadth of raw sensing data from different sensing systems into relevant classroom analytics. </span></p>Prasoon PatidarTricia NgoonNeeharika VogetyNikhil BehariChris HarrisonJohn ZimmermanAmy OganYuvraj Agarwal
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-08-07 | Vol. 11 No. 2 | pp. 297–313 | DOI: 10.18608/jla.2024.8367

Learning At and From a Virtual Conference
https://learning-analytics.info/index.php/JLA/article/view/8247
A relevant learning space for academics, especially junior researchers, is the academic conference. While conference participation has long been associated with personal attendance at the conference venue, virtual participation is becoming increasingly important. This study investigates the perceived value of a purely virtual academic conference for its participants by analyzing the evaluation feedback (N = 759) from three virtual and two face-to-face LAK conferences. For the purposes of this study, we derive a definition of conference value and identify factors contributing to the overall value rating of virtual academic conferences based on the existing literature. Results indicate that the perceived value of virtual conferences is comparable with that of face-to-face events, with satisfaction with social interaction and topics of interest being the most important predictors. Our insights show that virtual conferences are valuable events for academic professional development, and conference organizers can use these results to design a valuable event for their participants.

Nina Seidenberg, Ioana Jivet, Maren Scheffel, Vitomir Kovanović, Grace Lynch, Hendrik Drachsler
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-25 | Vol. 11 No. 2 | pp. 281–296 | DOI: 10.18608/jla.2024.8247

A Closer Look at Instructor Use and Sensemaking Processes of Analytics Dashboards
https://learning-analytics.info/index.php/JLA/article/view/7961
There is a growing interest in the research and use of automated feedback dashboards that display classroom analytics, yet little is known about the detailed processes instructors use to make sense of these tools and to determine their impact on teaching practices. This research was conducted at a public Midwestern university within the context of an automated classroom observation and feedback implementation project. Fifteen engineering instructors engaged in this research. The overarching goal was to investigate instructor teaching beliefs, pedagogical practices, and sensemaking processes regarding dashboard use. A grounded theory approach was used to identify categories related to instructor perceptions. Results revealed that instructor experiences inform both their present use of the dashboard and consequential future actions. A model is presented that illustrates categories included in instructor pre-use, use, and post-use of an automated feedback dashboard. An extension to this model is presented and accompanied by recommendations for more effective future use of automated dashboards. The model's practical implications inform both instructors and designers on effective design and use of dashboards, ultimately paving the way to improve pedagogical practices and instruction.

Dana AlZoubi, Evrim Baran
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-04-29 | Vol. 11 No. 2 | pp. 1–22 | DOI: 10.18608/jla.2024.7961

A Novel Deep Learning Model for Student Performance Prediction Using Engagement Data
https://learning-analytics.info/index.php/JLA/article/view/7985
Technology-enhanced learning supported by virtual learning environments (VLEs) benefits both tutors and students. VLE platforms contain a wealth of information that can be used to mine insight into students' learning behaviour and the relationships between behaviour and academic performance, as well as to model data-driven decision-making. This study introduces a system that we term ASIST: a novel Attention-aware convolutional Stacked BiLSTM network for student representation learning to predict their performance. ASIST exploits student academic registry, VLE clickstream, and midterm continuous assessment information for behaviour representation learning. ASIST jointly learns the student representation using five behaviour vectors. It processes the four sequential behaviour vectors using separate stacked bidirectional long short-term memory (LSTM) networks, while a deep convolutional neural network models the diurnal weekly interaction behaviour. It also employs an attention mechanism to assign weights to features based on their importance. Next, the five encoded feature vectors are concatenated with the assessment information, and, finally, a softmax layer predicts the high-performer (H), moderate-performer (M), and at-risk (F) categories of students. We evaluate ASIST over three datasets from an Irish university, considering five evaluation metrics. ASIST achieves an area under the curve (AUC) score of 0.86 to 0.90 over the three datasets. It outperforms three baseline deep learning models and four traditional classification models. We also found that the attention mechanism has a slight impact on ASIST's performance. The ablation analysis reveals that weekly event count has the greatest impact on ASIST, whereas diurnal weekly interaction has the least impact. Early prediction using the first seven weeks of data achieves an AUC of 0.83 up to 0.89 over the three datasets. In the yearly analysis, ASIST performs best over the 2018/19 dataset and worst over the 2020/21 dataset.

Mohd Fazil, Angélica Rísquez, Claire Halpin
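To make the architecture the abstract outlines more concrete (per-stream stacked BiLSTMs, a CNN over the diurnal weekly interaction matrix, attention over the encoded vectors, and a three-class softmax output), here is a minimal PyTorch sketch. Layer sizes, input dimensions, and the additive-attention formulation are illustrative assumptions, not the authors' published configuration.

```python
# A minimal sketch of an ASIST-style model, assuming illustrative dimensions.
import torch
import torch.nn as nn

class ASISTSketch(nn.Module):
    def __init__(self, seq_feat_dim=8, hidden=64, n_assess=3, n_classes=3):
        super().__init__()
        # One stacked BiLSTM per sequential behaviour vector (four in total).
        self.bilstms = nn.ModuleList([
            nn.LSTM(seq_feat_dim, hidden, num_layers=2,
                    batch_first=True, bidirectional=True)
            for _ in range(4)
        ])
        # CNN over the diurnal weekly interaction matrix (e.g., 7 days x 24 hours).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj_cnn = nn.Linear(16, 2 * hidden)
        # Simple additive attention over the five encoded feature vectors.
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(5 * 2 * hidden + n_assess, n_classes)

    def forward(self, seqs, diurnal, assessment):
        # seqs: list of four tensors (batch, time, seq_feat_dim)
        encoded = []
        for seq, lstm in zip(seqs, self.bilstms):
            out, _ = lstm(seq)
            encoded.append(out[:, -1, :])            # last hidden state per stream
        encoded.append(self.proj_cnn(self.cnn(diurnal)))
        stacked = torch.stack(encoded, dim=1)         # (batch, 5, 2*hidden)
        weights = torch.softmax(self.attn(stacked), dim=1)
        weighted = (weights * stacked).flatten(1)
        # Concatenate with midterm assessment features and classify (H / M / F).
        return self.classifier(torch.cat([weighted, assessment], dim=1))
```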
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-05-12 | Vol. 11 No. 2 | pp. 23–41 | DOI: 10.18608/jla.2024.7985

When Leaving is Persisting
https://learning-analytics.info/index.php/JLA/article/view/8219
We report on a large-scale, log-based study of the associations between persistence and success in an online game-based learning environment for elementary school mathematics. While working with applets, learners can rerun a task after completing it, or can halt before completing it and run it again; both of these mechanisms may improve the score. We analyzed about 3.1 million applet runs by N = 44,323 1st–6th-grade students to gain a nuanced understanding of persistence patterns by identifying sequences of consecutive single applet runs (SoCSARs). Overall, we analyzed 2,249,647 SoCSARs and identified six patterns, based on halting and rerunning tasks and their completion: 1) Single Complete, 2) Single Incomplete, 3) Some Incomplete and Single Complete, 4) Multiple Incomplete and No Complete, 5) Multiple Complete and No Incomplete, and 6) Multiple Complete and Some Incomplete. As expected, we found a positive correlation between SoCSAR length and success. Some patterns demonstrate low to medium positive associations with success, while others demonstrate low to medium negative associations. Furthermore, the associations between the type of persistence and success vary by grade level. We discuss these complex relationships and suggest metacognitive and motivational factors that may explain why some patterns are productive and others are not.

Orly Klein-Latucha, Arnon Hershkovitz
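To illustrate the SoCSAR idea, the sketch below shows one way log rows could be grouped into sequences of consecutive single-applet runs and labelled with the six patterns named in the abstract. The column names, grouping rule, and toy data are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged pandas sketch: group consecutive runs of the same applet by the same
# student into SoCSARs and label each with one of the six completion patterns.
import pandas as pd

def label_pattern(completed):
    n, n_complete = len(completed), sum(completed)
    n_incomplete = n - n_complete
    if n == 1:
        return "Single Complete" if n_complete else "Single Incomplete"
    if n_complete == 0:
        return "Multiple Incomplete and No Complete"
    if n_complete == 1 and n_incomplete > 0:
        return "Some Incomplete and Single Complete"
    if n_incomplete == 0:
        return "Multiple Complete and No Incomplete"
    return "Multiple Complete and Some Incomplete"

runs = pd.DataFrame({
    "student_id": [1, 1, 1, 1, 2],
    "applet_id":  ["a", "a", "b", "b", "a"],
    "completed":  [False, True, True, True, False],
})

# A new SoCSAR starts whenever the student or the applet changes.
keys = runs[["student_id", "applet_id"]]
runs["socsar_id"] = (keys != keys.shift()).any(axis=1).cumsum()
patterns = runs.groupby("socsar_id")["completed"].apply(lambda c: label_pattern(list(c)))
print(patterns)
```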
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-05-22 | Vol. 11 No. 2 | pp. 42–51 | DOI: 10.18608/jla.2023.8219

A Systematic Review of Learning Analytics
https://learning-analytics.info/index.php/JLA/article/view/8093
The learning management system (LMS) is widely used in educational settings to support teaching and learning practices. The usage log data, generated by both learners and instructors, enables the development and implementation of learning analytics (LA) interventions aimed at facilitating teaching and learning activities. To examine the current status of the development and empirical impacts of learning analytics–incorporated interventions within LMSs on improving teaching and learning practices, we conducted a systematic review that examined 27 articles published from 2012 through 2023. The outcomes of this review provided valuable insights into the design and development of learning analytics–incorporated interventions implemented on LMSs and empirical evidence of the impacts of these interventions, along with implications to inform future design and applications.

Zilong Pan, Lauren Biegley, Allen Taylor, Hua Zheng
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-05-22 | Vol. 11 No. 2 | pp. 52–72 | DOI: 10.18608/jla.2023.8093

The Impact of Attribute Noise on the Automated Estimation of Collaboration Quality Using Multimodal Learning Analytics in Authentic Classrooms
https://learning-analytics.info/index.php/JLA/article/view/8253
Multimodal learning analytics (MMLA) research has shown the feasibility of building automated models of collaboration quality using artificial intelligence (AI) techniques (e.g., supervised machine learning (ML)), thus enabling the development of monitoring and guiding tools for computer-supported collaborative learning (CSCL). However, the practical applicability and performance of these automated models in authentic settings remain largely under-researched. In such settings, the quality of data features or attributes is often affected by noise, referred to as attribute noise. This paper undertakes a systematic exploration of the impact of attribute noise on the performance of different collaboration-quality estimation models. Moreover, we perform a comparative analysis of different ML algorithms in terms of their capability to deal with attribute noise. We employ four ML algorithms that have often been used for collaboration-quality estimation tasks due to their high performance: random forest, naive Bayes, decision tree, and AdaBoost. Our results show that random forest and decision tree outperformed the other algorithms for collaboration-quality estimation in the presence of attribute noise. The study contributes to the MMLA (and learning analytics (LA) in general) and CSCL fields by illustrating how attribute noise impacts collaboration-quality model performance and which ML algorithms seem to be more robust to noise and thus more likely to perform well in authentic settings. Our research outcomes offer guidance to fellow researchers and developers of (MM)LA systems employing AI techniques with multimodal data to model collaboration-related constructs in authentic classroom settings.

Pankaj Chejara, Luis P. Prieto, Yannis Dimitriadis, María Jesús Rodríguez-Triana, Adolfo Ruiz-Calleja, Reet Kasepalu, Shashi Kant Shankar
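The experimental pattern the abstract describes (perturbing feature values and comparing how random forest, naive Bayes, decision tree, and AdaBoost degrade) can be prototyped in a few lines of scikit-learn. The synthetic dataset, Gaussian noise model, and cross-validated AUC below are illustrative assumptions, not the authors' exact protocol or data.

```python
# Hedged sketch of an attribute-noise robustness comparison across the four
# classifiers named in the abstract, on synthetic data for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
}

rng = np.random.default_rng(0)
for noise_level in (0.0, 0.2, 0.5):
    # Attribute noise: perturb a fraction of feature values with Gaussian noise.
    X_noisy = X.copy()
    mask = rng.random(X.shape) < noise_level
    X_noisy[mask] += rng.normal(scale=X.std(), size=mask.sum())
    for name, model in models.items():
        auc = cross_val_score(model, X_noisy, y, cv=5, scoring="roc_auc").mean()
        print(f"noise={noise_level:.1f} {name}: AUC={auc:.3f}")
```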
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-06-20 | Vol. 11 No. 2 | pp. 73–90 | DOI: 10.18608/jla.2024.8253

Visual Learning Analytics for Educational Interventions in Primary and Secondary Schools
https://learning-analytics.info/index.php/JLA/article/view/8309
<p class="JLA18Abstracttext">Visual Learning Analytics (VLA) uses analytics to monitor and assess educational data by combining visual and automated analysis to provide educational explanations. Such tools could aid teachers in primary and secondary schools in making pedagogical decisions, however, the evidence of their effectiveness and benefits is still limited. With this scoping review, we provide a comprehensive overview of related research on proposed VLA methods, as well as identifying any gaps in the literature that could assist in describing new and helpful directions to the field. This review searched all relevant articles in five accessible databases — Scopus, Web of Science, ERIC, ACM, and IEEE Xplore — using 40 keywords. These studies were mapped, categorized, and summarized based on their objectives, the collected data, the intervention approaches employed, and the results obtained. The results determined what affordances the VLA tools allowed, what kind of visualizations were used to inform teachers and students, and, more importantly, positive evidence of educational interventions. We conclude that there are moderate-to-clear learning improvements within the limit of the studies’ interventions to support the use of VLA tools. More systematic research is needed to determine whether any learning gains are translated into long-term improvements.</p>Zeynab MohseniItalo MasielloRafael M. MartinsSusanna Nordmark
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-06-30 | Vol. 11 No. 2 | pp. 91–111 | DOI: 10.18608/jla.2024.8309

Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education
https://learning-analytics.info/index.php/JLA/article/view/8301
Educational disparities between traditional and non-traditional student groups in higher distance education can potentially be reduced by alleviating social identity threat and strengthening students' sense of belonging in the academic context. We present a use case of how learning analytics and machine learning can be applied to develop and implement an algorithm that classifies students as at risk of experiencing social identity threat. These students would then be presented with an intervention fostering a sense of belonging. We systematically analyze the intervention's intended positive consequences for reducing structural discrimination and increasing educational equity, as well as potential risks based on privacy, data protection, and algorithmic fairness considerations. Finally, we provide recommendations for higher education institutions to mitigate the risk of bias and unintended consequences during algorithm development and implementation from an ethical perspective.

Laura Froehlich, Sebastian Weydner-Volkmann
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-04 | Vol. 11 No. 2 | pp. 112–122 | DOI: 10.18608/jla.2024.8301

Scaffolding Feedback Literacy
https://learning-analytics.info/index.php/JLA/article/view/8339
Feedback is essential in learning. The emerging concept of feedback literacy underscores the skills students require to use feedback effectively, highlighting students' responsibilities in the feedback process. Yet there is currently a lack of mechanisms to understand how students make sense of feedback and whether they act on it. This gap makes it hard to effectively support students in developing feedback literacy and to improve the quality of feedback. As a specific application of learning analytics, feedback analytics (analytics on learner engagement with feedback) can offer insights into students' learning engagement and progression, which can in turn be used to scaffold student feedback literacy. This study proposes a feedback analytics tool, designed with students, aimed at helping students synthesize feedback received from multiple sources, scaffolding the sense-making process, and prompting deeper reflection or action on feedback based on data about students' interactions with feedback. We held focus group discussions with 38 students to learn about their feedback experiences and identify desired tool features. Based on the identified user requirements, a prototype was developed and validated with 16 students via individual interviews. Based on the findings, we envision a feedback analytics tool that scaffolds student feedback literacy.

Flora Jin, Bhagya Maheshi, Roberto Martinez-Maldonado, Dragan Gašević, Yi-Shan Tsai
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-04-10 | Vol. 11 No. 2 | pp. 123–137 | DOI: 10.18608/jla.2024.8339

Enhancing Feedback Uptake and Self-Regulated Learning in Procedural Skills Training
https://learning-analytics.info/index.php/JLA/article/view/8195
<p class="JLA18Abstracttext">Remote technology has been widely incorporated into health professions education. For procedural skills training, effective feedback and reflection processes are required. Consequently, supporting a self-regulated learning (SRL) approach with learning analytics dashboards (LADs) has proven beneficial in online environments. Despite the potential of LADs, understanding their design to enhance SRL and provide useful feedback remains a significant challenge. Focusing on LAD design, implementation, and evaluation, the study followed a mixed-methods two-phase design-based research approach. The study used a triangulation methodology of qualitative interviews and SRL and sensemaking questionnaires to comprehensively understand the LAD’s effectiveness and student SRL and feedback uptake strategies during remote procedural skills training. Initial findings revealed the value students placed on performance visualization and peer comparison despite some challenges in LAD design and usability. The study also identified the prominent adoption of SRL strategies such as help-seeking, elaboration, and strategic planning. Sensemaking results showed the value of personalized performance metrics and planning resources in the LAD and recommendations to improve reflection and feedback uptake. Subsequent findings suggested that SRL levels significantly predicted the levels of sensemaking. The students valued the LAD as a tool for supporting feedback uptake and strategic planning, demonstrating the potential for enhancing procedural skills learning.</p>Ignacio VillagránRocio HernándezGregory SchuitAndrés NeyemJaviera FuentesLoreto LarrondoElisa MargozziniMaría T. HurtadoZoe IriarteConstanza MirandaJulián VarasIsabel Hilliger
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-11 | Vol. 11 No. 2 | pp. 138–156 | DOI: 10.18608/jla.2024.8195

A Method for Developing Process-Based Assessments for Computational Thinking Tasks
https://learning-analytics.info/index.php/JLA/article/view/8291
Computational thinking (CT) is a concept of growing importance to pre-university education. Yet CT is often assessed through results rather than by looking at the CT process itself. Process-based assessments, or assessments that model how a student completed a task, could instead investigate the process of CT as a formative assessment. In this work, we propose an approach for developing process-based assessments using constructionist tasks specifically for CT assessment in K–12 contexts, with a focus on directly connecting programming artifacts to aspects of CT. We then illustrate such an assessment with 29 students who ranged in CT and programming experience. These students completed both a constructionist task and a traditional CT assessment. Data from the constructionist task were used to build a process-based assessment, and results were compared between the two assessment methods. The process-based assessment produced groups of students who differed in their approach to the task, with varying levels of success. However, there was no difference between the groups of students in scores on the traditional CT assessment. Process-based assessment from our approach may be useful as formative assessment that gives process feedback localized to the task given to students.

Sohum Bhatt, Katrien Verbert, Wim Van Den Noortgate
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-25 | Vol. 11 No. 2 | pp. 157–173 | DOI: 10.18608/jla.2024.8291

How Does a Data-Informed Deliberate Change in Learning Design Impact Students’ Self-Regulated Learning Tactics?
https://learning-analytics.info/index.php/JLA/article/view/8083
The current study measures the extent to which students' self-regulated learning tactics and learning outcomes change as the result of a deliberate, data-driven improvement in the learning design of mastery-based online learning modules. In the original design, students were required to attempt the assessment once before being allowed to access the learning material. The improved design gave students the choice to skip the first attempt and access the learning material directly. Student learning tactics were measured using a multi-level clustering and process mining algorithm, and a quasi-experimental design was implemented to remove or reduce differences in extraneous factors, including the content being covered, the time of implementation, and naturally occurring fluctuations in student learning tactics. The analysis suggests that most students who chose to skip the first attempt were effectively self-regulating their learning and were thus successful in learning from the instructional materials. Students who would have failed the first attempt were much more likely to skip it than those who would have passed it. The new design also resulted in a small improvement in learning outcomes and median learning time. The study demonstrates the creation of a closed loop between learning design and learning analytics: first, using learning analytics to inform improvements to the learning design, and then assessing the effectiveness and impact of those improvements.

Zhongzhou Chen, Tom Zhang, Michelle Taub
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-25 | Vol. 11 No. 2 | pp. 174–196 | DOI: 10.18608/jla.2024.8083

Learners’ Linguistic Alignment and Physiological Synchrony
https://learning-analytics.info/index.php/JLA/article/view/8287
The theory of socially shared regulation of learning (SSRL) suggests that successful collaborative groups can identify and respond to trigger events stemming from cognitive or emotional obstacles in learning. Thus, to develop real-time support for SSRL, novel metrics are needed to identify the different types of trigger events that invite SSRL. Our aim was to apply two metrics derived from different data streams to study how trigger events for SSRL shaped group linguistic alignment (based on audio data) and physiological synchrony (based on electrodermal activity data). The data came from six groups of students (N = 18) as they worked face-to-face on a collaborative learning task with one cognitive and two emotional trigger events. We found that the cognitive trigger event increased linguistic alignment in task-description words and led to physiological out-of-synchrony. The emotional trigger events decreased out-of-synchrony and increased high-arousal synchrony at the physiological level but did not affect linguistic alignment. Therefore, different metrics are needed for studying markers of and responses to different types of trigger events, suggesting the necessity of multimodal learning analytics to support collaborative learning.

Joni Lämsä, Justin Edwards, Eetu Haataja, Marta Sobocinski, Paola Peña, Andy Nguyen, Sanna Järvelä
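As a rough illustration of the two kinds of metrics the abstract combines, the sketch below computes a simple lexical-alignment score between consecutive speaker turns and a windowed correlation as a crude proxy for electrodermal (EDA) synchrony. Both formulas are assumptions for illustration, not the measures used in the study.

```python
# Hedged sketch of toy linguistic-alignment and physiological-synchrony metrics.
import numpy as np

def lexical_alignment(turn_a: str, turn_b: str) -> float:
    """Share of words in turn_b that repeat words from turn_a."""
    a, b = set(turn_a.lower().split()), turn_b.lower().split()
    return sum(w in a for w in b) / len(b) if b else 0.0

def windowed_synchrony(eda_a: np.ndarray, eda_b: np.ndarray, window: int = 50) -> np.ndarray:
    """Pearson correlation of two EDA signals over non-overlapping windows."""
    n = min(len(eda_a), len(eda_b)) // window
    return np.array([
        np.corrcoef(eda_a[i * window:(i + 1) * window],
                    eda_b[i * window:(i + 1) * window])[0, 1]
        for i in range(n)
    ])

print(lexical_alignment("we need to fix the task first", "yes fix the task now"))
rng = np.random.default_rng(0)
print(windowed_synchrony(rng.normal(size=200), rng.normal(size=200)))
```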
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-04 | Vol. 11 No. 2 | pp. 197–214 | DOI: 10.18608/jla.2024.8287

Following the Impact Chain of the LA Cockpit
https://learning-analytics.info/index.php/JLA/article/view/8399
This paper presents a teacher dashboard intervention study in secondary school practice involving teachers (n = 16) with their classes (n = 22) and students (n = 403). A quasi-experimental treatment-control group design was implemented to compare student learning outcomes between classrooms where teachers did not have access to the dashboard and classrooms where teachers had access to it. We examined different points in the impact chain of the "LA Cockpit," a teacher dashboard with a feedback system through which teachers can send students feedback on their learning. To investigate this impact chain from teacher use of dashboards to student learning, we analyzed 1) teachers' perceived technology acceptance of the LA Cockpit, 2) teacher feedback practices using the LA Cockpit, and 3) student knowledge gains as measured by pre- and post-tests. The analysis of n = 355 feedback messages sent by teachers through the LA Cockpit revealed that the dashboard assists teachers in identifying students facing difficulties and that teachers mostly provided process feedback, which is known to be effective for student learning. For student learning, significantly higher knowledge gains were found in the teacher dashboard condition compared to the control condition.

Onur Karademir, Lena Borgards, Daniele Di Mitri, Sebastian Strauß, Marcus Kubsch, Markus Brobeil, Adrian Grimm, Sebastian Gombert, Nikol Rummel, Knut Neumann, Hendrik Drachsler
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-25 | Vol. 11 No. 2 | pp. 215–228 | DOI: 10.18608/jla.2024.8399

Large-Scale Assessments for Learning: A Human-Centred AI Approach to Contextualizing Test Performance
https://learning-analytics.info/index.php/JLA/article/view/8007
Large-scale assessments play a key role in education: educators and stakeholders need to know what students know and can do so that they can prepare education policies and interventions in teaching and learning. However, a score from the assessment may not be enough: educators need to know why students got low scores, how students engaged with the tasks and the assessment, and how students with different levels of skills worked through the assessment. Process data, combined with response data, reflect students' test-taking processes and can provide educators with such rich information, but manually labelling the complex data is hard to scale for large-scale assessments. Starting from scratch, we leveraged machine learning techniques (including supervised, unsupervised, and active learning) and experimented with a general human-centred AI approach to help subject matter experts efficiently and effectively make sense of big data (including students' interaction sequences with the digital assessment platform, such as response, timing, and tool-use sequences) and produce process profiles, that is, a holistic view of students' entire test-taking processes on the assessment, so that performance can be viewed in context. Process profiles may help identify different sources of low performance and help generate rich feedback for educators and policy makers. The released National Assessment of Educational Progress (NAEP) Grade 8 mathematics data were used to illustrate our proposed approach.

Hongwen Guo, Matthew Johnson, Kadriye Ercikan, Luis Saldivia, Michelle Worthington
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-08-06 | Vol. 11 No. 2 | pp. 229–245 | DOI: 10.18608/jla.2024.8007

Unpacking the Complexity: Why Current Feedback Systems Fail to Improve Learner Self-Regulation of Participation in Collaborative Activities
https://learning-analytics.info/index.php/JLA/article/view/8355
Even before the inception of the term learning analytics, researchers globally had been investigating the use of various feedback systems to support the self-regulation of participation and promote equitable contributions during collaborative learning activities. While some studies indicate positive effects for distinct subgroups of learners, a common finding is that the majority of learners do not modify their behaviour, even after repeated interventions. In this paper, we assessed one such system and, predictably, did not find measurable improvements in equitable participation. Informed by self-regulated learning theory, we conducted a mixed-methods study to explore the diverse paths that learners take in the self-regulation process initiated by the feedback. We found that the observed deviations from the expected path explain the difficulty in measuring a generalized effect. This study proposes a shift in research focus from merely improving the technological aspects of the system to a human- and pedagogy-centred redesign that takes special consideration of how learners understand and process feedback to self-regulate their participation.

Xavier Ochoa, Xiaomeng Huang, Adam Charlton
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-08-06 | Vol. 11 No. 2 | pp. 246–267 | DOI: 10.18608/jla.2024.8355

Implementing Learning Analytics in Norway
https://learning-analytics.info/index.php/JLA/article/view/8241
In June 2022, the Norwegian Expert Commission on Learning Analytics delivered an interim report to the Norwegian Minister of Education and Research. Motivated by the need to establish a solid foundation upon which to regulate and promote the use of learning analytics in the Norwegian educational sector, the Ministry asked the Expert Commission to investigate the relevant pedagogical, ethical, legal, and privacy issues. Addressing primary, secondary, higher, and vocational education, the interim report surveys the field of learning analytics and the regulatory environment across these contexts and analyzes the challenges and opportunities for Norwegian education. Four dilemmas — data, learning, governance, and competence — signal where greater knowledge, awareness, and reflection are needed, as well as the nature of necessary policy and regulatory choices. In this practical report, we offer insights on the use, development, and regulation of learning analytics in different countries, describe the Expert Commission's mandate, work method, and dilemmas, and conclude with a reflection on the relationship between research on learning analytics and the challenges that arise when implementing learning analytics in practice. This practical report is relevant for those interested in developing policies or practices surrounding the use of learning analytics at the local or national level.

Barbara Wasson, Michail Giannakos, Marte Blikstad-Balas, Per Henning Uppstad, Malcom Langford, Einar Duenger Bøhn
Copyright (c) 2024 Journal of Learning Analytics
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-07-25 | Vol. 11 No. 2 | pp. 268–280 | DOI: 10.18608/jla.2024.8241