Predictive learning analytics in online education: A deeper understanding through explaining algorithmic errors
Existing Predictive Learning Analytics (PLA) systems utilising machine learning models show they can improve teacher practice and, at the same time, student outcomes. The accuracy, and related errors, of these systems can negatively influence their adoption. However, little effort has been made to investigate the errors made by the underlying models. This study focused on the errors of models predicting students at risk of not submitting their assignments. We analysed two groups of errors made when the model was confident about its prediction: (a) students predicted to submit their assignment, yet they did not (False Negative; FN), and (b) students predicted not to submit their assignment, yet they did (False Positive; FP). We followed the principles of thematic analysis to analyse interview data from 27 students whose predictions presented FN or FP errors. Findings revealed the significance of unexpected events occurring during studies that can affect students' behaviour and cannot be foreseen and accounted for in PLA, such as changes in family and work responsibilities, unexpected health issues and computer problems. Interview data helped identify new data sources, such as study loan application information, which could be integrated into predictions to mitigate some of the errors. Other sources, e.g. capturing student knowledge at the start of the course, would require changes in the learning design of courses. Our insights showcase the importance of complementing AI-based systems with human intelligence. In our case, this came both from the interviewed students providing insights and from potential users of these systems, e.g. teachers, who are aware of contextual factors invisible to ML algorithms. We discuss the implications for improving predictions, learning design and teacher training in using PLA in practice.
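The two error groups the abstract describes can be made concrete with a short sketch. This is an illustrative assumption, not the study's actual model or threshold: it takes a model's probability that a student submits and isolates only the confident mispredictions, mirroring the FN/FP split above.

```python
# Hypothetical sketch of the FN/FP split described in the abstract.
# The 0.8 confidence threshold and the data format are illustrative
# assumptions, not taken from the study.

def confident_errors(predictions, confidence=0.8):
    """Split confident mispredictions into FN and FP groups.

    `predictions` is a list of (p_submit, submitted) pairs, where
    p_submit is the model's probability that the student submits
    and `submitted` is the observed outcome.
    """
    fn, fp = [], []
    for p_submit, submitted in predictions:
        if p_submit >= confidence and not submitted:
            fn.append((p_submit, submitted))   # predicted submit, did not
        elif p_submit <= 1 - confidence and submitted:
            fp.append((p_submit, submitted))   # predicted no-submit, but did
    return fn, fp

sample = [(0.95, False), (0.10, True), (0.60, False), (0.05, False)]
fn, fp = confident_errors(sample)
print(len(fn), len(fp))  # 1 confident FN, 1 confident FP
```

Note that uncertain predictions (e.g. 0.60) and confident correct predictions fall into neither group, which is exactly why the study could interview only students behind confident errors.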
Relevance of learning analytics to measure and support students' learning in adaptive educational technologies
In this poster, we describe the aim and current activities of the EARLI-Centre for Innovative Research (E-CIR) "Measuring and Supporting Student's Self-Regulated Learning in Adaptive Educational Technologies", which is funded by the European Association for Research on Learning and Instruction (EARLI) from 2015 to 2019. The aim is to develop our understanding of multimodal data that unobtrusively capture cognitive, meta-cognitive, affective and motivational states of learners over time. This calls for a concerted interdisciplinary dialogue combining findings from psychology and the educational sciences with advances in computer science and artificial intelligence. The participants in this E-CIR are leading international researchers who have articulated different emerging perspectives and methodologies to measure cognition, metacognition, motivation, and emotions during learning. The participants recognize the need for intensive collaboration to accelerate progress with new interdisciplinary methods, including learning analytics, to develop more powerful adaptive educational technologies.
LAK '17: Seventh International Learning Analytics & Knowledge Conference (Vancouver, British Columbia, Canada, March 13-17, 2017).
A Prediction-Based Framework to Reduce Procrastination in Adaptive Learning Systems
Procrastination and other types of dilatory behaviour are common in online learning, especially in higher education. While procrastination is associated with worse performance and discomfort, positive forms of delay can be used as a deliberate strategy without any such consequences. Although dilatory behaviour has received attention in research, it has, to my knowledge, never been included as an integral part of an adaptive learning system. Differentiating between different types of delay within such a system would allow tailored interventions to be provided in the future without alienating students who use delay as a successful strategy. In this thesis, I present four studies that provide the basis for such an endeavour. I first discuss the results of two studies that focussed on predicting the extent of dilatory behaviour in online assignments. The results of both studies revealed an advantage of objective predictors based on log data over subjective variables based on questionnaires. The predictive performance slightly improved when both sets of predictors were combined. In one of these studies, we implemented Bayesian multilevel models, while the other aimed at comparing various machine learning algorithms to determine the best candidates for future inclusion in real-time predictive models. The results revealed that the most suitable algorithm depended on the type of predictor, implying that multiple models should be implemented in the field, rather than selecting just one. I then present a framework for an adaptive learning system based on the other two studies, where I highlight how dilatory behaviour can be incorporated into such a system in light of the previously discussed results. I conclude this thesis by providing an outlook on the necessary next steps before an adaptive learning system focussing on delay can be established.
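The comparison of objective (log-based) and subjective (questionnaire-based) predictor sets can be sketched in a few lines. Everything here is an illustrative assumption rather than the thesis's actual data or models: a made-up dataset, a 1-nearest-neighbour stand-in for the algorithms compared, and mean absolute error as the metric. The point is only the protocol, i.e. the same model fitted to each predictor set and the errors compared.

```python
# Illustrative evaluation protocol: fit the same simple model on two
# predictor sets and compare test errors. All data and the model
# choice are made up for illustration.

def mae(pred, true):
    """Mean absolute error between predicted and observed values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def one_nn_predict(train_x, train_y, test_x):
    """Predict each test point from its single closest training point."""
    preds = []
    for x in test_x:
        idx = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
        preds.append(train_y[idx])
    return preds

# outcome: how many days late each student submitted (hypothetical)
delays = [0.5, 2.0, 0.0, 3.5, 1.0, 4.0]
objective = [1, 4, 0, 7, 2, 8]      # e.g. late logins from log data
subjective = [3, 3, 2, 5, 2, 4]     # e.g. a procrastination scale score

for name, xs in [("objective", objective), ("subjective", subjective)]:
    preds = one_nn_predict(xs[:4], delays[:4], xs[4:])
    print(name, round(mae(preds, delays[4:]), 2))
```

In a real study each predictor set would be run through several algorithms with proper cross-validation; this sketch only shows the shape of the comparison.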
The Big Five: Addressing Recurrent Multimodal Learning Data Challenges
The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe and outline a solution for five recurrent challenges in the analysis of multimodal data: data collection, storing, annotation, processing and exploitation. For each of these challenges, we envision possible solutions. The prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation and feedback.
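A minimal sketch of what a standardised multimodal record might look like, touching the storing, annotation and exploitation challenges named above. The field names and structure are illustrative assumptions, not the paper's proposed solution.

```python
# Hypothetical record structure for timestamped multimodal samples.
# Field names are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class MultimodalRecord:
    timestamp: float                 # seconds since session start
    modality: str                    # e.g. "accelerometer", "heart_rate"
    values: list                     # raw readings for this sample
    annotations: dict = field(default_factory=dict)  # human-added labels

session = [
    MultimodalRecord(0.0, "accelerometer", [0.1, 0.0, 9.8]),
    MultimodalRecord(0.0, "heart_rate", [72]),
]
session[0].annotations["action"] = "writing"   # the annotation step

# exploitation step: select one modality for downstream processing
hr = [r for r in session if r.modality == "heart_rate"]
print(len(hr))  # 1
```

A shared schema of this kind is one way to make samples from heterogeneous sensors comparable before processing.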
Multimodal Challenge: Analytics Beyond User-computer Interaction Data
This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus from learning situations that can be easily traced through user-computer interaction data and concentrating more on user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The "multimodal" approach consists in combining learners' motoric actions with physiological responses and data about the learning contexts. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storing, 2) data annotation, and 3) data processing and exploitation. Some research questions that will be considered in this Hackathon challenge are the following: How do we process the raw sensor data streams and extract relevant features? Which data mining and machine learning techniques can be applied? How can we compare two action recordings? How do we combine sensor data with the Experience API (xAPI)? What are meaningful visualisations for these data?
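One hedged answer to the first research question, how to process raw sensor streams and extract relevant features, is sliding-window summary statistics, a common first step before applying data mining techniques. The window size and the made-up accelerometer readings below are assumptions for illustration.

```python
# Illustrative feature extraction from a raw sensor stream:
# non-overlapping windows reduced to (mean, range) features.
# Window size and data are made-up assumptions.

def window_features(stream, size=4):
    """Return one (mean, range) feature pair per non-overlapping window."""
    feats = []
    for i in range(0, len(stream) - size + 1, size):
        w = stream[i:i + size]
        feats.append((sum(w) / size, max(w) - min(w)))
    return feats

accel = [0.1, 0.2, 0.1, 0.2, 1.5, 1.7, 1.6, 1.4]  # hypothetical readings
print(window_features(accel))
```

Feature vectors of this kind can then feed standard machine learning pipelines, or be attached as context to xAPI statements describing the learner's activity.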