
    Tracing physical movement during practice-based learning through Multimodal Learning Analytics

    In this paper, we pose the question: can the tracking and analysis of the physical movements of students and teachers within a Practice-Based Learning (PBL) environment reveal information about the learning process that is relevant and informative to Learning Analytics (LA) implementations? Using the example of trials conducted in the design of an LA system, we aim to show how the analysis of physical movement at a macro level can help to enrich our understanding of what is happening in the classroom. The results suggest that Multimodal Learning Analytics (MMLA) could be used to generate valuable information about the human factors of the collaborative learning process, and we propose how this information could assist in the provision of relevant support for small-group work. More research is needed to confirm the initial findings with larger sample sizes and to refine the data capture and analysis methodology to allow automation.

    Student Modeling and Analysis in Adaptive Instructional Systems

    There is a growing interest in developing and implementing adaptive instructional systems to improve, automate, and personalize student education. A necessary part of any such adaptive instructional system is a student model used to predict or analyze learner behavior and inform adaptation. To help inform researchers in this area, this paper presents a state-of-the-art review of 11 years of research (2010-2021) in student modeling, focusing on learner characteristics, learning indicators, and foundational aspects of dissimilar models. We mainly emphasize the increased prediction accuracy achieved when using multidimensional learner data to create multimodal models in real-world adaptive instructional systems. In addition, we discuss challenges inherent in real-world multimodal modeling, such as uncontrolled data collection environments leading to noisy data and data synchronization issues. Finally, we reinforce our findings and conclusions through an industry case study of an adaptive instructional system. In our study, we verify that adding multiple data modalities increases our model prediction accuracy from 53.3% to 69%. At the same time, the challenges encountered in our real-world case study, including an uncontrolled data collection environment with inevitably noisy data, call for synchronization and noise control strategies to ensure data quality and usability.

    Modelling collaborative problem-solving competence with transparent learning analytics: is video data enough?

    In this study, we describe the results of our research to model collaborative problem-solving (CPS) competence based on analytics generated from video data. We collected ~500 minutes of video data from 15 groups of three students working to solve design problems collaboratively. Initially, with the help of OpenPose, we automatically generated frequency metrics, such as the number of faces detected on screen, and distance metrics, such as the distance between bodies. Based on these metrics, we built decision trees to predict students' listening, watching, making, and speaking behaviours, as well as the students' CPS competence. Our results provide useful decision rules mined from analytics of video data which can be used to inform teacher dashboards. Although the accuracy and recall values of the models are inferior to previous machine learning work that utilizes multimodal data, the transparent nature of the decision trees provides opportunities for explainable analytics for teachers and learners. This can give teachers and learners more agency and therefore lead to easier adoption. We conclude the paper with a discussion of the value and limitations of our approach.
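The appeal of the transparent decision rules described above is that they read as plain if/else logic. A minimal sketch of the kind of rule a shallow decision tree over video-derived metrics might yield (the thresholds, metric names, and labels are illustrative assumptions, not the study's actual mined rules):

```python
def predict_behaviour(faces_on_screen, mean_body_distance_px):
    """Toy depth-2 decision rule over two hypothetical video metrics:
    the count of faces detected on screen and the mean pixel distance
    between tracked bodies. Labels mirror the behaviours in the abstract."""
    if faces_on_screen >= 2:
        # heads up and close together → likely talking to each other
        return "speaking" if mean_body_distance_px < 150 else "watching"
    # heads down: close together → building, far apart → listening
    return "making" if mean_body_distance_px < 150 else "listening"

print(predict_behaviour(3, 120))  # → speaking
print(predict_behaviour(1, 300))  # → listening
```

Unlike an opaque multimodal model, such a rule can be printed directly onto a teacher dashboard, which is the explainability trade-off the abstract argues for.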

    Meaningful Hand Gestures for Learning with Touch-based I.C.T.

    The role of technology in educational contexts is becoming increasingly ubiquitous, with very few students and teachers able to engage in classroom learning activities without using some sort of Information Communication Technology (ICT). Touch-based computing devices in particular, such as tablets and smartphones, provide an intuitive interface where control and manipulation of content is possible using hand and finger gestures such as taps, swipes and pinches. Whilst these touch-based technologies are being increasingly adopted for classroom use, little is known about how the use of such gestures can support learning. The purpose of this study was to investigate how finger gestures used on a touch-based device could support learning.

    Design-activity-sequence: A case study and polyphonic analysis of learning in a digital design thinking workshop

    In this case study, we report on the outcomes of a one-day workshop on design thinking attended by participants from the Computer-Supported Collaborative Learning conference in Philadelphia in 2017. We highlight the interactions between the workshop design, structured as a design thinking process around the design of a digital environment for design thinking, and the diverse backgrounds and interests of its participants. Data from in-workshop reflections and post-workshop interviews were analyzed using a novel set of analytical approaches, a combination the facilitators made possible by welcoming participants as co-researchers.

    Detecting Drowsy Learners at the Wheel of e-Learning Platforms with Multimodal Learning Analytics

    Learners are expected to stay wakeful and focused while interacting with e-learning platforms. Although the wakefulness of learners strongly relates to educational outcomes, detecting drowsy learning behaviors from log data alone is not an easy task. In this study, we describe the results of our research to model learners' wakefulness based on multimodal data generated from heart rate, seat pressure, and face recognition. We collected multimodal data from learners in a blended course on informatics and conducted two types of analysis on them. First, we clustered features based on learners' wakefulness labels as generated by human raters and ran a statistical analysis. This analysis helped us generate insights from multimodal data that can be used to inform learner and teacher feedback in multimodal learning analytics. Second, we trained machine learning models with multiclass Support Vector Machine (SVM), Random Forest (RF) and CatBoost Classifier (CatBoost) algorithms to recognize learners' wakefulness states automatically. We achieved an average macro-F1 score of 0.82 in automated user-dependent models with CatBoost. We also showed that, compared to unimodal data from each sensor, the multimodal sensor data can improve the accuracy of models predicting the wakefulness states of learners while they are interacting with e-learning platforms.
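The macro-F1 score reported above averages per-class F1 scores, so each wakefulness class counts equally regardless of how often it occurs. A minimal self-contained sketch of the metric (the three-class labels are illustrative, not the study's annotation scheme):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted
    mean over classes. Undefined precision/recall counts as 0."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# toy three-class wakefulness labels (hypothetical)
y_true = ["awake", "awake", "drowsy", "drowsy", "asleep", "asleep"]
y_pred = ["awake", "drowsy", "drowsy", "drowsy", "asleep", "awake"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.656
```

Macro averaging is a sensible choice for drowsiness detection, where the drowsy class is typically much rarer than the awake class and plain accuracy would overstate performance.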

    AI in Learning: Designing the Future

    AI (Artificial Intelligence) is predicted to radically change teaching and learning in both schools and industry, causing major disruption of work. AI can support well-being initiatives and lifelong learning, but educational institutions and companies need to take the changing technology into account. Moving towards AI supported by digital tools requires a dramatic shift in the concept of learning, expertise and the businesses built on them. Based on the latest research on AI and how it is changing learning and education, this book focuses on the enormous opportunities to expand educational settings with AI for learning in and beyond the traditional classroom. This open access book also introduces ethical challenges related to learning and education, while connecting human learning and machine learning. This book will be of use to a variety of readers, including researchers, AI users, companies and policy makers.

    The Multimodal Tutor: Adaptive Feedback from Multimodal Experiences

    This doctoral thesis describes the journey of ideation, prototyping and empirical testing of the Multimodal Tutor, a system designed to provide digital feedback that supports psychomotor skill acquisition using learning and multimodal data capture. The feedback is given in real time, based on machine-driven assessment of the learner's task execution. The predictions are tailored by supervised machine learning models trained with human-annotated samples. The main contributions of this thesis are: a literature survey on multimodal data for learning, a conceptual model (the Multimodal Learning Analytics Model), a technological framework (the Multimodal Pipeline), a data annotation tool (the Visual Inspection Tool) and a case study in Cardiopulmonary Resuscitation training (the CPR Tutor). The CPR Tutor generates real-time, adaptive feedback using kinematic and myographic data and neural networks.

    Fourteenth Biennial Status Report: March 2017 - February 2019
