The Big Five: Addressing Recurrent Multimodal Learning Data Challenges
The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe five recurrent challenges in the analysis of multimodal data: data collection, storage, annotation, processing and exploitation. For each of these challenges, we envision possible solutions. Prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation and feedback.
Multimodal Challenge: Analytics Beyond User-computer Interaction Data
This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus from learning situations that can be easily traced through user-computer interaction data towards user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The “multimodal” approach consists of combining learners’ motoric actions with physiological responses and data about the learning contexts. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storage, 2) data annotation, and 3) data processing and exploitation. Some of the research questions that will be considered in this Hackathon challenge are the following: How can raw sensor data streams be processed to extract relevant features? Which data mining and machine learning techniques can be applied? How can two action recordings be compared? How can sensor data be combined with the Experience API (xAPI)? What are meaningful visualisations for these data?
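As a starting point for two of these questions, the following is a minimal Python sketch, not a method from the paper: it extracts simple per-window features from a raw sensor stream and embeds them in an xAPI statement via a result extension. The accelerometer trace, the activity IRI and the extension key are hypothetical placeholders.

import json
import statistics

def window_features(samples, window_size=50):
    """Slice a raw 1-D sensor stream into fixed-size windows and
    compute a basic feature vector (mean, standard deviation) per window."""
    features = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        features.append({
            "mean": statistics.mean(window),
            "std": statistics.stdev(window),
        })
    return features

def to_xapi_statement(actor_email, features):
    """Wrap extracted sensor features in an xAPI statement, using a
    result extension as the carrier for the feature vectors."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        # Hypothetical activity IRI standing in for a practice-based task.
        "object": {"id": "http://example.org/activities/practice-task"},
        "result": {"extensions": {
            # Hypothetical extension key; xAPI extensions are keyed by IRI.
            "http://example.org/xapi/sensor-features": features
        }},
    }

# Example: a fake accelerometer trace standing in for a real recording.
stream = [0.1, 0.3, 0.2, 0.5] * 50
statement = to_xapi_statement("learner@example.org", window_features(stream))
print(json.dumps(statement, indent=2))

Carrying the feature vectors in a result extension keeps the multimodal evidence inside a standard xAPI statement, so an ordinary Learning Record Store can store it alongside conventional user-computer interaction data.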