A review on data fusion in multimodal learning analytics and educational data mining
New educational models such as smart learning environments use digital and context-aware devices to facilitate the learning process. In this new educational scenario, a huge quantity of multimodal student data from a variety of different sources can be captured, fused, and analyzed. This offers researchers and educators a unique opportunity to discover new knowledge, better understand the learning process, and intervene if necessary. However, data fusion approaches and techniques must be applied correctly in order to combine the various sources of multimodal learning analytics (MLA). These sources or modalities in MLA include audio, video, electrodermal activity data, eye-tracking, user logs, and click-stream data, but also learning artifacts and more natural human signals such as gestures, gaze, speech, or writing. This survey introduces data fusion in learning analytics (LA) and educational data mining (EDM) and how these data fusion techniques have been applied in smart learning. It shows the current state of the art by reviewing the main publications, the main types of fused educational data, and the data fusion approaches and techniques used in EDM/LA, as well as the main open problems, trends, and challenges in this specific research area.
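Feature-level (early) fusion, one of the approaches surveyed, combines per-modality feature vectors into a single representation before analysis. The sketch below is a minimal, hypothetical baseline assuming z-score normalization and concatenation; the modality names and feature values are illustrative, not from the survey.

```python
import numpy as np

def early_fusion(modalities):
    """Concatenate per-modality feature vectors after z-score
    normalization, a common feature-level fusion baseline."""
    fused = []
    for features in modalities:
        f = np.asarray(features, dtype=float)
        std = f.std()
        # Normalize each modality so no single source dominates the
        # fused vector purely because of its numeric scale.
        f = (f - f.mean()) / std if std > 0 else f - f.mean()
        fused.append(f)
    return np.concatenate(fused)

# Hypothetical per-student features from three MLA modalities.
audio = [0.2, 0.9, 0.4]         # e.g. speech-activity statistics
gaze = [130.0, 95.0]            # e.g. fixation durations (ms)
clicks = [12.0, 3.0, 7.0, 1.0]  # e.g. click-stream event counts

vec = early_fusion([audio, gaze, clicks])
print(vec.shape)  # one fused feature vector: (9,)
```

Decision-level (late) fusion would instead train one model per modality and combine their predictions; which level works better depends on how correlated the modalities are.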
Benchmarking Robustness of AI-enabled Multi-sensor Fusion Systems: Challenges and Opportunities
Multi-Sensor Fusion (MSF) based perception systems have been the foundation in supporting many industrial applications and domains, such as self-driving cars, robotic arms, and unmanned aerial vehicles. Over the past few years, the fast progress in data-driven artificial intelligence (AI) has brought a fast-increasing trend to empower MSF systems by deep learning techniques to further improve performance, especially on intelligent systems and their perception systems. Although quite a few AI-enabled MSF perception systems and techniques have been proposed, up to the present, limited benchmarks that focus on MSF perception are publicly available. Given that many intelligent systems such as self-driving cars are operated in safety-critical contexts where perception systems play an important role, there comes an urgent need for a more in-depth understanding of the performance and reliability of these MSF systems. To bridge this gap, we initiate an early step in this direction and construct a public benchmark of AI-enabled MSF-based perception systems including three commonly adopted tasks (i.e., object detection, object tracking, and depth completion). Based on this, to comprehensively understand MSF systems' robustness and reliability, we design 14 common and realistic corruption patterns to synthesize large-scale corrupted datasets. We further perform a systematic evaluation of these systems through our large-scale evaluation. Our results reveal the vulnerability of the current AI-enabled MSF perception systems, calling for researchers and practitioners to take robustness and reliability into account when designing AI-enabled MSF.
Comment: Accepted by ESEC/FSE 202
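A corruption pattern in this kind of robustness benchmark is a parameterized transformation applied to clean sensor inputs at several severity levels. The sketch below is a minimal, hypothetical example of one such pattern (additive Gaussian noise on a normalized camera image); the severity scale values are assumptions, and the paper's 14 patterns span far more varied effects across camera and LiDAR inputs.

```python
import numpy as np

def gaussian_noise(signal, severity=1, seed=0):
    """Apply additive Gaussian noise at one of five severity levels,
    a simple example of a sensor-corruption pattern."""
    # Assumed noise scales per severity level (1..5); real benchmarks
    # calibrate these per sensor and per corruption type.
    scales = [0.02, 0.05, 0.10, 0.20, 0.40]
    rng = np.random.default_rng(seed)
    noisy = signal + rng.normal(0.0, scales[severity - 1], signal.shape)
    # Keep the corrupted signal in the valid normalized range.
    return np.clip(noisy, 0.0, 1.0)

# A toy "camera image" with all pixels at mid-gray, normalized to [0, 1].
image = np.full((4, 4), 0.5)
corrupted = gaussian_noise(image, severity=3)
print(corrupted.shape)  # (4, 4)
```

Evaluating a perception model on such synthesized corruptions, across all severity levels, gives a robustness curve rather than a single clean-data accuracy number.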