
    Multi-modal fusion methods for robust emotion recognition using body-worn physiological sensors in mobile environments

    High-accuracy physiological emotion recognition typically requires participants to wear or attach obtrusive sensors (e.g., an electroencephalograph). To achieve precise emotion recognition using only unobtrusive, body-worn physiological sensors, my doctoral work focuses on researching and developing a robust fusion system for heterogeneous physiological sensors. Developing such a fusion system raises three problems: 1) how to pre-process signals with different temporal characteristics and noise models, 2) how to train the fusion system with limited labeled data, and 3) how to fuse multiple signals whose ground truth is inaccurate and inexact. To overcome these challenges, I plan to explore semi-supervised, weakly supervised, and unsupervised machine learning methods to achieve precise emotion recognition in mobile environments. Such techniques would make it possible to measure user engagement with far larger numbers of participants and to apply emotion recognition in a variety of scenarios, such as mobile video watching and online education.
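    To make the fusion idea concrete, below is a minimal sketch (in Python, assuming NumPy and scikit-learn) of one common approach, feature-level fusion: two synthetic streams with different sampling rates stand in for body-worn sensors, each is cut into windows of simple statistical features, and the concatenated feature vectors feed a single classifier. The sensor names, sampling rates, window length, and labels are illustrative assumptions, not the system described in the abstract.

        # Sketch only: feature-level fusion of two synthetic "sensor" streams.
        # Sensor names, sampling rates, window length, and labels are assumed
        # for illustration and are not taken from the abstract above.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def window_features(signal, rate_hz, window_s=5.0):
            # Cut a 1-D signal into fixed-length windows and summarize each
            # window with mean, standard deviation, and range.
            step = int(rate_hz * window_s)
            n_windows = len(signal) // step
            return np.array([[w.mean(), w.std(), w.max() - w.min()]
                             for w in (signal[i * step:(i + 1) * step]
                                       for i in range(n_windows))])

        # Synthetic stand-ins for two streams with different temporal
        # characteristics, e.g. electrodermal activity at 4 Hz and heart
        # rate at 1 Hz (assumed rates).
        minutes = 30
        eda = rng.normal(size=4 * 60 * minutes)       # 4 Hz stream
        hr = rng.normal(loc=70.0, size=60 * minutes)  # 1 Hz stream

        # Fusion step: align both streams on the same 5 s windows and
        # concatenate their per-window features into one vector.
        X = np.hstack([window_features(eda, 4), window_features(hr, 1)])
        y = rng.integers(0, 2, size=len(X))  # placeholder binary labels

        clf = RandomForestClassifier(random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on noise

    The usual alternative is late fusion, where each sensor gets its own classifier and only the predictions are combined; that variant degrades more gracefully when one stream drops out, which matters in mobile settings.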

    Cross-participant and cross-task classification of cognitive load based on eye tracking

    Cognitive load refers to the total amount of working memory resources a person is currently using. Successfully detecting the cognitive load a person is experiencing is the first important step towards applications that adapt to a user's current load. Provided that cognitive load is estimated correctly, a system can enhance the user's experience or increase its own efficiency by adapting to the detected load. Taking digital learning environments as an example, a learning environment could tune the difficulty of the presented exercises or material to match the learner's current load, neither underwhelming them nor overloading and frustrating them. Physiological sensors hold great promise for cognitive load estimation, as many physiological signals show distinctive signs of cognitive load. Eye tracking is an especially promising candidate: it requires no physical contact between sensor and user and is therefore unobtrusive. A major problem is the lack of general classifiers for cognitive load, as classifiers are usually specific to a single person and do not generalize well. For adaptive interfaces based on a user's cognitive load to be viable, a classifier is needed that is accurate and performs well independently of user and task. In this doctoral thesis, I present four studies that successively build upon each other, working towards an eye-tracking-based classifier for cognitive load that 1) is accurate, 2) is robust, 3) generalizes across users and tasks, and 4) operates in real time. Each of the presented studies advances the approach's capability to generalize one step further. Along the way, different eye-tracking features are explored and evaluated for their suitability as predictors of cognitive load, and the implications for the distinction between cognitive load and perceptual load are discussed. The resulting method demonstrates a degree of generalization that no other approach has achieved and combines it with low hardware requirements and high robustness, making it highly promising for future applications. Overall, the results presented in this thesis may serve as a foundation for the use of eye tracking in adaptive interfaces that react to a user's cognitive load.
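    The cross-participant requirement described above is typically evaluated with leave-one-participant-out cross-validation: train on every participant except one, test on the held-out participant, and repeat. The sketch below (Python with scikit-learn; the eye-tracking features and data are synthetic placeholders, not the thesis's actual feature set) illustrates that protocol, including fitting the feature scaler inside the pipeline so the held-out participant's statistics do not leak into training.

        # Sketch only: leave-one-participant-out evaluation on synthetic
        # eye-tracking features. Feature names, participant counts, and
        # labels are assumptions for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_participants, trials_each = 12, 40
        n = n_participants * trials_each

        # Hypothetical per-trial features: mean pupil diameter, mean
        # fixation duration, saccade rate (all synthetic here).
        X = rng.normal(size=(n, 3))
        y = rng.integers(0, 2, size=n)  # low vs. high load (placeholder)
        groups = np.repeat(np.arange(n_participants), trials_each)

        # Scaling inside the pipeline is refit on each training fold, so
        # the held-out participant's statistics never leak into training.
        model = make_pipeline(StandardScaler(), LogisticRegression())
        scores = cross_val_score(model, X, y, groups=groups,
                                 cv=LeaveOneGroupOut())
        print(scores.mean())  # ~chance on random data

    The per-fold scores also show which held-out participants the model fails on; reducing that spread, not just the mean error, is what a cross-participant classifier has to achieve.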

    On the Use of Large Interactive Displays to Support Collaborative Engagement and Visual Exploratory Tasks

    Large interactive displays can provide suitable workspaces for learners conducting collaborative learning tasks with visual information in co-located settings. In this research, we explored the use of such displays to support collaborative engagement and exploratory tasks with visual representations. Our investigation examined the effect of four factors (number of virtual workspaces within the display, number of displays, position arrangement of the collaborators, and collaborative mode of interaction) on learners' knowledge acquisition, engagement level, and task performance. To this end, a user study was conducted with 72 participants divided into 6 groups, using an interactive tool developed to support the collaborative exploration of 3D visual structures. The results showed that learners with one shared workspace on a single display achieved better task performance and higher engagement levels. In addition, the back-to-back arrangement, with learners sharing view and control of the workspace, was the most favored and led to improved learning outcomes and engagement levels during collaboration.