
    An Exploration of User Engagement in HCI

    Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively studied across a diverse set of domains, the term has been used to refer to a number of related but different concepts. This paper is a first attempt at disentangling the important concepts the term has been used to denote, which are relevant to modelling both human-human and human-machine interaction.

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals from human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy in determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
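
    A minimal sketch of the decision-level (late) fusion idea described above, assuming per-modality feature arrays for posture, visual attention and emotion plus binary expertise labels have already been extracted; the feature contents, the random-forest classifiers and the score-averaging rule are illustrative assumptions, not the authors' implementation.

        # Hypothetical late-fusion sketch: score each modality separately, then
        # average the per-modality scores before making the final decision.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n = 200
        posture_X = rng.normal(size=(n, 10))   # placeholder posture features
        gaze_X = rng.normal(size=(n, 6))       # placeholder visual-attention features
        emotion_X = rng.normal(size=(n, 8))    # placeholder facial-emotion features
        y = rng.integers(0, 2, size=n)         # expert vs. novice label

        def modality_scores(X, y):
            """Cross-validated class-1 probabilities from a single modality."""
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

        scores = [modality_scores(X, y) for X in (posture_X, gaze_X, emotion_X)]

        # Unimodal: threshold each modality's score on its own.
        for name, s in zip(("posture", "gaze", "emotion"), scores):
            print(name, accuracy_score(y, (s > 0.5).astype(int)))

        # Multimodal: average the scores across modalities, then threshold.
        fused = np.mean(scores, axis=0)
        print("fused", accuracy_score(y, (fused > 0.5).astype(int)))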

    Multi-modal fusion methods for robust emotion recognition using body-worn physiological sensors in mobile environments

    High-accuracy physiological emotion recognition typically requires participants to wear or attach obtrusive sensors (e.g., an electroencephalograph). To achieve precise emotion recognition using only wearable body-worn physiological sensors, my doctoral work focuses on researching and developing a robust sensor fusion system across different physiological sensors. Developing such a fusion system raises three problems: 1) how to pre-process signals with different temporal characteristics and noise models, 2) how to train the fusion system with limited labeled data, and 3) how to fuse multiple signals with inaccurate and inexact ground truth. To overcome these challenges, I plan to explore semi-supervised, weakly supervised and unsupervised machine learning methods to obtain precise emotion recognition in mobile environments. By developing such techniques, we can measure user engagement with larger numbers of participants and apply the emotion recognition techniques in a variety of scenarios such as mobile video watching and online education.
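
    A minimal sketch of the first of the three problems listed above, aligning signals recorded at different sampling rates onto fixed-length window features that can be concatenated for fusion; the sensor names, sampling rates, window length and FFT-based resampling are assumptions for illustration, not the proposed system.

        # Hypothetical pre-processing sketch: align physiological signals sampled
        # at different rates onto a common window-level feature representation.
        import numpy as np
        from scipy.signal import resample

        def window_features(signal, fs, window_s=5.0, target_len=64):
            """Split a 1-D signal into windows and resample each to target_len."""
            samples_per_window = int(fs * window_s)
            n_windows = len(signal) // samples_per_window
            windows = signal[: n_windows * samples_per_window].reshape(n_windows, -1)
            return np.stack([resample(w, target_len) for w in windows])

        fs_eda, fs_ppg = 4, 64                 # e.g. electrodermal activity vs. pulse
        eda = np.random.randn(fs_eda * 60)     # one minute of placeholder EDA
        ppg = np.random.randn(fs_ppg * 60)     # one minute of placeholder PPG

        eda_w = window_features(eda, fs_eda)
        ppg_w = window_features(ppg, fs_ppg)

        # Feature-level fusion: concatenate the per-window representations.
        fused = np.concatenate([eda_w, ppg_w], axis=1)
        print(fused.shape)                     # (n_windows, 2 * target_len)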

    Emotion research by the people, for the people

    Emotion research will leap forward when its focus changes from comparing averaged statistics of self-report data across people experiencing emotion in laboratories to characterizing patterns of data from individuals and clusters of similar individuals experiencing emotion in real life. Such an advance will come about through engineers and psychologists collaborating to create new ways for people to measure, share, analyze, and learn from objective emotional responses in situations that truly matter to people. This approach has the power to greatly advance the science of emotion while also providing personalized help to participants in the research.

    Acume: A New Visualization Tool for Understanding Facial Expression and Gesture Data

    Facial and head actions contain significant affective information. To date, these actions have mostly been studied in isolation because the space of naturalistic combinations is vast. Interactive visualization tools could enable new explorations of dynamically changing combinations of actions as people interact with natural stimuli. This paper describes a new open-source tool that enables navigation of and interaction with dynamic face and gesture data across large groups of people, making it easy to see when multiple facial actions co-occur and how these patterns compare and cluster across groups of participants. We share two case studies that demonstrate how the tool allows researchers to quickly view an entire corpus of data for single or multiple participants, stimuli and actions. Acume revealed patterns of actions across participants and across stimuli, and gave insight into how our automated facial analysis methods could be better designed. The results of these case studies demonstrate the efficacy of the tool. The open-source code is designed to directly address the needs of the face and gesture research community, while also being extensible and flexible enough to accommodate other kinds of behavioral data. Source code, application and documentation are available at http://affect.media.mit.edu/acume.
    Procter & Gamble Company
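
    A minimal sketch of the kind of co-occurrence summary such a tool exposes, assuming frame-level binary activations of facial and head actions are already annotated; the action names, frame count and data layout are hypothetical and not drawn from Acume's actual data format.

        # Hypothetical sketch: count how often each pair of facial/head actions
        # is active in the same frame, across all frames of a corpus.
        import numpy as np

        actions = ["smile", "brow_raise", "head_nod", "lip_press"]   # made-up labels
        rng = np.random.default_rng(1)
        frames = rng.integers(0, 2, size=(1000, len(actions)))       # frames x actions (0/1)

        # Entry (i, j) is the number of frames where actions i and j co-occur.
        cooc = frames.T @ frames

        for i, a in enumerate(actions):
            for j, b in enumerate(actions):
                if i < j:
                    print(f"{a} & {b}: {cooc[i, j]} frames")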

    Automatic prediction of consistency among team members' understanding of group decisions in meetings

    Occasionally, participants in a meeting can leave with different understandings of what was discussed. For meetings that require immediate response (such as disaster response planning), the participants must share a common understanding of the decisions reached by the group to ensure successful execution of their mission. In such domains, inconsistency among individuals' understanding of the meeting results would be detrimental, as this can potentially degrade group performance. Thus, detecting the occurrence of inconsistencies in understanding among meeting participants is a desired capability for an intelligent system that would monitor meetings and provide feedback to spur stronger group understanding. In this paper, we seek to predict the consistency among team members' understanding of group decisions. We use self-reported summaries as a representative measure of team members' understanding following meetings, and present a computational model that uses a set of verbal and nonverbal features from natural dialogue. This model focuses on the conversational dynamics between the participants, rather than on what is being discussed. We apply our model to a real-world conversational dataset and show that its features can predict group consistency with greater accuracy than conventional dialogue features. We also show that the combination of verbal and nonverbal features in multimodal fusion improves several performance metrics, and that our results are consistent across different meeting phases.
    National Science Foundation (U.S.). Graduate Research Fellowship Program (2012150705)
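
    A minimal sketch of the fusion comparison described above: concatenating per-meeting verbal and nonverbal feature vectors and checking whether the combination predicts a consistency label better than either modality alone; the feature contents, the consistency labels and the logistic-regression classifier are illustrative assumptions, not the paper's model.

        # Hypothetical sketch of verbal + nonverbal feature fusion for predicting
        # whether a meeting's participants left with a consistent understanding.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_meetings = 120
        verbal = rng.normal(size=(n_meetings, 12))       # e.g. dialogue-act statistics
        nonverbal = rng.normal(size=(n_meetings, 8))     # e.g. turn-taking / overlap stats
        consistent = rng.integers(0, 2, size=n_meetings) # 1 = consistent understanding

        clf = LogisticRegression(max_iter=1000)
        for name, X in (("verbal", verbal),
                        ("nonverbal", nonverbal),
                        ("fused", np.hstack([verbal, nonverbal]))):
            acc = cross_val_score(clf, X, consistent, cv=5).mean()
            print(name, round(acc, 3))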

    Real-time Emotional State Detection from Facial Expression on Embedded Devices

    Over the last decade, research on human facial emotion recognition has shown that computational models built on regression modelling can produce applicable performance. However, many systems require extensive computing power to run, which prevents their wide application in platforms such as robots and smart devices. In the proposed system, a real-time automatic facial expression system was designed, implemented and tested on an embedded device (an FPGA), which can be a first step towards a dedicated facial expression recognition chip for a social robot. The system was first built and simulated in MATLAB and then implemented on the FPGA, where it carries out continuous real-time emotional state recognition at 30 fps with 47.44% accuracy. The proposed graphical user interface displays the participant video together with the two-dimensional predicted emotion labels in real time.
    The research presented in this paper was supported partially by the Slovak Research and Development Agency under the research projects APVV-15-0517 & APPV-15-0731 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the project VEGA 1/0075/15.
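
    A minimal sketch of the per-frame pipeline such a system runs: grab a frame, extract a feature vector, regress a two-dimensional emotion label, and pace the loop near 30 fps; the feature extractor, the regression weights and the simulated frames are placeholders, not the paper's FPGA implementation.

        # Hypothetical per-frame loop: features -> linear regression -> 2-D emotion label.
        import time
        import numpy as np

        FPS = 30
        W = np.random.randn(2, 128)      # placeholder regression weights (2-D output)
        b = np.zeros(2)

        def extract_features(frame):
            """Stand-in for a real facial-feature extractor (e.g. landmark geometry)."""
            return frame.reshape(-1)[:128]

        def predict_emotion(frame):
            x = extract_features(frame)
            return W @ x + b             # regression onto the 2-D emotion space

        for _ in range(90):              # ~3 seconds of simulated video
            frame = np.random.rand(16, 16)          # placeholder for a camera frame
            valence, arousal = predict_emotion(frame)
            time.sleep(1.0 / FPS)        # keep the loop near 30 frames per second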