37 research outputs found

    Computerized Identification of Mental State for Mental Health Care

    Introduction Sebastian, a 30-year-old man, was just diagnosed with depression and is undergoing weekly cognitive-behavioral therapy (CBT) sessions. As he goes about his daily routine, he wears a device that measures his physiological activity. At the start of each CBT session, his therapist enters his physiological data for the week into a computer program. The program compiles Sebastian's levels of positive and negative affect and pinpoints the times when these reactions peaked. The therapist uses this information to track Sebastian's progress and to dynamically tailor the therapy, for example by asking Sebastian to recollect the experiences that correspond to some of the emotion peaks.
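    The computational step in this vignette, compiling an affect time series and pinpointing where it peaked, can be illustrated with a minimal sketch; the signal, sampling rate, and thresholds below are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical week of positive-affect scores in [0, 1], one sample per
# minute, e.g. derived from a wearable's physiological channels.
rng = np.random.default_rng(0)
positive_affect = np.clip(rng.normal(0.5, 0.15, size=7 * 24 * 60), 0.0, 1.0)

# Pinpoint the moments where positive affect peaked: local maxima that
# stand out by at least 0.2 and are at least 30 minutes apart.
peaks, _ = find_peaks(positive_affect, prominence=0.2, distance=30)

# A therapist-facing tool could surface these timestamps for recall.
for idx in peaks[:5]:
    day, minute = divmod(idx, 24 * 60)
    print(f"day {day}, minute {minute}: affect={positive_affect[idx]:.2f}")
```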

    A study on affect model validity: nominal vs ordinal labels

    The question of how to represent emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that a methodology of ordinal annotation and processing leads to better reliability and validity of the resulting model. This paper compares the generality of classification methods versus preference learning methods in predicting levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that treating affect labels as ordinal data and building models via preference learning yields models of better validity.
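    A minimal sketch of the preference-learning idea tested here: instead of classifying absolute arousal labels, every annotated pair is converted into a binary "which is higher?" example and a linear model is fit on feature differences (the standard pairwise transform). The toy data and model below are placeholders, not the paper's datasets or learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: feature vectors and ordinal arousal ranks for four clips.
X = np.array([[0.2, 1.1], [0.9, 0.3], [0.5, 0.8], [1.4, 0.1]])
ranks = np.array([1, 3, 2, 4])  # higher rank = higher arousal

# Pairwise transform: each ordered pair (i, j) with different ranks
# becomes a difference vector labelled by which item is more aroused.
pairs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if ranks[i] != ranks[j]:
            pairs.append(X[i] - X[j])
            labels.append(int(ranks[i] > ranks[j]))

# A linear model on the differences yields a ranking function whose
# ordering of items, not its absolute output, is what gets evaluated.
model = LogisticRegression().fit(np.array(pairs), np.array(labels))
scores = X @ model.coef_.ravel()
print(np.argsort(scores))  # recovers the rank order: 0, 2, 1, 3
```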

    Emotion Recognition based on Multimodal Information

    Recognising Complex Mental States from Naturalistic Human-Computer Interactions

    New advances in computer vision techniques will revolutionize the way we interact with computers, as they, together with other improvements, will help us build machines that understand us better. The face is the main non-verbal channel for human-human communication and contains valuable information about emotion, mood, and mental state. Affective computing researchers have widely investigated how facial expressions can be used for automatically recognizing affect and mental states. Nowadays, physiological signals can also be measured by video-based techniques and utilised for emotion detection. Physiological signals are an important indicator of internal feelings and are more robust against social masking. This thesis focuses on computer vision techniques to detect facial expressions and physiological changes for recognizing non-basic, natural emotions during human-computer interaction. It covers all stages of the research process, from data acquisition through integration to application. Most previous studies acquired data from prototypic basic emotions acted out under laboratory conditions. To evaluate the proposed method under more practical conditions, two different scenarios were used for data collection. In the first scenario, a set of controlled stimuli was used to trigger the user’s emotion. The second scenario aimed at capturing the more naturalistic emotions that might occur during a writing activity; here, the engagement level of the participants, along with other affective states, was the target of the system. For the first time, this thesis explores how video-based physiological measures can be used in affect detection. Video-based measurement of physiological signals is a new technique that needs further improvement before it can be used in practical applications. A machine learning approach is proposed and evaluated to improve the accuracy of heart rate (HR) measurement with an ordinary camera during naturalistic interaction with a computer.
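    A common baseline for camera-based heart-rate measurement, the kind of signal a machine-learning stage would then refine, is to band-pass filter the mean green-channel intensity of a face region and read off the dominant frequency. The sketch below is a generic illustration under assumed parameters (30 fps, a 0.7-4 Hz pass band), not the thesis's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(green_means, fps=30.0):
    """Estimate heart rate (bpm) from the per-frame mean green
    intensity of a face ROI, via band-pass filtering and an FFT peak."""
    x = green_means - np.mean(green_means)
    # Keep 0.7-4 Hz (42-240 bpm), where the pulse signal lives.
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))
    return 60.0 * freqs[np.argmax(spectrum)]

# Synthetic 10-second clip with a 72 bpm (1.2 Hz) pulse buried in noise.
t = np.arange(0, 10, 1 / 30.0)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.02, t.size)
print(f"estimated HR: {estimate_hr(trace):.1f} bpm")  # ~72
```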

    A study of the dynamic relation between physiological changes and spontaneous expressions

    Recent progress in Affective Computing (AC) has enabled the integration of physiological cues and spontaneous expressions to reveal a subject’s emotional state. Due to the lack of an effective technique for evaluating multimodal correlations, experience and intuition play a major role in current AC studies when fusing affective cues or modalities, sometimes resulting in unexpected outcomes. This study seeks to demonstrate a dynamic correlation between two such affective cues, physiological changes and spontaneous expressions, obtained through a combination of stereo-vision-based tracking and imaging photoplethysmography (iPPG), using a designed protocol involving 20 healthy subjects. The two cues were sampled into a Statistical Association Space (SAS) to evaluate their dynamic correlation. It is found that the probability densities in the SAS increase as the peaks in the two cues are approached. The complex form of the high-probability-density region in the SAS also suggests a nonlinear correlation between the two cues. Finally, the cumulative distribution on the zero time-difference surface is found to be small (<0.047), demonstrating a lack of simultaneity. These results show that the two cues have a close interrelation that is both asynchronous and nonlinear, in which a peak in one cue heralds a peak in the other.
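    The Statistical Association Space is the paper's own construct, but the underlying question, whether a peak in one cue leads or lags a peak in the other, can be illustrated with a plain lagged cross-correlation on synthetic signals; this sketch is not the paper's SAS method.

```python
import numpy as np

def lagged_correlation(a, b, max_lag):
    """Pearson correlation of b against a for each lag in
    [-max_lag, max_lag]; the best lag hints at lead/lag structure."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag < 0:
            r = np.corrcoef(a[:lag], b[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(a[lag:], b[:-lag])[0, 1]
        else:
            r = np.corrcoef(a, b)[0, 1]
        corrs.append(r)
    return lags, corrs

# Synthetic cues: the 'expression' peak trails 'physiology' by ~15 samples.
t = np.linspace(0, 20, 400)
physiology = np.exp(-((t - 8.0) ** 2))
expression = np.exp(-((t - 8.75) ** 2))
lags, corrs = lagged_correlation(physiology, expression, max_lag=40)
print("best lag:", lags[int(np.argmax(corrs))])  # nonzero (~-15) => asynchrony
```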

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking into account the driver's state, e.g., whether he or she is emotionally fit to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and safety for both drivers and passengers. We then present our view of a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver's state, and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.
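    The Driver Complex State is described only at the architecture level; a minimal sketch of the kind of late sensor fusion it implies might look like the following, where the sensor names, weights, and thresholds are all illustrative assumptions, not NextPerception design decisions.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    """Hypothetical per-modality scores in [0, 1] that the driver is
    impaired; the modalities mirror the non-obtrusive sensors reviewed."""
    camera_drowsiness: float      # vision-based eyelid/gaze analysis
    hr_stress: float              # heart-rate-derived stress estimate
    steering_erraticness: float   # vehicle-signal-based behaviour cue

def driver_complex_state(r: SensorReadings) -> str:
    """Late fusion: a weighted average of modality scores, with
    thresholds deciding what the HMI should do."""
    score = (0.40 * r.camera_drowsiness
             + 0.35 * r.hr_stress
             + 0.25 * r.steering_erraticness)
    if score > 0.7:
        return "intervene"   # e.g. suggest a break, tighten ADAS limits
    if score > 0.4:
        return "monitor"     # raise sampling rate, soft HMI warning
    return "nominal"

print(driver_complex_state(SensorReadings(0.9, 0.7, 0.6)))  # intervene
```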