
    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems: “sensing”, “analysis”, and “application”. Sensing covers the different sensing modalities used in existing real-time affective applications; Analysis explores approaches to emotion recognition and visualization based on the types of data collected; and Application examines the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.

    Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems

    Learning involves a range of cognitive, social, and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of individual students to provide personalized education. In this paper, we explore the automatic detection of learners’ nonverbal behaviors during learning, including hand-over-face gestures, head and eye movements, and emotions expressed via facial expressions. The proposed computer-vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises are divided into three difficulty levels (easy, medium, and difficult) within the context of undergraduate computer science. We found a significant increase in head and eye movements both as time progresses and as difficulty increases. We also demonstrate a considerable occurrence of hand-over-face gestures (21.35% on average) during the 40-minute session, a behavior previously unexplored in the education domain. We propose a novel deep learning approach for the automatic detection of hand-over-face gestures in images, achieving a classification accuracy of 86.87%; a sketch of such a detector is given below. Hand-over-face gestures increase prominently as the difficulty of the given exercise increases, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
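
    As a rough illustration of how a hand-over-face detector of this kind might be built (the abstract does not disclose the architecture; the transfer-learning backbone, input size, and head layers below are assumptions, not the authors' published model), here is a minimal Keras sketch of a binary image classifier:

```python
# Hypothetical sketch of a binary hand-over-face gesture classifier.
# The paper's exact architecture is not given in the abstract; the
# MobileNetV2 backbone, input size, and head layers are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gesture_classifier(input_shape=(224, 224, 3)):
    # Pre-trained backbone used as a frozen feature extractor
    # for webcam frames of the learner.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # could be fine-tuned after the head converges

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # hand-over-face vs. not
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

    Trained on labeled frames (gesture present / absent), such a model could be run per frame on the webcam feed described in the abstract; freezing the backbone keeps the trainable parameter count small, which suits the modest dataset sizes typical of classroom studies.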

    Face for Interface (revisited)

    Brief Report: Is Impaired Classification of Subtle Facial Expressions in Children with Autism Spectrum Disorders Related to Atypical Emotion Category Boundaries?

    Impairments in recognizing subtle facial expressions in individuals with autism spectrum disorder (ASD) may relate to difficulties in constructing prototypes of these expressions. Eighteen children with predominantly low-functioning ASD (LFA; IQ < 80) and two control groups (matched on mental and chronological age, respectively) were assessed on their ability to classify emotional faces of high, medium, and low intensity as happy or angry. For anger, the LFA group made more errors on lower-intensity expressions than the control groups; classifications did not differ for happiness. This is the first study to find that an LFA group made more across-valence errors than controls. These data are consistent with atypical facial expression processing in ASD being associated with differences in the structure of emotion categories.

    A study of facial expression recognition technologies on deaf adults and their children

    Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages, and can often significantly alter the meaning or interpretation of what is being communicated. Technologies that enable accurate recognition of ASL linguistic markers could be a step toward greater independence and empowerment for the Deaf community. This study involved gathering over 2,000 photographs of five hearing subjects, five Deaf subjects, and five Children of Deaf Adults (CODA) subjects. Each subject produced the six universal emotional facial expressions: sadness, happiness, surprise, anger, fear, and disgust. In addition, each Deaf and CODA subject produced six different ASL linguistic facial expressions. A representative set of 750 photos was submitted to six different emotional facial expression recognition services, and the results were processed and compared across facial expressions and subject groups (hearing, Deaf, CODA). Key observations from these results are presented. First, poor face detection rates are observed for Deaf subjects as compared to hearing and CODA subjects. Second, emotional facial expression recognition appears to be more accurate for Deaf and CODA subjects than for hearing subjects. Third, ASL linguistic markers, which are distinct from emotional expressions, are often misinterpreted as negative emotions by existing technologies. Possible implications of this misinterpretation are discussed, such as the problems that could arise for the Deaf community with increasing surveillance and use of automated facial analysis tools. Finally, an inclusive approach is suggested for incorporating ASL linguistic markers into existing facial expression recognition tools. Several considerations are given for constructing an unbiased database of the various ASL linguistic markers, including the types of subjects that should be photographed and the importance of including native ASL signers in the photo selection and classification process.
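
    To make the comparison step concrete, here is a minimal sketch of how per-group detection and recognition rates might be aggregated from the responses of a commercial FER service. The field names ("group", "face_detected", "emotion", "label") are hypothetical, standing in for whatever schema a given service returns:

```python
# Hypothetical aggregation of FER-service results across subject groups.
# Each record is assumed to hold the subject group, whether a face was
# detected, the service's top emotion, and the ground-truth label.
from collections import defaultdict

def group_rates(results):
    total = defaultdict(int)      # photos submitted per group
    detected = defaultdict(int)   # photos where a face was found
    correct = defaultdict(int)    # detected photos with the right emotion
    for r in results:
        g = r["group"]            # "hearing", "Deaf", or "CODA"
        total[g] += 1
        if r["face_detected"]:
            detected[g] += 1
            if r["emotion"] == r["label"]:
                correct[g] += 1
    return {
        g: {
            "detection_rate": detected[g] / total[g],
            "accuracy_when_detected":
                correct[g] / detected[g] if detected[g] else 0.0,
        }
        for g in total
    }
```

    Separating the detection rate from accuracy-given-detection matters here: the study's first and second observations (poorer face detection for Deaf subjects, but higher recognition accuracy once detected) would be conflated by a single end-to-end accuracy number.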

    The social brain: neural basis of social knowledge

    Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people: their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into processes that are relatively automatic and stimulus-driven, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, for which there is a recent wealth of data, primarily from functional neuroimaging. Here I provide a broad survey of the key abilities and processes, and of ways to relate these to data from cognitive neuroscience.

    Unleashing the Power of VGG16: Advancements in Facial Emotion Recognition

    In facial emotion detection, researchers are actively exploring effective methods to identify and understand facial expressions. This study introduces a mechanism for emotion identification using diverse facial photographs captured under varying lighting conditions. A carefully pre-processed dataset ensures data consistency and quality. Leveraging deep learning architectures, the study uses feature-extraction techniques to capture subtle emotive cues and builds an emotion classification model based on convolutional neural networks (CNNs). The proposed methodology achieves 97% accuracy on the validation set, outperforming previous methods in accuracy and robustness. Challenges such as lighting variation, head pose, and occlusion are acknowledged, and multimodal approaches incorporating additional modalities such as auditory or physiological data are suggested for further improvement. The outcomes of this research have wide-ranging implications for affective computing, human-computer interaction, and mental health diagnosis, advancing the field of facial emotion identification and paving the way for technology capable of understanding and responding to human emotions across diverse domains.
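
    The title names VGG16 as the backbone, but the abstract does not give the classifier head or training configuration. As an illustrative sketch under those assumptions (the head layers, seven-class emotion set, and hyperparameters below are guesses, not the authors' published setup), a VGG16-based emotion classifier in Keras might look like this:

```python
# Illustrative VGG16-based facial emotion classifier. Only the VGG16
# backbone is confirmed by the paper's title; the class count, head
# layers, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# Common FER class set (e.g., FER2013); the paper's own labels may differ.
NUM_EMOTIONS = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

def build_vgg16_emotion_model(input_shape=(224, 224, 3)):
    # ImageNet-pretrained VGG16, used here as a frozen feature extractor.
    base = tf.keras.applications.VGG16(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

    Unfreezing the last convolutional block of VGG16 for a second, low-learning-rate training pass is a common refinement when the emotion dataset is large enough to support fine-tuning.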