
    New Approach of Estimating Sarcasm Based on the Percentage of Happiness of Facial Expression Using Fuzzy Inference System

    Detecting whether micro-expressions are present is given high priority in most settings because, despite a person's best efforts, these expressions expose the genuine sentiments buried beneath the surface. The purpose of this study is to provide a novel approach to measuring sarcasm using a fuzzy inference system. The method analyses a person's facial expressions to evaluate the degree to which they are taking pleasure in something. Five separate regions of the face can be distinguished, and precise active distances can be computed from the outline points of each region: the brows on both sides of the face, the eyes, and the lips. Within the proposed fuzzy inference system, membership functions are first applied to the computed distances to represent the individual's degree of happiness. The outputs of these membership functions are then fed into a further membership function to estimate the sarcasm percentage. The suggested method is validated on facial images from the standard SMIC, SAMM, and CAS(ME)² datasets, which helps to confirm its effectiveness.
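
    A minimal sketch of the kind of two-stage fuzzy inference the abstract describes, written in plain Python: triangular membership functions fuzzify three normalized facial distances into a happiness degree, which a second membership function maps to a sarcasm percentage. All breakpoints, the rule choices, and the averaging step are illustrative assumptions; the paper's actual membership functions and rule base are not reproduced here.

        # Illustrative two-stage fuzzy inference for a "sarcasm from happiness"
        # estimate, loosely following the abstract. All membership-function
        # breakpoints below are invented for demonstration.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def happiness_degree(brow_dist, eye_dist, lip_dist):
            """Stage 1: fuzzify normalized facial distances (0..1) into a
            happiness degree by averaging each region's 'happy' membership."""
            memberships = [
                tri(brow_dist, 0.2, 0.6, 1.0),  # raised brows -> happier (assumed)
                tri(eye_dist,  0.1, 0.5, 0.9),  # narrowed eyes -> smiling (assumed)
                tri(lip_dist,  0.3, 0.8, 1.0),  # stretched lips -> smiling (assumed)
            ]
            return sum(memberships) / len(memberships)

        def sarcasm_percentage(happiness):
            """Stage 2: feed the happiness degree into a further membership
            function; here, moderate 'posed' happiness maps to high sarcasm."""
            return 100.0 * tri(happiness, 0.2, 0.55, 0.9)

        if __name__ == "__main__":
            h = happiness_degree(brow_dist=0.55, eye_dist=0.5, lip_dist=0.7)
            print(f"happiness={h:.2f}, sarcasm={sarcasm_percentage(h):.1f}%")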

    Investigation of Methods to Create Future Multimodal Emotional Data for Robot Interactions in Patients with Schizophrenia: A Case Study

    Rapid progress in humanoid-robot research offers possibilities for improving the competencies of people with social disorders, yet this potential remains unexplored for people with schizophrenia. Methods for creating future multimodal emotional data for robot interactions were studied in this case study of a 40-year-old male patient with disorganized schizophrenia without comorbidities. The data included heart rate variability (HRV), video-audio recordings, and field notes. HRV, a Haar cascade classifier (HCC), and the Empath API© were evaluated during conversations between the patient and the robot; two expert nurses and one psychiatrist evaluated the facial expressions. The research hypothesis asked whether HRV, HCC, and the Empath API© are useful for creating future multimodal emotional data about robot-patient interactions. The HRV analysis showed persistent sympathetic dominance, consistent with the human-robot conversational situation. When the experts reached rough consensus, the HCC result agreed with their observations; when the experts disagreed, the HCC result also diverged. The experts' emotional assessments made with the Empath API© were likewise inconsistent. We believe that with further investigation, a clearer identification of methods for multimodal emotional data for robot interactions can be achieved for patients with schizophrenia.
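
    As a first ingredient of such a pipeline, a Haar cascade classifier can locate the face in each video frame before any expression analysis. The sketch below uses OpenCV's bundled frontal-face model; the study's actual expression classification and its Empath API© calls are not reproduced here, and the video filename is hypothetical.

        # Minimal OpenCV Haar-cascade face detection of the kind the case
        # study's HCC step would need; expression classification and HRV
        # alignment are only indicated in comments.
        import cv2

        # Bundled frontal-face model shipped with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )

        cap = cv2.VideoCapture("patient_robot_session.mp4")  # hypothetical file
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                # Each detected face region could be passed to an expression
                # classifier and time-aligned with the HRV stream.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cap.release()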

    On the effectiveness of facial expression recognition for evaluation of urban sound perception

    Sound perception studies mostly depend on questionnaires with fixed indicators, so it is desirable to explore methods with dynamic outputs. The present study explores the effects of sound perception in the urban environment on facial expressions using FaceReader, a software package based on facial expression recognition (FER). The experiment involved three typical urban sound recordings: traffic noise, natural sound, and community sound. A questionnaire on the evaluation of sound perception was also used for comparison. The results show that, first, FER is an effective tool for sound perception research, since it detects differences in participants' reactions to different sounds and how their facial expressions change over time in response to those sounds, with mean differences in valence between recordings ranging from 0.019 to 0.059 (p < 0.05 or p < 0.01). In the natural sound environment, for example, valence increased by 0.04 in the first 15 s and then declined steadily by 0.004 every 20 s. Second, the expression indices happy, sad, and surprised change significantly under the effect of sound perception. In the traffic sound environment, for example, happy decreased by 0.012, sad increased by 0.032, and surprised decreased by 0.018. Furthermore, social characteristics such as distance from living place to natural environment (r = 0.313), inclination to communicate (r = 0.253), and preference for crowds (r = 0.296) affect facial expression. Finally, the comparison of FER and questionnaire survey results showed that for the traffic noise recording, valence in the first 20 s best represents acoustic comfort and eventfulness; for natural sound, valence in the first 40 s best represents pleasantness; and for community sound, valence in the first 20 s best represents acoustic comfort, subjective loudness, and calmness.
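
    The per-recording valence comparison could be reproduced along these lines, assuming each participant's mean valence per recording has been exported to a CSV; the column names, the recording labels, and the file layout are assumptions, not FaceReader's actual export format.

        # Sketch of the kind of comparison reported in the abstract: mean
        # valence differences between two sound recordings, tested within
        # participants with a paired t-test.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("valence_by_participant.csv")  # hypothetical export
        # Expected columns (assumed): participant, recording, mean_valence,
        # with recording labels "traffic", "natural", "community".
        wide = df.pivot(index="participant", columns="recording",
                        values="mean_valence")

        t, p = stats.ttest_rel(wide["traffic"], wide["natural"])
        diff = (wide["natural"] - wide["traffic"]).mean()
        print(f"mean valence difference (natural - traffic) = {diff:.3f}, p = {p:.3f}")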

    USING 3D FEATURE POINTS IN VIDEO IMAGES: RELATIONSHIP BETWEEN BASIC EMOTIONS AND FACIAL EXPRESSIONS OF JAPANESE PEOPLE

    With the recent increase in remote meetings and other forms of communication, nonverbal communication that reads the emotions of others through facial expressions and gestures is becoming increasingly important. It is generally accepted that emotions and facial expressions are related, but most research has focused on Western facial expressions. We measure and analyze data on the assumption that the emotions and facial expressions of Japanese people cannot simply be mapped onto those findings. Test data for facial expressions were recorded as videos of each subject's specific emotional expressions using MediaPipe, which can capture 3D feature points. The correlation between basic emotions and facial expressions was analyzed by detecting change points in the recorded 3D feature points.
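
    A sketch of the measurement step, assuming MediaPipe's FaceMesh solution: 3D landmarks are collected per frame, and candidate change points are flagged where mean landmark displacement spikes. The study's actual change-point method is not specified, so the thresholding below is only illustrative, and the clip filename is hypothetical.

        # Collect 3D facial feature points per frame with MediaPipe FaceMesh,
        # then flag frames where the face moves sharply as candidate
        # expression-change points.
        import cv2
        import mediapipe as mp
        import numpy as np

        mp_face_mesh = mp.solutions.face_mesh
        cap = cv2.VideoCapture("expression_clip.mp4")  # hypothetical recording

        frames = []
        with mp_face_mesh.FaceMesh(static_image_mode=False,
                                   max_num_faces=1) as face_mesh:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if results.multi_face_landmarks:
                    lm = results.multi_face_landmarks[0].landmark
                    frames.append([(p.x, p.y, p.z) for p in lm])
        cap.release()

        pts = np.asarray(frames)                      # (n_frames, 468, 3)
        # Mean per-landmark displacement between consecutive frames.
        motion = np.linalg.norm(np.diff(pts, axis=0), axis=2).mean(axis=1)
        # Naive change-point rule: displacement > mean + 2 SD (assumed).
        change_points = np.where(motion > motion.mean() + 2 * motion.std())[0]
        print("candidate expression-change frames:", change_points)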

    Brief research report: autistic traits modulate the rapid detection of punishment-associated neutral faces

    Speedy detection of faces with emotional value plays a fundamental role in social interactions. A few previous studies using a visual search paradigm have reported that individuals with high autistic traits (ATs), who are characterized by deficits in social interactions, show decreased detection performance for emotional facial expressions. However, whether ATs modulate the rapid detection of faces with emotional value remains inconclusive, because emotional facial expressions involve salient visual features (e.g., the U-shaped mouth of a happy expression) that can facilitate visual attention. To disentangle the effects of such visual factors from the rapid detection of emotional faces, we examined the rapid detection of neutral faces associated with emotional value among young adults with varying degrees of ATs in a visual search task. In the experiment, participants performed a learning task wherein neutral faces were paired with monetary reward, monetary punishment, or no monetary outcome, such that the neutral faces acquired positive, negative, or no emotional value, respectively. During the subsequent visual search task, previously learned neutral faces were presented as discrepant faces among newly presented neutral distractor faces, and the participants were asked to detect the discrepant faces. The results demonstrated a significant negative association between the degree of ATs and the advantage in detecting punishment-associated neutral faces. This indicates decreased detection of faces with negative value in individuals with higher ATs, which may contribute to their difficulty in making prompt responses in social situations.
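
    The reported association could be tested along the following lines, assuming each participant contributes an autistic-trait score (e.g., an AQ score) and mean search times for punishment-associated and no-outcome faces; the advantage score and all column names are illustrative assumptions, not the study's analysis code.

        # Sketch of testing the reported negative association between
        # autistic traits and the detection advantage for
        # punishment-associated neutral faces.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("visual_search_rts.csv")  # hypothetical per-participant data
        # Advantage: faster detection of punishment-associated faces than
        # no-outcome faces (definition assumed for illustration).
        advantage = df["rt_no_outcome_face"] - df["rt_punishment_face"]
        r, p = stats.pearsonr(df["aq_score"], advantage)
        print(f"AQ vs punishment-face detection advantage: r = {r:.3f}, p = {p:.3f}")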

    Computational analysis of value learning and value-driven detection of neutral faces by young and older adults

    The rapid detection of neutral faces with emotional value plays an important role in social relationships for both young and older adults. Recent psychological studies have indicated that young adults show efficient value learning for neutral faces and efficient detection of value-associated faces, while older adults show slightly different patterns of value learning and value-based detection. However, the mechanisms underlying these processes remain unknown. To investigate them, we applied hierarchical reinforcement learning and diffusion models to a value learning task and a value-driven detection task involving neutral faces; the tasks were completed by young and older adults. The results for the learning task suggested that sensitivity to learning feedback might decrease with age. In the detection task, the younger adults accumulated information more efficiently than the older adults, and the perceptual time preceding the motor response was shorter in the younger adults. In younger adults only, reward sensitivity during associative learning might enhance the accumulation of information during a visual search for neutral faces in a rewarded task. These results provide insight into the processing that underlies efficient detection of faces associated with emotional value, and the age-related changes therein.
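
    A minimal sketch of the two model families the abstract combines: a Rescorla-Wagner value update in which a scaling parameter stands in for feedback sensitivity, and a drift-diffusion simulation in which non-decision time stands in for the perceptual time preceding the response. All parameter values are illustrative, not fitted estimates from the study.

        # Toy versions of the two models: reinforcement learning of face
        # value, and a diffusion process generating detection RTs.
        import numpy as np

        def rescorla_wagner(rewards, alpha=0.3, rho=1.0):
            """Learn a face's value; rho scales feedback sensitivity, which
            the study suggests may decline with age."""
            v = 0.0
            for r in rewards:
                v += alpha * (rho * r - v)   # prediction-error update
            return v

        def diffusion_trial(drift, threshold=1.0, ndt=0.3, dt=0.001, rng=None):
            """Simulate one trial: evidence accumulates at `drift` until it
            hits +/- threshold; `ndt` is non-decision (perceptual/motor) time."""
            rng = rng or np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + rng.normal(0.0, np.sqrt(dt))
                t += dt
            return ndt + t, x > 0    # (reaction time, correct detection?)

        rng = np.random.default_rng(0)
        v = rescorla_wagner([1, 1, 0, 1], alpha=0.3, rho=0.8)
        rt, correct = diffusion_trial(drift=0.5 + v, rng=rng)  # value boosts drift (assumed)
        print(f"learned value={v:.2f}, simulated RT={rt:.3f}s, correct={correct}")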

    Quantifying the efficacy of an automated facial coding software using videos of parents

    Introduction: This work explores the use of an automated facial coding software, FaceReader, as an alternative and/or complementary method to manual coding.
    Methods: We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. Using binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions, with an interaction term to estimate the influence of parent gender on predictive accuracy.
    Results: We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and we discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers' faces were more important for predicting Positive and Neutral expressions, whilst fathers' faces were more important in predicting Negative and Surprise expressions.
    Discussion: We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.
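
    The regression step might look like the following, assuming frame-level data with a binary manual code, a FaceReader output intensity, and a parent-gender factor; the column names and data file are assumptions, not the study's actual variables.

        # Sketch of the abstract's logistic-regression step: predicting a
        # manually coded category from FaceReader output, with a gender
        # interaction term.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("coded_frames.csv")  # hypothetical frame-level data
        # Expected columns (assumed): manual_positive (0/1), fr_positive
        # (FaceReader intensity 0..1), gender ("mother"/"father").
        model = smf.logit("manual_positive ~ fr_positive * gender", data=df).fit()
        print(model.summary())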