Ubiquitous Emotion Analytics and How We Feel Today
Emotions are complicated. Humans feel deeply, and it can be hard to bring clarity to those depths, to communicate about feelings, or to understand others’ emotional states. Indeed, this emotional confusion is one of the biggest challenges of deciphering our humanity. However, a kind of hope might be on the horizon, in the form of emotion analytics: computerized tools for recognizing and responding to emotion. This analysis explores how emotion analytics may reflect the current status of humans’ regard for emotion. Emotion need no longer be a human sense of vague, indefinable feelings; instead, emotion is in the process of becoming a legible, standardized commodity that can be sold, managed, and altered to suit the needs of those in power. Emotional autonomy and authority can be surrendered to those technologies in exchange for perceived self-determination. Emotion analytics promises a new orderliness to the messiness of human emotions, suggesting that our current state of emotional uncertainty is inadequate and intolerable.
Emotion-affected decision making in human simulation
Human modelling is an interdisciplinary research field. Emotion-affected decision making was originally a cognitive psychology topic, but is now recognized as an important research direction for both computer science and biomedical modelling. The main aim of this paper is to bridge the gap between psychology and bioengineering in emotion-affected decision making. The work is based on Ortony's theory of emotions and bounded rationality theory, and attempts to connect the emotion process with decision making. A computational emotion model is proposed, and the initial framework of this model in virtual human simulation on the Virtools™ platform is presented.
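The abstract does not reproduce the model's equations, so the following is only a minimal sketch of how an OCC-style (Ortony) appraisal might bias a bounded-rational choice; the `Event` structure, `appraise` function, and the `mood`/`rationality` weights are illustrative assumptions, not the authors' actual model.

```python
from dataclasses import dataclass

@dataclass
class Event:
    desirability: float  # [-1, 1], appraised against the agent's goals (assumed scale)
    likelihood: float    # [0, 1]

def appraise(event: Event) -> float:
    """OCC-style prospect appraisal: joy/distress intensity as
    desirability scaled by likelihood (illustrative, not the paper's form)."""
    return event.desirability * event.likelihood

def choose(options: dict, mood: float, rationality: float = 0.8) -> str:
    """Pick the option with the highest emotion-biased utility.
    `mood` in [-1, 1] is a crude carry-over effect; `rationality` < 1
    loosely stands in for bounded rationality by damping the appraisal."""
    def utility(event: Event) -> float:
        return rationality * appraise(event) + (1.0 - rationality) * mood
    return max(options, key=lambda name: utility(options[name]))

options = {
    "retreat": Event(desirability=0.2, likelihood=0.9),
    "advance": Event(desirability=0.8, likelihood=0.4),
}
print(choose(options, mood=-0.3))  # -> 'advance'
```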
Synthetic vision and emotion calculation in intelligent virtual human modeling
The virtual human technique can already provide vivid and believable human behaviour in more and more scenarios. Virtual humans are expected to replace real humans in hazardous situations to undertake tests and feed back valuable information. This paper introduces a virtual human with a novel collision-based synthetic vision, a short-term memory model, and the capability to perform emotion calculation and decision making. A virtual character based on this model can ‘see’ what is in his field of view (FOV) and remember those objects. A group of affective-computing equations is then introduced and implemented in a proposed emotion calculation process to elicit emotions in intelligent virtual humans.
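The affective-computing equations themselves are not given in the abstract; the sketch below assumes a common decay-plus-stimulus update for emotion intensity and a simple view-cone test for the synthetic vision, with `fov_deg` and all weights as hypothetical parameters rather than the paper's values.

```python
import math

def in_fov(agent_dir, to_object, fov_deg=120.0):
    """Crude synthetic-vision test: is the object inside the view cone?
    Arguments are 2-D direction vectors (dx, dy); fov_deg is assumed."""
    ax, ay = agent_dir
    ox, oy = to_object
    cos_angle = (ax * ox + ay * oy) / (math.hypot(ax, ay) * math.hypot(ox, oy))
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

def update_emotion(intensity, stimuli, decay=0.1, dt=0.1):
    """Exponential decay toward neutral plus summed stimulus input,
    clamped to [0, 1]; a common affective-computing form, assumed here."""
    intensity = intensity * math.exp(-decay * dt) + dt * sum(stimuli)
    return min(1.0, max(0.0, intensity))

fear = 0.2
if in_fov(agent_dir=(1.0, 0.0), to_object=(0.9, 0.1)):  # hazard roughly ahead
    fear = update_emotion(fear, stimuli=[0.5])
print(round(fear, 3))  # -> 0.248
```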
Proposing a hybrid approach for emotion classification using audio and video data
Emotion recognition has been an active research topic in the field of Human-Computer Interaction (HCI) in recent years. Computers have become an inseparable part of human life, and users need human-like interaction to communicate with them more effectively. Many researchers have become interested in emotion recognition and classification using different sources; a hybrid approach of audio and text has recently been introduced. All such approaches aim to raise the accuracy and appropriateness of emotion classification. In this study, a hybrid approach of audio and video is applied to emotion recognition. The novelty of this approach lies in selecting audio and video characteristics and combining their features into a single specification for classification. The SVM method is used to classify the data in the SAVEE database. The experimental results show a maximum classification accuracy of 91.63% for audio data alone, while the hybrid approach achieves 99.26%.
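As a rough illustration of feature-level (early) fusion with an SVM, the sketch below concatenates per-utterance audio and video feature vectors and trains a scikit-learn classifier. The feature dimensions and random placeholder data are assumptions for demonstration, not the study's actual SAVEE features or results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 480                                    # SAVEE contains 480 utterances
audio_feats = rng.normal(size=(n, 39))     # placeholder, e.g. MFCC-style vectors
video_feats = rng.normal(size=(n, 50))     # placeholder, e.g. facial-landmark vectors
labels = rng.integers(0, 7, size=n)        # SAVEE's 7 emotion classes

# Early fusion: concatenate the two modalities into one vector per sample.
fused = np.hstack([audio_feats, video_feats])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")  # meaningless on random placeholders
```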
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching techniques suffers from natural variability in facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest-neighbor classifier that suppresses feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
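The abstract describes the classifier's two stages clearly enough for a compact sketch: pixel normalization followed by a nearest-neighbor decision under a Min-Max (Ruzicka-style) similarity, sum of element-wise minima over sum of element-wise maxima. The min-offset rescaling below is one plausible reading of the paper's "pixel normalization", not its confirmed formula.

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Remove the intensity offset and rescale pixels to [0, 1]
    (one plausible reading of the paper's pixel normalization)."""
    img = img.astype(np.float64)
    img -= img.min()
    peak = img.max()
    return img / peak if peak > 0 else img

def min_max_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Ruzicka-style Min-Max similarity: each pixel contributes
    boundedly to both sums, which damps isolated outlier pixels."""
    denom = np.maximum(a, b).sum()
    return float(np.minimum(a, b).sum() / denom) if denom > 0 else 0.0

def classify(test_img, templates, template_labels):
    """Nearest-neighbor decision: return the label of the template
    with the highest Min-Max similarity to the test image."""
    test = normalize(test_img).ravel()
    sims = [min_max_similarity(test, normalize(t).ravel()) for t in templates]
    return template_labels[int(np.argmax(sims))]
```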
Suppressing sensorimotor activity modulates the discrimination of auditory emotions but not speaker identity
Our ability to recognize the emotions of others is a crucial feature of human social cognition. Functional neuroimaging studies indicate that activity in sensorimotor cortices is evoked during the perception of emotion. In the visual domain, right somatosensory cortex activity has been shown to be critical for facial emotion recognition. However, the importance of sensorimotor representations in modalities outside of vision remains unknown. Here we use continuous theta-burst transcranial magnetic stimulation (cTBS) to investigate whether neural activity in the right postcentral gyrus (rPoG) and right lateral premotor cortex (rPM) is involved in nonverbal auditory emotion recognition. Three groups of participants completed same-different tasks on auditory stimuli, discriminating between the emotion expressed and the speakers' identities, before and following cTBS targeted at rPoG, rPM, or the vertex (control site). A task-selective deficit in auditory emotion discrimination was observed. Stimulation to rPoG and rPM resulted in a disruption of participants' abilities to discriminate emotion, but not identity, from vocal signals. These findings suggest that sensorimotor activity may be a modality-independent mechanism which aids emotion discrimination.