The color of smiling: computational synaesthesia of facial expressions
This note gives a preliminary account of the transcoding, or rechanneling,
problem between different stimuli as it arises in the natural interaction and
affective computing fields. Through a simple example, namely the color
response of an affective lamp to a sensed facial expression, we frame the
problem within an information-theoretic perspective. A full justification in
terms of the Information Bottleneck principle promotes a latent affective
space, hitherto surmised as an appealing and intuitive solution, as a suitable
mediator between the different stimuli.
Comment: Submitted to: 18th International Conference on Image Analysis and
Processing (ICIAP 2015), 7-11 September 2015, Genova, Italy
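The Information Bottleneck principle invoked above seeks an encoding T of the input X that is maximally compressed while staying informative about the target Y, i.e. it minimizes I(X;T) − β·I(T;Y). A minimal numerical sketch follows; the toy distributions, variable names, and the deterministic encoder are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats for a joint distribution given as a 2-D array."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

# Toy setup: X = sensed expression class, Y = lamp colour, T = latent affect.
# All numbers below are made up for illustration.
p_x = np.array([0.5, 0.5])                # two expression classes
p_y_given_x = np.array([[0.9, 0.1],       # colour response per expression
                        [0.2, 0.8]])
p_t_given_x = np.array([[1.0, 0.0],       # a deterministic encoder p(t|x)
                        [0.0, 1.0]])

p_xt = p_x[:, None] * p_t_given_x         # joint p(x, t)
p_ty = p_xt.T @ p_y_given_x               # joint p(t, y) via the chain T - X - Y

beta = 2.0
ib_objective = mutual_information(p_xt) - beta * mutual_information(p_ty)
```

For this deterministic encoder I(X;T) equals H(X) = ln 2; varying p(t|x) and β trades compression against preserved affective information, which is the sense in which the latent affective space mediates between stimuli.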
Affective Facial Expression Processing via Simulation: A Probabilistic Model
Understanding the mental state of other people is an important skill for
intelligent agents and robots operating in social environments. However, the
mental processes involved in `mind-reading' are complex. One explanation of
such processes is Simulation Theory, which is supported by a large body of
neuropsychological research. Yet determining the best computational model or
theory to use in simulation-style emotion detection is far from being
understood.
In this work, we use Simulation Theory and neuroscience findings on
Mirror-Neuron Systems as the basis for a novel computational model for
handling affective facial expressions. The model is based on a probabilistic
mapping of observations from multiple identities onto a single fixed identity
(`internal transcoding of external stimuli'), and then onto a latent space
(`phenomenological response'). Together with the proposed architecture, we
present some promising preliminary results.
Comment: Annual International Conference on Biologically Inspired Cognitive
Architectures - BICA 201
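The two-stage mapping described in the abstract might be sketched as two successive noisy linear projections; the matrices, dimensions, noise model, and function names below are purely illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: map an observed face vector (any identity) onto a fixed
# internal identity ('internal transcoding of external stimuli').
W_identity = rng.normal(size=(8, 16)) * 0.1    # illustrative projection

# Stage 2: map the transcoded face onto a low-dimensional latent
# affective space ('phenomenological response').
W_latent = rng.normal(size=(2, 8)) * 0.1

def transcode(observation, noise=0.01):
    """Two-stage mapping: observation -> fixed identity -> latent affect."""
    internal = W_identity @ observation + noise * rng.normal(size=8)
    return W_latent @ internal                  # point in latent affect space

z = transcode(rng.normal(size=16))              # z has shape (2,)
```

The point of the sketch is only the architecture: observations from many identities are funnelled through one fixed internal representation before the affective judgement is read out.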
RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA
In this paper, we describe the Rich Representation Language (RRL) which is used in the NECA system. The NECA system generates interactions between two or more animated characters. The RRL is a formal framework for representing the information that is exchanged at the interfaces between the various NECA system modules.
Cultural-based visual expression: Emotional analysis of human face via Peking Opera Painted Faces (POPF)
Peking Opera, a branch of Chinese traditional culture and art, has a very distinct, colourful facial make-up for all actors in the stage performance. Such make-up is stylised into nonverbal symbolic semantics which combine to form the painted faces that describe and symbolise the background, the character and the emotional status of specific roles. A study of Peking Opera Painted Faces (POPF) was taken as an example of how information and meaning can be effectively expressed through changes of facial expression based on facial motion, in both natural and emotional aspects. The study found that POPF provides exaggerated features of facial motion through images, and that the symbolic semantics of POPF provide a high-level expression of human facial information. The study has presented and validated a creative structure of information analysis and expression, based on POPF, to improve the understanding of human facial motion and emotion.
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
In this paper we describe a demonstration of our multimodal affective
feedback designs, used in research to expand the emotional expressivity
of interfaces. The feedback leverages inherent associations
and reactions to thermal, vibrotactile, auditory and abstract
visual designs to convey a range of affective states without any
need for learning feedback encoding. All combinations of the different
feedback channels can be utilised, depending on which combination
best conveys a given state. All the signals are generated
from a mobile phone augmented with thermal and vibrotactile stimulators,
which will be available to conference visitors to see, touch,
hear and, importantly, feel.
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide
variety of human-computer interaction and intelligent systems applications.
However, automatic recognition of facial expressions using image template
matching suffers from the natural variability of facial features and recording
conditions. In spite of the progress achieved in facial emotion recognition in
recent years, an effective and computationally simple feature selection and
classification technique for emotion recognition is still an open problem. In
this paper, we propose an efficient and straightforward facial emotion
recognition algorithm that reduces the problem of inter-class pixel mismatch
during classification. The proposed method applies pixel normalization to
remove intensity offsets, followed by a Min-Max metric in a nearest neighbour
classifier that is capable of suppressing feature outliers. The results
indicate an improvement in recognition performance from 92.85% to 98.57% for
the proposed Min-Max classification method when tested on the JAFFE database.
The proposed emotion recognition technique outperforms existing template
matching methods.
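A minimal sketch of the pipeline described above; the exact normalisation used in the paper and the data layout are assumptions, and the Min-Max metric is taken here in its common form Σ min(aᵢ, bᵢ) / Σ max(aᵢ, bᵢ) over non-negative pixel vectors:

```python
import numpy as np

def normalize(img):
    """Pixel normalization: remove the intensity offset, rescale to [0, 1]."""
    img = img.astype(float)
    img -= img.min()                       # removes any additive offset
    rng = img.max()
    return img / rng if rng > 0 else img

def min_max_similarity(a, b):
    """Ratio of element-wise minima to maxima. An outlier pixel inflates
    only the denominator by a bounded amount, which suppresses its effect."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(query, templates, labels):
    """Nearest-neighbour classification under the Min-Max similarity."""
    q = normalize(query).ravel()
    sims = [min_max_similarity(q, normalize(t).ravel()) for t in templates]
    return labels[int(np.argmax(sims))]

# Tiny illustrative example with 2x2 'images' (made-up data).
templates = [np.array([[10, 20], [30, 40]]), np.array([[40, 30], [20, 10]])]
labels = ["happy", "sad"]
print(classify(np.array([[12, 22], [28, 41]]), templates, labels))  # "happy"
```

The query pattern rises left-to-right like the first template, so the Min-Max similarity to it dominates despite differing absolute intensities, which the normalization removes.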