Sympathy Begins with a Smile, Intelligence Begins with a Word: Use of Multimodal Features in Spoken Human-Robot Interaction
Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users' impressions of the robot after a conversation. We find that happiness in the user's recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as the number of human turns or the number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learning based adaptive human-robot dialogue systems.
Comment: Robo-NLP workshop at ACL 2017. 9 pages, 5 figures, 6 tables
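The feature-to-rating links reported in this abstract are correlational findings. A minimal sketch of that style of analysis, using made-up numbers rather than the paper's data (the variable names and values here are purely illustrative):

```python
from statistics import mean

# Hypothetical per-conversation values (illustrative, NOT the paper's data):
# mean recognised-happiness score (0-1) and post-conversation likeability rating (1-5).
happiness = [0.2, 0.5, 0.9, 0.4, 0.7, 0.8]
likeability = [2.0, 3.0, 4.5, 2.5, 4.0, 4.2]

# Pearson correlation coefficient, the usual measure for such feature/rating links.
mh, ml = mean(happiness), mean(likeability)
cov = sum((h - mh) * (l - ml) for h, l in zip(happiness, likeability))
var_h = sum((h - mh) ** 2 for h in happiness)
var_l = sum((l - ml) ** 2 for l in likeability)
r = cov / (var_h * var_l) ** 0.5  # close to +1 for a strong positive link
```

A value of `r` near 1 would correspond to the kind of strong positive correlation the authors report between recognised happiness and likeability.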
Social interactions, emotion and sleep: a systematic review and research agenda
Sleep and emotion are closely linked; however, the effects of sleep on socio-emotional task performance have only recently been investigated. Sleep loss and insomnia have been found to affect emotional reactivity and social functioning, although results, taken together, are somewhat contradictory. Here we review this advancing literature, aiming to 1) systematically review the relevant literature on sleep and socio-emotional functioning, with reference to the extant literature on emotion and social interactions, 2) summarize results and outline ways in which emotion, social interactions, and sleep may interact, and 3) suggest key limitations and future directions for this field. From the reviewed literature, sleep deprivation is associated with diminished emotional expressivity and impaired emotion recognition, and this has particular relevance for social interactions. Sleep deprivation also increases emotional reactivity, effects that are most apparent in neuro-imaging studies investigating amygdala activity and its prefrontal regulation. Evidence of emotional dysregulation in insomnia and poor sleep has also been reported. In general, limitations of this literature include how performance measures are linked to self-reports, and how results are linked to socio-emotional functioning. We conclude by suggesting some possible future directions for this field.
The analysis of facial beauty: an emerging area of research in pattern analysis
Much recent research supports the idea that human perception of attractiveness is data-driven and largely independent of the perceiver. This suggests using pattern analysis techniques for beauty analysis. Several scientific papers on this subject are appearing in image processing, computer vision, and pattern analysis contexts, or use techniques from these areas. In this paper, we survey recent studies on the automatic analysis of facial beauty, and discuss research lines and practical applications.
A New 3D Tool for Planning Plastic Surgery
Face plastic surgery (PS) plays a major role in today's medicine. For both reconstructive and cosmetic surgery, achieving harmony of facial features is an important, if not the major, goal. Several systems have been proposed for presenting to patient and surgeon the possible outcomes of a surgical procedure. In this paper, we present a new 3D system able to automatically suggest, for selected facial features such as the nose, chin, etc., shapes that aesthetically match the patient's face. The basic idea is to suggest shape changes that move the face closer to similar but more harmonious faces. To this end, our system compares the 3D scan of the patient with a database of scans of harmonious faces, excluding the feature to be corrected. Then, the corresponding features of the k most similar harmonious faces, as well as their average, are suitably pasted onto the patient's face, producing k+1 aesthetically effective surgery simulations. The system has been fully implemented and tested. To demonstrate the system, a 3D database of harmonious faces has been collected and a number of PS treatments have been simulated. The ratings of the outcomes of the simulations, provided by panels of human judges, show that the system and the underlying idea are effective.
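The pipeline this abstract describes (nearest-neighbour matching against a database of harmonious faces, then proposing the k candidate features plus their average) can be sketched in a heavily simplified form. Here faces are flat feature vectors rather than 3D scans, the feature to correct is a single scalar slot, and all function and variable names are hypothetical:

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two flat feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_shapes(patient, harmonious_db, k):
    """Toy version of the abstract's idea: rank harmonious faces by similarity
    to the patient, EXCLUDING the feature to be corrected (slot 0 here), then
    return the k nearest candidate feature shapes plus their average (k+1 total).
    The real system operates on 3D scans; this sketch uses scalar slots."""
    # Similarity is computed on every slot except the one being corrected.
    ranked = sorted(harmonious_db, key=lambda face: euclidean(patient[1:], face[1:]))
    nearest = ranked[:k]
    candidates = [face[0] for face in nearest]     # k suggested feature shapes
    candidates.append(sum(candidates) / k)         # plus their average -> k+1
    return candidates

# Toy database of "harmonious" faces: [feature_shape, other_trait_1, other_trait_2]
db = [[1.0, 0.0, 0.0], [2.0, 5.0, 5.0], [3.0, 0.1, 0.1]]
suggestions = suggest_shapes([9.0, 0.0, 0.0], db, k=2)
```

With this toy data, the two faces most similar to the patient on the non-corrected traits contribute their feature shapes (1.0 and 3.0), and the third suggestion is their average (2.0).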
Unfamiliar voice identification: effect of post-event information on accuracy and voice ratings
This study addressed the effect of misleading post-event information (PEI) on voice ratings, identification accuracy, and confidence, as well as the link between verbal recall and accuracy. Participants listened to a dialogue between male and female targets, then read misleading information about voice pitch. Participants engaged in verbal recall, rated voices on a feature checklist, and made a lineup decision. Accuracy rates were low, especially on target-absent lineups. Confidence and accuracy were unrelated, but the number of facts recalled about the voice predicted later lineup accuracy. There was a main effect of misinformation on ratings of target voice pitch, but there was no effect on identification accuracy or confidence ratings. As voice lineup evidence from earwitnesses is used in courts, the findings have potential applied relevance.
Considerations for believable emotional facial expression animation
Facial expressions can be used to communicate emotional states through universal signifiers within key regions of the face. Psychology research has identified what these signifiers are and how different combinations and variations can be interpreted. Research into expressions has informed animation practice, but as yet very little is known about the movement within and between emotional expressions. A better understanding of sequence, timing, and duration could better inform the production of believable animation. This paper introduces the idea of expression choreography, and how tests of observer perception might enhance our understanding of moving emotional expressions.
Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning
Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study.
Comment: To appear in ICCV 2017. Total 17 pages including the supplementary material.
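One step this abstract mentions, choosing start times and looping periods that minimize seam artifacts such as tearing, can be illustrated with a toy search over loop candidates. This is a simplification (brute force over flattened grayscale frames), not the paper's segmentation-based optimization, and the function names are made up:

```python
def seam_error(frames, start, period):
    """Mean absolute pixel difference between the loop's first frame and the
    frame just past its end; a low error means the loop closes smoothly,
    i.e. less visible tearing. Frames are flat lists of pixel values."""
    a, b = frames[start], frames[start + period]
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def best_loop(frames, min_period=2):
    """Exhaustively pick the (start, period) pair with the smallest seam error.
    Assumes len(frames) > min_period so at least one candidate exists."""
    n = len(frames)
    candidates = [(s, p) for s in range(n) for p in range(min_period, n - s)]
    return min(candidates, key=lambda sp: seam_error(frames, *sp))

# Toy 4-frame "video", each frame a 2-pixel grayscale image.
frames = [[0, 0], [10, 10], [0, 0], [5, 5]]
start, period = best_loop(frames)
```

On this toy input, the loop starting at frame 0 with period 2 closes perfectly (frame 0 and frame 2 are identical), so the search selects it over the other candidates.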