6,675 research outputs found
Tracking the affective state of unseen persons.
Emotion recognition is an essential human ability critical for social functioning. It is widely assumed that identifying facial expression is the key to this, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters' faces and bodies were masked in silent videos, viewers inferred the affect of the invisible characters successfully and with high agreement based solely on visual context. We further show that context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from the face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, as much an issue of context as it is of faces.
Automatic Understanding of Image and Video Advertisements
There is more to images than their objective physical content: for example, advertisements are created to persuade a viewer to take a certain action. We propose the novel problem of automatic advertisement understanding. To enable research on this problem, we create two datasets: an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. Our data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer ("What should I do according to this ad, and why should I do it?"), and symbolic references ads make (e.g. a dove symbolizes peace). We also analyze the most common persuasive strategies ads use, and the capabilities that computer vision systems should have to understand these strategies. We present baseline classification results for several prediction tasks, including automatically answering questions about the messages of the ads.
Comment: To appear in CVPR 2017; data available on http://cs.pitt.edu/~kovashka/ad
Machine Analysis of Facial Expressions
No abstract