Tracking the affective state of unseen persons.
Emotion recognition is an essential human ability critical for social functioning. It is widely assumed that identifying facial expressions is the key to this, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters' faces and bodies were masked in silent videos, viewers successfully inferred the affect of the invisible characters, in high agreement with one another, based solely on visual context. We further show that the context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces.
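The "high agreement" between viewers' continuous affect ratings can be quantified with a simple correlation measure. A minimal sketch, using synthetic rating traces rather than the paper's data or its actual analysis, computes the Pearson correlation between two viewers' valence time series:

```python
# Hedged sketch: inter-rater agreement on continuous valence ratings,
# measured as Pearson correlation between two rating time series.
# Both traces below are synthetic stand-ins, not real viewer data.
import numpy as np

t = np.linspace(0, 10, 200)                  # 10 s of ratings, 20 Hz
viewer_a = np.sin(t)                         # one viewer's valence trace
viewer_b = np.sin(t) + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Pearson correlation between the two traces: 1.0 = perfect agreement.
r = float(np.corrcoef(viewer_a, viewer_b)[0, 1])
print(f"inter-rater correlation: {r:.2f}")
```

In practice one would average pairwise correlations (or use an intraclass correlation coefficient) across all viewers; the two-trace version above only illustrates the core computation.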
Recommended from our members
Empathic agents to reduce user frustration: The effects of varying agent characteristics
There is now growing interest in the development of computer systems which respond to users' emotion and affect. We report three small-scale studies (with a total of 42 participants) which investigate the extent to which affective agents, using strategies derived from human-human interaction, can reduce user frustration within human-computer interaction. The results confirm the previous findings of Klein et al. (2002) that such interventions can be effective. We also obtained results that suggest that embodied agents can be more effective at reducing frustration than non-embodied agents, and that female embodied agents may be more effective than male embodied agents. These results are discussed in light of the existing research literature.
Affect Recognition in Ads with Application to Computational Advertising
Advertisements (ads) often include strongly emotional content to leave a lasting impression on the viewer. This work (i) compiles an affective ad dataset capable of evoking coherent emotions across users, as determined from the affective opinions of five experts and 14 annotators; (ii) explores the efficacy of convolutional neural network (CNN) features for encoding emotions, and observes that CNN features outperform low-level audio-visual emotion descriptors upon extensive experimentation; and (iii) demonstrates how enhanced affect prediction facilitates computational advertising and leads to a better viewing experience while watching an online video stream embedded with ads, based on a study involving 17 users. We model ad emotions based on subjective human opinions as well as objective multimodal features, and show how effectively modeling ad emotions can positively impact a real-life application.

Comment: Accepted at the ACM International Conference on Multimedia (ACM MM) 201
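Once per-ad features are extracted (CNN features in the abstract above), affect prediction reduces to regression from feature vectors to valence/arousal scores. A minimal sketch with ridge regression on synthetic stand-in features follows; the paper's actual CNN features and prediction model are not reproduced here:

```python
# Hedged sketch: predicting valence and arousal from fixed feature
# vectors via ridge regression. Features and targets are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_ads, feat_dim = 100, 32

X = rng.normal(size=(n_ads, feat_dim))       # stand-in for CNN features
true_w = rng.normal(size=(feat_dim, 2))      # columns: valence, arousal
Y = X @ true_w + 0.1 * rng.normal(size=(n_ads, 2))  # noisy affect scores

# Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(feat_dim), X.T @ Y)

pred = X @ W
mse = float(np.mean((pred - Y) ** 2))
print(f"train MSE: {mse:.3f}")
```

Any regressor would slot in here; ridge is chosen only because it has a closed-form solution and keeps the sketch short.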
Automated annotation of multimedia audio data with affective labels for information management
The emergence of digital multimedia systems is creating many new opportunities for rapid access to huge content archives. In order to fully exploit these information sources, the content must be annotated with significant features. An important aspect of human interpretation of multimedia data, which is often overlooked, is the affective dimension. Such information is a potentially useful component for content-based classification and retrieval. Much of the affective information of multimedia content is contained within the audio data stream. Emotional features can be defined in terms of arousal and valence levels. In this study, low-level audio features are extracted to calculate arousal and valence levels of multimedia audio streams. These are then mapped onto a set of keywords with predetermined emotional interpretations. Experimental results illustrate the use of this system to assign affective annotation to multimedia data.
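The mapping from arousal/valence levels to emotion keywords can be illustrated with the four quadrants of the circumplex model of affect. The thresholds and keyword set below are hypothetical, not the study's actual lookup table:

```python
# Hedged sketch: quadrant-based mapping from (valence, arousal) to an
# emotion keyword, assuming both values are normalised to [-1, 1].
# The keyword choices are illustrative, not the paper's vocabulary.

def affect_keyword(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to a keyword via circumplex quadrants."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "content"
    return "angry" if arousal >= 0 else "sad"

print(affect_keyword(0.8, 0.6))    # positive valence, high arousal
print(affect_keyword(-0.5, -0.7))  # negative valence, low arousal
```

A real system would use a finer partition of the arousal-valence plane (or nearest-neighbour lookup against keyword anchor points), but the quadrant version shows the basic idea.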
Multimodal Content Analysis for Effective Advertisements on YouTube
The rapid advances in e-commerce and Web 2.0 technologies have greatly increased the impact of commercial advertisements on the general public. As a key enabling technology, a multitude of recommender systems exist which analyze user features and browsing patterns to recommend appealing advertisements to users. In this work, we seek to study the characteristics or attributes that make an advertisement effective, and to recommend a useful set of features to aid the design and production of commercial advertisements. We analyze the temporal patterns in the multimedia content of advertisement videos, including auditory, visual, and textual components, and study their individual roles and synergies in the success of an advertisement. The objective of this work is then to measure the effectiveness of an advertisement, and to recommend a useful set of features to advertisement designers to make it more successful and approachable to users. Our proposed framework employs the signal processing technique of cross-modality feature learning, where data streams from the different components are used to train separate neural network models, which are then fused together to learn a shared representation. Subsequently, a neural network model trained on this joint feature embedding is utilized as a classifier to predict advertisement effectiveness. We validate our approach using subjective ratings from a dedicated user study, the sentiment strength of online viewer comments, and a viewer opinion metric: the ratio of the Likes and Views received by each advertisement on an online platform.

Comment: 11 pages, 5 figures, ICDM 201
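The fusion pipeline described above, with per-modality encoders whose outputs are joined into a shared representation and fed to a classifier, can be sketched compactly. The random-projection "encoders", synthetic data, and logistic head below stand in for the paper's trained neural networks:

```python
# Hedged sketch of joint-embedding fusion: each modality is projected
# into a shared space, the embeddings are concatenated, and a logistic
# classifier predicts effectiveness. All data here is synthetic, and
# fixed random projections replace learned per-modality encoders.
import numpy as np

rng = np.random.default_rng(1)
n = 200
audio = rng.normal(size=(n, 20))    # stand-in auditory features
visual = rng.normal(size=(n, 50))   # stand-in visual features
text = rng.normal(size=(n, 30))     # stand-in textual features
# Synthetic "effective ad" label depending on all three modalities.
y = (audio[:, 0] + visual[:, 0] + text[:, 0] > 0).astype(float)

shared = 8                          # shared embedding size per modality
encoders = [rng.normal(size=(m.shape[1], shared)) / np.sqrt(m.shape[1])
            for m in (audio, visual, text)]
# Concatenate per-modality embeddings into the joint representation.
joint = np.concatenate([m @ E for m, E in zip((audio, visual, text), encoders)],
                       axis=1)

# Logistic-regression head trained by gradient descent on the joint embedding.
w = np.zeros(joint.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(joint @ w)))
    w -= 0.1 * joint.T @ (p - y) / n

acc = float(np.mean((joint @ w > 0) == (y == 1)))
print(f"train accuracy: {acc:.2f}")
```

In the actual framework the encoders would themselves be neural networks trained jointly with the classifier; fixing them as random projections keeps the sketch self-contained while preserving the project-concatenate-classify structure.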
Gender differences in liking and wanting sex: examining the role of motivational context and implicit versus explicit processing
The present study investigated the specificity of sexual appraisal processes by making a distinction between implicit and explicit appraisals and between the affective (liking) and motivational (wanting) valence of sexual stimuli. These appraisals are assumed to diverge between men and women, depending on the context in which the sexual stimulus is encountered. Using an Implicit Association Test, explicit ratings, and film clips to prime a sexual, romantic, or neutral motivational context, we investigated whether liking and wanting of sexual stimuli differed at the implicit and explicit level, differed between men and women, and were differentially sensitive to context manipulations. Results showed that, at the implicit level, women wanted more sex after being primed with a romantic mood, whereas men showed the least wanting of sex in the romantic condition. At the explicit level, men reported greater liking and wanting of sex than women, independently of context. We also found that women's (self-reported) sexual behavior was best predicted by the incentive salience of sexual stimuli, whereas men's sexual behavior was more closely related to the hedonic qualities of sexual stimuli. Results are discussed in relation to an emotion-motivational account of sexual functioning.