
    Evaluating Content-centric vs User-centric Ad Affect Recognition

    Although advertisements (ads) often include strongly emotional content, very little work has been devoted to affect recognition (AR) from ads. This work explicitly compares content-centric and user-centric ad AR methodologies, and evaluates the impact of enhanced AR on computational advertising via a user study. Specifically, we (1) compile an affective ad dataset capable of evoking coherent emotions across users; (2) explore the efficacy of content-centric convolutional neural network (CNN) features for encoding emotions, and show that CNN features outperform low-level emotion descriptors; (3) examine user-centric ad AR by analyzing electroencephalogram (EEG) responses acquired from eleven viewers, and find that EEG signals encode emotional information better than content descriptors; (4) investigate the relationship between objective AR and subjective viewer experience while watching an ad-embedded online video stream, based on a study involving 12 users. To our knowledge, this is the first work to (a) expressly compare user-centric vs content-centric AR for ads, and (b) study the relationship between modeling of ad emotions and its impact on a real-life advertising application. Comment: Accepted at the ACM International Conference on Multimodal Interaction (ICMI) 201
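    A minimal sketch of the content-centric feature-extraction step described above, assuming a generic ImageNet-pretrained backbone (ResNet-18 here); the abstract does not specify the actual network or classifier, so all names and sizes below are illustrative.

```python
# Illustrative content-centric CNN feature extraction for ad frames (assumed backbone).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumption: any ImageNet-pretrained CNN can serve as the frame descriptor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d penultimate features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def frame_features(frame: Image.Image) -> torch.Tensor:
    """Return one CNN descriptor per ad frame; downstream emotion models consume these."""
    with torch.no_grad():
        return backbone(preprocess(frame).unsqueeze(0)).squeeze(0)
```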

    3D fatigue from stereoscopic 3D video displays: Comparing objective and subjective tests using electroencephalography

    The use of stereoscopic displays has increased in recent times, with a growing range of applications using 3D video for visual entertainment, data visualization, and medical applications. However, stereoscopic 3D video can lead to adverse reactions amongst some viewers, including visual fatigue, headache, and nausea; such reactions can further lead to Visually Induced Motion Sickness (VIMS). Whilst motion sickness symptoms can also occur with other types of visual display, this paper investigates the rapid adjustments made by the human pupils as a potential cause of 3D fatigue due to VIMS from stereoscopic 3D displays. A series of objective and subjective experiments, using electroencephalogram (EEG) biosignals and eye-blink measures to quantify 3D fatigue, was conducted to investigate the effect of stereoscopic 3D across a set of video sequences.
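    A common objective indicator in EEG fatigue studies is spectral band power; the sketch below computes a theta/alpha ratio as one plausible fatigue proxy. This is an illustration under assumed parameters (sampling rate, frequency bands), not the measure used in the paper.

```python
# Illustrative EEG band-power fatigue index (theta/alpha ratio); parameters are assumptions.
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average power of a single-channel EEG signal within [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))  # simple rectangular integration

fs = 256.0                               # assumed sampling rate
eeg = np.random.randn(int(60 * fs))      # placeholder one-minute recording
fatigue_index = band_power(eeg, fs, 4, 8) / band_power(eeg, fs, 8, 13)
```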

    Affect Recognition in Ads with Application to Computational Advertising

    Advertisements (ads) often include strongly emotional content to leave a lasting impression on the viewer. This work (i) compiles an affective ad dataset capable of evoking coherent emotions across users, as determined from the affective opinions of five experts and 14 annotators; (ii) explores the efficacy of convolutional neural network (CNN) features for encoding emotions, and observes through extensive experimentation that CNN features outperform low-level audio-visual emotion descriptors; and (iii) demonstrates how enhanced affect prediction facilitates computational advertising and leads to a better viewing experience while watching an online video stream embedded with ads, based on a study involving 17 users. We model ad emotions based on subjective human opinions as well as objective multimodal features, and show how effectively modeling ad emotions can positively impact a real-life application. Comment: Accepted at the ACM International Conference on Multimedia (ACM MM) 201
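    As one plausible instantiation of the claim that CNN features outperform low-level descriptors, the sketch below cross-validates a linear classifier on precomputed CNN features; the feature arrays, labels, and classifier choice are placeholders rather than the paper's actual setup.

```python
# Illustrative valence classification from precomputed CNN features (placeholder data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(100, 512))   # one 512-d CNN descriptor per ad clip
valence = rng.integers(0, 2, size=100)    # binary high/low valence labels (placeholder)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
scores = cross_val_score(clf, cnn_feats, valence, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```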

    Fear Feedback Loop: Creative and Dynamic Fear Experiences Driven by User Emotion

    This thesis examines whether it is possible to generate fear-eliciting media that is custom-fit to the user. The system described uses a genetic algorithm to produce images that become scarier over successive generations, in reaction to either physiological signals obtained from the user or a user-provided fear rating. The system was able to detect differing levels of fear using a regression model trained on EEG and heart-rate data gathered while users viewed clips from horror movies. It was also found to produce images with significantly higher fear ratings at the fifth generation compared to the first. These higher-scoring images were found to be unique across subjects.
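    The sketch below shows the kind of genetic-algorithm loop described above: candidate image parameters evolve toward higher fear scores. The fear_score function is a stand-in for either the user's rating or a regressor on EEG/heart-rate features; population size, mutation rate, and parameter encoding are assumptions.

```python
# Illustrative genetic-algorithm loop driving image parameters toward higher fear scores.
import numpy as np

rng = np.random.default_rng(42)
POP, DIM, GENERATIONS = 20, 16, 5

def fear_score(params: np.ndarray) -> float:
    """Placeholder fitness; in the thesis this comes from the user or a trained regressor."""
    return float(-np.sum((params - 0.8) ** 2))

population = rng.random((POP, DIM))          # each row encodes one candidate image
for _ in range(GENERATIONS):
    fitness = np.array([fear_score(p) for p in population])
    parents = population[np.argsort(fitness)][-POP // 2:]    # keep the scariest half
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(DIM) < 0.5, a, b)        # uniform crossover
        child = child + rng.normal(0, 0.05, DIM)             # Gaussian mutation
        children.append(child)
    population = np.vstack([parents, children])
```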

    Emotion Recognition With Temporally Localized 'Emotional Events' in Naturalistic Context

    Emotion recognition using EEG signals is an emerging area of research due to its broad applicability in brain-computer interfaces (BCI). Emotional feelings are hard to stimulate in the lab: emotions do not last long, yet they need enough context to be perceived and felt. However, most EEG-based emotion databases either suffer from emotionally irrelevant details (due to prolonged-duration stimuli) or provide so little context that it is doubtful whether the stimulus elicits any emotion at all. We tried to reduce the impact of this trade-off by designing an experiment in which participants are free to report their emotional feelings while simultaneously watching the emotional stimulus. We called these reported emotional feelings "Emotional Events" in our Dataset on Emotion with Naturalistic Stimuli (DENS). We used EEG signals to classify emotional events on different combinations of the Valence (V) and Arousal (A) dimensions, and compared the results with the benchmark DEAP and SEED datasets. The short-time Fourier transform (STFT) is used for feature extraction, and the resulting features are fed to a classification model consisting of hybrid CNN-LSTM layers. We achieved significantly higher accuracy with our data compared to the DEAP and SEED data. We conclude that having precise information about emotional feelings improves classification accuracy compared to long-duration EEG signals, which might be contaminated by mind-wandering.
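    A minimal sketch of the STFT + CNN-LSTM pipeline described above; the channel count, window length, layer sizes, and the four-quadrant valence/arousal label space are all assumptions for illustration, not the DENS configuration.

```python
# Illustrative STFT feature extraction followed by a hybrid CNN-LSTM classifier.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 128.0
eeg = np.random.randn(32, int(10 * fs))                 # 32 channels x 10 s (placeholder)
_, _, Z = stft(eeg, fs=fs, nperseg=128)                 # -> (channels, freqs, frames)
spec = torch.tensor(np.abs(Z), dtype=torch.float32).unsqueeze(0)   # (1, C, F, T)

class CNNLSTM(nn.Module):
    def __init__(self, n_classes: int = 4):             # assumed: four V/A quadrants
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, None)),             # pool frequencies, keep time axis
        )
        self.lstm = nn.LSTM(input_size=16 * 8, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.cnn(x)                                  # (B, 16, 8, T)
        x = x.flatten(1, 2).transpose(1, 2)              # (B, T, 128)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                          # logits over V/A quadrants

logits = CNNLSTM()(spec)
```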