Affect Recognition in Ads with Application to Computational Advertising
Advertisements (ads) often include strongly emotional content to leave a
lasting impression on the viewer. This work (i) compiles an affective ad
dataset capable of evoking coherent emotions across users, as determined from
the affective opinions of five experts and 14 annotators; (ii) explores the
efficacy of convolutional neural network (CNN) features for encoding emotions,
and observes that CNN features outperform low-level audio-visual emotion
descriptors upon extensive experimentation; and (iii) demonstrates how enhanced
affect prediction facilitates computational advertising, and leads to better
viewing experience while watching an online video stream embedded with ads
based on a study involving 17 users. We model ad emotions based on subjective
human opinions as well as objective multimodal features, and show how
effectively modeling ad emotions can positively impact a real-life application.
Comment: Accepted at the ACM International Conference on Multimedia (ACM MM) 201
Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
Emotion evoked by an advertisement plays a key role in influencing brand
recall and eventual consumer choices. Automatic ad affect recognition has
several useful applications. However, the use of content-based feature
representations does not give insights into how affect is modulated by aspects
such as the ad scene setting, salient object attributes and their interactions.
Neither do such approaches inform us on how humans prioritize visual
information for ad understanding. Our work addresses these lacunae by
decomposing video content into detected objects, coarse scene structure, object
statistics and actively attended objects identified via eye-gaze. We measure
the importance of each of these information channels by systematically
incorporating related information into ad affect prediction models. Contrary to
the popular notion that ad affect hinges on the narrative and the clever use of
linguistic and social cues, we find that actively attended objects and the
coarse scene structure better encode affective information as compared to
individual scene objects or conspicuous background elements.
Comment: Accepted for publication in the Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, US
Personality in Computational Advertising: A Benchmark
In the last decade, new ways of shopping online have made it
possible to buy products and services more easily and quickly
than ever. In this new context, personality is a key determinant
in the decision making of the consumer when shopping. A person’s
buying choices are influenced by psychological factors like
impulsiveness; indeed some consumers may be more susceptible
to making impulse purchases than others. Since affective metadata
are more closely related to the user's experience than generic
parameters, accurate predictions reveal important aspects of users'
attitudes and social life, including the attitudes of others and social identity.
This work proposes a novel line of research that uses a personality
perspective to determine the associations between
consumers' buying tendencies and advert recommendations. Until now,
the lack of a publicly available benchmark for computational advertising
has prevented both the exploration of this intriguing research
direction and the evaluation of recent algorithms. We present the
ADS Dataset, a publicly available benchmark consisting of 300 real
advertisements (i.e., Rich Media Ads, Image Ads, Text Ads) rated
by 120 unacquainted individuals, enriched with the users' Big-Five
personality factors and 1,200 of the users' personal pictures.
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.
Both Facts and Feelings: Emotion and News Literacy
News literacy education has long focused on the significance of facts, sourcing, and verifiability. While these are critical aspects of news, rapidly developing emotion analytics technologies intended to respond to and even alter digital news audiences’ emotions also demand that we pay greater attention to the role of emotion in news consumption. This essay explores the role of emotion in the “fake news” phenomenon and the implementation of emotion analytics tools in news distribution. I examine the function of emotion in news consumption and the status of emotion within existing news literacy training programs. Finally, I offer suggestions for addressing emotional responses to news with students, including both mindfulness techniques and psychological research on thinking processes.
Tackling food marketing to children in a digital world: trans-disciplinary perspectives. Children’s rights, evidence of impact, methodological challenges, regulatory options and policy implications for the WHO European Region
There is unequivocal evidence that childhood obesity is influenced by marketing of foods and non-alcoholic beverages high in saturated fat, salt and/or free sugars (HFSS), and a core recommendation of the WHO Commission on Ending Childhood Obesity is to reduce children’s exposure to all such marketing. As a result, WHO has called on Member States to introduce restrictions on marketing of HFSS foods to children, covering all media, including digital, and to close any regulatory loopholes. This publication provides up-to-date information on the marketing of foods and non-alcoholic beverages to children and the changes that have occurred in recent years, focusing in particular on the major shift to digital marketing. It examines trends in media use among children, marketing methods in the new digital media landscape and children’s engagement with such marketing. It also considers the impact on children and their ability to counter marketing, as well as the implications for children’s rights and digital privacy. Finally, the report discusses the policy implications and some of the recent policy action by WHO European Member States.
Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creative
Accurately predicting conversions in advertisements is generally a
challenging task because such conversions occur infrequently. In this
paper, we propose a new framework to support the creation of high-performing ad
creatives, including accurate prediction of ad creative text conversions
before delivery to consumers. The proposed framework includes three key
ideas: multi-task learning, conditional attention, and attention highlighting.
Multi-task learning improves the prediction accuracy of
conversions by predicting clicks and conversions simultaneously, mitigating the
difficulty of data imbalance. Furthermore, conditional attention focuses
the attention for each ad creative according to its genre and target
gender, thus improving conversion prediction accuracy. Attention highlighting
visualizes important words and/or phrases based on conditional attention. We
evaluated the proposed framework with actual delivery history data (14,000
creatives displayed more than a certain number of times from Gunosy Inc.), and
confirmed that these ideas improve the prediction performance of conversions,
and visualize noteworthy words according to the creatives' attributes.
Comment: 9 pages, 6 figures. Accepted at The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019) as an applied data science paper
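The conditional-attention idea described in this abstract can be sketched as a bilinear scoring of word embeddings against a condition vector (e.g. an embedding of the ad's genre and target gender), so that the attention distribution over the creative's words depends on those attributes. This is a minimal illustrative sketch, not the paper's actual architecture; all names, dimensions, and the random parameters are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def conditional_attention(word_vecs, cond_vec, W):
    """Attend over word embeddings, conditioned on ad attributes.

    word_vecs: (n_words, d_word) embeddings of the creative's text
    cond_vec:  (d_cond,) embedding of genre/target-gender attributes
    W:         (d_word, d_cond) learned bilinear map (random here)
    """
    scores = word_vecs @ W @ cond_vec   # one score per word
    weights = softmax(scores)           # attention distribution
    context = weights @ word_vecs       # attribute-aware text vector
    return context, weights

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 8))   # 5 hypothetical word embeddings
cond = rng.normal(size=4)         # hypothetical genre+gender embedding
W = rng.normal(size=(8, 4))
ctx, w = conditional_attention(words, cond, W)
```

In a trained model the resulting weights `w` are what "attention highlighting" would visualize: changing the condition vector changes which words receive high weight.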
SAVOIAS: A Diverse, Multi-Category Visual Complexity Dataset
Visual complexity identifies the level of intricacy and details in an image
or the level of difficulty to describe the image. It is an important concept in
a variety of areas such as cognitive psychology, computer vision and
visualization, and advertisement. Yet, efforts to create large, downloadable
image datasets with diverse content and unbiased groundtruthing are lacking. In
this work, we introduce Savoias, a visual complexity dataset that comprises
more than 1,400 images from seven image categories relevant to the above
research areas, namely Scenes, Advertisements, Visualization and infographics,
Objects, Interior design, Art, and Suprematism. The images in each category
portray diverse characteristics including various low-level and high-level
features, objects, backgrounds, textures and patterns, text, and graphics. The
ground truth for Savoias is obtained by crowdsourcing more than 37,000 pairwise
comparisons of images using the forced-choice methodology and with more than
1,600 contributors. The resulting relative scores are then converted to
absolute visual complexity scores using the Bradley-Terry method and matrix
completion. When applying five state-of-the-art algorithms to analyze the
visual complexity of the images in the Savoias dataset, we found that the
scores obtained from these baseline tools only correlate well with crowdsourced
labels for abstract patterns in the Suprematism category (Pearson correlation
r=0.84). For the other categories, in particular the Objects and Advertisements
categories, low correlation coefficients were revealed (r=0.3 and 0.56,
respectively). These findings suggest that (1) state-of-the-art approaches are
mostly insufficient and (2) Savoias enables category-specific method
development, which is likely to improve the impact of visual complexity
analysis on specific application areas, including computer vision.
Comment: 10 pages, 4 figures, 4 tables
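The abstract's conversion of forced-choice pairwise comparisons into absolute scores can be sketched with the standard MM (minorization-maximization) fit of the Bradley-Terry model; the paper additionally uses matrix completion, which is omitted here, and the toy win counts below are invented for illustration.

```python
def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from pairwise preferences.

    wins[i][j] = number of times item i was preferred over item j.
    Returns one positive strength per item; higher means more
    complex (i.e. more often chosen in forced-choice comparisons).
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            if denom > 0:
                p[i] = total_wins / denom   # standard MM update
        s = sum(p)
        p = [x * n / s for x in p]          # normalize (identifiability)
    return p

# Hypothetical counts for three images: image 0 is usually judged
# more complex than 1, and 1 more complex than 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
```

The fitted strengths recover the ordering implied by the win counts, giving the kind of absolute visual-complexity score the dataset reports.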