Automatic Understanding of Image and Video Advertisements
There is more to images than their objective physical content: for example,
advertisements are created to persuade a viewer to take a certain action. We
propose the novel problem of automatic advertisement understanding. To enable
research on this problem, we create two datasets: an image dataset of 64,832
image ads, and a video dataset of 3,477 ads. Our data contains rich annotations
encompassing the topic and sentiment of the ads, questions and answers
describing what actions the viewer is prompted to take and the reasoning that
the ad presents to persuade the viewer ("What should I do according to this ad,
and why should I do it?"), and symbolic references ads make (e.g. a dove
symbolizes peace). We also analyze the most common persuasive strategies ads
use, and the capabilities that computer vision systems should have to
understand these strategies. We present baseline classification results for
several prediction tasks, including automatically answering questions about the
messages of the ads.
Comment: To appear in CVPR 2017; data available at http://cs.pitt.edu/~kovashka/ad
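One of the prediction tasks the abstract mentions is automatically answering questions about an ad's message. A minimal sketch of such a baseline, assuming a shared ad–answer embedding space: the toy vectors and the `rank_answers` helper below are illustrative assumptions, not the authors' actual model.

```python
# Sketch: rank candidate "action + reason" statements for an ad by
# cosine similarity in an assumed joint embedding space.
import numpy as np

def rank_answers(ad_vec, answer_vecs):
    """Return candidate indices sorted by cosine similarity to the ad."""
    ad = ad_vec / np.linalg.norm(ad_vec)
    A = answer_vecs / np.linalg.norm(answer_vecs, axis=1, keepdims=True)
    sims = A @ ad
    return np.argsort(-sims)

# Toy joint space: the correct answer's vector lies close to the ad's.
rng = np.random.default_rng(7)
ad_vec = rng.normal(size=32)
correct = ad_vec + rng.normal(0, 0.1, 32)      # near the ad embedding
distractors = rng.normal(size=(3, 32))         # unrelated candidates
candidates = np.vstack([distractors[:1], correct[None], distractors[1:]])

order = rank_answers(ad_vec, candidates)
print(order[0])   # index of the top-ranked candidate
```

In a real system the embeddings would come from trained image and text encoders; the ranking step itself is unchanged.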
Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
Emotion evoked by an advertisement plays a key role in influencing brand
recall and eventual consumer choices. Automatic ad affect recognition has
several useful applications. However, the use of content-based feature
representations does not give insights into how affect is modulated by aspects
such as the ad scene setting, salient object attributes and their interactions.
Neither do such approaches inform us on how humans prioritize visual
information for ad understanding. Our work addresses these lacunae by
decomposing video content into detected objects, coarse scene structure, object
statistics and actively attended objects identified via eye-gaze. We measure
the importance of each of these information channels by systematically
incorporating related information into ad affect prediction models. Contrary to
the popular notion that ad affect hinges on the narrative and the clever use of
linguistic and social cues, we find that actively attended objects and the
coarse scene structure better encode affective information as compared to
individual scene objects or conspicuous background elements.
Comment: Accepted for publication in the Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA
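The abstract measures each information channel's importance by feeding channel-specific features into the same affect predictor. A minimal sketch of that comparison, assuming synthetic per-channel descriptors and a simple logistic-regression probe (the channel names and feature dimensions are illustrative assumptions, not the authors' pipeline):

```python
# Sketch: probe how much affective signal each channel carries by
# training the same classifier on each channel's features alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_ads = 200
valence = rng.integers(0, 2, n_ads)          # binary affect label (placeholder)

# Synthetic per-channel descriptors; signal strength differs by channel.
channels = {
    "scene_structure": rng.normal(size=(n_ads, 16)) + valence[:, None] * 0.8,
    "attended_objects": rng.normal(size=(n_ads, 16)) + valence[:, None] * 1.0,
    "object_statistics": rng.normal(size=(n_ads, 16)),   # pure noise channel
}

scores = {}
for name, feats in channels.items():
    clf = LogisticRegression(max_iter=1000)
    scores[name] = cross_val_score(clf, feats, valence, cv=5).mean()

for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.2f}")
```

Comparing cross-validated accuracy across channels mirrors the paper's finding that some channels (here, by construction, the attended-object features) encode affect better than others.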
Recommended from our members
Zapping index: Using smile to measure advertisement zapping likelihood
In marketing and advertising research, 'zapping' is the act of a viewer switching away from a commercial. Researchers analyze viewers' behavior in order to prevent zapping, which helps advertisers design effective commercials. Since emotions can be used to engage consumers, in this paper we leverage automated facial expression analysis to understand consumers' zapping behavior. Firstly, we provide an accurate moment-to-moment smile detection algorithm. Secondly, we formulate a binary classification problem (zapping/non-zapping) based on real-world scenarios, and adopt the smile response as the feature for predicting zapping. Thirdly, to address the lack of a metric for advertising evaluation, we propose a new metric called the Zapping Index (ZI). ZI is a moment-to-moment measurement of a user's zapping probability, gauging not only a user's reaction but also a user's preference among commercials. Finally, extensive experiments are performed to provide insights, and we make recommendations useful to both advertisers and advertisement publishers.
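The abstract's pipeline can be sketched end to end: per-frame smile responses are summarised into features for a binary zap/no-zap classifier, and a moment-to-moment index is read off as the model's probability on the portion of the trace seen so far. All data below is synthetic and the summary features are illustrative assumptions, not the paper's actual ZI definition.

```python
# Sketch: smile traces -> summary features -> zap classifier -> ZI curve.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_viewers, n_frames = 300, 120

# Synthetic smile intensities in [0, 1]; zappers smile less on average.
zapped = rng.integers(0, 2, n_viewers)
smiles = np.clip(rng.normal(0.5 - 0.15 * zapped[:, None], 0.1,
                            size=(n_viewers, n_frames)), 0, 1)

def summarise(trace):
    """Simple summary features of a smile trace (illustrative choice)."""
    return [trace.mean(), trace.max(), trace.std()]

X = np.array([summarise(t) for t in smiles])
clf = LogisticRegression().fit(X, zapped)

def zapping_index(trace, t):
    """ZI at frame t: zap probability given the trace seen so far."""
    return clf.predict_proba([summarise(trace[: t + 1])])[0, 1]

# Moment-to-moment ZI curve for one viewer.
zi_curve = [zapping_index(smiles[0], t) for t in range(10, n_frames, 10)]
print([round(z, 2) for z in zi_curve])
```

The per-prefix probability is what makes the index moment-to-moment: it can rise or fall as the viewer's smile response evolves during the commercial.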
Unsupervised Text Extraction from G-Maps
This paper presents a text extraction method for Google Maps and GIS maps/images. Because the approach is unsupervised, it requires no prior knowledge of, or training set for, the textual and non-textual parts. Fuzzy C-Means clustering is used for image segmentation, and the Prewitt operator is used to detect edges. Connected-component analysis and a gridding technique improve the correctness of the results. The proposed method reaches a 98.5% accuracy level on the experimental data sets.
Comment: Proc. IEEE Conf. #30853, International Conference on Human Computer Interactions (ICHCI'13), Chennai, India, 23-24 Aug., 2013
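The segmentation-then-edges portion of the pipeline can be sketched on a toy image: a minimal two-cluster fuzzy c-means on pixel intensities, followed by Prewitt edge detection. This is a simplified sketch under stated assumptions; the paper works on real map imagery and additionally applies connected-component analysis and gridding, which are omitted here.

```python
# Sketch: fuzzy c-means segmentation of pixel intensities + Prewitt edges.
import numpy as np
from scipy.ndimage import prewitt

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on scalar intensities; returns memberships, centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)
        d = np.abs(values[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        u /= u.sum(axis=0)
    return u, centers

# Toy "map" image: a bright text-like blob on a dark background.
img = np.zeros((32, 32))
img[10:20, 8:24] = 1.0
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

u, centers = fuzzy_cmeans_1d(img.ravel())
text_cluster = int(np.argmax(centers))       # brighter cluster = "text"
mask = (np.argmax(u, axis=0) == text_cluster).reshape(img.shape)

# Prewitt edges on the segmented mask (gradients along both axes).
edges = np.hypot(prewitt(mask.astype(float), axis=0),
                 prewitt(mask.astype(float), axis=1))
print(mask.sum(), (edges > 0).sum())
```

Because the membership update is the standard FCM rule applied to scalar intensities, the two cluster centers settle near the background and foreground levels, and hard-assigning each pixel to its highest-membership cluster yields the segmentation mask on which edges are detected.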
Automatic Annotation of Images from the Practitioner Perspective
This paper describes an ongoing project which seeks to contribute to a wider understanding of the realities of bridging the semantic gap in visual image retrieval. A comprehensive survey of the means by which real image retrieval transactions are realised is being undertaken. An image taxonomy has been developed in order to provide a framework within which account may be taken of the plurality of image types, user needs and forms of textual metadata. Significant limitations exhibited by current automatic annotation techniques are discussed, and ontologically supported automatic content annotation is briefly considered as a potential means of mitigating these limitations.