Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
Emotion evoked by an advertisement plays a key role in influencing brand
recall and eventual consumer choices. Automatic ad affect recognition has
several useful applications. However, the use of content-based feature
representations does not give insights into how affect is modulated by aspects
such as the ad scene setting, salient object attributes and their interactions.
Neither do such approaches inform us on how humans prioritize visual
information for ad understanding. Our work addresses these lacunae by
decomposing video content into detected objects, coarse scene structure, object
statistics and actively attended objects identified via eye-gaze. We measure
the importance of each of these information channels by systematically
incorporating related information into ad affect prediction models. Contrary to
the popular notion that ad affect hinges on the narrative and the clever use of
linguistic and social cues, we find that actively attended objects and the
coarse scene structure better encode affective information as compared to
individual scene objects or conspicuous background elements.
Comment: Accepted for publication in the Proceedings of the 20th ACM
International Conference on Multimodal Interaction, Boulder, CO, US
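The channel-comparison idea described above lends itself to a simple sketch: train the same affect classifier on each information channel in isolation and compare scores. This is a minimal sketch in Python assuming scikit-learn; the feature dimensions, channel names and labels are placeholders of our own, not the authors' pipeline.

# Hedged sketch: score each information channel by how well it alone
# predicts ad affect. All features and labels below are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def evaluate_channel(features, labels):
    """Cross-validated accuracy of one channel's affect predictions."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()

# Each entry stands in for per-ad descriptors from one channel: detected
# objects, coarse scene structure, object statistics, and gaze-attended
# objects (dimensions are arbitrary placeholders).
rng = np.random.default_rng(0)
channels = {
    "objects": rng.random((100, 64)),
    "scene_structure": rng.random((100, 32)),
    "object_stats": rng.random((100, 16)),
    "attended_objects": rng.random((100, 64)),
}
labels = rng.integers(0, 2, size=100)  # e.g. high vs. low valence

for name, feats in channels.items():
    print(f"{name}: {evaluate_channel(feats, labels):.2f}")

A channel whose features carry more affective information should score higher under the same classifier, which is the kind of comparison the abstract describes.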
Information acquisition using eye-gaze tracking for person-following with mobile robots
In the effort to develop natural means for human-robot interaction (HRI), a significant amount of research has focused on Person-Following (PF) for mobile robots. PF, which generally consists of detecting, recognizing and following people, is believed to be one of the required functionalities for most future robots that share their environments with their human companions. Research in this field is mostly directed towards fully automating this functionality, which makes the challenge even more demanding; focusing on full automation also diverts research from other challenges that coexist in any PF system. A natural PF functionality consists of a number of tasks that need to be implemented in the system. However, in more realistic scenarios, not all the tasks required for PF need to be automated. Instead, some of these tasks can be performed by human operators and therefore require natural means of interaction and information acquisition. In order to highlight all the tasks that are believed to exist in any PF system, this paper introduces a novel taxonomy for PF. Also, in order to provide a natural means for HRI, TeleGaze is used for information acquisition in the implementation of the taxonomy. TeleGaze was previously developed by the authors as a means of natural HRI for teleoperation through eye-gaze tracking. Using TeleGaze to aid the development of PF systems is believed to show the feasibility of achieving realistic information acquisition in a natural way.
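To make the taxonomy idea concrete, here is a toy Python sketch of a PF pipeline split into automated tasks and operator-driven tasks handled through eye-gaze input; the task names and the particular split are illustrative assumptions, not the paper's actual taxonomy.

# Hedged sketch: a PF pipeline mixing robot-automated tasks with tasks
# delegated to a human operator (e.g. via TeleGaze). Names are illustrative.
from dataclasses import dataclass

@dataclass
class PFTask:
    name: str
    automated: bool  # False: handled by the operator through eye-gaze

pipeline = [
    PFTask("detect people", automated=True),
    PFTask("select target person", automated=False),  # operator picks via gaze
    PFTask("recognize target", automated=True),
    PFTask("follow target", automated=True),
    PFTask("recover lost target", automated=False),   # operator re-acquires
]

for task in pipeline:
    mode = "robot" if task.automated else "operator (eye-gaze)"
    print(f"{task.name}: handled by {mode}")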
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions can be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction
A robot needs contextual awareness, effective speech production and
complementary non-verbal gestures for successful communication in society. In
this paper, we present our end-to-end system that tries to enhance the
effectiveness of non-verbal gestures. To achieve this, we identified the
gestures prominently used in performances by TED speakers, mapped them to
their corresponding speech context, and modulated the speech based upon the
attention of the listener. The proposed method utilized the Convolutional
Pose Machine [4] to detect human gestures. Dominant gestures of TED speakers
were used to learn the gesture-to-speech mapping, and their speeches were
used to train the model. We also evaluated the engagement of the robot with
people by conducting a social survey. The robot monitored the effectiveness
of its performance and improvised its speech pattern based on the attention
level of the audience, which was calculated using visual feedback from the
camera. The effectiveness of the interaction, as well as the decisions made
during improvisation, was further evaluated based on head-pose detection and
an interaction survey.
Comment: 8 pages, 9 figures, Under review in IRC 201
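The attention-driven improvisation loop described in this abstract can be sketched in a few lines of Python; the head-yaw threshold, the rate and volume adjustments, and the function names are assumptions for illustration, not the paper's measured values or implementation.

# Hedged sketch: estimate audience attention from head pose and adjust
# speech delivery when attention drops. Thresholds are illustrative.
def attention_level(head_yaws_deg):
    """Fraction of listeners roughly facing the robot (|yaw| < 30 deg)."""
    facing = [abs(yaw) < 30.0 for yaw in head_yaws_deg]
    return sum(facing) / len(facing) if facing else 0.0

def modulate_speech(base_rate, base_volume, attention):
    """Slow down and speak up when attention is low (illustrative rule)."""
    if attention < 0.5:
        return base_rate * 0.85, base_volume * 1.2
    return base_rate, base_volume

# Example: three listeners, one looking away from the robot.
yaws = [5.0, -12.0, 75.0]
rate, volume = modulate_speech(1.0, 1.0, attention_level(yaws))
print(f"attention={attention_level(yaws):.2f}, rate={rate}, volume={volume}")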