Affective Image Content Analysis: Two Decades Review and New Perspectives
Images can convey rich semantics and induce various emotions in viewers.
Recently, with the rapid advancement of emotional intelligence and the
explosive growth of visual data, extensive research efforts have been dedicated
to affective image content analysis (AICA). In this survey, we will
comprehensively review the development of AICA over the past two decades,
especially focusing on the state-of-the-art methods with respect to three main
challenges -- the affective gap, perception subjectivity, and label noise and
absence. We begin with an introduction to the key emotion representation models
that have been widely employed in AICA and a description of the available
evaluation datasets, with a quantitative comparison of label noise and dataset
bias. We then summarize and compare the representative approaches on
(1) emotion feature extraction, including both handcrafted and deep features,
(2) learning methods for dominant emotion recognition, personalized emotion
prediction, emotion distribution learning, and learning from noisy data or few
labels, and (3) AICA based applications. Finally, we discuss some challenges
and promising research directions in the future, such as image content and
context understanding, group emotion clustering, and viewer-image interaction.
Comment: Accepted by IEEE TPAMI.
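The survey's taxonomy includes emotion distribution learning, where a model predicts a probability distribution over emotion categories rather than a single dominant label. As a minimal sketch of that setting (not the survey's own implementation), the following assumes a generic feature backbone, eight Mikels-style emotion categories, and a KL-divergence training loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS = 8  # assumed category count (e.g. Mikels' eight emotions)

class EmotionDistributionHead(nn.Module):
    """Maps backbone features to log-probabilities over emotion categories."""
    def __init__(self, backbone_dim=512):
        super().__init__()
        self.fc = nn.Linear(backbone_dim, NUM_EMOTIONS)

    def forward(self, features):
        return F.log_softmax(self.fc(features), dim=1)

def distribution_loss(log_pred, target_dist):
    # KL divergence between the annotated distribution and the prediction;
    # target_dist rows sum to 1, log_pred holds log-probabilities
    return F.kl_div(log_pred, target_dist, reduction="batchmean")

# toy usage with random backbone features and a uniform target distribution
features = torch.randn(4, 512)
target = torch.full((4, NUM_EMOTIONS), 1.0 / NUM_EMOTIONS)
head = EmotionDistributionHead()
loss = distribution_loss(head(features), target)
```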
The Shortest Path to Happiness: Recommending Beautiful, Quiet, and Happy Routes in the City
When providing directions to a place, web and mobile mapping services are all
able to suggest the shortest route. The goal of this work is to automatically
suggest routes that are not only short but also emotionally pleasant. To
quantify the extent to which urban locations are pleasant, we use data from a
crowd-sourcing platform that shows two street scenes in London (out of
hundreds), and a user votes on which one looks more beautiful, quiet, and
happy. We consider votes from more than 3.3K individuals and translate them
into quantitative measures of location perceptions. We arrange those locations
into a graph upon which we learn pleasant routes. Based on a quantitative
validation, we find that, compared to the shortest routes, the recommended ones
add just a few extra walking minutes and are indeed perceived to be more
beautiful, quiet, and happy. To test the generality of our approach, we
consider Flickr metadata of more than 3.7M pictures in London and 1.3M in
Boston, compute proxies for the crowdsourced beauty dimension (the one for
which we have collected the most votes), and evaluate those proxies with 30
participants in London and 54 in Boston. These participants have not only rated
our recommendations but have also carefully motivated their choices, providing
insights for future work.
Comment: 11 pages, 7 figures, Proceedings of ACM Hypertext 2014.
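As a rough illustration of the routing step this abstract describes, the sketch below treats street segments as weighted graph edges whose cost blends walking length with a pleasantness penalty. The blend weight `alpha`, the per-edge `beauty` scores, and the toy graph are assumptions for illustration; the paper derives its perception scores from pairwise crowd votes rather than fixed edge attributes.

```python
import networkx as nx

def pleasant_route(G, source, target, alpha=1.0):
    """Shortest path under cost = segment length + alpha * (1 - beauty)."""
    def cost(u, v, data):
        return data["length"] + alpha * (1.0 - data.get("beauty", 0.5))
    return nx.dijkstra_path(G, source, target, weight=cost)

# toy example: the direct street is short but ugly; the slightly longer
# detour through B is much more pleasant
G = nx.Graph()
G.add_edge("A", "C", length=1.0, beauty=0.2)
G.add_edge("A", "B", length=0.6, beauty=0.9)
G.add_edge("B", "C", length=0.6, beauty=0.9)

print(pleasant_route(G, "A", "C"))                      # ['A', 'B', 'C']
print(nx.dijkstra_path(G, "A", "C", weight="length"))   # ['A', 'C']
```

The "few extra walking minutes" finding corresponds here to the 0.2 units of additional length the recommended detour adds over the shortest route.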
First impressions: A survey on vision-based apparent personality trait analysis
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most frequently considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, this paper presents an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.
Weakly supervised coupled networks for visual sentiment analysis
Automatic assessment of sentiment from visual content
has gained considerable attention with the increasing tendency
of expressing opinions online. In this paper, we address
visual sentiment analysis by exploiting the high-level
abstraction in the recognition process. Existing methods
based on convolutional neural networks learn sentiment
representations from the holistic image appearance. However,
different image regions can have a different influence
on the intended expression. This paper presents a weakly
supervised coupled convolutional network with two branches
to leverage the localized information. The first branch
detects a sentiment-specific soft map by training a fully convolutional
network with a cross-spatial pooling strategy,
which only requires image-level labels, thereby significantly
reducing the annotation burden. The second branch utilizes
both the holistic and localized information by coupling
the sentiment map with deep features for robust classification.
We integrate the sentiment detection and classification
branches into a unified deep framework and optimize
the network in an end-to-end manner. Extensive experiments
on six benchmark datasets demonstrate that the
proposed method performs favorably against the state-of-the-art
methods for visual sentiment analysis.
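To make the two-branch design concrete, here is a minimal sketch (not the authors' released code) of a detection branch that produces per-class activation maps supervised only by image-level labels, and a classification branch that re-weights the shared features with the resulting sentiment map. The pooling and map-fusion choices below are simplified stand-ins for the paper's cross-spatial pooling:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledSentimentNet(nn.Module):
    def __init__(self, in_channels=512, num_classes=2):
        super().__init__()
        # 1x1 conv turns conv features into one activation map per class
        self.class_maps = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.classifier = nn.Linear(in_channels, num_classes)

    def forward(self, feats):                     # feats: (B, C, H, W)
        maps = self.class_maps(feats)             # (B, num_classes, H, W)
        # spatial pooling yields image-level detection logits, so this
        # branch needs only image-level labels (the weak supervision)
        det_logits = maps.mean(dim=(2, 3))
        # fuse the class maps into a single soft sentiment map
        sent_map = torch.sigmoid(maps.sum(dim=1, keepdim=True))
        # couple the map with holistic features: localized + holistic cues
        pooled = (feats * (1.0 + sent_map)).mean(dim=(2, 3))
        return det_logits, self.classifier(pooled), sent_map

# end-to-end optimization of both branches with shared image-level labels
model = CoupledSentimentNet()
feats = torch.randn(4, 512, 7, 7)
labels = torch.randint(0, 2, (4,))
det, cls, _ = model(feats)
loss = F.cross_entropy(det, labels) + F.cross_entropy(cls, labels)
```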
PDANet: Polarity-consistent Deep Attention Network for Fine-grained Visual Emotion Regression
Existing methods on visual emotion analysis mainly focus on coarse-grained
emotion classification, i.e. assigning an image with a dominant discrete
emotion category. However, these methods cannot well reflect the complexity and
subtlety of emotions. In this paper, we study the fine-grained regression
problem of visual emotions based on convolutional neural networks (CNNs).
Specifically, we develop a Polarity-consistent Deep Attention Network (PDANet),
a novel network architecture that integrates attention into a CNN with an
emotion polarity constraint. First, we propose to incorporate both spatial and
channel-wise attentions into a CNN for visual emotion regression, which jointly
considers the local spatial connectivity patterns along each channel and the
interdependency between different channels. Second, we design a novel
regression loss, i.e. polarity-consistent regression (PCR) loss, based on the
weakly supervised emotion polarity to guide the attention generation. By
optimizing the PCR loss, PDANet can generate a polarity preserved attention map
and thus improve the emotion regression performance. Extensive experiments are
conducted on the IAPS, NAPS, and EMOTIC datasets, and the results demonstrate
that the proposed PDANet outperforms the state-of-the-art approaches by a large
margin for fine-grained visual emotion regression. Our source code is released
at: https://github.com/ZizhouJia/PDANet.
Comment: Accepted by ACM Multimedia 2019.
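The abstract does not spell out the PCR loss, but one plausible reading is a standard regression term plus a hinge penalty whenever the predicted valence lands on the wrong side of neutral. The sketch below is a hedged reconstruction under that reading; the neutral point at 0, the `margin`, and the weight `lam` are illustrative choices, not the paper's constants:

```python
import torch

def pcr_loss(pred, target, margin=0.0, lam=1.0):
    """MSE plus a penalty when sign(pred) contradicts the weak polarity,
    taken here as sign(target)."""
    mse = torch.mean((pred - target) ** 2)
    polarity = torch.sign(target)
    # hinge term is positive only for polarity-inconsistent predictions
    penalty = torch.clamp(margin - polarity * pred, min=0.0).mean()
    return mse + lam * penalty

pred = torch.tensor([0.3, -0.2, 0.5])
target = torch.tensor([0.4, 0.1, 0.6])  # second prediction flips polarity
print(pcr_loss(pred, target))
```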
Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology
Every culture and language is unique. Our work expressly focuses on the
uniqueness of culture and language in relation to human affect, specifically
sentiment and emotion semantics, and how they manifest in social multimedia. We
develop sets of sentiment- and emotion-polarized visual concepts by adapting
semantic structures called adjective-noun pairs, originally introduced by Borth
et al. (2013), but in a multilingual context. We propose a new
language-dependent method for automatic discovery of these adjective-noun
constructs. We show how this pipeline can be applied on a social multimedia
platform for the creation of a large-scale multilingual visual sentiment
concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our
unified ontology is organized hierarchically by multilingual clusters of
visually detectable nouns and subclusters of emotionally biased versions of
these nouns. In addition, we present an image-based prediction task to show how
generalizable language-specific models are in a multilingual context. A new,
publicly available dataset of >15.6K sentiment-biased visual concepts across 12
languages with language-specific detector banks, >7.36M images and their
metadata is also released.
Comment: 11 pages, to appear at ACM MM'15.
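As a toy illustration of the adjective-noun construct at the heart of this ontology, the sketch below extracts candidate pairs from English captions with part-of-speech tagging. It is English-only and caption-based, whereas the paper's discovery method is language-dependent and runs over large-scale social-multimedia metadata; the NLTK resource names may also vary across library versions.

```python
from collections import Counter
import nltk

# download tokenizer/tagger models; newer NLTK releases use the *_tab /
# *_eng resource names, so we try both and ignore whichever is missing
for res in ("punkt", "punkt_tab",
            "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(res, quiet=True)

def extract_anps(captions):
    """Count adjective-noun bigrams (JJ* followed by NN*) in captions."""
    anps = Counter()
    for caption in captions:
        tags = nltk.pos_tag(nltk.word_tokenize(caption.lower()))
        for (w1, t1), (w2, t2) in zip(tags, tags[1:]):
            if t1.startswith("JJ") and t2.startswith("NN"):
                anps[(w1, w2)] += 1
    return anps

captions = ["a beautiful sunset over the old harbour",
            "beautiful sunset and a lonely road"]
print(extract_anps(captions).most_common(3))
```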
Modeling Group Dynamics for Personalized Robot-Mediated Interactions
The field of human-human-robot interaction (HHRI) uses social robots to
positively influence how humans interact with each other. This objective
requires models of human understanding that consider multiple humans in an
interaction as a collective entity and represent the group dynamics that exist
within it. Understanding group dynamics is important because these can
influence the behaviors, attitudes, and opinions of each individual within the
group, as well as the group as a whole. Such an understanding is also useful
when personalizing an interaction between a robot and the humans in its
environment, where a group-level model can facilitate the design of robot
behaviors that are tailored to a given group, the dynamics that exist within
it, and the specific needs and preferences of the individual interactants. In
this paper, we highlight the need for group-level models of human understanding
in human-human-robot interaction research and how these can be useful in
developing personalization techniques. We survey existing models of group
dynamics and categorize them into models of social dominance, affect, social
cohesion, and conflict resolution. We highlight the important features these
models utilize, evaluate their potential to capture interpersonal aspects of a
social interaction, and highlight their value for personalization techniques.
Finally, we identify directions for future work, and make a case for models of
relational affect as an approach that can better capture group-level
understanding of human-human interactions and be useful in personalizing
human-human-robot interactions.
Impression Classification of Endek (Balinese Fabric) Image Using K-Nearest Neighbors Method
An impression can be interpreted as a psychological feeling toward a product, and it plays an important role in decision making; understanding data in the domain of impressions is therefore very useful. The objective of this research was to evaluate the performance of the K-Nearest Neighbors method in classifying the impressions of endek images, using the K-Fold Cross Validation method. The images were taken from three locations, namely CV. Artha Dharma, Agung Bali Collection, and Pengrajin Sri Rejeki. Impression labels for the images were obtained by consulting an endek expert, Dr. D.A. Tirta Ray, M.Si. The data mining process used the K-Nearest Neighbors method, a classification method that assigns new objects to classes based on their attributes and on previously labeled training samples. K-Fold Cross Validation testing obtained an accuracy of 91% with K values in K-Nearest Neighbors of 3, 4, 7, and 8.
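The evaluation described above (K-Nearest Neighbors scored with K-Fold Cross Validation) maps directly onto standard library calls. The sketch below uses random stand-in feature vectors and labels, since the endek images and their features are not reproduced here; the fold count of 10 is an assumption, and only the K values 3, 4, 7, and 8 come from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((90, 16))            # stand-in image feature vectors
y = rng.integers(0, 3, size=90)     # stand-in impression classes

for k in (3, 4, 7, 8):              # the K values reported above
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X, y, cv=10)  # assumed 10-fold CV
    print(f"K={k}: mean accuracy {scores.mean():.2f}")
```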