High-Level Concepts for Affective Understanding of Images
This paper aims to bridge the affective gap between image content and the
emotional response it elicits in the viewer by using High-Level Concepts
(HLCs). In contrast to previous work that relied solely on low-level features
or used convolutional neural networks (CNNs) as black boxes, we use HLCs
generated by pretrained CNNs in an explicit way to investigate the
associations between these HLCs and a small set of Ekman's emotion classes.
As a proof of concept, we first propose a linear admixture model for these
relations; the resulting computational framework allows us to determine the
associations between each emotion class and certain HLCs (objects and
places). This linear model is then extended to a nonlinear model based on
support vector regression (SVR) that predicts the viewer's emotional response
from both low-level image features and HLCs extracted from the images. These
class-specific regressors are assembled into an ensemble that provides a
flexible and effective predictor of viewers' emotional responses to images.
Experimental results demonstrate that our method is comparable to existing
approaches while offering a clear view of the association between HLCs and
emotion classes that is largely missing from prior work.
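The linear-admixture step can be sketched in a few lines: emotion scores are modeled as a linear mixture of HLC detector responses, so the fitted weight matrix exposes which HLCs associate with each emotion class. This is an illustrative least-squares sketch on synthetic data, not the authors' exact formulation; all sizes and variable names are assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' exact formulation): emotion
# scores E are modeled as a linear mixture of High-Level Concept (HLC)
# responses H, so the fitted weight matrix W links HLCs to emotions.
rng = np.random.default_rng(0)
n_images, n_hlcs, n_emotions = 50, 8, 3

H = rng.uniform(size=(n_images, n_hlcs))        # HLC responses per image
W_true = rng.normal(size=(n_hlcs, n_emotions))  # hidden associations
E = H @ W_true + 0.01 * rng.normal(size=(n_images, n_emotions))

# Least-squares fit: each column of W_hat ties the HLCs to one emotion.
W_hat, *_ = np.linalg.lstsq(H, E, rcond=None)

# Strongest-associated HLC index for each emotion class.
top_hlc = np.argmax(np.abs(W_hat), axis=0)
```

Inspecting the columns of `W_hat` (or `top_hlc`) is what gives the "clear view" of HLC-emotion associations the abstract describes.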
Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction
Visual media are a powerful means of expressing emotions and sentiments. The
constant generation of new content in social networks highlights the need for
automated visual sentiment analysis tools. While Convolutional Neural
Networks (CNNs) have established a new state of the art in several vision
problems, their application to sentiment analysis is largely unexplored, and
there are few studies on how to design CNNs for this purpose. In this work,
we study the suitability of fine-tuning a CNN for visual sentiment prediction
and explore performance-boosting techniques within this deep learning
setting. Finally, we provide a deep-dive analysis of a benchmark,
state-of-the-art network architecture to gain insight into how to design
CNNs for the task of visual sentiment prediction.
Comment: Preprint of the paper accepted at the 1st Workshop on Affect and
Sentiment in Multimedia (ASM), ACM Multimedia 2015, Brisbane, Australia.
Affective Sustainability. The Creation and Transmission of Affect through an Educative Process: An Instrument for the Construction of more Sustainable Citizens
Although for many years the debate on sustainability has focused on generating
critical thinking based on the dynamic balance between the economic, social and
environmental spheres, in the following text we propose to elaborate on the use of an
eminently human condition, namely the capacity to love and form an emotional
attachment, whether with our environment or with our fellow human beings, as the
initiator and main force for change toward building a more sustainable model of
development. To do so, we begin from the concept coined by Adriana Bisquert in the
1990s, affective sustainability, analyzing it and delving into its possible definitions
through the development of the Environmental Education and Development project
“Educating for a more sustainable citizenship”, undertaken by the Spanish NGO
(non-governmental organization) ITACA Ambiente Elegido and carried out in the locality
of Paterna de Rivera, Cádiz (Spain). This practical, real-world example is used to
establish an educational work methodology that allows us to regard this concept as a
real basis for exportable and replicable work in the painstaking search for a more
sustainable city.
How to Make an Image More Memorable? A Deep Style Transfer Approach
Recent works have shown that it is possible to automatically predict
intrinsic image properties such as memorability. In this paper, we take a
step forward by addressing the question: "Can we make an image more
memorable?". Methods for automatically increasing image memorability would
have an impact on many application fields such as education, gaming or
advertising. Our work is inspired by the popular editing-by-applying-filters
paradigm adopted in photo editing applications such as Instagram and Prisma.
In this context, the problem of increasing image memorability maps to that of
retrieving "memorabilizing" filters or style "seeds". However, users
generally have to go through most of the available filters before finding the
desired result, turning the editing process into a resource- and
time-consuming task. In this work, we show that it is possible to
automatically retrieve the best style seeds for a given image, thus markedly
reducing the number of attempts needed to find a good match. Our approach
leverages recent advances in image synthesis and adopts a deep architecture
for generating a memorable picture from a given input image and a style seed.
Importantly, to automatically select the best style, we propose a novel
learning-based solution that also relies on deep models. Our experimental
evaluation, conducted on publicly available benchmarks, demonstrates the
effectiveness of the proposed approach for generating memorable images
through automatic style seed selection.
Comment: Accepted at ACM ICMR 2017.
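The seed-retrieval step reduces to a ranking problem: score every candidate style seed for the input image and return the top few. The sketch below uses a hypothetical bilinear scorer over image and seed embeddings standing in for the paper's learned deep model; every array and dimension here is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of style-seed retrieval (hypothetical bilinear scorer,
# not the paper's deep model): a learned weight matrix M scores each
# (image, seed) pair by predicted memorability gain; the top-ranked
# seeds are offered to the user instead of the whole filter catalogue.
rng = np.random.default_rng(1)
n_seeds, d_img, d_seed = 20, 16, 16

img = rng.normal(size=d_img)                # descriptor of the input image
seeds = rng.normal(size=(n_seeds, d_seed))  # embeddings of candidate seeds
M = rng.normal(size=(d_img, d_seed))        # stand-in for learned weights

# Bilinear compatibility score per seed, then rank descending.
scores = seeds @ (M.T @ img)
top3 = np.argsort(scores)[::-1][:3]
```

Each retrieved seed would then be fed, together with the input image, to the style-transfer network that synthesizes the candidate memorable images.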