Affective Image Content Analysis: Two Decades Review and New Perspectives
Images can convey rich semantics and induce various emotions in viewers.
Recently, with the rapid advancement of emotional intelligence and the
explosive growth of visual data, extensive research efforts have been dedicated
to affective image content analysis (AICA). In this survey, we will
comprehensively review the development of AICA in the recent two decades,
especially focusing on the state-of-the-art methods with respect to three main
challenges -- the affective gap, perception subjectivity, and label noise and
absence. We begin with an introduction to the key emotion representation models
that have been widely employed in AICA and description of available datasets
for performing evaluation with quantitative comparison of label noise and
dataset bias. We then summarize and compare the representative approaches on
(1) emotion feature extraction, including both handcrafted and deep features,
(2) learning methods on dominant emotion recognition, personalized emotion
prediction, emotion distribution learning, and learning from noisy data or few
labels, and (3) AICA based applications. Finally, we discuss some challenges
and promising research directions in the future, such as image content and
context understanding, group emotion clustering, and viewer-image interaction.
Comment: Accepted by IEEE TPAMI
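The emotion distribution learning setting mentioned above treats the viewers' votes for an image as a target distribution rather than a single dominant label. A minimal sketch, with invented vote counts over Mikels' eight categories and a KL-divergence loss as an illustrative (not survey-prescribed) objective:

```python
import numpy as np

# Hypothetical per-viewer votes for one image over Mikels' eight emotion
# categories; in distribution learning the target is the normalized
# histogram rather than the single dominant label.
CATEGORIES = ["amusement", "awe", "contentment", "excitement",
              "anger", "disgust", "fear", "sadness"]
votes = np.array([5, 2, 1, 0, 0, 0, 1, 1], dtype=float)

def label_distribution(votes):
    """Normalize raw vote counts into an emotion distribution."""
    return votes / votes.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), a common training/evaluation loss for distribution learning."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

target = label_distribution(votes)
uniform = np.full_like(target, 1.0 / len(target))
print(kl_divergence(target, uniform))  # divergence from an uninformative prior
```

A dominant-emotion recognizer would keep only `target.argmax()`; the distribution keeps the subjectivity information that the survey highlights.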
Computational Emotion Analysis From Images: Recent Advances and Future Directions
Emotions are usually evoked in humans by images. Recently, extensive research
efforts have been dedicated to understanding the emotions of images. In this
chapter, we aim to introduce image emotion analysis (IEA) from a computational
perspective with the focus on summarizing recent advances and suggesting future
directions. We begin with commonly used emotion representation models from
psychology. We then define the key computational problems that the researchers
have been trying to solve and provide supervised frameworks that are generally
used for different IEA tasks. After the introduction of major challenges in
IEA, we present some representative methods on emotion feature extraction,
supervised classifier learning, and domain adaptation. Furthermore, we
introduce available datasets for evaluation and summarize some main results.
Finally, we discuss some open questions and future directions that researchers
can pursue.
Comment: Accepted chapter in the book "Human Perception of Visual Information: Psychological and Computational Perspectives"
About the nature of Kansei information, from abstract to concrete
Designers' expertise draws on the scientific fields of emotional design and kansei information. This paper aims to address a major scientific issue: how to formalize designers' knowledge, rules, and skills into kansei information systems. Kansei can be considered a psycho-physiological, perceptive, cognitive, and affective process through a particular experience. Kansei-oriented methods include various approaches that deal with semantics and emotions and show their correlation with certain design properties. Kansei words may include semantic, sensory, and emotional descriptors, as well as object names and product attributes. Kansei levels of information can be placed on an axis running from abstract to concrete dimensions. Sociological value is the most abstract information positioned on this axis. Previous studies demonstrate that the values people aspire to drive their emotional reactions to particular semantics. This means that the value dimension should be considered in kansei studies. Through a chain of value-function-product attributes it is possible to enrich design generation and design evaluation processes. This paper describes some knowledge structures and formalisms we established according to this chain, which can later be used to implement computer-aided design tools dedicated to early design. These structures open onto new formalisms that enable the integration of design information in a non-hierarchical way. The foreseen algorithmic implementation may be based on the association of ontologies and bag-of-words models.
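The value-function-product-attribute chain described above can be sketched as a simple association structure. All names below are invented illustrations, not the authors' formalism:

```python
# Hypothetical sketch of the value -> function -> product-attribute chain,
# going from the most abstract kansei level (sociological value) to the
# most concrete (product attributes). All entries are invented examples.
chain = {
    "authenticity": {                         # sociological value (abstract)
        "evoke craftsmanship": [              # function
            "matte finish", "visible wood grain",  # product attributes
        ],
    },
}

def attributes_for_value(chain, value):
    """Collect all concrete attributes reachable from an abstract value."""
    attrs = []
    for function, attributes in chain.get(value, {}).items():
        attrs.extend(attributes)
    return attrs

print(attributes_for_value(chain, "authenticity"))
# → ['matte finish', 'visible wood grain']
```

In the non-hierarchical formalisms the paper foresees, such links would come from ontologies and bag-of-words associations rather than a fixed tree like this.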
Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos
When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. We propose in this work to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense trajectory based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
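The late-fusion scheme this abstract describes (per-modality SVMs whose scores are combined before the final decision) can be sketched as follows. The features here are synthetic stand-ins for the learned audio, visual, and motion representations, and the fusion weights are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-modality features (the real pipeline uses
# CNN-learned MFCC/audio, HSV/visual, and dense-trajectory representations).
n, n_classes = 200, 4                      # four VA quadrants
y = rng.integers(0, n_classes, n)
audio  = rng.normal(size=(n, 32)) + y[:, None]        # class-informative shift
visual = rng.normal(size=(n, 48)) + 0.5 * y[:, None]
motion = rng.normal(size=(n, 16)) + 0.3 * y[:, None]

# One multi-class SVM per modality, with probability outputs for late fusion.
models = [SVC(probability=True, random_state=0).fit(X, y)
          for X in (audio, visual, motion)]

def late_fusion_predict(models, feats, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of per-modality class probabilities (late fusion)."""
    probs = sum(w * m.predict_proba(X)
                for w, m, X in zip(weights, models, feats))
    return probs.argmax(axis=1)

pred = late_fusion_predict(models, (audio, visual, motion))
print((pred == y).mean())  # training accuracy of the fused classifier
```

Early fusion (concatenating the three feature vectors before one SVM) is the usual alternative; the paper compares such mechanisms on DEAP.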
High-Level Concepts for Affective Understanding of Images
This paper aims to bridge the affective gap between image content and the
emotional response of the viewer it elicits by using High-Level Concepts
(HLCs). In contrast to previous work that relied solely on low-level features
or used convolutional neural network (CNN) as a black-box, we use HLCs
generated by pretrained CNNs in an explicit way to investigate the
relations/associations between these HLCs and a (small) set of Ekman's
emotional classes. As a proof-of-concept, we first propose a linear admixture
model for modeling these relations, and the resulting computational framework
allows us to determine the associations between each emotion class and certain
HLCs (objects and places). This linear model is further extended to a nonlinear
model using support vector regression (SVR) that aims to predict the viewer's
emotional response using both low-level image features and HLCs extracted from
images. These class-specific regressors are then assembled into a regressor
ensemble that provides a flexible and effective predictor of viewers'
emotional responses to images. Experimental results demonstrate that our
method is comparable to existing approaches, while providing a clear view of
the association between HLCs and emotional classes that is missing in most
existing work.
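The regressor-ensemble idea (one class-specific SVR per Ekman emotion, trained on low-level features concatenated with HLC detector scores) can be sketched as below. The data, feature dimensions, and ratings are synthetic placeholders, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical features: low-level descriptors concatenated with high-level
# concept (HLC) detector scores, as in the paper's nonlinear extension.
n, d_low, d_hlc = 150, 20, 10
X = np.hstack([rng.normal(size=(n, d_low)), rng.random((n, d_hlc))])

EKMAN = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]
# Synthetic per-class viewer-response scores in [0, 1].
Y = rng.random((n, len(EKMAN)))

# One class-specific SVR per emotion; together they form the ensemble.
ensemble = {e: SVR(kernel="rbf").fit(X, Y[:, i]) for i, e in enumerate(EKMAN)}

def predict_emotions(ensemble, x):
    """Predicted response score for each Ekman class; argmax = dominant."""
    return {e: float(m.predict(x.reshape(1, -1))[0])
            for e, m in ensemble.items()}

scores = predict_emotions(ensemble, X[0])
print(max(scores, key=scores.get))  # dominant predicted emotion
```

Because each regressor is class-specific, the learned weights on the HLC dimensions can be inspected per emotion, which is the interpretability advantage the abstract emphasizes.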