Affective Image Content Analysis: Two Decades Review and New Perspectives
Images can convey rich semantics and induce various emotions in viewers.
Recently, with the rapid advancement of emotional intelligence and the
explosive growth of visual data, extensive research efforts have been dedicated
to affective image content analysis (AICA). In this survey, we will
comprehensively review the development of AICA in the recent two decades,
especially focusing on the state-of-the-art methods with respect to three main
challenges -- the affective gap, perception subjectivity, and label noise and
absence. We begin with an introduction to the key emotion representation models
widely employed in AICA and a description of the datasets available for
evaluation, together with a quantitative comparison of their label noise and
dataset bias. We then summarize and compare the representative approaches on
(1) emotion feature extraction, including both handcrafted and deep features,
(2) learning methods on dominant emotion recognition, personalized emotion
prediction, emotion distribution learning, and learning from noisy data or few
labels, and (3) AICA based applications. Finally, we discuss some challenges
and promising research directions in the future, such as image content and
context understanding, group emotion clustering, and viewer-image interaction.
Comment: Accepted by IEEE TPAMI
Emotional Design: An Overview
Emotional design has been well recognized in the domain of human factors and ergonomics. In this chapter, we review related models and methods of emotional design, and we encourage emotional designers to take multiple perspectives when examining them. We then propose a systematic process for emotional design, comprising affective-cognitive needs elicitation, affective-cognitive needs analysis, and affective-cognitive needs fulfillment. Within each step, we provide an updated review of representative methods to support and offer further guidance on emotional design. We hope researchers and industrial practitioners will take a systematic approach and consider each step in the framework with care. Finally, we speculate on challenges and future directions that can help researchers across different fields further advance emotional design.
http://deepblue.lib.umich.edu/bitstream/2027.42/163319/1/Emotional_Design_Manuscript_Final.pdf
Urban Visual Intelligence: Studying Cities with AI and Street-level Imagery
The visual dimension of cities has been a fundamental subject in urban
studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim,
and Jacobs. Several decades later, big data and artificial intelligence (AI)
are revolutionizing how people move, sense, and interact with cities. This
paper reviews the literature on the appearance and function of cities to
illustrate how visual information has been used to understand them. A
conceptual framework, Urban Visual Intelligence, is introduced to
systematically elaborate on how new image data sources and AI techniques are
reshaping the way researchers perceive and measure cities, enabling the study
of the physical environment and its interactions with socioeconomic
environments at various scales. The paper argues that these new approaches
enable researchers to revisit the classic urban theories and themes, and
potentially help cities create environments that are more in line with human
behaviors and aspirations in the digital age.
Multi-View Graph Fusion for Semi-Supervised Learning: Application to Image-Based Face Beauty Prediction
Facial Beauty Prediction (FBP) is an important visual recognition problem that evaluates the attractiveness of faces according to human perception. Most existing FBP methods are supervised solutions using geometric or deep features; semi-supervised learning for FBP remains an almost unexplored research area. In this work, we propose a graph-based semi-supervised method in which multiple graphs are constructed to find an appropriate graph representation of the face images (with and without scores). The proposed method combines both geometric and deep feature-based graphs to produce a high-level representation of face images instead of relying on a single face descriptor, and it improves the discriminative ability of graph-based score propagation methods. In addition to the data graph, our approach adaptively fuses an additional graph built on the predicted beauty values. Experimental results on the SCUT-FBP5500 facial beauty dataset demonstrate the superiority of the proposed algorithm compared to other state-of-the-art methods.
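The core idea of the abstract above (fusing per-view affinity graphs and propagating known beauty scores to unlabeled faces) can be illustrated with a minimal sketch. The feature matrices, RBF affinities, equal fusion weights, and the classic normalized label-propagation update are assumptions for illustration; the paper's actual graph construction and adaptive fusion are more elaborate.

```python
import numpy as np

def rbf_graph(X, gamma=1.0):
    # Pairwise Gaussian (RBF) affinity matrix for one feature view.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq_dists)
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

def propagate_scores(W, y, labeled_mask, alpha=0.9, n_iter=100):
    # Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2},
    # then iterate F <- alpha * S @ F + (1 - alpha) * Y until (near) convergence.
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(d[:, None] * d[None, :])
    Y = np.where(labeled_mask, y, 0.0)  # unlabeled entries start at 0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# Toy data: 6 faces with two feature "views"; the first 3 have known scores.
rng = np.random.default_rng(0)
geo = rng.normal(size=(6, 4))    # stand-in for geometric features
deep = rng.normal(size=(6, 8))   # stand-in for deep features
scores = np.array([4.0, 3.0, 2.0, 0.0, 0.0, 0.0])
mask = np.array([True, True, True, False, False, False])

# Fuse the two view graphs (equal weights here, for simplicity) and propagate.
W = 0.5 * rbf_graph(geo) + 0.5 * rbf_graph(deep)
pred = propagate_scores(W, scores, mask)
```

In this sketch the fusion weights are fixed; the adaptive step described in the abstract would instead learn them (and an extra graph over predicted scores) jointly with the propagation.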