Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
We address personalization issues of image captioning, which have not been
discussed yet in previous research. For a query image, we aim to generate a
descriptive sentence, accounting for prior knowledge such as the user's active
vocabularies in previous documents. As applications of personalized image
captioning, we tackle two post automation tasks: hashtag prediction and post
generation, on our newly collected Instagram dataset, consisting of 1.1M posts
from 6.3K users. We propose a novel captioning model named Context Sequence
Memory Network (CSMN). Its unique updates over previous memory network models
include (i) exploiting memory as a repository for multiple types of context
information, (ii) appending previously generated words into memory to capture
long-term information without suffering from the vanishing gradient problem,
and (iii) adopting CNN memory structure to jointly represent nearby ordered
memory slots for better context understanding. With quantitative evaluation and
user studies via Amazon Mechanical Turk, we show the effectiveness of the three
novel features of CSMN and its performance enhancement for personalized image
captioning over state-of-the-art captioning models.
Comment: Accepted paper at CVPR 2017
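To make the memory design concrete, below is a minimal, illustrative sketch of the three ideas the abstract names: a memory that stores multiple context types (image features, the user's active vocabulary, previously generated words), appending generated words back into memory, and a 1-D CNN over nearby memory slots. This is not the authors' implementation; all layer sizes, module names, and the attention read mechanism are assumptions.

```python
# Illustrative sketch only; dimensions and the read mechanism are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextMemorySketch(nn.Module):
    def __init__(self, dim=256, vocab=10000, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)        # word / user-context embeddings
        self.img_proj = nn.Linear(2048, dim)         # project CNN image features
        # 1-D conv over memory slots: jointly represents nearby ordered slots
        self.mem_conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)
        self.out = nn.Linear(dim, vocab)

    def forward(self, img_feat, user_words, gen_words, query):
        # Memory as a repository of multiple context types:
        # image regions + user's active vocabulary + previously generated words.
        mem = torch.cat([self.img_proj(img_feat),
                         self.embed(user_words),
                         self.embed(gen_words)], dim=1)            # (B, S, D)
        mem = self.mem_conv(mem.transpose(1, 2)).transpose(1, 2)   # CNN over slots
        att = F.softmax(torch.bmm(mem, query.unsqueeze(2)).squeeze(2), dim=1)
        read = torch.bmm(att.unsqueeze(1), mem).squeeze(1)         # attention read
        return self.out(read)                                      # next-word logits
```

Because each generated word is appended to `gen_words` at the next step, long-range dependencies are carried by the memory itself rather than by a recurrent hidden state.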
FaceAtt: Enhancing Image Captioning with Facial Attributes for Portrait Images
Automated image caption generation is a critical area of research that
enhances accessibility and understanding of visual content for diverse
audiences. In this study, we propose the FaceAtt model, a novel approach to
attribute-focused image captioning that emphasizes the accurate depiction of
facial attributes within images. FaceAtt automatically detects and describes a
wide range of attributes, including emotions, expressions, pointed noses, fair
skin tones, hair textures, attractiveness, and approximate age ranges.
Leveraging deep learning techniques, we explore the impact of different image
feature extraction methods on caption quality and evaluate our model's
performance using metrics such as BLEU and METEOR. Our FaceAtt model leverages
annotated attributes of portraits as supplementary prior knowledge for our
portrait images before captioning. This innovative addition yields a subtle yet
discernible enhancement in the resulting scores, exemplifying the potency of
incorporating additional attribute vectors during training. Furthermore, our
research contributes to the broader discourse on ethical considerations in
automated captioning. This study sets the stage for future research in refining
attribute-focused captioning techniques, with a focus on enhancing linguistic
coherence, addressing biases, and accommodating diverse user needs.
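The general mechanism the abstract describes, using annotated attributes as a supplementary prior, can be sketched as fusing an attribute vector with the image features before decoding a caption. The sketch below is hedged: layer sizes, names, and the fusion scheme are assumptions, not the FaceAtt architecture.

```python
# Illustrative sketch of attribute-conditioned captioning; sizes are assumed.
import torch
import torch.nn as nn

class AttributeConditionedDecoder(nn.Module):
    def __init__(self, img_dim=2048, attr_dim=40, hidden=512, vocab=10000):
        super().__init__()
        # Fuse image features with the annotated attribute vector (the prior).
        self.fuse = nn.Linear(img_dim + attr_dim, hidden)
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, img_feat, attr_vec, captions):
        ctx = torch.tanh(self.fuse(torch.cat([img_feat, attr_vec], dim=1)))
        h0 = ctx.unsqueeze(0)              # initialize the decoder with fused context
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)         # (B, T, H) teacher-forced tokens
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)            # per-step vocabulary logits
```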
Facial Action Unit Detection Using Attention and Relation Learning
The attention mechanism has recently attracted increasing attention in the field
of facial action unit (AU) detection. By finding the region of interest of each
AU with the attention mechanism, AU-related local features can be captured.
Most existing attention-based AU detection works use prior knowledge to
predefine fixed attentions or refine the predefined attentions within a small
range, which limits their capacity to model various AUs. In this paper, we
propose an end-to-end deep learning based attention and relation learning
framework for AU detection with only AU labels, which has not been explored
before. In particular, multi-scale features shared by all AUs are learned
first, and then both channel-wise and spatial attentions are adaptively
learned to select and extract AU-related local features. Moreover, pixel-level
relations for AUs are further captured to refine spatial attentions so as to
extract more relevant local features. Without changing the network
architecture, our framework can be easily extended for AU intensity estimation.
Extensive experiments show that our framework (i) soundly outperforms the
state-of-the-art methods for both AU detection and AU intensity estimation on
the challenging BP4D, DISFA, FERA 2015 and BP4D+ benchmarks, (ii) can
adaptively capture the correlated regions of each AU, and (iii) also works well
under severe occlusions and large poses.
Comment: This paper is accepted by IEEE Transactions on Affective Computing
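As a rough illustration of adaptively learned channel-wise and spatial attention for selecting AU-related local features, the sketch below gates shared features with a channel gate and a spatial map before pooling and classification. It is a generic simplification with assumed sizes, not the paper's network, and it omits the pixel-level relation learning that refines the spatial attention.

```python
# Illustrative sketch of channel-wise + spatial attention for one AU.
import torch
import torch.nn as nn

class AUAttentionSketch(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        # Channel-wise attention: squeeze-and-excite style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels // 8, 1),
            nn.ReLU(), nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid())
        # Spatial attention: a 1-channel map over the feature grid.
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(channels, 1)    # one AU occurrence score

    def forward(self, feat):                        # feat: (B, C, H, W) shared features
        feat = feat * self.channel_gate(feat)       # select AU-relevant channels
        spatial = self.spatial_gate(feat)           # (B, 1, H, W) region of interest
        pooled = (feat * spatial).flatten(2).mean(-1)   # attention-weighted pooling
        return self.classifier(pooled)              # AU detection logit
```

Replacing the final classifier with a regression head would extend the same structure to AU intensity estimation without changing the attention modules.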
Spatio-temporal Video Re-localization by Warp LSTM
The need to efficiently find the video content a user wants is growing
because of the explosion of user-generated videos on the Web. Existing
keyword-based or content-based video retrieval methods usually determine what
occurs in a video but not when and where. In this paper, we answer the
question of when and where by formulating a new task, namely
spatio-temporal video re-localization. Specifically, given a query video and a
reference video, spatio-temporal video re-localization aims to localize
tubelets in the reference video such that the tubelets semantically correspond
to the query. To accurately localize the desired tubelets in the reference
video, we propose a novel warp LSTM network, which propagates the
spatio-temporal information for a long period and thereby captures the
corresponding long-term dependencies. Another issue for spatio-temporal video
re-localization is the lack of properly labeled video datasets. Therefore, we
reorganize the videos in the AVA dataset to form a new dataset for
spatio-temporal video re-localization research. Extensive experimental results
show that the proposed model achieves superior performance over the designed
baselines on the spatio-temporal video re-localization task.
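The core mechanism the abstract hints at, propagating spatio-temporal information across frames, can be illustrated by warping the previous hidden state toward the current frame before the recurrent update. The sketch below shows only that warping step, using a given pixel-space flow field and torch.nn.functional.grid_sample; it is an illustrative simplification under assumed conventions, not the paper's warp LSTM formulation.

```python
# Illustrative warping of a recurrent hidden state with a flow field (assumed
# to be in pixel units, channels ordered (dx, dy)); not the paper's exact cell.
import torch
import torch.nn.functional as F

def warp(hidden, flow):
    """Warp hidden state (B, C, H, W) with a flow field (B, 2, H, W)."""
    b, _, h, w = hidden.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(hidden.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                # shifted sampling positions
    # Normalize to [-1, 1] in (x, y) order for grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(hidden, coords.permute(0, 2, 3, 1), align_corners=True)
```

In a full recurrent cell, the warped hidden (and cell) state would then feed a ConvLSTM-style update on the next frame, keeping the propagated information spatially aligned with the moving content.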
Object Referring in Videos with Language and Human Gaze
We investigate the problem of object referring (OR) i.e. to localize a target
object in a visual scene coming with a language description. Humans perceive
the world more as continued video snippets than as static images, and describe
objects not only by their appearance, but also by their spatio-temporal context
and motion features. Humans also gaze at the object when they issue a referring
expression. Existing works for OR mostly focus on static images only, which
fall short in providing many such cues. This paper addresses OR in videos with
language and human gaze. To that end, we present a new video dataset for OR,
with 30,000 objects over 5,000 stereo video sequences annotated for their
descriptions and gaze. We further propose a novel network model for OR in
videos, by integrating appearance, motion, gaze, and spatio-temporal context
into one network. Experimental results show that our method effectively
utilizes motion cues, human gaze, and spatio-temporal context. Our method
outperforms previous OR methods. For the dataset and code, please refer to
https://people.ee.ethz.ch/~arunv/ORGaze.html.
Comment: Accepted to CVPR 2018, 10 pages, 6 figures
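A hedged sketch of the multi-cue fusion the abstract describes, combining appearance, motion, gaze, and spatio-temporal context features with a language embedding to score candidate objects, is given below. All feature dimensions, names, and the bilinear-style scoring are assumptions, not the paper's network.

```python
# Illustrative fusion of visual cues with a language query; sizes are assumed.
import torch
import torch.nn as nn

class GazeORFusion(nn.Module):
    def __init__(self, app=2048, mot=1024, gaze=64, ctx=512, lang=300, hidden=512):
        super().__init__()
        self.visual = nn.Linear(app + mot + gaze + ctx, hidden)
        self.language = nn.Linear(lang, hidden)

    def forward(self, app_f, mot_f, gaze_f, ctx_f, lang_f):
        # Project concatenated visual cues and the language query into one space,
        # then score each candidate object by their agreement.
        v = torch.tanh(self.visual(torch.cat([app_f, mot_f, gaze_f, ctx_f], dim=-1)))
        l = torch.tanh(self.language(lang_f))
        return (v * l).sum(-1)          # matching score per candidate object
```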