Transcribing Content from Structural Images with Spotlight Mechanism
Transcribing content from structural images, e.g., writing notes from music
scores, is a challenging task: not only must the content objects be
recognized, but the internal structure must also be preserved. Existing image
recognition methods mainly work on images with simple content (e.g., text lines
with characters), but are not capable of identifying images with more complex
content (e.g., structured symbols), which often follows a fine-grained grammar.
To this end, in this paper we propose a hierarchical Spotlight Transcribing
Network (STN) framework that follows a two-stage "where-to-what" solution.
Specifically, we first decide "where-to-look" through a novel spotlight
mechanism that focuses on different areas of the original image following its
structure. Then, we decide "what-to-write" by developing a GRU-based network
over the spotlight areas, transcribing the content accordingly. Moreover, we
propose two implementations on the basis of STN, i.e., STNM and STNR, where the
spotlight movement follows the Markov property and recurrent modeling,
respectively. We also design a reinforcement learning method to refine the
framework by self-improving the spotlight mechanism. We conduct extensive
experiments on several structural image datasets, and the results clearly
demonstrate the effectiveness of the STN framework.
Comment: Accepted by KDD 2018 Research Track. In proceedings of the 24th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining
(KDD'18).
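The abstract gives no code, but the "where-to-look / what-to-write" loop can be sketched concretely. Below is a minimal PyTorch sketch, assuming the image has already been encoded into a CNN feature map; the spotlight is modeled as a Gaussian mask whose centre and radius are predicted from the decoder state (roughly in the spirit of the recurrent STNR variant). All module and parameter names (SpotlightDecoder, feat_dim, etc.) are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpotlightDecoder(nn.Module):
    """Hypothetical spotlight-style decoder: at each step, decide
    "where-to-look" (a Gaussian spotlight over the feature map), then
    "what-to-write" (the next symbol) from the spotlighted features."""

    def __init__(self, feat_dim: int, hid_dim: int, vocab_size: int):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim, hid_dim)
        self.where = nn.Linear(hid_dim, 3)   # spotlight centre (x, y) + radius
        self.what = nn.Linear(hid_dim, vocab_size)

    def forward(self, feats: torch.Tensor, steps: int) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a CNN encoder (assumed given)
        B, C, H, W = feats.shape
        ys = torch.linspace(0.0, 1.0, H, device=feats.device).view(1, H, 1)
        xs = torch.linspace(0.0, 1.0, W, device=feats.device).view(1, 1, W)
        h = feats.new_zeros(B, self.gru.hidden_size)
        logits = []
        for _ in range(steps):
            p = self.where(h)                        # "where-to-look"
            cx = torch.sigmoid(p[:, 0]).view(B, 1, 1)
            cy = torch.sigmoid(p[:, 1]).view(B, 1, 1)
            r = F.softplus(p[:, 2]).view(B, 1, 1) + 1e-3
            # Gaussian spotlight mask over the spatial grid, normalized
            mask = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * r ** 2))
            mask = mask / mask.sum(dim=(1, 2), keepdim=True)
            # pool the spotlighted region into a single glimpse vector
            glimpse = (feats * mask.unsqueeze(1)).sum(dim=(2, 3))
            h = self.gru(glimpse, h)                 # "what-to-write"
            logits.append(self.what(h))
        return torch.stack(logits, dim=1)            # (B, steps, vocab_size)
```

Under this reading, a Markov-style variant (STNM) would predict the next spotlight parameters from the previous spotlight parameters alone rather than from the full hidden state, and the reinforcement refinement mentioned above would treat the predicted spotlight positions as actions rewarded by transcription accuracy; both are interpretations of the abstract, not the authors' implementation.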
SentiCap: Generating Image Descriptions with Sentiments
The recent progress in image recognition and language modeling is making
automatic description of image content a reality. However, stylized,
non-factual aspects of the written description are missing from current
systems. One such style is describing images with emotion, which is
commonplace in everyday communication and influences decision-making and
interpersonal relationships. We design a system to describe an image with
emotions, and present a model that automatically generates captions with
positive or negative sentiments. We propose a novel switching recurrent neural
network with word-level regularization, which is able to produce emotional
image captions using only 2000+ training sentences containing sentiments. We
evaluate the captions with different automatic and crowd-sourcing metrics. Our
model compares favourably on common quality metrics for image captioning. In
84.6% of cases the generated positive captions were judged as being at least as
descriptive as the factual captions. Of these positive captions, 88% were
confirmed by the crowd-sourced workers as having the appropriate sentiment.
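The switching idea can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming a "factual" stream and a "sentiment" stream that each propose a word distribution, mixed by a learned per-word switch; image conditioning (which would normally initialize or feed the states) is omitted for brevity, and all names (SwitchingCaptioner, out_f, etc.) are hypothetical rather than from the released SentiCap code.

```python
import torch
import torch.nn as nn


class SwitchingCaptioner(nn.Module):
    """Hypothetical switching RNN: a factual stream and a sentiment
    stream each propose a next-word distribution; a learned switch
    mixes them word by word."""

    def __init__(self, vocab_size: int, emb_dim: int = 256, hid_dim: int = 512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.factual = nn.GRUCell(emb_dim, hid_dim)
        self.sentiment = nn.GRUCell(emb_dim, hid_dim)
        self.out_f = nn.Linear(hid_dim, vocab_size)
        self.out_s = nn.Linear(hid_dim, vocab_size)
        # switch: probability that the next word comes from the sentiment stream
        self.switch = nn.Linear(2 * hid_dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T) previous words (teacher forcing during training)
        B, T = tokens.shape
        hf = torch.zeros(B, self.factual.hidden_size, device=tokens.device)
        hs = torch.zeros_like(hf)
        probs = []
        for t in range(T):
            x = self.emb(tokens[:, t])
            hf = self.factual(x, hf)
            hs = self.sentiment(x, hs)
            g = torch.sigmoid(self.switch(torch.cat([hf, hs], dim=-1)))
            pf = torch.softmax(self.out_f(hf), dim=-1)
            ps = torch.softmax(self.out_s(hs), dim=-1)
            probs.append((1.0 - g) * pf + g * ps)    # per-word mixture
        return torch.stack(probs, dim=1)              # (B, T, vocab_size)
```

One plausible reading of the "word-level regularization" in the abstract is that the training loss up-weights words annotated with sentiment, encouraging the switch to fire on exactly those words; this too is an interpretation of the abstract, not a description of the authors' implementation.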