Context-Dependent Diffusion Network for Visual Relationship Detection
Visual relationship detection can bridge the gap between computer vision and
natural language for scene understanding of images. Unlike pure object
recognition tasks, subject-predicate-object relation triplets span an
extremely diverse space, such as \textit{person-behind-person} and
\textit{car-behind-building}, and suffer from combinatorial
explosion. In this paper, we propose a context-dependent diffusion network
(CDDN) framework to deal with visual relationship detection. To capture the
interactions of different object instances, two types of graphs, word semantic
graph and visual scene graph, are constructed to encode global context
interdependency. The semantic graph is built through language priors to model
semantic correlations across objects, whilst the visual scene graph defines the
connections of scene objects so as to utilize the surrounding scene
information. For the graph-structured data, we design a diffusion network to
adaptively aggregate information from contexts, which can effectively learn
latent representations of visual relationships and well cater to visual
relationship detection in view of its isomorphic invariance to graphs.
Experiments on two widely-used datasets demonstrate that our proposed method is
more effective and achieves state-of-the-art performance.
Comment: 8 pages, 3 figures, 2018 ACM Multimedia Conference (MM'18)
Attention-Aware Face Hallucination via Deep Reinforcement Learning
Face hallucination is a domain-specific super-resolution problem with the
goal to generate high-resolution (HR) faces from low-resolution (LR) input
images. In contrast to existing methods that often learn a single
patch-to-patch mapping from LR to HR images while disregarding the
contextual interdependency between patches, we propose a novel Attention-aware
Face Hallucination (Attention-FH) framework which resorts to deep reinforcement
learning for sequentially discovering attended patches and then performing the
facial part enhancement by fully exploiting the global interdependency of the
image. Specifically, in each time step, the recurrent policy network is
proposed to dynamically specify a new attended region by incorporating what
happened in the past. The state (i.e., face hallucination result for the whole
image) can thus be exploited and updated by the local enhancement network on
the selected region. The Attention-FH approach jointly learns the recurrent
policy network and local enhancement network through maximizing the long-term
reward that reflects the hallucination performance over the whole image.
Therefore, our proposed Attention-FH is capable of adaptively personalizing an
optimal searching path for each face image according to its own characteristic.
Extensive experiments show our approach significantly surpasses
state-of-the-art methods on in-the-wild faces with large pose and illumination
variations.
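The sequential attend-then-enhance loop described above can be sketched as follows. The functions `select_patch` and `enhance` are toy stand-ins for the recurrent policy network and local enhancement network; the real networks are learned, and the heuristics here are purely illustrative:

```python
import numpy as np

def select_patch(state, visited, size=4):
    """Toy policy: attend to the unvisited patch with the lowest mean value."""
    h, w = state.shape
    best, best_yx = None, (0, 0)
    for y in range(0, h, size):
        for x in range(0, w, size):
            if (y, x) in visited:
                continue                      # incorporate past decisions
            score = state[y:y+size, x:x+size].mean()
            if best is None or score < best:
                best, best_yx = score, (y, x)
    return best_yx

def enhance(state, yx, size=4):
    """Toy local enhancement: boost the selected patch in place."""
    y, x = yx
    state[y:y+size, x:x+size] += 1.0
    return state

state = np.zeros((8, 8))      # stands in for the whole-image hallucination
visited = set()
for _ in range(4):            # sequential decision steps
    yx = select_patch(state, visited)
    visited.add(yx)
    state = enhance(state, yx)
```

The key structural point is that each step's patch choice depends on the current whole-image state and the history of attended regions, which is what lets a learned policy personalize the search path per face.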
A2-RL: Aesthetics Aware Reinforcement Learning for Image Cropping
Image cropping aims at improving the aesthetic quality of images by adjusting
their composition. Most weakly supervised cropping methods (without bounding
box supervision) rely on a sliding window mechanism, which requires fixed
aspect ratios and therefore cannot propose cropping regions of arbitrary
size. Moreover, the sliding window method usually produces tens of
thousands of candidate windows on the input image, which is very
time-consuming. Motivated by these challenges, we first formulate aesthetic
image cropping as a sequential decision-making process and propose a weakly
supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to
address this problem. In particular, the proposed method develops an
aesthetics-aware reward function that especially benefits image cropping.
Similar to human decision making,
we use a comprehensive state representation including both the current
observation and the historical experience. We train the agent using the
actor-critic architecture in an end-to-end manner. The agent is evaluated on
several popular unseen cropping datasets. Experimental results show that our
method achieves state-of-the-art performance with far fewer candidate
windows and much less time than previous weakly supervised methods.
Comment: Accepted by CVPR 2018
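The sequential decision-making formulation can be sketched as follows. This toy version uses a hypothetical discrete action set, a stand-in aesthetic score, and a greedy rollout in place of the learned actor-critic policy; the target crop and step size are invented for illustration:

```python
def apply_action(box, action, step=2):
    """Adjust the crop window (x1, y1, x2, y2) by one discrete step."""
    x1, y1, x2, y2 = box
    if action == "shrink_left":
        x1 += step
    elif action == "shrink_right":
        x2 -= step
    elif action == "shrink_top":
        y1 += step
    elif action == "shrink_bottom":
        y2 -= step
    return (x1, y1, x2, y2)

def aesthetic_score(box, target=(10, 10, 50, 40)):
    """Stand-in reward: negative distance to a hypothetical ideal crop."""
    return -sum(abs(a - b) for a, b in zip(box, target))

# Greedy rollout over the action set (a learned policy replaces this).
box = (0, 0, 60, 50)
actions = ["shrink_left", "shrink_right", "shrink_top", "shrink_bottom"]
for _ in range(20):
    best = max(actions, key=lambda a: aesthetic_score(apply_action(box, a)))
    if aesthetic_score(apply_action(box, best)) <= aesthetic_score(box):
        break
    box = apply_action(box, best)
```

Because the window is refined step by step rather than enumerated, only a handful of candidate crops are ever evaluated, which is the source of the speed advantage over sliding-window search.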