A2-RL: Aesthetics Aware Reinforcement Learning for Image Cropping
Image cropping aims at improving the aesthetic quality of images by adjusting
their composition. Most weakly supervised cropping methods (without bounding-box
supervision) rely on a sliding-window mechanism, which requires fixed aspect
ratios and therefore cannot produce cropping regions of arbitrary size.
Moreover, the sliding-window approach usually generates tens of thousands of
candidate windows per input image, which is very time-consuming. Motivated by
these challenges, we first formulate aesthetic image cropping as a sequential
decision-making process and propose a weakly supervised Aesthetics Aware
Reinforcement Learning (A2-RL) framework to address this problem.
In particular, the proposed method develops an aesthetics-aware reward function
that especially benefits image cropping. Similar to human decision making,
we use a comprehensive state representation including both the current
observation and the historical experience. We train the agent using the
actor-critic architecture in an end-to-end manner. The agent is evaluated on
several popular unseen cropping datasets. Experimental results show that our
method achieves state-of-the-art performance with far fewer candidate windows
and much less time than previous weakly supervised methods.
Comment: Accepted by CVPR 2018
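The sequential formulation is compact enough to sketch. Below is a minimal toy version of cropping as a sequential decision process in the spirit of A2-RL; the aesthetic_score() heuristic, the action set, and the window parameters are hypothetical stand-ins for the paper's learned aesthetics model and actual action space, and a trained actor-critic policy would replace the random policy used here.

```python
# A minimal, self-contained sketch of cropping as a sequential decision
# process. aesthetic_score(), the action set, and the window sizes are
# hypothetical stand-ins, not the paper's actual model or action space.
import numpy as np

ACTIONS = ["left", "right", "up", "down", "shrink", "enlarge", "stop"]

def aesthetic_score(image, box):
    """Toy stand-in for a learned aesthetics network: prefers brighter crops."""
    x0, y0, x1, y1 = box
    return float(image[y0:y1, x0:x1].mean())

def step(box, action, w, h, delta=8):
    """Apply one local adjustment to the crop window, clipped to the image."""
    x0, y0, x1, y1 = box
    if action == "left":      x0, x1 = x0 - delta, x1 - delta
    elif action == "right":   x0, x1 = x0 + delta, x1 + delta
    elif action == "up":      y0, y1 = y0 - delta, y1 - delta
    elif action == "down":    y0, y1 = y0 + delta, y1 + delta
    elif action == "shrink":  x0, y0, x1, y1 = x0 + delta, y0 + delta, x1 - delta, y1 - delta
    elif action == "enlarge": x0, y0, x1, y1 = x0 - delta, y0 - delta, x1 + delta, y1 + delta
    x0, y0, x1, y1 = max(0, x0), max(0, y0), min(w, x1), min(h, y1)
    if x1 - x0 < 16 or y1 - y0 < 16:   # reject degenerate windows
        return box
    return (x0, y0, x1, y1)

def crop_episode(image, policy, max_steps=50):
    """Roll out one episode. The reward is the change in aesthetic score, so
    the agent is paid for every adjustment that improves the composition."""
    h, w = image.shape[:2]
    box = (0, 0, w, h)
    score, history = aesthetic_score(image, box), []
    for _ in range(max_steps):
        action = policy(image, box, history)       # state = observation + history
        if action == "stop":
            break
        new_box = step(box, action, w, h)
        reward = aesthetic_score(image, new_box) - score  # aesthetics-aware reward
        history.append((box, action, reward))
        box, score = new_box, score + reward
    return box

# Exercise the loop with a random policy on a synthetic grayscale image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
print("final crop:", crop_episode(image, lambda im, b, hist: rng.choice(ACTIONS)))
```

The practical gain mirrors the abstract's claim: each episode examines only the handful of windows the agent actually visits, rather than the tens of thousands a sliding-window scan would have to score.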
Aesthetic-Driven Image Enhancement by Adversarial Learning
We introduce EnhanceGAN, an adversarial-learning-based model that performs
automatic image enhancement. Traditional image enhancement frameworks typically
train models in a fully supervised manner, which requires expensive
annotations in the form of aligned image pairs. In contrast to these
approaches, our proposed EnhanceGAN requires only weak supervision (binary
labels on image aesthetic quality) and is able to learn enhancement operators
for the task of aesthetic-based image enhancement. In particular, we show the
effectiveness of a piecewise color enhancement module trained with weak
supervision, and extend the proposed EnhanceGAN framework to learning a deep
filtering-based aesthetic enhancer. The full differentiability of our image
enhancement operators enables the training of EnhanceGAN in an end-to-end
manner. We further demonstrate the capability of EnhanceGAN in learning
aesthetic-based image cropping without any groundtruth cropping pairs. Our
weakly-supervised EnhanceGAN reports competitive quantitative results on
aesthetic-based color enhancement as well as automatic image cropping, and a
user study confirms that our image enhancement results are on par with or even
preferred over professional enhancement.
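To make the weak-supervision setup concrete, here is a minimal PyTorch sketch, not the authors' implementation: a generator predicts parameters of a simple differentiable per-channel colour operator (a toy stand-in for the paper's piecewise colour enhancement module), while a discriminator trained with binary aesthetic labels pushes enhanced images toward the high-quality distribution. Because the operator is differentiable, generator gradients flow end-to-end.

```python
# Toy adversarial enhancement under weak binary-label supervision.
# Architectures and data are illustrative assumptions, not EnhanceGAN itself.
import torch
import torch.nn as nn

class ParamGenerator(nn.Module):
    """Predicts per-channel gain and bias from global colour statistics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 6))
    def forward(self, x):                            # x: (B, 3, H, W)
        p = self.net(x.mean(dim=(2, 3)))
        gain = 1.0 + 0.5 * torch.tanh(p[:, :3])      # keep edits mild
        bias = 0.2 * torch.tanh(p[:, 3:])
        return (x * gain[:, :, None, None] + bias[:, :, None, None]).clamp(0, 1)

class AestheticDiscriminator(nn.Module):
    """Scores how aesthetically good an image looks (logit output)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

G, D = ParamGenerator(), AestheticDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

high_q = torch.rand(8, 3, 64, 64)    # images with 'good aesthetics' labels
low_q = torch.rand(8, 3, 64, 64)     # images with 'poor aesthetics' labels

for _ in range(5):                   # toy training loop
    # D: separate genuinely good images from enhanced poor ones.
    enhanced = G(low_q).detach()
    loss_d = bce(D(high_q), torch.ones(8, 1)) + bce(D(enhanced), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # G: the colour operator is differentiable, so training is end-to-end.
    loss_g = bce(D(G(low_q)), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print("losses:", loss_d.item(), loss_g.item())
```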
On the Importance of Visual Context for Data Augmentation in Scene Understanding
Data augmentation is known to be important for training deep visual
recognition systems. By artificially increasing the number of training
examples, it helps reduce overfitting and improves
generalization. While simple image transformations can already improve
predictive performance in most vision tasks, larger gains can be obtained by
leveraging task-specific prior knowledge. In this work, we consider object
detection and semantic and instance segmentation, and augment the training
images by blending objects into existing scenes using instance segmentation
annotations. We observe that randomly pasting objects onto images hurts
performance, unless the object is placed in the right context. To resolve this
issue, we propose an explicit context model by using a convolutional neural
network, which predicts whether an image region is suitable for placing a given
object or not. In our experiments, we show that our approach is able to improve
object detection, semantic and instance segmentation on the PASCAL VOC12 and
COCO datasets, with significant gains in a limited-annotation scenario, i.e.,
when only one category is annotated. We also show that the method is not
limited to datasets that come with expensive pixel-wise instance annotations
and can be used when only bounding boxes are available, by employing
weakly-supervised learning for instance mask approximation.
Comment: Updated the experimental section. arXiv admin note: substantial text
overlap with arXiv:1807.0742
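The placement idea is easy to sketch. In the toy version below, context_score() is a hypothetical stand-in for the paper's learned context CNN (here just a smoothness heuristic); candidate locations are sampled and scored, and the object is alpha-blended with its instance mask at the best one.

```python
# Context-aware object pasting, minimal sketch. context_score() is a
# hypothetical placeholder for the learned context network in the paper.
import numpy as np

def context_score(scene, box):
    """Stand-in suitability score: prefer low-texture background regions."""
    x0, y0, x1, y1 = box
    return -float(scene[y0:y1, x0:x1].std())

def paste_object(scene, obj, mask, n_candidates=20, rng=None):
    """Blend an object crop into the scene at the highest-scoring location."""
    rng = rng or np.random.default_rng()
    sh, sw = scene.shape[:2]
    oh, ow = obj.shape[:2]
    candidates = [(x, y, x + ow, y + oh)
                  for x, y in zip(rng.integers(0, sw - ow, n_candidates),
                                  rng.integers(0, sh - oh, n_candidates))]
    x0, y0, x1, y1 = max(candidates, key=lambda b: context_score(scene, b))
    out = scene.copy()
    region = out[y0:y1, x0:x1]
    # Alpha-blend using the instance mask, as in copy-paste augmentation.
    out[y0:y1, x0:x1] = mask[..., None] * obj + (1 - mask[..., None]) * region
    return out, (x0, y0, x1, y1)

# Usage on synthetic data: a 32x32 object with a circular instance mask.
rng = np.random.default_rng(0)
scene = rng.random((128, 128, 3))
obj = np.full((32, 32, 3), 0.8)
yy, xx = np.mgrid[:32, :32]
mask = ((xx - 16) ** 2 + (yy - 16) ** 2 < 14 ** 2).astype(float)
aug, box = paste_object(scene, obj, mask, rng=rng)
print("pasted at", box)
```

Replacing the heuristic with a CNN trained to predict placement suitability is what turns this from random pasting, which the abstract notes hurts performance, into a context-respecting augmentation.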
End-to-end weakly-supervised semantic alignment
We tackle the task of semantic alignment where the goal is to compute dense
semantic correspondence aligning two images depicting objects of the same
category. This is a challenging task due to large intra-class variation,
changes in viewpoint and background clutter. We present the following three
principal contributions. First, we develop a convolutional neural network
architecture for semantic alignment that is trainable in an end-to-end manner
from weak image-level supervision in the form of matching image pairs. The
outcome is that parameters are learnt from rich appearance variation present in
different but semantically related images without the need for tedious manual
annotation of correspondences at training time. Second, the main component of
this architecture is a differentiable soft inlier scoring module, inspired by
the RANSAC inlier scoring procedure, that computes the quality of the alignment
based on only geometrically consistent correspondences thereby reducing the
effect of background clutter. Third, we demonstrate that the proposed approach
achieves state-of-the-art performance on multiple standard benchmarks for
semantic alignment.
Comment: In 2018 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR 2018)
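The soft inlier idea itself fits in a few lines. The PyTorch sketch below simplifies to sparse point matches and an affine transform (the paper operates on dense correlation maps inside an end-to-end alignment network): each correspondence casts a soft, differentiable vote instead of passing RANSAC's hard inlier test, so clutter matches are down-weighted and the score can be optimised by gradient descent.

```python
# Differentiable soft inlier score, simplified to sparse matches and an
# affine model; an illustrative assumption, not the paper's dense module.
import torch

def soft_inlier_score(src, tgt, theta, sigma=0.1):
    """src, tgt: (N, 2) matched points; theta: (2, 3) affine transform.
    Returns a differentiable count of geometrically consistent matches."""
    ones = torch.ones(src.shape[0], 1)
    warped = torch.cat([src, ones], dim=1) @ theta.T   # apply affine to src
    residual = ((warped - tgt) ** 2).sum(dim=1)        # squared transfer error
    return torch.exp(-residual / sigma ** 2).sum()     # soft inlier count

# Usage: matches generated by a known transform, with injected clutter.
torch.manual_seed(0)
src = torch.rand(100, 2)
true_theta = torch.tensor([[1.0, 0.0, 0.1], [0.0, 1.0, -0.05]])
tgt = torch.cat([src, torch.ones(100, 1)], dim=1) @ true_theta.T
tgt[:20] = torch.rand(20, 2)                           # background clutter

theta = torch.eye(2, 3, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-2)
for _ in range(200):
    loss = -soft_inlier_score(src, tgt, theta)         # maximise the score
    opt.zero_grad(); loss.backward(); opt.step()
print("recovered transform:\n", theta.detach())
```

Because clutter matches contribute near-zero votes, maximising the score should recover a transform close to true_theta despite the 20% outliers; in the paper the same score supervises the alignment network itself.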
In the Wild Human Pose Estimation Using Explicit 2D Features and Intermediate 3D Representations
Convolutional Neural Network-based approaches for monocular 3D human pose
estimation usually require a large amount of training images with 3D pose
annotations. While it is feasible to provide 2D joint annotations for large
corpora of in-the-wild images with humans, providing accurate 3D annotations to
such in-the-wild corpora is hardly feasible in practice. Most existing 3D
labelled data sets are either synthetically created or feature in-studio
images. 3D pose estimation algorithms trained on such data often have limited
ability to generalize to real-world scene diversity. We therefore propose a new
deep-learning-based method for monocular 3D human pose estimation that shows
high accuracy and generalizes better to in-the-wild scenes. It has a network
architecture that comprises a new disentangled hidden space encoding of
explicit 2D and 3D features, and uses supervision by a new learned projection
model from predicted 3D pose. Our algorithm can be jointly trained on image
data with 3D labels and image data with only 2D labels. It achieves
state-of-the-art accuracy on challenging in-the-wild data.
Comment: Accepted to CVPR 2019
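A simplified sketch of the mixed supervision scheme follows. The MLP lifter and the single linear projection layer are toy assumptions standing in for the paper's disentangled 2D/3D encoding and learned projection model; the point is only that images with 2D labels still contribute gradients, via the projection of the predicted 3D pose.

```python
# Joint training on 3D-labelled and 2D-only data through a learned
# projection; the architecture here is a toy stand-in, not the paper's.
import torch
import torch.nn as nn

N_JOINTS = 17

class PoseNet(nn.Module):
    """Lifts an image feature vector to a 3D pose; a learned projection
    layer maps the predicted 3D pose back to 2D joint positions."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.lift = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, N_JOINTS * 3))
        self.project = nn.Linear(N_JOINTS * 3, N_JOINTS * 2)  # learned projection
    def forward(self, feats):
        pose3d = self.lift(feats)
        return pose3d, self.project(pose3d)

net = PoseNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Toy batches: studio data with 3D labels, in-the-wild data with 2D only.
feats_3d, gt3d = torch.rand(16, 128), torch.rand(16, N_JOINTS * 3)
feats_2d, gt2d = torch.rand(16, 128), torch.rand(16, N_JOINTS * 2)

for _ in range(100):
    p3d, _ = net(feats_3d)
    loss_3d = mse(p3d, gt3d)          # full 3D supervision (studio data)
    _, p2d = net(feats_2d)
    loss_2d = mse(p2d, gt2d)          # 2D-only supervision flows through
    loss = loss_3d + loss_2d          # the learned projection layer
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", loss.item())
```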