Deformable GANs for Pose-based Human Image Generation
In this paper we address the problem of generating person images conditioned
on a given pose. Specifically, given an image of a person and a target pose, we
synthesize a new image of that person in the novel pose. In order to deal with
pixel-to-pixel misalignments caused by the pose differences, we introduce
deformable skip connections in the generator of our Generative Adversarial
Network. Moreover, a nearest-neighbour loss is proposed instead of the common
L1 and L2 losses in order to match the details of the generated image with the
target image. We test our approach using photos of persons in different poses
and we compare our method with previous work in this area showing
state-of-the-art results in two benchmarks. Our method can be applied to the
wider field of deformable object generation, provided that the pose of the
articulated object can be extracted using a keypoint detector.
Comment: CVPR 2018 version
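The nearest-neighbour loss can be pictured with a short sketch. The snippet below is an illustrative pixel-level variant, not the authors' code: for every generated pixel it searches a small window of the target image and penalizes only the closest match, so small pose-induced misalignments are not punished as harshly as under plain L1 or L2. The use of PyTorch, the `radius` parameter and the pixel-level (rather than feature-level) comparison are assumptions made for this sketch.

```python
# Minimal sketch of a nearest-neighbour reconstruction loss (illustrative only).
import torch
import torch.nn.functional as F

def nearest_neighbour_loss(generated, target, radius=2):
    """generated, target: (B, C, H, W); radius: half-size of the search window."""
    _, _, h, w = target.shape
    # Collect every shifted copy of the target inside the (2*radius+1)^2 window.
    padded = F.pad(target, [radius, radius, radius, radius], mode="replicate")
    shifted = [padded[:, :, dy:dy + h, dx:dx + w]
               for dy in range(2 * radius + 1)
               for dx in range(2 * radius + 1)]
    candidates = torch.stack(shifted, dim=0)                        # (K, B, C, H, W)
    # Per-pixel L1 distance to each shift, then keep only the best match.
    dists = (candidates - generated.unsqueeze(0)).abs().sum(dim=2)  # (K, B, H, W)
    return dists.min(dim=0).values.mean()
```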
Regularized Evolutionary Algorithm for Dynamic Neural Topology Search
Designing neural networks for object recognition requires considerable
architecture engineering. As a remedy, neuro-evolutionary network architecture
search, which automatically searches for optimal network architectures using
evolutionary algorithms, has recently become very popular. Although very
effective, evolutionary algorithms rely heavily on having a large population of
individuals (i.e., network architectures) and are therefore memory-intensive. In
this work, we propose a Regularized Evolutionary Algorithm with low memory
footprint to evolve a dynamic image classifier. In detail, we introduce novel
custom operators that regularize the evolutionary process of a micro-population
of 10 individuals. We conduct experiments on three different digits datasets
(MNIST, USPS, SVHN) and show that our evolutionary method obtains results
competitive with the current state-of-the-art.
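As a rough illustration of how a regularized (aging) evolutionary loop over a micro-population of 10 individuals can be organized, the sketch below is an outline under stated assumptions, not the paper's implementation; `random_architecture`, `mutate` and `train_and_eval` are hypothetical placeholders for a concrete search space and training routine.

```python
# Illustrative aging-evolution loop with a micro-population (not the paper's code).
import collections
import random

def evolve(random_architecture, mutate, train_and_eval,
           population_size=10, sample_size=3, cycles=200):
    """Mutate the fittest of a small random sample, add the child,
    and always discard the oldest individual (the regularization)."""
    population = collections.deque()
    history = []
    # Seed the micro-population with random architectures.
    while len(population) < population_size:
        arch = random_architecture()
        individual = (arch, train_and_eval(arch))
        population.append(individual)
        history.append(individual)
    # Main loop: tournament selection, regularization by age.
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda ind: ind[1])
        child_arch = mutate(parent[0])
        child = (child_arch, train_and_eval(child_arch))
        population.append(child)
        population.popleft()   # the oldest individual dies, regardless of fitness
        history.append(child)
    return max(history, key=lambda ind: ind[1])
```

Removing individuals by age rather than by fitness is what keeps the memory footprint bounded to the micro-population while preventing a single early high-scoring architecture from dominating the search.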
Self Paced Deep Learning for Weakly Supervised Object Detection
In a weakly-supervised scenario object detectors need to be trained using
image-level annotation alone. Since bounding-box-level ground truth is not
available, most of the solutions proposed so far are based on an iterative,
Multiple Instance Learning framework in which the current classifier is used to
select the highest-confidence boxes in each image, which are treated as
pseudo-ground truth in the next training iteration. However, the errors of an
immature classifier can make the process drift, usually introducing many
false positives in the training dataset. To alleviate this problem, we propose
in this paper a training protocol based on the self-paced learning paradigm.
The main idea is to iteratively select a subset of images and boxes that are
the most reliable, and use them for training. While in the past few years
similar strategies have been adopted for SVMs and other classifiers, we are the
first to show that a self-paced approach can be used with deep-network-based
classifiers in an end-to-end training pipeline. The method we propose is built
on the fully-supervised Fast-RCNN architecture and can be applied to similar
architectures which represent the input image as a bag of boxes. We show
state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013.
On ILSVRC 2013 our results based on a low-capacity AlexNet network outperform
even those weakly-supervised approaches which are based on much higher-capacity
networks.
Comment: To appear in IEEE Transactions on PAMI
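A hedged sketch of the self-paced selection step described above, under the assumption of a Fast-RCNN-like detector that scores candidate boxes per image; `detector.predict`, `detector.train_on` and the `keep_fraction` schedule are hypothetical names introduced only for illustration.

```python
# Illustrative self-paced round (not the authors' implementation).
def self_paced_round(detector, images, keep_fraction):
    """Keep only the most reliable pseudo-ground truth and retrain on it."""
    scored = []
    for img in images:
        boxes, scores = detector.predict(img)   # candidate boxes and their class scores
        best = max(range(len(scores)), key=lambda i: scores[i])
        scored.append((scores[best], img, boxes[best]))
    # Rank images by the confidence of their best box and keep the top fraction.
    scored.sort(key=lambda t: t[0], reverse=True)
    n_keep = max(1, int(keep_fraction * len(scored)))
    pseudo_gt = [(img, box) for _, img, box in scored[:n_keep]]
    detector.train_on(pseudo_gt)                # the kept boxes act as ground truth
    return detector
```

In the self-paced spirit, `keep_fraction` would grow from a small value toward 1.0 across rounds, so the detector first trains on the easiest, most reliable images and is exposed to harder ones only as it matures.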