Reversible GANs for Memory-efficient Image-to-Image Translation
The Pix2pix and CycleGAN losses have vastly improved the visual quality of results in image-to-image translation tasks, both qualitatively and quantitatively. We extend this framework by exploring approximately invertible architectures, which are well suited to these losses: being approximately invertible by design, they partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative results on the Cityscapes and Maps datasets at a near-constant memory budget.
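As a concrete illustration of the invertible-by-design idea, the sketch below shows an additive coupling block in PyTorch, in the style of RevNet-type layers; the sub-networks F and G and their sizes are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of an additive coupling block, the standard building block
# of invertible networks. F and G are arbitrary sub-networks; simple convs here.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Splits channels into (x1, x2) and applies additive couplings.
    The inverse is exact, so activations need not be stored for backprop."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.F = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU())
        self.G = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU())

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return torch.cat([x1, x2], dim=1)
```

Because each block's input can be recomputed exactly from its output, intermediate activations can be discarded during the forward pass, which is what gives such networks constant memory cost in depth.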
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
We present a new method for synthesizing high-resolution photo-realistic
images from semantic label maps using conditional generative adversarial
networks (conditional GANs). Conditional GANs have enabled a variety of
applications, but the results are often limited to low resolution and still far from realistic. In this work, we generate visually appealing 2048x1024 results
with a novel adversarial loss, as well as new multi-scale generator and
discriminator architectures. Furthermore, we extend our framework to
interactive visual manipulation with two additional features. First, we
incorporate object instance segmentation information, which enables object
manipulations such as removing/adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input,
allowing users to edit the object appearance interactively. Human opinion
studies demonstrate that our method significantly outperforms existing methods,
advancing both the quality and the resolution of deep image synthesis and
editing.
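As a hedged sketch of the multi-scale discriminator idea described above (the generator, losses, and exact layer configurations are not reproduced here), one might apply identical PatchGAN-style discriminators to progressively downsampled versions of the image:

```python
# Illustrative multi-scale discriminator: the same small PatchGAN-style body
# judges realism at several resolutions. Scale count and layers are assumptions.
import torch
import torch.nn as nn

def patch_discriminator(in_ch=3):
    # Small PatchGAN-style body emitting per-patch real/fake logits.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 4, stride=1, padding=1),
    )

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales=3, in_ch=3):
        super().__init__()
        self.nets = nn.ModuleList(patch_discriminator(in_ch) for _ in range(num_scales))
        self.downsample = nn.AvgPool2d(3, stride=2, padding=1)

    def forward(self, x):
        outputs = []
        for net in self.nets:
            outputs.append(net(x))  # judge realism at the current scale
            x = self.downsample(x)  # halve resolution for the next critic
        return outputs
```

The coarsest-scale discriminator sees the largest effective receptive field and can push for globally consistent structure, while the finest scale attends to texture detail.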
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high
resolution imagery (< 10cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction.
In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough
spatial map of high-level representations by means of convolutions and then
learns to upsample them back to the original resolution by deconvolutions. By
doing so, the CNN learns to densely label every pixel at the original
resolution of the image. This results in many advantages, including i)
state-of-the-art numerical accuracy, ii) improved geometric accuracy of
predictions and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter
resolution datasets, involving semantic labeling of aerial images of 9cm and
5cm resolution, respectively. These datasets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models that make use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification and prediction of local label patches using only convolutions, versus full patch labeling using deconvolutions. All systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, while also offering a very appealing inference time.
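A minimal sketch of the downsample-then-upsample pattern, assuming a toy channel configuration and the 6-class ISPRS label set; the paper's actual depth, filters, and training setup differ:

```python
# Strided convolutions build a coarse map of high-level features; transposed
# convolutions (deconvolutions) learn to upsample it back to per-pixel logits.
import torch
import torch.nn as nn

class DownUpNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=6):  # 6 classes as in ISPRS labels
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))  # dense logits at the input resolution

logits = DownUpNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 6, 256, 256]) -- one label map per pixel
```

The key property the abstract highlights follows directly: because the upsampling path restores the input resolution, every pixel receives a label in a single forward pass, rather than classifying one patch center at a time.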
FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation
Over the past few years, we have witnessed the success of deep learning in
image recognition thanks to the availability of large-scale human-annotated
datasets such as PASCAL VOC, ImageNet, and COCO. Although these datasets have
covered a wide range of object categories, there are still a significant number
of objects that are not included. Can we perform the same task without a lot of
human annotations? In this paper, we are interested in few-shot object segmentation, where the number of annotated training examples is limited to only five. To evaluate and validate the performance of our approach, we have built a
few-shot segmentation dataset, FSS-1000, which consists of 1000 object classes
with pixelwise ground-truth segmentation annotations. Uniquely, FSS-1000 contains a significant number of objects that have never been seen or annotated in previous datasets, such as small everyday objects, merchandise, cartoon characters, and logos. We build our baseline model using standard backbone networks such as VGG-16, ResNet-101, and Inception. To our surprise, we found that training our model from scratch on FSS-1000 achieves results comparable to, and sometimes better than, training with weights pre-trained on ImageNet, even though ImageNet is more than 100 times larger than FSS-1000. Both our approach and dataset are simple, effective, and easily extensible to learning segmentation of new object classes from very few annotated training examples. The dataset is available at https://github.com/HKUSTCV/FSS-1000
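For illustration only, here is a minimal prototype-style 5-shot segmentation baseline (masked average pooling plus cosine similarity); this is a generic few-shot baseline, not the paper's exact model, and `backbone` stands in for any feature extractor such as a truncated VGG-16:

```python
# Generic prototype baseline for K-shot segmentation: average the support
# features inside the ground-truth masks to get one class prototype, then
# score query pixels by cosine similarity to it. All names are hypothetical.
import torch
import torch.nn.functional as F

def predict_mask(backbone, support_imgs, support_masks, query_img, thresh=0.5):
    """support_imgs: (K,3,H,W); support_masks: (K,1,H,W) in {0,1}; K=5 shots."""
    with torch.no_grad():
        sup_feats = backbone(support_imgs)            # (K,C,h,w)
        qry_feats = backbone(query_img.unsqueeze(0))  # (1,C,h,w)
    masks = F.interpolate(support_masks, size=sup_feats.shape[-2:], mode="nearest")
    # Masked average pooling: one C-dim prototype for the foreground class.
    proto = (sup_feats * masks).sum(dim=(0, 2, 3)) / masks.sum().clamp(min=1e-6)
    sim = F.cosine_similarity(qry_feats, proto.view(1, -1, 1, 1), dim=1)  # (1,h,w)
    sim = F.interpolate(sim.unsqueeze(1), size=query_img.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (sim.squeeze(1) > thresh).float()          # binary query mask
```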