Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes
During the last half decade, convolutional neural networks (CNNs) have
achieved remarkable success on semantic segmentation, one of the core tasks in
many applications such as autonomous driving. However, training CNNs requires a
considerable amount of data, which is difficult to collect and laborious to
annotate. Recent advances in computer graphics make it possible to train CNNs
on photo-realistic synthetic imagery with computer-generated annotations.
Despite this, the domain mismatch between the real images and the synthetic
data cripples the models' performance. Hence, we propose a curriculum-style
learning approach to minimize the domain gap in urban scenery semantic
segmentation. The curriculum domain adaptation solves easy tasks first to infer
necessary properties about the target domain; in particular, the first task is
to learn global label distributions over images and local distributions over
landmark superpixels. These are easy to estimate because images of urban scenes
have strong idiosyncrasies (e.g., the size and spatial relations of buildings,
streets, cars, etc.). We then train a segmentation network while regularizing
its predictions in the target domain to follow those inferred properties. In
experiments, our method outperforms the baselines on two datasets and two
backbone networks. We also report extensive ablation studies of our approach.
Comment: This is the extended version of the ICCV 2017 paper "Curriculum
Domain Adaptation for Semantic Segmentation of Urban Scenes" with additional
GTA experiments.
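The regularization step lends itself to a short sketch. Below is a minimal,
illustrative Python/PyTorch version of a distribution-matching loss in the
spirit of the abstract: target-domain predictions are pushed toward the global
label distribution inferred by the easy first task. The function name, loss
form, and weighting are assumptions for exposition, not the authors' released
code.

import torch
import torch.nn.functional as F

def label_distribution_loss(logits, target_dist, eps=1e-8):
    # logits:      (B, C, H, W) raw segmentation scores on target images
    # target_dist: (B, C) global label distribution inferred by the easy task
    probs = F.softmax(logits, dim=1)       # per-pixel class probabilities
    pred_dist = probs.mean(dim=(2, 3))     # image-level label distribution, (B, C)
    # KL(target_dist || pred_dist), averaged over the batch
    kl = (target_dist * (torch.log(target_dist + eps)
                         - torch.log(pred_dist + eps))).sum(dim=1)
    return kl.mean()

# Hypothetical training objective: supervised loss on synthetic source data
# plus the distribution regularizer on unlabeled real target images:
#   loss = ce_loss(model(src_img), src_label) \
#        + lam * label_distribution_loss(model(tgt_img), inferred_dist)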
CRF Learning with CNN Features for Image Segmentation
Conditional Random Fields (CRFs) have been widely applied to image
segmentation. While most studies rely on hand-crafted features, here we
propose to exploit a large pre-trained convolutional neural network (CNN) to
generate deep features for CRF learning. The deep CNN is trained on the
ImageNet dataset and transferred to image segmentation for constructing
potentials of superpixels. The CRF parameters are then learnt using a
structured support vector machine (SSVM). To fully exploit context information
in inference, we construct spatially related co-occurrence pairwise potentials
and incorporate them into the energy function. This favours labellings of
object pairs that frequently co-occur in a certain spatial layout while at the
same time ruling out implausible labellings during inference. Extensive experiments on
binary and multi-class segmentation benchmarks demonstrate the promise of the
proposed method. We thus provide new baselines for the segmentation performance
on the Weizmann horse, Graz-02, MSRC-21, Stanford Background and PASCAL VOC
2011 datasets.
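To make the energy concrete, here is a minimal sketch of the kind of objective
the abstract describes: unary potentials per superpixel (from a classifier
over pre-trained CNN features) plus spatially related co-occurrence pairwise
potentials. The names, shapes, and simple two-weight scheme are illustrative
assumptions, not the paper's implementation.

import numpy as np

def crf_energy(labels, unary, pairs, cooccur, w_unary=1.0, w_pair=1.0):
    # labels:  (N,) label index per superpixel
    # unary:   (N, C) costs from a classifier on deep CNN features
    # pairs:   list of (i, j, rel) spatially related superpixel pairs,
    #          where rel indexes a spatial relation (e.g. above / left-of)
    # cooccur: (R, C, C) cost for label pair (c_i, c_j) under relation rel;
    #          low for pairs that frequently co-occur in that layout
    e = w_unary * unary[np.arange(len(labels)), labels].sum()
    for i, j, rel in pairs:
        e += w_pair * cooccur[rel, labels[i], labels[j]]
    return e

# The SSVM step would learn the weights so that ground-truth labellings
# receive lower energy than all competing labellings by a margin.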
Coarse-to-Fine Lifted MAP Inference in Computer Vision
There is a vast body of theoretical research on lifted inference in
probabilistic graphical models (PGMs). However, few demonstrations exist where
lifting is applied in conjunction with state-of-the-art applied algorithms. We
pursue the applicability of lifted inference for computer vision (CV), with the
insight that a globally optimal (MAP) labeling will likely have the same label
for two symmetric pixels. The success of our approach lies in efficiently
handling a distinct unary potential on every node (pixel), typical of CV
applications. This allows us to lift the large class of algorithms that model a
CV problem via PGM inference. We propose a generic template for coarse-to-fine
(C2F) inference in CV, which progressively refines an initial coarsely lifted
PGM for varying quality-time trade-offs. We demonstrate the performance of C2F
inference by developing lifted versions of two near state-of-the-art CV
algorithms for stereo vision and interactive image segmentation. We find that,
against flat algorithms, the lifted versions have a much superior anytime
performance, without any loss in final solution quality.
Comment: Published in IJCAI 2017.
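The C2F template can be sketched generically: merge near-symmetric pixels into
super-nodes, run MAP inference on the smaller lifted model, then refine the
grouping. In the sketch below, group_fn and solve_map are assumed,
illustrative stand-ins for the paper's symmetry detection and underlying MAP
solver.

import numpy as np

def c2f_lifted_map(unary, group_fn, solve_map, levels=3):
    # unary:     (N, C) per-pixel unary potentials
    # group_fn:  group_fn(unary, level) -> (N,) cluster id per pixel;
    #            coarser levels merge more near-symmetric pixels
    # solve_map: solve_map(lifted_unary) -> (M,) labels for an M-node model
    labels = None
    for level in range(levels):
        groups = group_fn(unary, level)
        k = int(groups.max()) + 1
        # Lift: one super-node per group, its unary the sum of its members'.
        lifted = np.zeros((k, unary.shape[1]))
        np.add.at(lifted, groups, unary)
        sol = solve_map(lifted)      # MAP on the smaller, lifted PGM
        labels = sol[groups]         # project back to pixels: the anytime answer
    return labels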
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Owing to these properties, superpixel algorithms have received
much attention since their naming in 2003. Today, publicly available
superpixel algorithms have become standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focussing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur, and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
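To give a concrete flavour of such a benchmark, here is a minimal sketch of
Boundary Recall, one standard superpixel metric: the fraction of ground-truth
boundary pixels lying within a small tolerance of a superpixel boundary. This
is an illustrative re-implementation with a simplified box dilation, not the
benchmark's code.

import numpy as np

def boundary_map(labels):
    # True where a pixel's label differs from its right or lower neighbour.
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(superpixels, ground_truth, tol=2):
    sp_b = boundary_map(superpixels)
    gt_b = boundary_map(ground_truth)
    dil = sp_b.copy()
    for _ in range(tol):               # dilate superpixel boundaries by tol
        d = dil.copy()
        d[1:, :] |= dil[:-1, :]; d[:-1, :] |= dil[1:, :]
        d[:, 1:] |= dil[:, :-1]; d[:, :-1] |= dil[:, 1:]
        dil = d
    return (gt_b & dil).sum() / max(gt_b.sum(), 1)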
Learning Transferable Representations for Visual Recognition
In the last half-decade, a new renaissance of machine learning has originated from the application of convolutional neural networks to visual recognition tasks. It is believed that a combination of big curated data and novel deep learning techniques can lead to unprecedented results. However, the increasingly large training data is still a drop in the ocean compared with scenarios in the wild. In this dissertation, we focus on learning transferable representations in neural networks to ensure the models stay robust even under different data distributions. We present three exemplar topics in three chapters, respectively: zero-shot learning, domain adaptation, and generalizable adversarial attack. By zero-shot learning, we enable models to predict labels not seen in the training phase. By domain adaptation, we improve a model's performance on the target domain by mitigating its discrepancy from a labeled source model, without any target annotation. Finally, the generalizable adversarial attack focuses on learning an adversarial camouflage that ideally would work in every possible scenario. Despite sharing the same transfer learning philosophy, each of the proposed topics poses a unique challenge requiring a unique solution. In each chapter, we introduce the problem and present our solution to it. We also discuss other researchers' approaches and compare our solution to theirs in the experiments.
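As a small illustration of the zero-shot setting mentioned above, the sketch
below scores classes, including classes never seen during training, by cosine
similarity between an image embedding and per-class semantic vectors (e.g.
attribute or word vectors). This is a generic embedding-matching baseline
assumed for exposition, not the dissertation's actual model.

import numpy as np

def zero_shot_predict(img_feat, class_embeds):
    # img_feat:     (D,) learned image representation
    # class_embeds: (C, D) semantic vectors for all classes, seen and unseen
    sims = class_embeds @ img_feat / (
        np.linalg.norm(class_embeds, axis=1) * np.linalg.norm(img_feat) + 1e-8)
    return int(np.argmax(sims))   # nearest class in the shared embedding space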
DISC: Deep Image Saliency Computing via Progressive Representation Learning
Salient object detection increasingly receives attention as an important
component or step in several pattern recognition and image processing tasks.
Although a variety of powerful saliency models have been proposed,
they usually involve heavy feature (or model) engineering based on priors (or
assumptions) about the properties of objects and backgrounds. Inspired by the
effectiveness of recently developed feature learning, we provide a novel Deep
Image Saliency Computing (DISC) framework for fine-grained image saliency
computing. In particular, we model the image saliency from both the coarse- and
fine-level observations, and utilize the deep convolutional neural network
(CNN) to learn the saliency representation in a progressive manner.
Specifically, our saliency model is built upon two stacked CNNs. The first CNN
generates a coarse-level saliency map by taking the overall image as the input,
roughly identifying saliency regions in the global context. Furthermore, we
integrate superpixel-based local context information in the first CNN to refine
the coarse-level saliency map. Guided by the coarse saliency map, the second
CNN focuses on the local context to produce a fine-grained and accurate
saliency map while preserving object details. For a test image, the two CNNs
collaboratively compute saliency in one shot. Our DISC framework
is capable of uniformly highlighting objects of interest against complex
backgrounds while preserving object details well. Extensive experiments on
several standard benchmarks suggest that DISC outperforms other
state-of-the-art methods and it also generalizes well across datasets without
additional training. The executable version of DISC is available online:
http://vision.sysu.edu.cn/projects/DISC.
Comment: This manuscript is the accepted version for IEEE Transactions on
Neural Networks and Learning Systems (T-NNLS), 2016.
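The two-stacked-CNN pipeline reduces to a short sketch: the first network maps
the whole image to a coarse saliency map, and the second refines it with the
coarse map as guidance. The PyTorch-style module below assumes illustrative
sub-networks and channel counts; it is not the published DISC architecture.

import torch
import torch.nn as nn

class CoarseToFineSaliency(nn.Module):
    def __init__(self, coarse_net: nn.Module, fine_net: nn.Module):
        super().__init__()
        self.coarse_net = coarse_net  # whole image -> coarse saliency map
        self.fine_net = fine_net      # image + coarse map -> refined map
                                      # (must accept one extra input channel)

    def forward(self, image):
        coarse = torch.sigmoid(self.coarse_net(image))   # (B, 1, H, W)
        # Guide the second CNN with the coarse map: concatenating it with
        # the image lets local context sharpen object boundaries.
        guided = torch.cat([image, coarse], dim=1)
        fine = torch.sigmoid(self.fine_net(guided))
        return coarse, fine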