Discriminative Training of Deep Fully-connected Continuous CRF with Task-specific Loss
Recent works on deep conditional random fields (CRF) have set new records on
many vision tasks involving structured predictions. Here we propose a
fully-connected deep continuous CRF model for both discrete and continuous
labelling problems. We exemplify the usefulness of the proposed model on
multi-class semantic labelling (discrete) and the robust depth estimation
(continuous) problems.
In our framework, we model both the unary and the pairwise potential
functions as deep convolutional neural networks (CNN), which are jointly
learned in an end-to-end fashion. The proposed method retains the main
advantage of continuously-valued CRFs: a closed-form solution for
maximum a posteriori (MAP) inference.
To better adapt to different tasks, instead of using the commonly employed
maximum likelihood CRF parameter learning protocol, we propose task-specific
loss functions for learning the CRF parameters.
This enables direct optimization of the quality of the MAP estimates during
the course of learning. Specifically, we optimize the multi-class
classification loss for the semantic labelling task and Tukey's biweight
loss for the robust depth estimation problem.
Experimental results on the semantic labelling and robust depth estimation
tasks demonstrate that the proposed method compares favorably against both
baseline and state-of-the-art methods. In particular, we show that although
the proposed deep CRF model is continuously valued, it achieves impressive
results even on discrete labelling tasks when equipped with a task-specific
loss.
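The robust loss used for depth estimation can be made concrete. Below is a
minimal NumPy sketch of Tukey's biweight loss (not the authors'
implementation; the tuning constant c = 4.6851 is the standard choice for
95% asymptotic efficiency on Gaussian data):

```python
import numpy as np

def tukey_biweight(residual, c=4.6851):
    """Tukey's biweight (bisquare) loss: approximately quadratic for small
    residuals, but saturating at c**2 / 6 for large ones, so outliers in
    the depth ground truth contribute only a bounded penalty."""
    r = np.abs(np.asarray(residual, dtype=float))
    inside = (c**2 / 6.0) * (1.0 - (1.0 - (r / c) ** 2) ** 3)
    return np.where(r <= c, inside, c**2 / 6.0)
```

The saturation is what makes the loss robust: beyond |r| = c the gradient
is zero, so gross depth outliers stop influencing the CRF parameters.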
Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation with Deep Gaussian CRFs
In this work we propose a structured prediction technique that combines the
virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a)
our structured prediction task has a unique global optimum that is obtained
exactly from the solution of a linear system (b) the gradients of our model
parameters are analytically computed using closed form expressions, in contrast
to the memory-demanding contemporary deep structured prediction approaches that
rely on back-propagation-through-time, (c) our pairwise terms do not have to be
simple hand-crafted expressions, as in the line of works building on the
DenseCRF, but can rather be `discovered' from data through deep architectures,
and (d) our system can be trained in an end-to-end manner. Building on standard
tools from numerical analysis we develop very efficient algorithms for
inference and learning, as well as a customized technique adapted to the
semantic segmentation task. This efficiency allows us to explore more
sophisticated architectures for structured prediction in deep learning: we
introduce multi-resolution architectures to couple information across scales in
a joint optimization framework, yielding systematic improvements. We
demonstrate the utility of our approach on the challenging VOC PASCAL 2012
image segmentation benchmark, showing substantial improvements over strong
baselines. We make all of our code and experiments available at
https://github.com/siddharthachandra/gcrf
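Point (a) above is easy to make concrete: in a Gaussian CRF the energy is
quadratic, E(x) = 0.5 x^T (A + lam I) x - b^T x, so the global MAP optimum
is the unique solution of a linear system. A dense toy sketch (assuming
unary terms collected in b and pairwise terms in a positive semi-definite
A; the paper's filter-based solver is far more efficient):

```python
import numpy as np

def gcrf_map(A, b, lam=1.0):
    """MAP inference for a Gaussian CRF with quadratic energy
    0.5 * x^T (A + lam*I) x - b^T x: setting the gradient to zero gives
    the linear system (A + lam*I) x = b, whose solution is the unique
    global optimum (lam > 0 guarantees positive definiteness)."""
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), b)

# Toy example with a random positive semi-definite pairwise term.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
A, b = P @ P.T, rng.standard_normal(5)
x = gcrf_map(A, b)
```

Because the optimum is characterized by a linear system, gradients of the
parameters also come out in closed form, which is the contrast with
back-propagation-through-time drawn in point (b).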
Deep Learning for Semantic Part Segmentation with High-Level Guidance
In this work we address the task of segmenting an object into its parts, or
semantic part segmentation. We start by adapting a state-of-the-art semantic
segmentation system to this task, and show that a combination of a
fully-convolutional Deep CNN system coupled with Dense CRF labelling provides
excellent results for a broad range of object categories. Still, this approach
remains agnostic to high-level constraints between object parts. We introduce
such prior information by means of the Restricted Boltzmann Machine, adapted to
our task, and train our model in a discriminative fashion, as a hidden CRF,
demonstrating that prior information can yield additional improvements. We also
investigate the performance of our approach ``in the wild'', without
information concerning the objects' bounding boxes, using an object detector to
guide a multi-scale segmentation scheme. We evaluate the performance of our
approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing
and face labelling respectively. We show superior performance with respect to
competitive methods that have been extensively engineered on these benchmarks,
as well as realistic qualitative results on part segmentation, even for
occluded or deformable objects. We also provide quantitative and extensive
qualitative results on three classes from the PASCAL Parts dataset. Finally, we
show that our multi-scale segmentation scheme can boost accuracy, recovering
segmentations for finer parts.
Exploring Context with Deep Structured models for Semantic Segmentation
State-of-the-art semantic image segmentation methods are mostly based on
training deep convolutional neural networks (CNNs). In this work, we propose to
improve semantic segmentation with the use of contextual information. In
particular, we explore `patch-patch' context and `patch-background' context in
deep CNNs. We formulate deep structured models by combining CNNs and
Conditional Random Fields (CRFs) for learning the patch-patch context between
image regions. Specifically, we formulate CNN-based pairwise potential
functions to capture semantic correlations between neighboring patches.
Efficient piecewise training of the proposed deep structured model is then
applied in order to avoid repeated expensive CRF inference during the course of
back propagation. For capturing the patch-background context, we show that a
network design with traditional multi-scale image inputs and sliding pyramid
pooling is very effective for improving performance. We perform comprehensive
evaluation of the proposed method. We achieve new state-of-the-art
performance on a number of challenging semantic segmentation datasets.
Accepted to IEEE T. Pattern Analysis & Machine Intelligence, 2017; extended
version of arXiv:1504.0101
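The sliding pyramid pooling idea admits a simple sketch (a hypothetical
NumPy simplification, not the paper's network; plain average pooling stands
in for the learned multi-scale pooling windows):

```python
import numpy as np

def sliding_pyramid_pool(feat, window_sizes=(3, 5, 9)):
    """Sketch of sliding pyramid pooling: average-pool an (H, W, C)
    feature map with several window sizes, keeping the spatial
    resolution, and concatenate along the channel axis so every
    location sees background context at multiple scales."""
    h, w, c = feat.shape
    pooled = [feat]
    for k in window_sizes:
        pad = k // 2
        padded = np.pad(feat, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.empty_like(feat)
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
        pooled.append(out)
    return np.concatenate(pooled, axis=-1)
```

Concatenating the original features with their multi-window averages lets
every location access patch-background statistics at several scales without
changing the spatial resolution of the prediction.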
Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models
Segmentation is a fundamental task for extracting semantically meaningful
regions from an image. The goal of segmentation algorithms is to accurately
assign object labels to each image location. However, image-noise, shortcomings
of algorithms, and image ambiguities cause uncertainty in label assignment.
Estimating the uncertainty in label assignment is important in multiple
application domains, such as segmenting tumors from medical images for
radiation treatment planning. One way to estimate these uncertainties is
through the computation of posteriors of Bayesian models, which is
computationally prohibitive for many practical applications. On the other hand,
most computationally efficient methods fail to estimate label uncertainty. We
therefore propose in this paper the Active Mean Fields (AMF) approach, a
technique based on Bayesian modeling that uses a mean-field approximation to
efficiently compute a segmentation and its corresponding uncertainty. Based on
a variational formulation, the resulting convex model combines any
label-likelihood measure with a prior on the length of the segmentation
boundary. A specific implementation of that model is the Chan-Vese segmentation
model (CV), in which the binary segmentation task is defined by a Gaussian
likelihood and a prior regularizing the length of the segmentation boundary.
Furthermore, the Euler-Lagrange equations derived from the AMF model are
equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image
denoising. Solutions to the AMF model can thus be implemented by directly
utilizing highly-efficient ROF solvers on log-likelihood ratio fields. We
qualitatively assess the approach on synthetic data as well as on real natural
and medical images. For a quantitative evaluation, we apply our approach to
the icgbench dataset.
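The ROF connection suggests a compact recipe: run an ROF solver on the
log-likelihood ratio field and squash the result through a sigmoid. The
sketch below is a hypothetical NumPy illustration with a naive
gradient-descent ROF solver (not the highly-efficient solvers the abstract
refers to), for a binary segmentation under Gaussian likelihoods:

```python
import numpy as np

def rof_denoise(f, lam=0.2, n_iter=200, tau=0.2):
    """Tiny gradient-descent sketch of the ROF model
    min_u lam * TV(u) + 0.5 * ||u - f||^2, with a smoothed TV term
    (eps) so the objective is differentiable everywhere."""
    u, eps = f.copy(), 1e-6
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u -= tau * ((u - f) - lam * div)
    return u

def amf_segment(image, mu0, mu1, sigma=1.0, lam=0.2):
    """Hypothetical AMF-style pipeline: form the Gaussian log-likelihood
    ratio field, regularize it with the ROF solver above, and map it
    through a sigmoid to obtain per-pixel label probabilities."""
    llr = ((image - mu0) ** 2 - (image - mu1) ** 2) / (2 * sigma**2)
    return 1.0 / (1.0 + np.exp(-rof_denoise(llr, lam=lam)))
```

Because the output is a probability field rather than a hard labelling, the
uncertainty of each pixel's assignment comes for free, which is the point
of the mean-field formulation.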
Multiscale Fields of Patterns
We describe a framework for defining high-order image models that can be used
in a variety of applications. The approach involves modeling local patterns in
a multiscale representation of an image. Local properties of a coarsened image
reflect non-local properties of the original image. In the case of binary
images local properties are defined by the binary patterns observed over small
neighborhoods around each pixel. With the multiscale representation we capture
the frequency of patterns observed at different scales of resolution. This
framework leads to expressive priors that depend on a relatively small number
of parameters. For inference and learning we use an MCMC method for block
sampling with very large blocks. We evaluate the approach with two example
applications. One involves contour detection. The other involves binary
segmentation. Published at NIPS.
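The core representation is easy to sketch: coarsen the binary image
repeatedly and histogram the 3x3 binary patterns at each level. In the
NumPy sketch below, the coarsening rule (majority vote over 2x2 blocks) is
one plausible choice and may differ from the paper's:

```python
import numpy as np

def coarsen(img):
    """Majority-vote coarsening of a binary image over 2x2 blocks."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    blocks = img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return (blocks >= 2).astype(np.uint8)

def pattern_histogram(img):
    """Histogram of the 512 possible 3x3 binary patterns."""
    h, w = img.shape
    counts = np.zeros(512, dtype=np.int64)
    weights = (2 ** np.arange(9)).reshape(3, 3)
    for i in range(h - 2):
        for j in range(w - 2):
            code = int((img[i:i + 3, j:j + 3] * weights).sum())
            counts[code] += 1
    return counts

def multiscale_features(img, levels=3):
    """Pattern statistics at several scales: local 3x3 patterns of a
    coarsened image summarize non-local structure of the original."""
    feats = []
    for _ in range(levels):
        feats.append(pattern_histogram(img))
        img = coarsen(img)
    return feats
```

Because a 3x3 pattern at a coarse level summarizes a much larger region of
the original image, these local statistics encode non-local structure with
only 512 parameters per level.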
Efficient Relaxations for Dense CRFs with Sparse Higher Order Potentials
Dense conditional random fields (CRFs) have become a popular framework for
modelling several problems in computer vision such as stereo correspondence and
multi-class semantic segmentation. By modelling long-range interactions, dense
CRFs provide a labelling that captures finer detail than their sparse
counterparts. Currently, the state-of-the-art algorithm performs mean-field
inference using a filter-based method but fails to provide a strong theoretical
guarantee on the quality of the solution. A question naturally arises as to
whether it is possible to obtain a maximum a posteriori (MAP) estimate of a
dense CRF using a principled method. Within this paper, we show that this is
indeed possible. We will show that, by using a filter-based method, continuous
relaxations of the MAP problem can be optimised efficiently using
state-of-the-art algorithms. Specifically, we will solve a quadratic
programming (QP) relaxation using the Frank-Wolfe algorithm and a linear
programming (LP) relaxation by developing a proximal minimisation framework. By
exploiting labelling consistency in the higher-order potentials and utilising
the filter-based method, we are able to formulate the above algorithms such
that each iteration has a complexity linear in the number of classes and random
variables. The presented algorithms can be applied to any labelling problem
using a dense CRF with sparse higher-order potentials. In this paper, we use
semantic segmentation as an example application as it demonstrates the ability
of the algorithm to scale to dense CRFs with large dimensions. We perform
experiments on the Pascal dataset to indicate that the presented algorithms are
able to attain lower energies than the mean-field inference method.
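The QP relaxation with Frank-Wolfe can be illustrated on a toy dense
problem (a hypothetical stand-in for the paper's filter-based,
linear-in-classes-and-variables formulation; the sparse higher-order
potentials are omitted):

```python
import numpy as np

def frank_wolfe_qp(Q, c, n_vars, n_labels, n_iter=100):
    """Toy Frank-Wolfe solver for a QP relaxation of MAP inference:
    minimize 0.5 * x^T Q x + c^T x over x whose per-variable blocks lie
    on the probability simplex."""
    x = np.full(n_vars * n_labels, 1.0 / n_labels)  # uniform start
    for t in range(n_iter):
        grad = Q @ x + c
        # Linear minimization oracle: the feasible set is a product of
        # simplices, so per variable all mass goes on the label with
        # the smallest gradient entry.
        s = np.zeros_like(x)
        for v in range(n_vars):
            blk = slice(v * n_labels, (v + 1) * n_labels)
            s[blk.start + int(np.argmin(grad[blk]))] = 1.0
        gamma = 2.0 / (t + 2.0)  # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x
```

Each Frank-Wolfe iteration needs only a linear minimization over the
feasible set, which decomposes into an independent argmin per variable;
the filter-based method additionally makes the gradient computation itself
linear-time for dense pairwise terms.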