Deep Convolutional Neural Fields for Depth Estimation from a Single Image
We consider the problem of depth estimation from a single monocular image in
this work. It is a challenging task as no reliable depth cues are available,
e.g., stereo correspondences or motion. Previous efforts have focused on
exploiting geometric priors or additional sources of information, all
using hand-crafted features. Recently, there has been mounting evidence that features
from deep convolutional neural networks (CNN) are setting new records for
various vision applications. On the other hand, considering the continuous
characteristic of the depth values, depth estimation can be naturally
formulated as a continuous conditional random field (CRF) learning problem.
Therefore, in this paper we present a deep convolutional neural field model for
estimating depths from a single image, aiming to jointly explore the capacity
of deep CNN and continuous CRF. Specifically, we propose a deep structured
learning scheme which learns the unary and pairwise potentials of continuous
CRF in a unified deep CNN framework.
The proposed method can be used for depth estimation of general scenes with
no geometric priors nor any extra information injected. In our case, the
integral of the partition function can be analytically calculated, thus we can
exactly solve the log-likelihood optimization. Moreover, solving the MAP
problem for predicting depths of a new image is highly efficient as closed-form
solutions exist. We experimentally demonstrate that the proposed method
outperforms state-of-the-art depth estimation methods on both indoor and
outdoor scene datasets.
Comment: Fixed some typos. In CVPR15 proceedings.
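The closed-form claims above follow from the Gaussian structure of a continuous CRF with quadratic potentials. A sketch of the derivation, with illustrative symbols (unary potentials tying each depth $y_p$ to a CNN regression output $z_p$, pairwise weights $R_{pq}$; the exact potentials are defined in the paper):

```latex
% Assumed quadratic energy (symbols illustrative):
E(\mathbf{y},\mathbf{x}) = \sum_p (y_p - z_p)^2
  + \sum_{(p,q)} \tfrac{1}{2} R_{pq}\,(y_p - y_q)^2
  = \mathbf{y}^\top A \mathbf{y} - 2\mathbf{z}^\top\mathbf{y}
    + \mathbf{z}^\top\mathbf{z},
\qquad A = I + D - R,
% with D the diagonal degree matrix of R. Since A is positive
% definite, the partition function is a Gaussian integral with an
% analytic value,
Z(\mathbf{x}) = \int \exp\{-E(\mathbf{y},\mathbf{x})\}\,d\mathbf{y}
  = \frac{\pi^{n/2}}{|A|^{1/2}}
    \exp\{\mathbf{z}^\top A^{-1}\mathbf{z} - \mathbf{z}^\top\mathbf{z}\},
% so the log-likelihood is exact, and MAP prediction is a linear solve:
\mathbf{y}^\star = \arg\max_{\mathbf{y}} \Pr(\mathbf{y}\mid\mathbf{x})
  = A^{-1}\mathbf{z}.
```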
Modeling heterogeneity in random graphs through latent space models: a selective review
We present a selective review on probabilistic modeling of heterogeneity in
random graphs. We focus on latent space models and more particularly on
stochastic block models and their extensions that have undergone major
developments in the last five years.
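As a concrete instance of the latent space models the review covers, here is a minimal toy sampler for a stochastic block model: each node is assigned a latent block, and edge probabilities depend only on the blocks of the two endpoints. All names and parameter values are illustrative, not taken from the review.

```python
import numpy as np

# Toy stochastic block model (SBM) sampler; sizes and probabilities
# are illustrative.
rng = np.random.default_rng(0)
n, K = 8, 2
pi = np.array([0.5, 0.5])          # block membership proportions
B = np.array([[0.9, 0.1],          # within- and between-block
              [0.1, 0.8]])         # edge probabilities

z = rng.choice(K, size=n, p=pi)    # latent block of each node
P = B[z][:, z]                     # n x n edge-probability matrix
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)                  # keep upper triangle
A = A + A.T                        # undirected graph, no self-loops
```

Heterogeneity enters only through the latent assignments `z`: conditionally on the blocks, edges are independent Bernoulli draws.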
Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation with Deep Gaussian CRFs
In this work we propose a structured prediction technique that combines the
virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a)
our structured prediction task has a unique global optimum that is obtained
exactly from the solution of a linear system (b) the gradients of our model
parameters are analytically computed using closed form expressions, in contrast
to the memory-demanding contemporary deep structured prediction approaches that
rely on back-propagation-through-time, (c) our pairwise terms do not have to be
simple hand-crafted expressions, as in the line of works building on the
DenseCRF, but can rather be `discovered' from data through deep architectures,
and (d) our system can be trained in an end-to-end manner. Building on standard
tools from numerical analysis we develop very efficient algorithms for
inference and learning, as well as a customized technique adapted to the
semantic segmentation task. This efficiency allows us to explore more
sophisticated architectures for structured prediction in deep learning: we
introduce multi-resolution architectures to couple information across scales in
a joint optimization framework, yielding systematic improvements. We
demonstrate the utility of our approach on the challenging VOC PASCAL 2012
image segmentation benchmark, showing substantial improvements over strong
baselines. We make all of our code and experiments available at
https://github.com/siddharthachandra/gcrf
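Point (a) above is the key computational property: with a Gaussian CRF the prediction is the unique minimizer of a quadratic energy, obtained by solving one linear system. A minimal toy sketch of this idea (matrix sizes and construction are illustrative, not the paper's actual model):

```python
import numpy as np

# Toy Gaussian-CRF-style inference: the score is
#   E(x) = 1/2 x^T A x - b^T x
# with A symmetric positive definite, so the unique global optimum
# solves the linear system A x = b.
rng = np.random.default_rng(0)
n = 5                                # number of output variables (toy)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)           # unary terms (e.g. network outputs)

x_star = np.linalg.solve(A, b)       # exact inference in one solve

# The gradient A x - b vanishes at the optimum.
assert np.allclose(A @ x_star - b, 0.0)
```

In the paper's setting both `A` (pairwise terms) and `b` (unary terms) are produced by deep networks, and gradients with respect to them have closed forms, avoiding back-propagation-through-time.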
Integrated Inference and Learning of Neural Factors in Structural Support Vector Machines
Tackling pattern recognition problems in areas such as computer vision,
bioinformatics, speech or text recognition is often done best by taking into
account task-specific statistical relations between output variables. In
structured prediction, this internal structure is used to predict multiple
outputs simultaneously, leading to more accurate and coherent predictions.
Structural support vector machines (SSVMs) are nonprobabilistic models that
optimize a joint input-output function through margin-based learning. Because
SSVMs generally disregard the interplay between unary and interaction factors
during the training phase, the final parameters are suboptimal. Moreover, their
factors are often restricted to linear combinations of input features, limiting
their generalization power. To improve prediction accuracy, this paper proposes:
(i) Joint inference and learning by integration of back-propagation and
loss-augmented inference in SSVM subgradient descent; (ii) Extending SSVM
factors to neural networks that form highly nonlinear functions of input
features. Image segmentation benchmark results demonstrate improvements over
conventional SSVM training methods in terms of accuracy, highlighting the
feasibility of end-to-end SSVM training with neural factors.
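Proposal (i) above combines loss-augmented inference with subgradient descent. A minimal sketch of one such update for a tiny multiclass problem where the argmax is enumerable; the feature map `phi`, the 0/1 loss, and all hyperparameters are illustrative, not the paper's setup:

```python
import numpy as np

def phi(x, y, n_classes):
    """Joint feature map: input features placed in the block of class y."""
    f = np.zeros(n_classes * x.size)
    f[y * x.size:(y + 1) * x.size] = x
    return f

def loss_aug_argmax(w, x, y_true, n_classes):
    """Loss-augmented inference: argmax_y  w.phi(x,y) + Delta(y_true, y)."""
    scores = [w @ phi(x, y, n_classes) + float(y != y_true)
              for y in range(n_classes)]
    return int(np.argmax(scores))

n_classes, dim, eta, lam = 3, 4, 0.1, 0.01
rng = np.random.default_rng(1)
w = np.zeros(n_classes * dim)
x, y = rng.standard_normal(dim), 2

# One margin-based subgradient step on the structured hinge loss.
y_hat = loss_aug_argmax(w, x, y, n_classes)
g = phi(x, y_hat, n_classes) - phi(x, y, n_classes)
w = w - eta * (g + lam * w)
```

Proposal (ii) replaces the linear score `w @ phi(x, y)` with a neural network, so the same loss-augmented subgradient can be back-propagated through the factors end-to-end.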