Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation with Deep Gaussian CRFs
In this work we propose a structured prediction technique that combines the
virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a)
our structured prediction task has a unique global optimum that is obtained
exactly from the solution of a linear system; (b) the gradients of our model
parameters are computed analytically using closed-form expressions, in contrast
to the memory-demanding contemporary deep structured prediction approaches that
rely on back-propagation-through-time; (c) our pairwise terms do not have to be
simple hand-crafted expressions, as in the line of works building on the
DenseCRF, but can rather be `discovered' from data through deep architectures;
and (d) our system can be trained in an end-to-end manner. Building on standard
tools from numerical analysis we develop very efficient algorithms for
inference and learning, as well as a customized technique adapted to the
semantic segmentation task. This efficiency allows us to explore more
sophisticated architectures for structured prediction in deep learning: we
introduce multi-resolution architectures to couple information across scales in
a joint optimization framework, yielding systematic improvements. We
demonstrate the utility of our approach on the challenging VOC PASCAL 2012
image segmentation benchmark, showing substantial improvements over strong
baselines. We make all of our code and experiments available at
https://github.com/siddharthachandra/gcrf.
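The abstract's central claim, that Gaussian CRF inference has a unique global optimum obtained exactly by solving a linear system, can be illustrated with a minimal sketch. This is not the authors' code; the names (`A`, `b`, `lam`) and the dense solve are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: with unary scores b and a positive-definite
# pairwise matrix A, the MAP estimate of a Gaussian CRF is the unique
# solution of (A + lam*I) x = b, so inference is exact and amounts to
# one linear solve (no iterative mean-field approximation needed).
rng = np.random.default_rng(0)
n = 5                                  # number of variables (e.g. pixel-label scores)
M = rng.standard_normal((n, n))
A = M @ M.T                            # symmetric PSD pairwise term
b = rng.standard_normal(n)             # unary scores
lam = 1.0                              # regularizer ensuring positive definiteness

x = np.linalg.solve(A + lam * np.eye(n), b)   # exact global optimum

# residual confirms x solves the system exactly (up to rounding)
residual = np.linalg.norm((A + lam * np.eye(n)) @ x - b)
```

Because the optimum is the solution of a linear system, gradients with respect to `A` and `b` also have closed forms, which is what lets the model avoid back-propagation-through-time.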
Hierarchical Schur complement preconditioner for the stochastic Galerkin finite element methods
Use of the stochastic Galerkin finite element methods leads to large systems
of linear equations obtained by the discretization of tensor product solution
spaces along their spatial and stochastic dimensions. These systems are
typically solved iteratively by a Krylov subspace method. We propose a
preconditioner which takes advantage of the recursive hierarchy in the
structure of the global matrices. In particular, the matrices possess a
recursive hierarchical two-by-two structure, with one of the submatrices block
diagonal. Each one of the diagonal blocks in this submatrix is closely related
to the deterministic mean-value problem, and the action of its inverse is, in
the implementation, approximated by inner loops of Krylov iterations. Thus, on
each level of the hierarchical structure of the global matrix, our hierarchical
Schur complement preconditioner combines the idea of the Schur complement with
a number of mutually independent inner Krylov iterations and several
matrix-vector multiplications for the off-diagonal blocks. Neither the global
matrix nor the matrix of the preconditioner needs to
be formed explicitly. The ingredients include only the number of stiffness
matrices from the truncated Karhunen-Lo\`{e}ve expansion and a good
preconditioner for the mean-value deterministic problem. We provide a condition
number bound for a model elliptic problem, and the performance of the method is
illustrated by numerical experiments.
Comment: 15 pages, 2 figures, 9 tables (updated numerical experiments)
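The block elimination underlying a Schur complement preconditioner can be sketched on a small dense system. This is not the paper's implementation (which never forms the matrices explicitly and uses inner Krylov loops); the sizes and names here are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of a Schur complement solve for a two-by-two
# block system
#   [A  B ] [x1]   [f1]
#   [B' C ] [x2] = [f2],
# where A is (block) diagonal and cheap to invert, as in the abstract.
# Eliminating x1 yields the Schur complement system
#   (C - B' A^{-1} B) x2 = f2 - B' A^{-1} f1.
rng = np.random.default_rng(1)
n1, n2 = 4, 3
A = np.diag(rng.uniform(1.0, 2.0, n1))      # diagonal stand-in for the block-diagonal submatrix
B = rng.standard_normal((n1, n2))           # off-diagonal coupling block
C = 10.0 * np.eye(n2)                       # keep the system well-conditioned
f1 = rng.standard_normal(n1)
f2 = rng.standard_normal(n2)

Ainv = np.diag(1.0 / np.diag(A))            # exact here; approximated by inner Krylov loops in practice
S = C - B.T @ Ainv @ B                      # Schur complement
x2 = np.linalg.solve(S, f2 - B.T @ Ainv @ f1)
x1 = Ainv @ (f1 - B @ x2)

# verify against the assembled global system
K = np.block([[A, B], [B.T, C]])
x = np.concatenate([x1, x2])
residual = np.linalg.norm(K @ x - np.concatenate([f1, f2]))
```

In the preconditioner of the abstract, this elimination is applied recursively on each level of the hierarchy, with the action of `Ainv` replaced by mutually independent inner Krylov iterations rather than an explicit inverse.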
A Parallel Solver for Graph Laplacians
Problems from graph drawing, spectral clustering, network flow and graph
partitioning can all be expressed in terms of graph Laplacian matrices. There
are a variety of practical approaches to solving these problems in serial.
However, as problem sizes increase and single core speeds stagnate, parallelism
is essential to solve such problems quickly. We present an unsmoothed
aggregation multigrid method for solving graph Laplacians in a distributed
memory setting. We introduce new parallel aggregation and low degree
elimination algorithms targeted specifically at irregular degree graphs. These
algorithms are expressed in terms of sparse matrix-vector products using
generalized sum and product operations. This formulation is amenable to linear
algebra using arbitrary distributions and allows us to operate on a 2D sparse
matrix distribution, which is necessary for parallel scalability. Our solver
outperforms the natural parallel extension of the current state of the art in
an algorithmic comparison. We demonstrate scalability to 576 processes and
graphs with up to 1.7 billion edges.
Comment: PASC '18. Code: https://github.com/ligmg/ligm
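The graph Laplacian matrices that this solver targets are easy to state concretely. The following is a minimal dense sketch for a toy graph, not the paper's distributed multigrid method; the pseudoinverse solve stands in for the iterative solver on this tiny example:

```python
import numpy as np

# Illustrative sketch: the graph Laplacian is L = D - A, where A is the
# adjacency matrix and D the diagonal degree matrix. Problems such as
# spectral clustering and network flow reduce to linear systems in L.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # Laplacian: every row sums to zero

# L is singular (constant vectors lie in its null space), so systems
# L x = b are solvable only for b with zero sum; here we use the
# pseudoinverse, whereas a scalable solver would use multigrid.
b = np.array([1.0, -1.0, 0.0, 0.0])     # zero-sum right-hand side
x = np.linalg.pinv(L) @ b
ok = np.allclose(L @ x, b)
```

The singular, irregular-degree structure of `L` is what makes aggregation-based multigrid, as in the abstract, a natural fit at scale.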