Half-quadratic transportation problems
We present a primal-dual, memory-efficient algorithm for solving a relaxed
version of the general transportation problem. Our approach approximates the
original cost function with a differentiable one that is solved as a sequence
of weighted quadratic transportation problems. The new formulation allows us to
solve differentiable, non-convex transportation problems.
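The idea above, minimising a non-convex cost through a sequence of weighted quadratic problems, can be illustrated with a minimal half-quadratic (reweighting) sketch. The Welsch cost and all names below are illustrative choices, not the paper's formulation:

```python
import numpy as np

np.random.seed(0)

def half_quadratic_location(y, s=1.0, iters=50):
    """Minimise sum_i phi(x - y_i) for the non-convex Welsch cost
    phi(r) = 1 - exp(-r^2 / (2 s^2)) by half-quadratic reweighting:
    each iteration solves a weighted *quadratic* problem in closed form."""
    x = np.median(y)                      # robust starting point
    for _ in range(iters):
        r = x - y
        w = np.exp(-r**2 / (2 * s**2))    # half-quadratic weights
        x = np.sum(w * y) / np.sum(w)     # weighted least-squares solve
    return x

# inliers around 0 plus two gross outliers
y = np.concatenate([np.random.normal(0.0, 0.1, 100), [10.0, 12.0]])
x_hat = half_quadratic_location(y)        # stays near 0; the plain mean does not
```

Each quadratic subproblem is trivial here (a weighted mean); in the transportation setting it becomes a weighted quadratic transportation problem, but the alternation between a weight update and a quadratic solve is the same.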
Efficient Relaxations for Dense CRFs with Sparse Higher Order Potentials
Dense conditional random fields (CRFs) have become a popular framework for
modelling several problems in computer vision such as stereo correspondence and
multi-class semantic segmentation. By modelling long-range interactions, dense
CRFs provide a labelling that captures finer detail than their sparse
counterparts. Currently, the state-of-the-art algorithm performs mean-field
inference using a filter-based method but fails to provide a strong theoretical
guarantee on the quality of the solution. A question naturally arises as to
whether it is possible to obtain a maximum a posteriori (MAP) estimate of a
dense CRF using a principled method. Within this paper, we show that this is
indeed possible. We will show that, by using a filter-based method, continuous
relaxations of the MAP problem can be optimised efficiently using
state-of-the-art algorithms. Specifically, we will solve a quadratic
programming (QP) relaxation using the Frank-Wolfe algorithm and a linear
programming (LP) relaxation by developing a proximal minimisation framework. By
exploiting labelling consistency in the higher-order potentials and utilising
the filter-based method, we are able to formulate the above algorithms such
that each iteration has a complexity linear in the number of classes and random
variables. The presented algorithms can be applied to any labelling problem
using a dense CRF with sparse higher-order potentials. In this paper, we use
semantic segmentation as an example application as it demonstrates the ability
of the algorithm to scale to dense CRFs with large dimensions. We perform
experiments on the Pascal dataset, which indicate that the presented algorithms
attain lower energies than the mean-field inference method.
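The Frank-Wolfe step for the QP relaxation can be sketched on a single probability simplex; in the dense-CRF setting there is one simplex per random variable and the gradient is evaluated with the filter-based method, but the structure of the iteration is the same. The function and toy data below are illustrative, not the paper's implementation:

```python
import numpy as np

def frank_wolfe_simplex(Q, b, iters=200):
    """Frank-Wolfe for min 0.5 x^T Q x + b^T x over the probability simplex
    {x >= 0, sum(x) = 1}.  The linear minimisation oracle over the simplex
    is just an argmin of the gradient, so each iteration costs time linear
    in the dimension."""
    n = len(b)
    x = np.full(n, 1.0 / n)                 # feasible starting point
    for t in range(iters):
        grad = Q @ x + b
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0            # vertex returned by the oracle
        gamma = 2.0 / (t + 2.0)             # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s   # stay inside the simplex
    return x

Q = np.eye(3)
b = np.zeros(3)
x = frank_wolfe_simplex(Q, b)               # approaches the uniform point (1/3, 1/3, 1/3)
```

Because every iterate is a convex combination of simplex vertices, no projection step is needed, which is what keeps the per-iteration cost linear in the number of classes and variables.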
On Optimal Multiple Changepoint Algorithms for Large Data
There is an increasing need for algorithms that can accurately detect
changepoints in long time-series or equivalent data. Many common approaches
to detecting changepoints, for example based on penalised likelihood or minimum
description length, can be formulated in terms of minimising a cost over
segmentations. Dynamic programming methods exist to solve this minimisation
problem exactly, but these tend to scale at least quadratically in the length
of the time-series. Algorithms, such as Binary Segmentation, exist that have a
computational cost that is close to linear in the length of the time-series,
but these are not guaranteed to find the optimal segmentation. Recently,
pruning ideas have been suggested that can speed up the dynamic programming
algorithms whilst still being guaranteed to find the true minimum of the cost function. Here
we extend these pruning methods, and introduce two new algorithms for
segmenting data, FPOP and SNIP. Empirical results show that FPOP is
substantially faster than existing dynamic programming methods, and unlike the
existing methods its computational efficiency is robust to the number of
changepoints in the data. We evaluate the method at detecting Copy Number
Variations and observe that FPOP has a computational cost that is competitive
with that of Binary Segmentation.

Comment: 20 pages
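The pruned dynamic programming idea can be sketched as follows. This is a generic optimal-partitioning recursion with PELT-style inequality pruning of the candidate set, not the paper's FPOP or SNIP (which prune functionally); the change-in-mean squared-error cost and all names are illustrative:

```python
import numpy as np

def op_pelt(y, beta):
    """Optimal partitioning for a change in mean with penalty beta per
    changepoint, pruning candidate last-changepoints that can never be
    optimal again.  Squared-error segment cost via prefix sums."""
    n = len(y)
    S1 = np.concatenate([[0.0], np.cumsum(y)])        # prefix sums of y
    S2 = np.concatenate([[0.0], np.cumsum(y**2)])     # prefix sums of y^2

    def cost(s, t):                                   # RSS of y[s:t] about its mean
        return S2[t] - S2[s] - (S1[t] - S1[s])**2 / (t - s)

    F = np.full(n + 1, np.inf)
    F[0] = -beta                                      # so the first segment pays no penalty
    last = np.zeros(n + 1, dtype=int)
    cand = [0]                                        # candidate last changepoints
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in cand]
        i = int(np.argmin(vals))
        F[t], last[t] = vals[i], cand[i]
        # prune: keep only candidates that could still be optimal later
        cand = [s for s, v in zip(cand, vals) if v - beta <= F[t]]
        cand.append(t)
    cps, t = [], n                                    # back-track the changepoints
    while last[t] > 0:
        t = last[t]
        cps.append(t)
    return F[n], sorted(cps)

y = np.concatenate([np.zeros(50), 10.0 * np.ones(50)])
total_cost, cps = op_pelt(y, beta=10.0)               # detects the change at index 50
```

The pruning step is what keeps the candidate set, and hence the per-step work, small when there are many changepoints; without it the recursion scans all previous positions and scales quadratically.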
Non-parametric reconstruction of distribution functions from observed galactic disks
A general inversion technique for the recovery of the underlying distribution
function for observed galactic disks is presented and illustrated. Under the
assumption that these disks are axi-symmetric and thin, the proposed method
yields the unique distribution compatible with all the observables available.
The derivation may be carried out from the measurement of the azimuthal
velocity distribution arising from positioning the slit of a spectrograph along
the major axis of the galaxy. More generally, it may account for the
simultaneous measurements of velocity distributions corresponding to slits
presenting arbitrary orientations with respect to the major axis. The approach
is non-parametric, i.e. it does not rely on a particular algebraic model for
the distribution function. Special care is taken to account for the fraction of
counter-rotating stars which strongly affects the stability of the disk. An
optimisation algorithm is devised -- generalising the work of Skilling & Bryan
(1984) -- to carry out this truly two-dimensional, ill-conditioned inversion
efficiently. The performance of the overall inversion technique with respect
to the noise level and truncation in the data set is investigated with
simulated data. Reliable results are obtained down to a mean signal-to-noise
ratio of about 5 and when measurements are available up to . A discussion of
the residual biases involved in non-parametric inversions is presented.
Prospects of application to observed galaxies and other inversion problems are
discussed.

Comment: 11 pages, 13 figures; accepted for publication by MNRAS
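As a generic illustration of why such an inversion must be regularised, the sketch below inverts a smoothing kernel with Tikhonov regularisation. The paper instead uses an entropy-regularised scheme generalising Skilling & Bryan; the kernel, signal, and all names here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)

# Smoothing kernel relating the unknown function to the observables; its
# singular values decay very fast, so direct inversion amplifies the noise.
K = np.exp(-30.0 * (x[:, None] - x[None, :])**2)
f_true = np.exp(-0.5 * ((x - 0.5) / 0.1)**2)     # quantity to recover
d = K @ f_true + 1e-3 * rng.standard_normal(n)   # noisy observables

naive = np.linalg.solve(K, d)                    # unregularised: blows up
lam = 1e-3                                       # regularisation weight
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ d)

err_naive = np.linalg.norm(naive - f_true)       # huge: noise-dominated
err_reg = np.linalg.norm(f_rec - f_true)         # far smaller
```

The regulariser damps the components of the solution along the kernel's small singular values, which is where the noise amplification lives; an entropy prior plays the analogous stabilising role in the paper's method.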
A combined first and second order variational approach for image reconstruction
In this paper we study a variational problem in the space of functions of
bounded Hessian. Our model constitutes a straightforward higher-order extension
of the well known ROF functional (total variation minimisation) to which we add
a non-smooth second order regulariser. It combines convex functions of the
total variation and the total variation of the first derivatives. In what
follows, we prove existence and uniqueness of minimisers of the combined model
and present the numerical solution of the corresponding discretised problem by
employing the split Bregman method. The paper is furnished with applications of
our model to image denoising, deblurring as well as image inpainting. The
obtained numerical results are compared with results obtained from total
generalised variation (TGV), infimal convolution and Euler's elastica, three
other state-of-the-art higher-order models. The numerical discussion confirms
that the proposed higher-order model competes with models of its kind in
avoiding the creation of undesirable artifacts and blocky-like structures in
the reconstructed images -- a known disadvantage of the ROF model -- while
being simple and efficient to solve numerically.

Comment: 34 pages, 89 figures
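The split Bregman idea used above can be sketched on the first-order (ROF) part alone: the non-smooth total-variation term is decoupled from the quadratic term via an auxiliary variable, giving an alternation between a linear solve and a closed-form shrinkage. Only the first-order regulariser is kept here; the combined model adds an analogous second-order term, and all names are illustrative:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: closed-form minimiser of the l1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu=0.5, lam=1.0, iters=200):
    """Split Bregman for 1-D ROF: min_u 0.5||u - f||^2 + mu ||Du||_1."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = np.eye(n) + lam * (D.T @ D)         # system matrix for the u-update
    u = f.copy()
    d = np.zeros(n - 1)                     # auxiliary variable for Du
    b = np.zeros(n - 1)                     # Bregman variable
    for _ in range(iters):
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))  # quadratic step
        d = shrink(D @ u + b, mu / lam)                  # shrinkage step
        b = b + D @ u - d                                # Bregman update
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
f = clean + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f)                        # smooths the noise, keeps the jump
```

Preserving the jump while flattening the noise is exactly the TV behaviour the paper builds on; its second-order term then suppresses the staircasing that pure TV produces on smooth regions.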