Total variation on a tree
We consider the problem of minimizing the continuous valued total variation
subject to different unary terms on trees and propose fast direct algorithms
based on dynamic programming to solve these problems. We treat both the convex
and the non-convex cases and derive worst-case complexities that are equal to
or better than those of existing methods. We show applications to total variation based 2D
image processing and computer vision problems based on a Lagrangian
decomposition approach. The resulting algorithms are very efficient, offer a
high degree of parallelism, and have memory requirements only on the order of
the number of image pixels.
Comment: accepted to SIAM Journal on Imaging Sciences (SIIMS)
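The dynamic-programming idea can be illustrated on the simplest tree, a chain, using a discretized label set. This is a hedged sketch only: the paper's algorithms operate directly on continuous values, and the quadratic unary term and the `levels` grid here are illustrative assumptions.

```python
import numpy as np

def tv_chain_dp(y, lam, levels):
    """Viterbi-style dynamic programming for discretized TV-L2 on a chain:
    minimize sum_i (y_i - x_i)^2 + lam * sum_i |x_{i+1} - x_i|,
    with each x_i restricted to the given label set `levels`."""
    n, K = len(y), len(levels)
    dp = (y[0] - levels) ** 2                     # cost-to-here for node 0
    back = np.zeros((n, K), dtype=int)            # best-predecessor table
    for i in range(1, n):
        # trans[k, j]: cost of reaching label k at node i from label j at i-1
        trans = dp[None, :] + lam * np.abs(levels[:, None] - levels[None, :])
        back[i] = np.argmin(trans, axis=1)
        dp = trans.min(axis=1) + (y[i] - levels) ** 2
    # Backtrack the optimal labeling.
    idx = np.empty(n, dtype=int)
    idx[-1] = int(np.argmin(dp))
    for i in range(n - 1, 0, -1):
        idx[i - 1] = back[i, idx[i]]
    return levels[idx]

levels = np.linspace(0.0, 1.0, 11)
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
x = tv_chain_dp(y, 100.0, levels)   # strong regularization flattens the signal
```

With a large `lam` the solution collapses to a single constant level, while `lam = 0` simply snaps each sample to its nearest label; a chain is a tree, so the same message-passing structure extends to general trees by sweeping leaves-to-root.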
An Efficient Algorithm for Video Super-Resolution Based On a Sequential Model
In this work, we propose a novel procedure for video super-resolution, that
is, the recovery of a sequence of high-resolution images from its low-resolution
counterpart. Our approach is based on a "sequential" model (i.e., each
high-resolution frame is supposed to be a displaced version of the preceding
one) and considers the use of sparsity-enforcing priors. Both the recovery of
the high-resolution images and the estimation of the motion fields relating
them are tackled. This
leads to a large-dimensional, non-convex and non-smooth problem. We propose an
algorithmic framework to address the latter. Our approach relies on fast
gradient evaluation methods and modern optimization techniques for
non-differentiable/non-convex problems. Unlike some previous works, we
show that there exists a provably-convergent method with a complexity linear in
the problem dimensions. We assess the proposed optimization method on several
video benchmarks and emphasize its good performance with respect to the state
of the art.
Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
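The "sequential" observation model can be sketched as follows. This is a hedged illustration under common super-resolution assumptions: a Gaussian blur followed by decimation as the high-to-low-resolution operator, and a pure translation per frame; the paper's actual operators and motion model may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe_lowres(u, factor=2, sigma=1.0):
    # y = S H u: blur the high-resolution frame, then decimate rows/columns.
    return gaussian_filter(u, sigma)[::factor, ::factor]

def sequential_frames(u0, displacements):
    # Sequential model: each high-resolution frame is a displaced
    # version of the preceding one.
    frames = [u0]
    for d in displacements:
        frames.append(shift(frames[-1], d, order=1, mode="nearest"))
    return frames

u0 = np.random.default_rng(0).random((16, 16))
frames = sequential_frames(u0, [(1.0, 0.0), (0.0, 1.0)])
observations = [observe_lowres(f) for f in frames]
```

Recovering both `frames` and `displacements` from `observations` is exactly the large, non-convex joint problem the abstract describes; the forward operators above are linear in the images, which is what makes fast gradient evaluations possible.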
A High-Order Scheme for Image Segmentation via a modified Level-Set method
In this paper we propose a high-order accurate scheme for image segmentation
based on the level-set method. In this approach, the curve evolution is
described as the 0-level set of a representation function but we modify the
velocity that drives the curve to the boundary of the object in order to obtain
a new velocity with additional properties that are extremely useful to develop
a more stable high-order approximation with a small additional cost. The
approximation scheme proposed here is the first 2D version of an adaptive
"filtered" scheme recently introduced and analyzed by the authors in 1D. This
approach is interesting since the implementation of the filtered scheme is
rather efficient and easy. The scheme combines two building blocks (a monotone
scheme and a high-order scheme) via a filter function and smoothness indicators
that detect the regularity of the approximate solution and adapt the scheme
automatically. Some numerical tests on synthetic and real images
confirm the accuracy of the proposed method and the advantages given by the new
velocity.
Comment: Accepted version for publication in SIAM Journal on Imaging Sciences,
86 figure
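The two-building-block construction can be sketched in 1D for linear advection. This is a hedged illustration: upwind as the monotone block and Lax-Wendroff as the high-order block are standard textbook choices, not necessarily the exact pair used by the authors, and the filter below is the simplest clipping variant.

```python
import numpy as np

def filter_fn(r):
    # Simplest filter: identity near zero (keep the high-order scheme),
    # saturating so the correction never exceeds the monotone bound.
    return np.clip(r, -1.0, 1.0)

def filtered_step(u, dt, dx, c, M=1.0):
    """One filtered-scheme step for u_t + c u_x = 0 (c > 0, periodic grid)."""
    nu = c * dt / dx
    # Monotone building block: first-order upwind.
    u_mono = u - nu * (u - np.roll(u, 1))
    # High-order building block: second-order Lax-Wendroff.
    u_ho = (u - 0.5 * nu * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * nu**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))
    # Blend: monotone update plus a filtered high-order correction.
    return u_mono + dt * M * filter_fn((u_ho - u_mono) / (dt * M))
```

Where the solution is smooth the correction term is small, the filter acts as the identity, and the step coincides with the high-order scheme; near kinks the clipping caps the correction so the update stays close to the monotone scheme.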
Self-similar prior and wavelet bases for hidden incompressible turbulent motion
This work is concerned with the ill-posed inverse problem of estimating
turbulent flows from the observation of an image sequence. From a Bayesian
perspective, a divergence-free isotropic fractional Brownian motion (fBm) is
chosen as a prior model for instantaneous turbulent velocity fields. This
self-similar prior characterizes accurately second-order statistics of velocity
fields in incompressible isotropic turbulence. Nevertheless, the associated
maximum a posteriori estimate involves a fractional Laplacian operator which is
delicate to implement in practice. To deal with this issue, we propose to
decompose the divergence-free fBm on well-chosen wavelet bases. As a first alternative, we
propose to design wavelets as whitening filters. We show that these filters are
fractional Laplacian wavelets composed with the Leray projector. As a second
alternative, we use a divergence-free wavelet basis, which takes implicitly
into account the incompressibility constraint arising from physics. Although
the latter decomposition involves correlated wavelet coefficients, we are able
to handle this dependence in practice. Based on these two wavelet
decompositions, we finally provide effective and efficient algorithms to
approach the maximum a posteriori. An intensive numerical evaluation proves the
relevance of the proposed wavelet-based self-similar priors.
Comment: SIAM Journal on Imaging Sciences, 201
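Although the abstract's wavelet constructions are specific, the fractional Laplacian that makes the MAP delicate is straightforward to apply on a periodic grid via the FFT. The sketch below is a generic Fourier-multiplier implementation, not the whitening-filter or divergence-free wavelet decompositions of the paper.

```python
import numpy as np

def fractional_laplacian(f, alpha):
    """Apply (-Laplacian)^{alpha/2} to f on the periodic unit square,
    using the Fourier multiplier |k|^alpha (the k = 0 mode maps to 0)."""
    n1, n2 = f.shape
    k1 = 2.0 * np.pi * np.fft.fftfreq(n1, d=1.0 / n1)   # angular wavenumbers
    k2 = 2.0 * np.pi * np.fft.fftfreq(n2, d=1.0 / n2)
    ksq = k1[:, None] ** 2 + k2[None, :] ** 2
    mult = ksq ** (alpha / 2.0)
    return np.real(np.fft.ifft2(mult * np.fft.fft2(f)))
```

As a sanity check, for `alpha = 2` the operator reduces to the ordinary (negative) Laplacian, so a plane wave `sin(2*pi*x)` is an eigenfunction with eigenvalue `(2*pi)**2`; for fractional `alpha` the same multiplier gives the self-similar scaling that matches fBm statistics.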
Accelerating proximal Markov chain Monte Carlo by using an explicit stabilised method
We present a highly efficient proximal Markov chain Monte Carlo methodology
to perform Bayesian computation in imaging problems. Similarly to previous
proximal Monte Carlo approaches, the proposed method is derived from an
approximation of the Langevin diffusion. However, instead of the conventional
Euler-Maruyama approximation that underpins existing proximal Monte Carlo
methods, here we use a state-of-the-art orthogonal Runge-Kutta-Chebyshev
stochastic approximation that combines several gradient evaluations to
significantly accelerate its convergence speed, similarly to accelerated
gradient optimisation methods. The proposed methodology is demonstrated via a
range of numerical experiments, including non-blind image deconvolution,
hyperspectral unmixing, and tomographic reconstruction, with total-variation
and -type priors. Comparisons with Euler-type proximal Monte Carlo
methods confirm that the Markov chains generated with our method exhibit
significantly faster convergence speeds, achieve larger effective sample sizes,
and produce lower mean square estimation errors at equal computational budget.
Comment: 28 pages, 13 figures. Accepted for publication in SIAM Journal on
Imaging Sciences (SIIMS)
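For context, the Euler-Maruyama baseline that the stabilised scheme accelerates (often called MYULA) can be sketched on a toy 1D target. This is a hedged sketch, not the paper's Runge-Kutta-Chebyshev method: the Gaussian likelihood, Laplace-type prior, step size, and smoothing parameter below are illustrative assumptions.

```python
import numpy as np

def myula_chain(grad_f, prox_g, x0, delta, lam, n_iter, rng):
    """Moreau-Yosida unadjusted Langevin (MYULA): the non-smooth term g is
    replaced by its Moreau envelope, whose gradient is (x - prox_g(x)) / lam."""
    x, samples = x0, []
    for _ in range(n_iter):
        drift = -grad_f(x) - (x - prox_g(x, lam)) / lam
        x = x + delta * drift + np.sqrt(2.0 * delta) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

# Toy target pi(x) ~ exp(-x^2/2 - |x|):
# f(x) = x^2/2 (smooth), g(x) = |x| whose prox is soft-thresholding.
grad_f = lambda x: x
prox_g = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
chain = myula_chain(grad_f, prox_g, 0.0, delta=0.05, lam=0.1,
                    n_iter=5000, rng=np.random.default_rng(1))
```

Each iteration here spends one gradient evaluation per unit of diffusion time; the abstract's point is that a Runge-Kutta-Chebyshev discretisation combines several gradient evaluations per step to take much larger stable steps, which is where the reported speed-up comes from.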