A Total Fractional-Order Variation Model for Image Restoration with Non-homogeneous Boundary Conditions and its Numerical Solution
To overcome the weakness of a total variation based model for image
restoration, various high order (typically second order) regularization models
have been proposed and studied recently. In this paper we analyze and test a
total fractional-order variation model based on fractional-order derivatives,
which can outperform the currently popular high order regularization models.
There exist several previous works using total fractional-order variations for
image restoration; however, no rigorous analysis has been given yet, and all
tested formulations, which differ from each other, use zero Dirichlet boundary
conditions that are not realistic (while non-zero boundary conditions violate
the definitions of fractional-order derivatives). This paper first reviews some
results on fractional-order derivatives and then rigorously analyzes the
theoretical properties of the proposed total fractional-order variational
model.
It then develops four algorithms for solving the variational problem, one based
on the variational Split-Bregman idea and three based on direct solution of the
discretised optimization problem. Numerical experiments show that, in terms of
restoration quality and solution efficiency, the proposed model can produce
highly competitive results, for smooth images, compared with two established
high order models: the mean curvature and the total generalized variation.
Comment: 26 pages
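Since the model rests on discrete fractional-order derivatives, a minimal sketch may help fix ideas. The snippet below uses the Grünwald–Letnikov discretisation, one standard finite-difference definition of a fractional derivative; the function names are our own, and this is not the paper's exact numerical scheme (which also treats the boundary conditions discussed above).

```python
import numpy as np

def gl_coefficients(alpha, n):
    """Grunwald-Letnikov coefficients (-1)^k * binom(alpha, k), computed
    with the stable recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_difference(u, alpha):
    """Discrete fractional-order difference of a 1D signal on a unit grid:
    (D^alpha u)[i] ~ sum_{k=0..i} c_k * u[i-k]."""
    c = gl_coefficients(alpha, len(u))
    return np.array([np.dot(c[:i + 1], u[i::-1]) for i in range(len(u))])

u = np.arange(8, dtype=float)          # a linear ramp as test signal
d1 = fractional_difference(u, 1.0)     # alpha = 1: plain backward difference
```

For alpha = 1 the coefficients reduce to (1, -1, 0, ...), so the fractional difference degenerates to the usual backward difference, which is a quick sanity check on the recurrence.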
Optimising Spatial and Tonal Data for PDE-based Inpainting
Some recent methods for lossy signal and image compression store only a few
selected pixels and fill in the missing structures by inpainting with a partial
differential equation (PDE). Suitable operators include the Laplacian, the
biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The
quality of such approaches depends substantially on the selection of the data
that is kept. Optimising this data in the domain and codomain gives rise to
challenging mathematical problems that shall be addressed in our work.
In the 1D case, we prove results that provide insights into the difficulty of
this problem, and we give evidence that a splitting into spatial and tonal
(i.e. function value) optimisation hardly deteriorates the results. In the
2D setting, we present generic algorithms that achieve a high reconstruction
quality even if the specified data is very sparse. To optimise the spatial
data, we use a probabilistic sparsification, followed by a nonlocal pixel
exchange that avoids getting trapped in bad local optima. After this spatial
optimisation we perform a tonal optimisation that modifies the function values
in order to reduce the global reconstruction error. For homogeneous diffusion
inpainting, this comes down to a least squares problem for which we prove that
it has a unique solution. We demonstrate that it can be found efficiently with
a gradient descent approach that is accelerated with fast explicit diffusion
(FED) cycles. Our framework allows the desired density of the inpainting mask
to be specified a priori. Moreover, it is more generic than other data
optimisation approaches for the sparse inpainting problem, since it can also be
extended to nonlinear inpainting operators such as EED. This is exploited to
achieve reconstructions with state-of-the-art quality.
We also give an extensive literature survey on PDE-based image compression
methods.
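To make the inpainting step concrete, here is a minimal 1D sketch of homogeneous (Laplacian) diffusion inpainting: kept pixels are fixed, and the remaining ones satisfy a discrete second-derivative-equals-zero condition, which in 1D amounts to linear interpolation between the kept values. This is an illustrative dense-matrix version, not the FED-accelerated solver the abstract describes.

```python
import numpy as np

def diffusion_inpaint_1d(f, mask):
    """Homogeneous diffusion inpainting in 1D: keep f where mask is True,
    fill the rest by enforcing a zero discrete Laplacian (with reflecting
    boundaries). Requires at least one kept pixel."""
    n = len(f)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if mask[i]:
            A[i, i] = 1.0          # interpolation constraint u_i = f_i
            b[i] = f[i]
        else:
            left, right = max(i - 1, 0), min(i + 1, n - 1)
            A[i, i] = 2.0          # -u_{i-1} + 2 u_i - u_{i+1} = 0
            A[i, left] -= 1.0
            A[i, right] -= 1.0
    return np.linalg.solve(A, b)
```

Keeping only the two endpoints of a signal and inpainting the interior reproduces the straight line between them, which illustrates why the 1D case is analytically tractable.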
Dynamic sampling schemes for optimal noise learning under multiple nonsmooth constraints
We consider the bilevel optimisation approach proposed by De Los Reyes,
Schönlieb (2013) for learning the optimal parameters in a Total Variation
(TV) denoising model for multiple noise distributions. In
applications, the use of databases (dictionaries) allows an accurate estimation
of the parameters, but results in high computational costs due to the size of
the databases and to the nonsmooth nature of the PDE constraints. To overcome
this computational barrier we propose an optimisation algorithm that, by
sampling dynamically from the set of constraints and using a quasi-Newton
method, solves the problem accurately and efficiently.
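The dynamic-sampling idea can be illustrated on a toy parameter-learning problem. The sketch below replaces the nonsmooth TV denoiser with a closed-form quadratic (Tikhonov-like) denoiser u = f/(1+λ), so the sampled loss and its gradient are cheap, and grows the batch size over the iterations to mimic sampling dynamically from the constraint set. All names and constants here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database: clean signals g_i and noisy observations f_i = g_i + noise.
N, n = 200, 64
G = rng.standard_normal((N, n))
F = G + 0.5 * rng.standard_normal((N, n))

def loss_and_grad(lam, idx):
    """Sampled loss mean_i ||f_i/(1+lam) - g_i||^2 and its derivative in lam."""
    U = F[idx] / (1.0 + lam)
    r = U - G[idx]
    dU = -F[idx] / (1.0 + lam) ** 2
    return (np.mean(np.sum(r * r, axis=1)),
            np.mean(2.0 * np.sum(r * dU, axis=1)))

lam, step, batch = 0.1, 0.01, 8
for _ in range(60):
    idx = rng.choice(N, size=min(batch, N), replace=False)
    _, g = loss_and_grad(lam, idx)
    lam = max(lam - step * g, 0.0)   # projected gradient step on lam
    batch = int(batch * 1.1) + 1     # dynamically grow the sample
```

For this quadratic surrogate with noise variance 0.25, the expected loss is minimised at λ = 0.25, so the learned parameter should settle near that value.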
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
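As a small example of casting inversion as optimization, the following sketch recovers x from noisy measurements b = Ax + η by plain gradient descent on the least-squares objective, i.e. the classical Landweber iteration. It is a generic illustration of the optimization viewpoint, not a method from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# A linear inverse problem: recover x_true from b = A x_true + noise.
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Landweber iteration: gradient descent on 0.5 * ||A x - b||^2,
# with step size below 2 / ||A||_2^2 to guarantee convergence.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x -= step * A.T @ (A @ x - b)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With an overdetermined, well-conditioned A and small noise, the iterates converge to the least-squares solution, which is close to x_true; ill-posed problems instead require early stopping or regularization, which is where the methods surveyed above come in.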