First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \to \min_u$,
where the functional $\mathcal{D}$ is a data fidelity term depending on some
input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$
is a regularization functional. Moreover, $K$ is an (often linear) forward
operator modeling the dependence of the data on an underlying image, and
$\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often
smooth and (strictly) convex, current practice almost exclusively uses
nonsmooth regularization functionals. The majority of successful techniques
uses nonsmooth convex functionals such as the total variation and
generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar
products with some frame system. The efficient solution of such variational
problems in imaging demands appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
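As a concrete illustration of the splitting idea, the sketch below applies proximal gradient descent (forward-backward splitting) to the model above with $\mathcal{D}(v) = \frac{1}{2}\|v - f\|^2$ and $\mathcal{R} = \|\cdot\|_1$. The operator K, data f, parameter alpha, and step size here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(K, f, alpha, n_iter=500):
    """Minimize 0.5*||K u - f||^2 + alpha*||u||_1 by forward-backward splitting."""
    u = np.zeros(K.shape[1])
    tau = 1.0 / np.linalg.norm(K, 2) ** 2   # step size 1/L with L = ||K||_2^2
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)            # forward (gradient) step on the fidelity
        u = soft_threshold(u - tau * grad, tau * alpha)  # backward (proximal) step
    return u

# Tiny synthetic example: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 100))
u_true = np.zeros(100)
u_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
f = K @ u_true + 0.01 * rng.standard_normal(30)
u_hat = proximal_gradient(K, f, alpha=0.1)
```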
Regularization of the inverse medium problem: on nonstandard methods for sparse reconstruction
In this thesis, we investigate nonstandard methods for the stable solution of the inverse medium problem. In particular, we consider the linearization of the model of the scattering process given by the Born approximation and investigate regularization methods that are designed for sparse reconstruction. In numerical experiments we demonstrate that sparsity constraints contribute to meaningful reconstructions from synthetic data and even from measured data. In our investigations, we consider both iterative and variational methods for the solution of the inverse problem. Starting from the Landweber iteration, we discuss existing variants of this approach and develop a novel sparsity-enforcing method based on the Bregman projection. Furthermore, we consider a variational regularization scheme. First, we develop a novel parameter choice rule based on the L-curve criterion designed for sparse reconstruction. We then propose to replace the variational problem by a smooth approximation and provide an exhaustive investigation of the stability of this approach. The theoretical investigations of each of the methods proposed in this work are complemented by a numerical evaluation.
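The Bregman-projection variant developed in the thesis is not spelled out in the abstract; the sketch below shows only the standard sparsity-enforcing (thresholded) Landweber baseline that such methods build on, with an illustrative matrix A standing in for the discretized Born-linearized forward operator and a simple discrepancy-style stopping check.

```python
import numpy as np

def thresholded_landweber(A, y, alpha, n_iter=200, tol=1e-3):
    """Sparsity-enforcing Landweber iteration (iterative soft-thresholding).

    A classical Landweber step on 0.5*||A x - y||^2 is followed by
    soft-thresholding, which promotes sparse reconstructions.
    """
    x = np.zeros(A.shape[1])
    omega = 1.0 / np.linalg.norm(A, 2) ** 2      # relaxation parameter
    for _ in range(n_iter):
        residual = A @ x - y
        if np.linalg.norm(residual) < tol:       # discrepancy-style early stopping
            break
        x = x - omega * (A.T @ residual)         # Landweber (gradient) step
        x = np.sign(x) * np.maximum(np.abs(x) - omega * alpha, 0.0)  # sparsity step
    return x
```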
Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization
This paper proposes and justifies two globally convergent Newton-type methods
to solve unconstrained and constrained problems of nonsmooth optimization by
using tools of variational analysis and generalized differentiation. Both
methods are coderivative-based and employ generalized Hessians (coderivatives
of subgradient mappings) associated with objective functions, which are either
of class $\mathcal{C}^{1,1}$, or are represented in the form of convex
composite optimization, where one of the terms may be extended-real-valued. The
proposed globally convergent algorithms are of two types. The first one extends
the damped Newton method and requires positive-definiteness of the generalized
Hessians for its well-posedness and efficient performance, while the other
algorithm is of the regularized Newton type, being well-defined when the
generalized Hessians are merely positive-semidefinite. The obtained convergence
rates for both methods are at least linear, but become superlinear under the
semismooth property of subgradient mappings. Problems of convex composite
optimization are investigated with and without the strong convexity assumption
on smooth parts of objective functions by implementing the machinery of
forward-backward envelopes. Numerical experiments are conducted for Lasso
problems and for box-constrained quadratic programs, providing performance
comparisons of the new algorithms with some other first-order and second-order
methods that are highly recognized in nonsmooth optimization.
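To make the regularized-Newton idea concrete in the simplest smooth case (where the generalized Hessian reduces to the classical one), here is a small sketch. The shift mu, the gradient-norm backtracking rule, and the quartic test function are illustrative choices only, not the paper's coderivative machinery.

```python
import numpy as np

def regularized_newton(grad, hess, x0, mu=1e-3, n_iter=100, tol=1e-10):
    """Regularized damped Newton sketch for a smooth objective.

    The shift mu*I keeps the Newton system solvable even when the Hessian
    is merely positive-semidefinite; backtracking on the gradient norm
    provides a simple globalization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)  # regularized Newton step
        t = 1.0
        while np.linalg.norm(grad(x + t * d)) > (1.0 - 1e-4 * t) * np.linalg.norm(g):
            t *= 0.5                                      # damp until progress is made
            if t < 1e-12:
                break
        x = x + t * d
    return x

# Example: f(x) = 0.25*||x||^4, whose Hessian is singular at the minimizer 0.
grad = lambda x: (x @ x) * x
hess = lambda x: (x @ x) * np.eye(len(x)) + 2.0 * np.outer(x, x)
x_star = regularized_newton(grad, hess, np.array([3.0, -2.0]))
```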
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Provably Convergent Plug-and-Play Quasi-Newton Methods
Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM, with applications in inverse problems and imaging. Provable PnP methods are a subclass of PnP methods with convergence guarantees, such as fixed point convergence or convergence to critical points of some energy function. Many existing provable PnP methods impose heavy restrictions on the denoiser or fidelity function, such as non-expansiveness or strict convexity, respectively. In this work, we propose a novel algorithmic approach incorporating quasi-Newton steps into a provable PnP framework based on proximal denoisers, resulting in greatly accelerated convergence while retaining light assumptions on the denoiser. By characterizing the denoiser as the proximal operator of a weakly convex function, we show that the fixed points of the proposed quasi-Newton PnP algorithm are critical points of a weakly convex function. Numerical experiments on image deblurring and super-resolution demonstrate 2--8x faster convergence as compared to other provable PnP methods with similar reconstruction quality.
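For orientation, the base PnP proximal-gradient (PnP-ISTA) iteration that such quasi-Newton schemes accelerate looks roughly as below. The moving-average denoiser is only a stand-in for a learned proximal denoiser, and A, y, and the step size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def denoiser(x, size=5):
    """Stand-in for a learned proximal denoiser (here: a moving-average filter)."""
    return uniform_filter1d(x, size=size)

def pnp_ista(A, y, n_iter=200):
    """Base PnP-ISTA iteration:  x <- D( x - tau * A^T (A x - y) ).

    A gradient step on the data fidelity 0.5*||A x - y||^2 is followed by
    the denoiser D, which plays the role of the proximal operator of an
    implicit (weakly convex) regularizer.
    """
    x = A.T @ y                                  # back-projection initialization
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the Lipschitz constant
    for _ in range(n_iter):
        x = denoiser(x - tau * (A.T @ (A @ x - y)))
    return x
```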