A Bregman-Kaczmarz method for nonlinear systems of equations
We propose a new randomized method for solving systems of nonlinear
equations, which can find sparse solutions or solutions under certain simple
constraints. The scheme only takes gradients of component functions and uses
Bregman projections onto the solution space of a Newton equation. In the
special case of Euclidean projections, the method is known as the nonlinear
Kaczmarz method. Furthermore, if the component functions are nonnegative, we
are in the setting of optimization under the interpolation assumption and the
method reduces to SGD with the recently proposed stochastic Polyak step size.
For general Bregman projections, our method is a stochastic mirror descent with
a novel adaptive step size. We prove that in the convex setting each iteration
of our method results in a smaller Bregman distance to exact solutions as
compared to the standard Polyak step. Our generalization to Bregman projections
comes with the price that a convex one-dimensional optimization problem needs
to be solved in each iteration. This can typically be done with globalized
Newton iterations. Convergence is proved in two classical settings of
nonlinearity: for convex nonnegative functions and locally for functions which
fulfill the tangential cone condition. Finally, we show examples in which the
proposed method outperforms similar methods with the same memory requirements.
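In the Euclidean case described above, each iteration projects onto the solution set of the Newton equation f_i(x) + ⟨∇f_i(x), z − x⟩ = 0, which is exactly SGD with the stochastic Polyak step size. A minimal sketch of that special case (an illustrative toy implementation on a quadratic test system of our own choosing, not the authors' code):

```python
import numpy as np

def nonlinear_kaczmarz(fs, grads, x0, n_iter=2000, seed=0):
    """Randomized nonlinear Kaczmarz / SGD with the stochastic Polyak step.

    Each step picks a random component f_i and projects (in the Euclidean
    sense) onto the solution set of the Newton equation
    f_i(x) + <grad f_i(x), z - x> = 0.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        i = rng.integers(len(fs))
        g = grads[i](x)
        gnorm2 = g @ g
        if gnorm2 > 1e-15:                  # skip vanishing gradients
            x -= (fs[i](x) / gnorm2) * g    # stochastic Polyak step
    return x

# Hypothetical quadratic test system f_i(x) = (a_i^T x)^2 - b_i with a
# known solution; started close to it (local convergence regime).
rng = np.random.default_rng(1)
x_true = rng.standard_normal(3)
A = rng.standard_normal((10, 3))
b = (A @ x_true) ** 2
fs = [lambda x, a=a, bi=bi: (a @ x) ** 2 - bi for a, bi in zip(A, b)]
grads = [lambda x, a=a: 2 * (a @ x) * a for a in A]
x0 = x_true + 0.1 * rng.standard_normal(3)
x = nonlinear_kaczmarz(fs, grads, x0)
```

Replacing the Euclidean projection by a Bregman projection would change only the update step, at the price of the one-dimensional convex subproblem mentioned above.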
About a deficit in low order convergence rates on the example of autoconvolution
We revisit in L2-spaces the autoconvolution equation x ∗ x = y with solutions which are real-valued or complex-valued functions x(t) defined on a finite real interval, say t ∈ [0,1]. Such operator equations of quadratic type occur in the physics of spectra, in optics and in stochastics, often as part of a more complex task. Because of their weak nonlinearity, deautoconvolution problems are not seen as difficult, and hence they wrongly receive little attention. In this paper, we use the example of autoconvolution to indicate a deficit in low order convergence rates for regularized solutions of nonlinear ill-posed operator equations F(x) = y with solutions x† in a Hilbert space setting. For the real-valued version of the deautoconvolution problem, which is locally ill-posed everywhere, the classical convergence rate theory developed for the Tikhonov regularization of nonlinear ill-posed problems reaches its limits if standard source conditions using the range of F′(x†)∗ fail. On the other hand, convergence rate results based on Hölder source conditions with small Hölder exponent and logarithmic source conditions, or on the method of approximate source conditions, are not applicable, since qualified nonlinearity conditions are required which cannot be shown for the autoconvolution case according to current knowledge. We also discuss the complex-valued version of autoconvolution with full data on [0,2] and see that ill-posedness must be expected if unbounded amplitude functions are admissible. As a new detail, we present situations of local well-posedness when the domain of the autoconvolution operator is restricted to complex L2-functions with a fixed and uniformly bounded modulus function.
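For concreteness, the forward autoconvolution map (x ∗ x)(s) = ∫₀ˢ x(t) x(s − t) dt on [0,1] can be discretised by a left Riemann sum. A minimal sketch (our own discretisation, not from the paper), checked against the explicit case x ≡ 1, where (x ∗ x)(s) = s:

```python
import numpy as np

def autoconv(x, h):
    """Discrete autoconvolution: (x*x)(s_k) ~ h * sum_{j<=k} x_j x_{k-j},
    a left-Riemann-sum discretisation of int_0^s x(t) x(s-t) dt."""
    n = len(x)
    return np.array([h * np.dot(x[:k + 1], x[k::-1]) for k in range(n)])

n = 200
h = 1.0 / n
t = h * np.arange(n)
x = np.ones(n)          # x(t) = 1  =>  (x*x)(s) = s
y = autoconv(x, h)      # equals t + h, i.e. s up to O(h) quadrature error
```

Inverting this quadratic map is the (locally ill-posed) deautoconvolution problem the abstract refers to; the forward map itself is harmless to evaluate.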
A Multiplicative Regularisation for Inverse Problems
This thesis considers self-adaptive regularisation methods, focusing particularly on new,
multiplicative methods, in which the cost functional is constructed as a product of two terms,
rather than the more usual sum of a fidelity term and a regularisation term.
By re-formulating the multiplicative regularisation model in the framework of the alternating
minimisation algorithm, we were able to obtain a series of rigorous theoretical results,
as well as to formulate a number of new models in both multiplicative and additive form.
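The product structure can be made concrete with a small schematic (the quadratic fidelity and smoothed-TV regulariser below are illustrative choices of ours, not the thesis's exact model):

```python
import numpy as np

# Schematic sketch of a multiplicative cost functional.
def fidelity(u, f):
    return 0.5 * np.sum((u - f) ** 2)            # data-misfit term F(u)

def regulariser(u, eps=1e-3):
    du = np.diff(u)
    return np.sum(np.sqrt(du ** 2 + eps ** 2))   # smoothed total variation R(u)

def multiplicative_cost(u, f):
    # Product F(u) * R(u) instead of the usual additive F(u) + alpha * R(u).
    return fidelity(u, f) * regulariser(u)

# Self-adaptive effect: grad(F * R) = R * grad F + F * grad R, so the
# regulariser is weighted by the current misfit F(u), and its influence
# shrinks automatically as the data fit improves -- no alpha to tune.
f = np.concatenate([np.zeros(20), np.ones(20)])
u_far, u_near = np.zeros(40), f + 0.01
weight_far, weight_near = fidelity(u_far, f), fidelity(u_near, f)
```

The gradient decomposition in the comment is what makes the method self-adaptive in the sense used above: the effective regularisation strength is the current misfit, not a hand-tuned constant.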
The first two chapters of my thesis set the scene of my research. Chapter 1 gives a
general review of the field of inverse problems and common regularisation strategies, while
Chapter 2 provides relevant technical details as mathematical preliminaries. The multiplicative
regularisation model by Abubakar et al (2004) falls into the category of self-adaptive
methods, where the regularisation strength is automatically adjusted in the model. By investigating
the model and implementing it on various examples, I demonstrated its power
for deblurring piecewise constant images in the presence of high-amplitude
noise with various distributions (Chapter 3). I also discovered a possible improvement of this
model by the introduction of an extra parameter μ, and came up with a formula to determine its
most appropriate value in a straightforward manner. The derivation and numerical validation
of this formula are presented in Chapter 4. This parameter μ supplements Abubakar’s
multiplicative method, and plays an important role in the model: it enables the multiplicative
model to reach its full potential, without adding any significant effort in parameter tuning.
Despite its numerical strength, there are barely any theoretical results regarding the
multiplicative type of regularisation, which motivated me to carry out further research in
this direction. Inspired by Charbonnier et al (1997), who provided an additive model with
regularisation strength spatially controlled by a sequence of self-adapted weight functions
bn, I re-formulated the multiplicative regularisation model in the framework of alternating
minimisation algorithm. This results in a series of new models of the multiplicative type.
In Chapter 5 I presented two new models, MMR and MSSP, equipped with two-step and
three-step alternating minimisation algorithms respectively. The scaling parameter δ is fixed
in the former model, while in the latter it is self-adaptive, based on an additional recurrence
relation. In both models, the objective cost functional Cn is monotonically decreasing
and convergent, while the image intensity un exhibits semi-convergence. Both models
are capable of incorporating different potential functions in the objective cost functional,
and require no extra tuning parameter μ in the algorithm. Numerically they exhibit similar
behaviours as Abubakar’s multiplicative method in terms of high noise level tolerance and
robustness over different noise distributions.
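A two-step alternating minimisation with self-adapted weights bn, as in the Charbonnier-style additive setting mentioned above, can be sketched in 1-D (an illustrative half-quadratic scheme in our own notation; not the MMR/MSSP models themselves): update the weights bn from the current gradients, then solve the resulting quadratic problem for un.

```python
import numpy as np

# Illustrative half-quadratic alternating minimisation in 1-D for
# 0.5*||u - f||^2 + alpha * sum phi((Du)_i), phi(s) = sqrt(s^2 + eps^2).
def denoise(f, alpha=0.5, eps=1e-3, n_outer=30):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # forward-difference matrix
    u = f.copy()
    for _ in range(n_outer):
        d = D @ u
        b = 1.0 / (2.0 * np.sqrt(d ** 2 + eps ** 2))   # weight update b_n
        # Quadratic step: minimise 0.5*||u - f||^2 + alpha * sum b_i (Du)_i^2,
        # i.e. solve (I + 2*alpha * D^T diag(b) D) u = f.
        A = np.eye(n) + 2.0 * alpha * D.T @ (b[:, None] * D)
        u = np.linalg.solve(A, f)
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(30), np.ones(30)])
noisy = clean + 0.1 * rng.standard_normal(60)
u = denoise(noisy)
```

Small weights bn where the gradient is large (edges) and large weights in flat regions give the edge-preserving, self-adapted smoothing that the multiplicative variants inherit.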
In Chapter 6 I presented a third, enhanced multiplicative model (EMM), which employs
not only a three-step minimisation with self-adaptive weight function bn and scaling parameter
δn, but also the same augmented recurrence relation as discussed in Chapter 4 with
steering parameter μ. This model leads to promising results both theoretically and numerically.
It is a novel approach whose performance exceeds that of all the multiplicative-type
models presented in this dissertation.
Cambridge Trust Scholarship awarded by Cambridge Commonwealth, European and International Trust, 2014
Trinity Hall Research Studentship awarded by Trinity Hall College, 201