A variational algorithm for the detection of line segments
In this paper we propose an algorithm for the detection of edges in images
that is based on topological asymptotic analysis. Motivated by the
Mumford--Shah functional, we consider a variational functional that penalizes
oscillations outside some approximate edge set, which we represent as the union
of a finite number of thin strips, the width of which is an order of magnitude
smaller than their length. In order to find a near optimal placement of these
strips, we compute an asymptotic expansion of the functional with respect to
the strip size. This expansion is then employed to define a (topological) gradient-descent-like minimization method. As opposed to a recently proposed method by some of the authors, which uses coverings with balls, the use of strips incorporates directional information into the method, which can be exploited to obtain finer edges and can also reduce computation times.
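For context, the classical Mumford--Shah functional that motivates this construction balances a fidelity term, a smoothness term away from an edge set $K$, and the length of $K$ (the paper replaces the free edge set by a union of thin strips); schematically,
\[
MS(u, K) = \int_{\Omega \setminus K} |\nabla u|^2 \, dx + \alpha \int_{\Omega} (u - f)^2 \, dx + \beta \, \mathcal{H}^1(K),
\]
where $f$ is the given image and $\mathcal{H}^1$ denotes the one-dimensional Hausdorff measure.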
Regularization of Linear Ill-posed Problems by the Augmented Lagrangian Method and Variational Inequalities
We study the application of the Augmented Lagrangian Method to the solution
of linear ill-posed problems. Previously, linear convergence rates with respect
to the Bregman distance have been derived under the classical assumption of a
standard source condition. Using the method of variational inequalities, we
extend these results in this paper to convergence rates of lower order, both
for the case of an a priori parameter choice and an a posteriori choice based
on Morozov's discrepancy principle. In addition, our approach allows the
derivation of convergence rates with respect to distance measures different
from the Bregman distance. As a particular application, we consider sparsity
promoting regularization, where we derive a range of convergence rates with respect to the norm under the assumption of restricted injectivity in conjunction with generalized source conditions of Hölder type.
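To make the a posteriori rule concrete: Morozov's discrepancy principle selects the regularization parameter so that the residual matches the noise level $\delta$ up to a factor $\tau > 1$. The sketch below applies it to plain Tikhonov regularization rather than to the augmented Lagrangian method studied here; all function names and the bisection bounds are illustrative assumptions.

    import numpy as np

    def tikhonov(A, y, alpha):
        # Minimizer of ||A x - y||^2 + alpha * ||x||^2 via the normal equations.
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    def morozov_alpha(A, y, delta, tau=1.1, lo=1e-12, hi=1e4, iters=60):
        # The residual ||A x_alpha - y|| increases monotonically in alpha,
        # so bisect on log(alpha) until it is approximately tau * delta.
        for _ in range(iters):
            mid = np.sqrt(lo * hi)  # geometric midpoint
            res = np.linalg.norm(A @ tikhonov(A, y, mid) - y)
            if res > tau * delta:
                hi = mid  # residual too large: alpha must decrease
            else:
                lo = mid  # residual admissible: try a larger alpha
        return lo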
Nonparametric instrumental regression with non-convex constraints
This paper considers the nonparametric regression model with an additive
error that is dependent on the explanatory variables. As is common in empirical
studies in epidemiology and economics, it also supposes that valid instrumental
variables are observed. A classical example in microeconomics considers the
consumer demand function as a function of the price of goods and of income, both of which are often considered endogenous. In this framework, economic theory also imposes shape restrictions on the demand function, such as
integrability conditions. Motivated by this illustration in microeconomics, we
study an estimator of a nonparametric constrained regression function using
instrumental variables by means of Tikhonov regularization. We derive rates of
convergence for the regularized model both in a deterministic and stochastic
setting under the assumption that the true regression function satisfies a
projected source condition which, owing to the non-convexity of the imposed constraints, includes an additional smallness condition.
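Schematically, and with notation introduced here for illustration only, the estimator takes the Tikhonov form
\[
\hat{\varphi}_\alpha \in \operatorname{argmin}_{\varphi \in C} \; \|T\varphi - \hat{r}\|^2 + \alpha \|\varphi\|^2,
\]
where $T$ is the conditional expectation operator induced by the instruments, $\hat{r}$ is an estimate of the image of the true regression function under $T$, and $C$ is the (non-convex) set encoding the shape restrictions.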
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions are known to obtain a
regularization method, we start with an investigation of the converse question: what could necessary conditions for a variational method to provide a regularization method look like? To this end, we formalize the notion of a variational scheme and start with a comparison of three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the discrepancy functional to fulfill the usual continuity assumptions. The results are applied to discrepancy functionals given by Bregman distances, and especially to the Kullback-Leibler divergence.
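For reference, the Bregman distance of a convex functional $f$ is
\[
D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle,
\]
and choosing the negative entropy $f(x) = \sum_i x_i \log x_i$ recovers the Kullback-Leibler divergence $D_f(x, y) = \sum_i \bigl( x_i \log(x_i / y_i) - x_i + y_i \bigr)$.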
Elastic-Net Regularization: Error estimates and Active Set Methods
This paper investigates theoretical properties and efficient numerical
algorithms for the so-called elastic-net regularization originating from
statistics, which simultaneously enforces $\ell^1$ and $\ell^2$ regularization. The
stability of the minimizer and its consistency are studied, and convergence
rates for both a priori and a posteriori parameter choice rules are
established. Two iterative numerical algorithms of active set type are
proposed, and their convergence properties are discussed. Numerical results are
presented to illustrate the features of the functional and the algorithms.
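The elastic-net functional in question is $\tfrac{1}{2}\|Ax - y\|^2 + \alpha\|x\|_1 + \beta\|x\|_2^2$. As a point of reference, the following is a minimal proximal-gradient baseline for this functional, not the active set methods proposed in the paper; the names and iteration count are illustrative.

    import numpy as np

    def elastic_net(A, y, alpha, beta, iters=500):
        # Proximal gradient (ISTA) for 0.5*||Ax - y||^2 + alpha*||x||_1 + beta*||x||_2^2.
        t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            v = x - t * (A.T @ (A @ x - y))   # gradient step on the smooth part
            x = np.sign(v) * np.maximum(np.abs(v) - t * alpha, 0.0)  # l1 soft-threshold
            x /= 1.0 + 2.0 * t * beta         # uniform shrinkage from the l2 term
        return x

The stabilizing effect of the $\ell^2$ term can be read off the last line of the loop: it adds a uniform contraction on top of the sparsity-promoting soft-thresholding.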
Convergence rates in expectation for Tikhonov-type regularization of Inverse Problems with Poisson data
In this paper we study a Tikhonov-type method for ill-posed nonlinear operator equations $g^\dagger = F(u^\dagger)$, where $g^\dagger$ is an integrable, non-negative function. We assume that data are drawn from a Poisson process with density $t g^\dagger$, where $t > 0$ may be interpreted as an exposure time. Such
problems occur in many photonic imaging applications including positron
emission tomography, confocal fluorescence microscopy, astronomical observations,
and phase retrieval problems in optics. Our approach uses a
Kullback-Leibler-type data fidelity functional and allows for general convex
penalty terms. We prove convergence rates of the expectation of the
reconstruction error under a variational source condition as $t \to \infty$, both for an a priori and for a Lepskiĭ-type parameter choice rule.
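Schematically (the notation here is illustrative, not the paper's exact formulation), the estimator is of Tikhonov type with a Kullback-Leibler-type fidelity: writing $g_{\mathrm{obs}}$ for the rescaled Poisson data and $\mathcal{R}$ for the convex penalty,
\[
\hat{u}_\alpha \in \operatorname{argmin}_u \; \int \bigl( F(u) - g_{\mathrm{obs}} \log F(u) \bigr) \, dx + \alpha \, \mathcal{R}(u),
\]
where the first term is, up to terms independent of $u$, the negative Poisson log-likelihood.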
Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an $\ell^1$ penalty is considered. We recall results on linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery which are especially applicable in the case of ill-posed
problems, where other conditions, e.g. those based on the so-called coherence or the restricted isometry property, are usually not applicable. The obtained results also show that the regularized solutions converge not only in the $\ell^1$-norm but also in the vector space of finitely supported sequences (when considered as the strict inductive limit of the spaces $\mathbb{R}^n$ as $n$ tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
The applicability of the obtained results is illustrated with an imaging example from digital holography: one may check a priori whether the experimental setup guarantees exact recovery with Tikhonov regularization with sparsity constraints.
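For comparison, the coherence-based condition alluded to above reads as follows: with mutual coherence $\mu = \max_{i \neq j} |\langle a_i, a_j \rangle| / (\|a_i\| \, \|a_j\|)$ of the columns $a_i$ of the forward matrix, the classical sufficient condition for exact recovery is $\|x\|_0 < \tfrac{1}{2}(1 + 1/\mu)$. For ill-posed problems $\mu$ is typically close to one, so this bound becomes vacuous, which motivates the alternative conditions derived in the paper.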
Necessary and sufficient conditions of solution uniqueness in $\ell^1$ minimization
This paper shows that the solutions to various convex $\ell^1$ minimization problems are \emph{unique} if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, the basis pursuit denoising model, the Lasso model, as well as other $\ell^1$ models that either minimize $f(Ax - b)$ or impose the constraint $f(Ax - b) \le \sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I = \mathrm{supp}(x^*)$ and $s = \mathrm{sign}(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists a vector $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for every $i \notin I$. This
condition is previously known to be sufficient for the basis pursuit model to
have a unique solution supported on $I$. Indeed, it is also necessary, and it applies to a variety of other $\ell^1$ models. The paper also discusses ways to recognize unique solutions and to verify the uniqueness conditions numerically.
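A minimal numerical check along these lines (illustrative; it tests the minimum-norm dual certificate, so a failed strict inequality is inconclusive rather than a proof of non-uniqueness):

    import numpy as np

    def certifies_uniqueness(A, x, tol=1e-10):
        # Check: A_I has full column rank and the minimum-norm y solving
        # A_I^T y = s satisfies |a_i^T y| < 1 for all i outside the support I.
        I = np.flatnonzero(np.abs(x) > tol)
        AI = A[:, I]
        if np.linalg.matrix_rank(AI) < len(I):
            return False                         # A_I rank deficient
        s = np.sign(x[I])
        y = AI @ np.linalg.solve(AI.T @ AI, s)   # min-norm solution of A_I^T y = s
        off = np.setdiff1d(np.arange(A.shape[1]), I)
        return off.size == 0 or np.max(np.abs(A[:, off].T @ y)) < 1.0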
Discretization of variational regularization in Banach spaces
Consider a nonlinear ill-posed operator equation $F(u) = y$, where $F$ is defined on a Banach space $X$. In general, for solving this equation numerically, a finite dimensional approximation of $X$ and an approximation of $F$ are required. Moreover, in general the given data $y^\delta$ of $y$ are noisy.
In this paper we analyze finite dimensional variational regularization, which
takes into account operator approximations and noisy data: We show
(semi-)convergence of the regularized solution of the finite dimensional
problems and establish convergence rates in terms of Bregman distances under
appropriate sourcewise representation of a solution of the equation. The more
involved case of regularization in nonseparable Banach spaces is discussed in
detail. In particular we consider the space of finite total variation
functions, the space of functions of finite bounded deformation, and the $L^\infty$-space.
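Schematically, with $X_n \subset X$ a finite dimensional subspace, $F_h$ an approximation of $F$, and $y^\delta$ the noisy data (the exponent $p$ and penalty $\mathcal{R}$ stand in here for the paper's general setting), the finite dimensional variational regularization takes the form
\[
u_\alpha^{n,h,\delta} \in \operatorname{argmin}_{u \in X_n} \; \|F_h(u) - y^\delta\|^p + \alpha \, \mathcal{R}(u).
\]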