Pinsker estimators for local helioseismology
A major goal of helioseismology is the three-dimensional reconstruction of
the three velocity components of convective flows in the solar interior from
sets of wave travel-time measurements. For small amplitude flows, the forward
problem is described in good approximation by a large system of convolution
equations. The input observations are highly noisy random vectors with a known
dense covariance matrix. This leads to a large statistical linear inverse
problem.
Whereas for deterministic linear inverse problems several computationally
efficient minimax optimal regularization methods exist, only one
minimax-optimal linear estimator exists for statistical linear inverse
problems: the Pinsker estimator. However, it is often computationally
inefficient because it requires a singular value decomposition of the forward
operator or it is not applicable because of an unknown noise covariance matrix,
so it is rarely used for real-world problems. These limitations do not apply in
helioseismology. We present a simplified proof of the optimality properties of
the Pinsker estimator and show that it yields significantly better
reconstructions than traditional inversion methods used in helioseismology,
i.e.\ Regularized Least Squares (Tikhonov regularization) and SOLA (approximate
inverse) methods.
Moreover, we discuss the incorporation of the mass conservation constraint in
the Pinsker scheme using staggered grids. With this improvement we can
reconstruct not only horizontal, but also vertical velocity components that are
much smaller in amplitude.
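In the sequence-space form obtained after diagonalizing the forward operator, the Pinsker estimator reduces to explicit shrinkage weights. The following is a minimal numerical sketch under the idealized Gaussian sequence model with hypothetical ellipsoid coefficients `a_k`, noise level `eps`, and radius `Q`; the helioseismic setting with a dense covariance matrix requires considerably more machinery:

```python
import numpy as np

def pinsker_weights(a, eps, Q):
    """Pinsker (minimax linear) filter for the Gaussian sequence model
    y_k = theta_k + eps*xi_k over the ellipsoid sum_k a_k^2 theta_k^2 <= Q.
    Weights are w_k = (1 - kappa*a_k)_+ with kappa solving
    eps^2 * sum_k a_k*(1 - kappa*a_k)_+ = kappa*Q."""
    def gap(kappa):
        return eps**2 * np.sum(a * np.maximum(0.0, 1.0 - kappa * a)) - kappa * Q
    lo, hi = 0.0, 1.0
    while gap(hi) > 0.0:          # grow the bracket until the root is enclosed
        hi *= 2.0
    for _ in range(200):          # bisection; gap() is decreasing in kappa
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) > 0.0 else (lo, mid)
    return np.maximum(0.0, 1.0 - 0.5 * (lo + hi) * a)

# hypothetical Sobolev-type ellipsoid: a_k = k, 100 coefficients
a = np.arange(1, 101, dtype=float)
w = pinsker_weights(a, eps=0.05, Q=1.0)   # shrinks high frequencies to zero
```

The abstract's observation is that in helioseismology the required diagonalization is available through the convolution structure of the forward problem, so this otherwise expensive construction becomes practical.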
Convergence rates of general regularization methods for statistical inverse problems and applications
During the past the convergence analysis for linear statistical inverse problems has mainly focused
on spectral cut-off and Tikhonov type estimators. Spectral cut-off estimators achieve minimax rates for a broad
range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require
a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute, but still involve
the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we
introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral
methods) including the aforementioned estimators as well as many iterative methods, such as ν-methods and the
Landweber iteration. The latter estimators converge at the same rate as spectral cut-off, but only require matrix-vector
products. Our results are applied to various problems; in particular, we obtain precise convergence rates for
satellite gradiometry, L2-boosting, and errors in variable problems.
AMS subject classifications: 62G05, 62J05, 62P35, 65J10, 35R3
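To illustrate the point that iterative spectral methods need only matrix-vector products, here is a minimal Landweber sketch on an invented toy system; the operator, noise level, and stopping index are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# small ill-conditioned toy problem A x = y (setup is illustrative only)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1) ** 2            # polynomially decaying singular values
A = U @ np.diag(s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]              # "smooth" truth: low-frequency modes
y = A @ x_true + 1e-6 * rng.standard_normal(n)

# Landweber iteration: x <- x + omega * A^T (y - A x), with omega < 2 / ||A||^2.
# Each sweep costs two matrix-vector products; no decomposition or inversion.
omega = 1.0 / s[0] ** 2
x = np.zeros(n)
for _ in range(5000):                          # early stopping acts as regularization
    x = x + omega * (A.T @ (y - A @ x))

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The iteration count plays the role of the regularization parameter: stopping too late lets the small singular values amplify the noise, exactly the trade-off the convergence-rate analysis quantifies.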
Convergence rates in expectation for Tikhonov-type regularization of Inverse Problems with Poisson data
In this paper we study a Tikhonov-type method for ill-posed nonlinear
operator equations $g^\dagger = F(u^\dagger)$, where $g^\dagger$ is an integrable,
non-negative function. We assume that data are drawn from a Poisson process
with density $t g^\dagger$, where $t$ may be interpreted as an exposure time. Such
problems occur in many photonic imaging applications including positron
emission tomography, confocal fluorescence microscopy, astronomic observations,
and phase retrieval problems in optics. Our approach uses a
Kullback-Leibler-type data fidelity functional and allows for general convex
penalty terms. We prove convergence rates of the expectation of the
reconstruction error under a variational source condition as $t\to\infty$, both
for an a priori and for a Lepski{\u\i}-type parameter choice rule.
Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data
We study Newton type methods for inverse problems described by nonlinear
operator equations in Banach spaces where the Newton equations
are regularized variationally using a general
data misfit functional and a convex regularization term. This generalizes the
well-known iteratively regularized Gauss-Newton method (IRGNM). We prove
convergence and convergence rates as the noise level tends to 0 both for an a
priori stopping rule and for a Lepski{\u\i}-type a posteriori stopping rule.
Our analysis includes previous order optimal convergence rate results for the
IRGNM as special cases. The main focus of this paper is on inverse problems
with Poisson data where the natural data misfit functional is given by the
Kullback-Leibler divergence. Two examples of such problems are discussed in
detail: an inverse obstacle scattering problem with amplitude data of the
far-field pattern and a phase retrieval problem. The performance of the
proposed method for these problems is illustrated in numerical examples.
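For the classical quadratic data misfit, the regularized Newton step has a closed form. A minimal sketch on an invented two-dimensional nonlinear problem; the forward operator, noise level, and the geometric sequence of regularization parameters are illustrative assumptions:

```python
import numpy as np

# toy nonlinear forward operator F(u) = exp(B u), applied componentwise
B = np.array([[1.0, 0.5],
              [0.2, 1.0]])
def F(u):
    return np.exp(B @ u)
def Fprime(u):
    return np.exp(B @ u)[:, None] * B         # Jacobian of F at u

rng = np.random.default_rng(2)
u_true = np.array([0.5, -0.3])
y = F(u_true) + 1e-4 * rng.standard_normal(2)

# IRGNM: each step solves a Tikhonov-regularized linearization around u_n,
#   u_{n+1} = u_n + (J^T J + a_n I)^{-1} (J^T (y - F(u_n)) + a_n (u_0 - u_n)),
# with a geometrically decreasing regularization parameter a_n.
u0 = np.zeros(2)
u = u0.copy()
alpha0, q = 1.0, 0.5
for n in range(25):
    J = Fprime(u)
    alpha = alpha0 * q ** n
    lhs = J.T @ J + alpha * np.eye(2)
    rhs = J.T @ (y - F(u)) + alpha * (u0 - u)
    u = u + np.linalg.solve(lhs, rhs)
```

In the Poisson setting discussed in the paper, the quadratic misfit in each step would be replaced by the Kullback-Leibler divergence, so the inner problems become convex but no longer have this closed form.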
Attosecond electron pulse trains and quantum state reconstruction in ultrafast transmission electron microscopy
Ultrafast electron and X-ray imaging and spectroscopy are the basis for an ongoing revolution in the understanding of dynamical atomic-scale processes in matter. The underlying technology relies heavily on laser science for the generation and characterization of ever shorter pulses. Recent findings suggest that ultrafast electron microscopy with attosecond-structured wavefunctions may be feasible. However, such future technologies call for means to both prepare and fully analyse the corresponding free-electron quantum states. Here, we introduce a framework for the preparation, coherent manipulation and characterization of free-electron quantum states, experimentally demonstrating attosecond electron pulse trains. Phase-locked optical fields coherently control the electron wavefunction along the beam direction. We establish a new variant of quantum state tomography, ‘SQUIRRELS’, for free-electron ensembles. The ability to tailor and quantitatively map electron quantum states will promote the nanoscale study of electron–matter entanglement and new forms of ultrafast electron microscopy down to the attosecond regime.
The Iteratively Regularized Gau{\ss}-Newton Method with Convex Constraints and Applications in 4Pi-Microscopy
This paper is concerned with the numerical solution of nonlinear ill-posed
operator equations involving convex constraints. We study a Newton-type method
which consists in applying linear Tikhonov regularization with convex
constraints to the Newton equations in each iteration step. Convergence of this
iterative regularization method is analyzed if both the operator and the right
hand side are given with errors and all error levels tend to zero. Our study
has been motivated by the joint estimation of object and phase in 4Pi
microscopy, which leads to a semi-blind deconvolution problem with
nonnegativity constraints. The performance of the proposed algorithm is
illustrated both for simulated and for three-dimensional experimental data.
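The core building block, linear Tikhonov regularization with a nonnegativity constraint, can be rewritten as a single nonnegative least-squares problem. A sketch for an assumed linear deconvolution; the operator and parameters are illustrative, and in the Newton method this step would be applied to the linearized equations in each iteration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# illustrative linear deconvolution problem with a nonnegative unknown
n = 40
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)
u_true = np.zeros(n)
u_true[10:15] = 1.0
u_true[25:28] = 0.5
y = A @ u_true + 1e-3 * rng.standard_normal(n)

# Tikhonov with the convex constraint u >= 0, stacked into one NNLS problem:
#   min_{u>=0} ||A u - y||^2 + alpha ||u||^2
#     = min_{u>=0} ||[A; sqrt(alpha) I] u - [y; 0]||^2
alpha = 1e-4
A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
y_aug = np.concatenate([y, np.zeros(n)])
u_hat, _ = nnls(A_aug, y_aug)
```

Unconstrained Tikhonov would smear the reconstruction below zero around the jumps; the constraint removes exactly those artifacts, which is why it matters for intensities such as the object in 4Pi microscopy.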
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions are known to obtain a
regularization method, we start with an investigation of the converse question:
What could necessary conditions for a variational method to provide a
regularization method look like? To this end, we formalize the notion of a
variational scheme and start with a comparison of three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the
discrepancy functional to fulfill usual continuity assumptions. The results are
applied to discrepancy functionals given by Bregman distances and especially to
the Kullback-Leibler divergence. (Comment: to appear in Inverse Problems.)
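The connection invoked at the end, namely that the generalized Kullback-Leibler divergence is the Bregman distance generated by the negative entropy $f(x)=\sum_i x_i\log x_i - x_i$, can be checked numerically:

```python
import numpy as np

def bregman(f, grad_f, p, q):
    """Bregman distance D_f(p, q) = f(p) - f(q) - <grad f(q), p - q>."""
    return f(p) - f(q) - np.dot(grad_f(q), p - q)

# generator: negative entropy f(x) = sum_i x_i log x_i - x_i, with grad f = log
def f(x):
    return np.sum(x * np.log(x) - x)
def grad_f(x):
    return np.log(x)

def kl(p, q):
    """Generalized Kullback-Leibler divergence for positive vectors."""
    return np.sum(p * np.log(p / q) - p + q)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.3, 0.4])
d = bregman(f, grad_f, p, q)   # equals kl(p, q)
```

Expanding the Bregman definition term by term gives $\sum_i p_i\log(p_i/q_i) - p_i + q_i$, which is exactly the generalized KL divergence; for probability vectors the last two terms cancel and the classical divergence remains.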