
    Regularization of statistical inverse problems and the Bakushinskii veto

    In the deterministic context, Bakushinskii's theorem excludes the existence of purely data-driven convergent regularization for ill-posed problems. In the present work we prove that in the statistical setting we can either construct a counterexample or develop an equivalent formulation, depending on the considered class of probability distributions. Hence Bakushinskii's theorem does not generalize to the statistical context, although this has often been assumed in the past. To arrive at this conclusion, we deduce from the classical theory new concepts for a general study of statistical inverse problems and perform a systematic clarification of the key ideas of statistical regularization. Comment: 20 pages
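
    The deterministic statement behind the veto can be sketched in standard notation (a paraphrase, not the paper's wording):

        For an ill-posed equation $Tf = g$ with noisy data $g^\delta$, $\|g^\delta - g\| \le \delta$,
        a family $(R_\alpha)_{\alpha>0}$ with parameter choice rule $\alpha = \alpha(\delta, g^\delta)$ is a
        convergent regularization if $R_{\alpha(\delta, g^\delta)}\, g^\delta \to T^\dagger g$ as $\delta \to 0$.
        Bakushinskii's veto: if a purely data-driven rule $\alpha = \alpha(g^\delta)$ yields
        convergence for every attainable $g$, then $T^\dagger$ is continuous, i.e. the problem
        was not ill-posed to begin with.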

    Convergence rates of general regularization methods for statistical inverse problems and applications

    In the past, the convergence analysis for linear statistical inverse problems has mainly focused on spectral cut-off and Tikhonov-type estimators. Spectral cut-off estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute, but still involve the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral methods), including the aforementioned estimators as well as many iterative methods, such as ν-methods and the Landweber iteration. The latter estimators converge at the same rate as spectral cut-off, but require only matrix-vector products. Our results are applied to various problems; in particular we obtain precise convergence rates for satellite gradiometry, L2-boosting, and errors-in-variables problems. Keywords: Statistical inverse problems, iterative regularization methods, Tikhonov regularization, nonparametric regression, minimax convergence rates, satellite gradiometry, Hilbert scales, boosting, errors-in-variables
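
    A minimal numpy sketch of the Landweber iteration mentioned above, which needs only matrix-vector products with the forward operator (function name, default step size, and dimensions are illustrative assumptions, not taken from the paper):

        import numpy as np

        def landweber(A, y, n_iter, omega=None):
            """Landweber iteration x_{k+1} = x_k + omega * A.T @ (y - A @ x_k).

            Uses only products with A and A.T; the number of iterations
            n_iter plays the role of the regularization parameter.
            """
            if omega is None:
                # Convergence requires 0 < omega < 2 / ||A||_2^2.
                omega = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + omega * (A.T @ (y - A @ x))
            return x

    Early stopping acts as the regularization here; the point of such iterations is that they can attain the spectral cut-off rates without any spectral decomposition.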

    Pinsker estimators for local helioseismology

    A major goal of helioseismology is the three-dimensional reconstruction of the three velocity components of convective flows in the solar interior from sets of wave travel-time measurements. For small-amplitude flows, the forward problem is described in good approximation by a large system of convolution equations. The input observations are highly noisy random vectors with a known dense covariance matrix. This leads to a large statistical linear inverse problem. Whereas for deterministic linear inverse problems several computationally efficient minimax-optimal regularization methods exist, only one minimax-optimal linear estimator exists for statistical linear inverse problems: the Pinsker estimator. However, it is often computationally inefficient because it requires a singular value decomposition of the forward operator, or it is not applicable because of an unknown noise covariance matrix, so it is rarely used for real-world problems. These limitations do not apply in helioseismology. We present a simplified proof of the optimality properties of the Pinsker estimator and show that it yields significantly better reconstructions than the traditional inversion methods used in helioseismology, i.e., Regularized Least Squares (Tikhonov regularization) and SOLA (approximate inverse) methods. Moreover, we discuss the incorporation of the mass conservation constraint into the Pinsker scheme using staggered grids. With this improvement we can reconstruct not only horizontal but also vertical velocity components, which are much smaller in amplitude.
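
    In the standard sequence-space formulation (a common-notation sketch, not taken from the paper), the Pinsker estimator is the following linear shrinkage rule:

        Given SVD coordinates $y_k = b_k \theta_k + \varepsilon \xi_k$ with white noise $\xi_k$ and the
        ellipsoid $\Theta = \{\theta : \sum_k a_k^2 \theta_k^2 \le Q\}$,
            $\hat\theta_k = (1 - \kappa a_k)_+ \, y_k / b_k$,
        where $\kappa > 0$ solves $\varepsilon^2 \sum_k (a_k / b_k^2)(1 - \kappa a_k)_+ = \kappa Q$.
        This rule is exactly minimax among linear estimators over $\Theta$ and, under standard
        conditions, asymptotically minimax among all estimators.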

    Convergence rates for variational regularization of inverse problems in exponential families

    We consider inverse problems with statistical noise. By using regularization methods one can approximate the true solution of the inverse problem by a regularized solution. The previous investigation of convergence rates for variational regularization with Poisson and empirical process data is shown to be suboptimal. In this thesis we obtain improved convergence rates for variational regularization methods for nonlinear ill-posed inverse problems with certain stochastic noise models described by exponential families, and we derive better reconstruction error bounds by applying deviation inequalities for stochastic processes in suitable function spaces. Furthermore, we consider the iteratively regularized Newton method as an alternative when the operator is nonlinear. Owing to the difficulty of deriving suitable deviation inequalities for stochastic processes in these function spaces, we are currently not able to obtain optimal convergence rates for variational regularization, so we state the desired result as a conjecture. If the conjecture holds true, the desired results follow immediately.
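
    The variational (generalized Tikhonov) estimators studied in this setting have the generic form sketched below (standard notation, our paraphrase):

        $\hat f_\alpha \in \operatorname{argmin}_{f} \; \mathcal{S}(Y; F(f)) + \alpha \mathcal{R}(f)$,
        where $F$ is the nonlinear forward operator, $\mathcal{R}$ a convex penalty, $\alpha > 0$ the
        regularization parameter, and $\mathcal{S}$ a data-fidelity term matched to the noise model,
        e.g. the negative log-likelihood of the exponential family (a Kullback-Leibler-type
        divergence in the Poisson case).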

    IDENTIFICATION AND ESTIMATION OF NONPARAMETRIC STRUCTURAL MODELS

    This paper concerns a new statistical approach to the instrumental variables (IV) method for nonparametric structural models with additive errors. A general identifying condition for the model is proposed, based on richness of the space generated by marginal discretizations of joint density functions. For consistent estimation, we develop statistical regularization theory to solve a random Fredholm integral equation of the first kind. A minimal set of conditions is given for consistency of a general regularization method. Using an abstract smoothness condition, we derive some optimal bounds, given the accuracies of preliminary estimates, and show the convergence rates of various regularization methods, including the (ordinary/iterated/generalized) Tikhonov and Showalter methods. An application of the general regularization theory is discussed with a focus on a kernel smoothing method. We show an exact closed form, as well as the optimal convergence rate, of the kernel IV estimates for various regularization methods. The finite-sample properties of the estimates are investigated via a small-scale Monte Carlo experiment. Keywords: Nonparametric structural models, IV estimation, statistical inverse problems
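
    The estimation step solves an equation of the following form (standard nonparametric IV notation, sketched as a paraphrase):

        The additive-error model $Y = h(X) + U$ with $E[U \mid Z] = 0$ leads to the Fredholm
        integral equation of the first kind
            $r(z) := E[Y \mid Z = z] = \int h(x)\, f_{X\mid Z}(x \mid z)\, dx =: (Th)(z)$,
        to be solved for $h$ from estimates $\hat r$ and $\hat T$. Ordinary Tikhonov
        regularization, for instance, takes $\hat h_\alpha = (\alpha I + \hat T^* \hat T)^{-1} \hat T^* \hat r$.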

    Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation

    Improved estimation of hydrometeorological states from down-sampled observations and background model forecasts in a noisy environment has been a subject of growing research in the past decades. Here, we introduce a unified framework that ties together the problems of downscaling, data fusion and data assimilation as ill-posed inverse problems. This framework seeks solutions beyond the classic least squares estimation paradigms by imposing proper regularization constraints, consistent with the degree of smoothness and the probabilistic structure of the underlying state. We review relevant regularization methods in derivative space and extend classic formulations of the aforementioned problems, with particular emphasis on hydrologic and atmospheric applications. Informed by the statistical characteristics of the state variable of interest, the central results of the paper suggest that proper regularization can lead to a more accurate and stable recovery of the true state and hence more skillful forecasts. In particular, using Tikhonov and Huber regularization in derivative space, the promise of the proposed framework is demonstrated in static downscaling and fusion of synthetic multi-sensor precipitation data, while a data assimilation numerical experiment is presented using the heat equation in a variational setting.
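
    A minimal numpy sketch of Tikhonov regularization in derivative space for a downscaling-type problem y = H x + noise (operator, dimensions, and parameter names are illustrative assumptions, not the paper's code):

        import numpy as np

        def tikhonov_derivative(H, y, lam):
            """Solve min_x ||H x - y||^2 + lam * ||D x||^2, D = first differences.

            H is the known down-sampling operator; lam > 0 controls the
            smoothness imposed on the recovered state x.
            """
            n = H.shape[1]
            # First-order difference operator penalizing roughness of x.
            D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
            return np.linalg.solve(H.T @ H + lam * (D.T @ D), H.T @ y)

    The Huber variant mentioned in the abstract replaces the quadratic penalty on D x with a robust loss that preserves sharp gradients, at the cost of an iterative solver.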

    Nonlinear estimation for linear inverse problems with error in the operator

    We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performances depend on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. Comment: Published at http://dx.doi.org/10.1214/009053607000000721 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
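
    The thresholding ingredient of such constructions can be sketched as follows (a minimal numpy version of soft thresholding; the wavelet transform, the Galerkin projection, and the sparsity-adaptive threshold choice are omitted):

        import numpy as np

        def soft_threshold(coeffs, t):
            """Shrink wavelet coefficients toward zero by the threshold t,
            setting those with magnitude below t exactly to zero."""
            return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)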