
    Error-propagation in weakly nonlinear inverse problems

    In applications of inversion methods to real data, nonlinear inverse problems are often simplified to more easily solvable linearized inverse problems. Doing so introduces a linearization error. Nonlinear inverse methods are more accurate because they are more correct from a physical point of view. However, if data are used that have a statistical error, nonlinear inversion methods lead to a bias in the retrieved model parameters, caused by the nonlinear propagation of errors. If the bias in the estimated model parameters is larger than the linearization error, a linearized inverse problem leads to a better estimate of the model parameters. In this paper the error propagation is investigated for inversion methods that account for the nonlinearity quadratically.
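
    As a concrete illustration of this trade-off, the following Python sketch compares the noise-induced bias of an exact nonlinear inversion with the linearization error of a linearized one. The scalar forward model d = m + a*m^2 is a hypothetical assumption for illustration, not the model used in the paper.

```python
import numpy as np

# Hypothetical scalar forward model d = m + a*m^2 (illustration only).
rng = np.random.default_rng(0)
a, m_true, sigma = 0.1, 1.0, 0.2
d_clean = m_true + a * m_true**2

# Large sample of noisy data realizations.
d_noisy = d_clean + sigma * rng.standard_normal(100_000)

# Exact nonlinear inversion: solve a*m^2 + m - d = 0 for each realization.
# The estimator is a concave function of d, so the noise produces a bias.
m_nonlin = (-1.0 + np.sqrt(1.0 + 4.0 * a * d_noisy)) / (2.0 * a)

# Linearized inversion around m = 0 (d ~ m): no noise-induced bias,
# but it carries the deterministic linearization error a*m_true^2.
m_lin = d_noisy

print("nonlinear inversion, noise-induced bias:", m_nonlin.mean() - m_true)
print("linearized inversion, linearization bias:", m_lin.mean() - m_true)
```

    Whichever bias is smaller in magnitude decides, in the spirit of the abstract, which formulation gives the better parameter estimate.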

    Nonlinear estimation for linear inverse problems with error in the operator

    We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performances depend on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. Comment: Published at http://dx.doi.org/10.1214/009053607000000721 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Optimal solution error covariance in highly nonlinear problems of variational data assimilation

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem (see, e.g., [1]) to find the initial condition, boundary conditions or model parameters. The input data contain observation and background errors, hence there is an error in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can be approximated by the inverse Hessian of the cost functional of an auxiliary data assimilation problem ([2], [3]). The relationship between the optimal solution error covariance matrix and the Hessian of the auxiliary control problem is discussed for different degrees of validity of the tangent linear hypothesis. For problems with strongly nonlinear dynamics a new statistical method based on the computation of a sample of inverse Hessians is suggested. This method relies on the efficient computation of the inverse Hessian by means of iterative methods (Lanczos and quasi-Newton BFGS) with preconditioning, and it yields a sensible approximation of the posterior covariance matrix with a small sample size. Numerical examples are presented for a model governed by the Burgers equation with a nonlinear viscous term. The first author acknowledges funding through project 09-01-00284 of the Russian Foundation for Basic Research and the FCP program "Kadry".
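
    The sampling idea can be sketched in a few lines of Python. The toy observation operator h and the two-parameter control below are assumptions for illustration only (the paper uses a Burgers model); each perturbed assimilation is solved with BFGS, and the quasi-Newton inverse Hessians at the optima are averaged as a rough posterior covariance estimate.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy setup (not the Burgers model of the paper): estimate a
# 2-parameter control u from noisy observations y = h(u) + noise.
rng = np.random.default_rng(1)
u_true = np.array([1.0, -0.5])
sigma = 0.05

def h(u):
    # Mildly nonlinear observation operator (assumed for illustration).
    return np.array([u[0] + 0.3 * u[1]**2, u[1] + 0.3 * u[0] * u[1]])

def cost(u, y):
    # Data-misfit cost functional of the assimilation problem.
    r = h(u) - y
    return 0.5 * np.dot(r, r) / sigma**2

# Draw a small sample of perturbed data sets; for each, minimize the cost
# with quasi-Newton BFGS and keep the inverse Hessian at the optimum.
inv_hessians = []
for _ in range(25):
    y = h(u_true) + sigma * rng.standard_normal(2)
    res = minimize(cost, x0=np.zeros(2), args=(y,), method="BFGS")
    inv_hessians.append(res.hess_inv)

# Average of the sampled inverse Hessians as a posterior covariance estimate.
cov_estimate = np.mean(inv_hessians, axis=0)
print(cov_estimate)
```

    The BFGS inverse-Hessian approximation at the optimum can be crude for a single run; averaging over a sample is what makes the estimate usable, which is the point of the method described above.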

    Nonlinear estimation for linear inverse problems with error in the operator

    We consider nonlinear estimation methods for statistical inverse problems in the case where the operator is not exactly known. For a canonical formulation a Gaussian operator white noise framework is developed. Two different nonlinear estimators are constructed, corresponding to the two possible orderings of the linear inversion and nonlinear smoothing steps. We show that both estimators are rate-optimal over a wide range of Besov smoothness classes. The construction is based on the Galerkin projection method and wavelet thresholding schemes for the data and the operator.
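
    A minimal sketch of the "inversion first, smoothing second" ordering, assuming a toy operator that is diagonal in a wavelet coefficient domain as a stand-in for the Galerkin-projected operator; the Gaussian operator white noise model of the paper is not reproduced, and the threshold below is a crude universal-type rule rather than the paper's choice. The sketch uses the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
n = 256
t = np.linspace(0, 1, n)
f = np.where(t < 0.5, 1.0, 0.2)                  # piecewise-constant truth

# Wavelet coefficients of f, flattened to one array.
wav, lev = "db2", 4
theta, slices = pywt.coeffs_to_array(pywt.wavedec(f, wav, level=lev))

k = 1.0 / (1.0 + np.arange(theta.size))          # decaying symbol of the toy operator
eps, delta = 1e-4, 1e-4
y = k * theta + eps * rng.standard_normal(theta.size)       # noisy data
k_obs = k + delta * rng.standard_normal(theta.size)         # noisy operator

# Galerkin step: invert only on the first N coefficients, where the observed
# operator is safely bounded away from zero.
N = 128
theta_lin = np.zeros_like(theta)
theta_lin[:N] = y[:N] / k_obs[:N]

# Nonlinear smoothing step: soft-threshold, with a coefficient-wise threshold
# reflecting the noise amplification 1/k of the inversion.
thr = eps * np.sqrt(2 * np.log(n)) / k
theta_hat = np.sign(theta_lin) * np.maximum(np.abs(theta_lin) - thr, 0.0)

f_hat = pywt.waverec(pywt.array_to_coeffs(theta_hat, slices, output_format="wavedec"), wav)
print("relative error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```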

    Convergence rates for variational regularization of inverse problems in exponential families

    We consider inverse problems with statistical noise. Using regularization methods, one can approximate the true solution of the inverse problem by a regularized solution. The previous investigation of convergence rates for variational regularization with Poisson and empirical process data is shown to be suboptimal. In this thesis we obtain improved convergence rates for variational regularization methods for nonlinear ill-posed inverse problems with certain stochastic noise models described by exponential families, and we derive better reconstruction error bounds by applying deviation inequalities for stochastic processes in some function spaces. Furthermore, we consider the iteratively regularized Newton method as an alternative when the operator is nonlinear. Due to the difficulty of deriving suitable deviation inequalities for stochastic processes in these function spaces, we are currently not able to obtain optimal convergence rates for variational regularization, so we state our desired result as a conjecture; if the conjecture holds true, the desired rates follow immediately.

    Interior-point solver for convex separable block-angular problems

    Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One such approach solves the normal equations using sparse Cholesky factorizations for the block constraints and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work we present an efficient solver based on this algorithm. Some of its features are: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor-corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; and, for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to SML, a structure-conveying modelling language based on the popular AMPL modelling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using l1 and l2 distances (which are linear and quadratic problems, respectively) and the pseudo-Huber function, a nonlinear approximation to l1 which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
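
    The power series preconditioner can be sketched as a truncated Neumann series. The toy SPD matrix below stands in for the linking constraints system (an assumption; the actual solver works on block-angular normal equations), and the spectral radius of the power-series matrix is estimated by power iteration, since, as the abstract notes, it governs the quality of the preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy SPD, diagonally dominant system standing in for the linking system.
rng = np.random.default_rng(3)
n = 200
A = 0.5 / n * rng.random((n, n))
A = 0.5 * (A + A.T) + np.diag(1.0 + rng.random(n))
b = rng.standard_normal(n)

# Jacobi-style splitting: D = diag(A); power-series matrix M = I - D^{-1} A,
# so that A^{-1} = (I + M + M^2 + ...) D^{-1} whenever rho(M) < 1.
D_inv = 1.0 / np.diag(A)
M = np.eye(n) - D_inv[:, None] * A

# Estimate the spectral radius of M by power iteration (M is similar to a
# symmetric matrix here, so its eigenvalues are real).
v = rng.standard_normal(n)
for _ in range(50):
    v = M @ v
    v /= np.linalg.norm(v)
rho = abs(v @ (M @ v))
print("estimated spectral radius of the power-series matrix:", rho)

K = 4  # number of power-series terms kept in the preconditioner

def apply_prec(r):
    # Horner evaluation of (I + M + ... + M^K) D^{-1} r.
    z = D_inv * r
    y = z.copy()
    for _ in range(K):
        y = z + M @ y
    return y

prec = LinearOperator((n, n), matvec=apply_prec)
x, info = cg(A, b, M=prec)
print("PCG converged:", info == 0, "residual norm:", np.linalg.norm(A @ x - b))
```

    More terms K give a better approximation of A^{-1} and fewer PCG iterations, at the cost of extra matrix-vector products per application, which is the trade-off the estimated spectral radius helps to judge.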

    Modern Regularization Methods for Inverse Problems

    Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses. In the last two decades interest has shifted from linear to nonlinear regularization methods, even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this shift towards modern nonlinear regularization methods, including their analysis, applications and issues for future research. In particular we will discuss variational methods and techniques derived from them, since they have attracted much recent interest and link to other fields, such as image processing and compressed sensing. We further point to developments related to statistical inverse problems, multiscale decompositions and learning theory.
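
    As a minimal example of the shift the paper surveys, namely a nonlinear regularization method applied to a linear inverse problem, the following sketch solves the sparsity-promoting variational problem min_x 0.5*||Ax - y||^2 + alpha*||x||_1 with ISTA (proximal gradient descent); the random matrix and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sparse variational regularization of a toy linear inverse problem,
# solved by ISTA: a nonlinear method for a linear problem.
rng = np.random.default_rng(4)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)

alpha = 0.02
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
    z = x - grad / L                   # gradient step
    # Soft thresholding: proximal map of (alpha/L)*||.||_1.
    x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)

print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    The soft-thresholding step is the nonlinear ingredient; replacing it with the identity recovers plain (linear) Landweber iteration, which illustrates the linear-to-nonlinear shift discussed above.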