
    Likelihood-informed dimension reduction for nonlinear inverse problems

    The intrinsic dimensionality of an inverse problem is affected by prior information, the accuracy and number of observations, and the smoothing properties of the forward operator. From a Bayesian perspective, changes from the prior to the posterior may, in many problems, be confined to a relatively low-dimensional subspace of the parameter space. We present a dimension reduction approach that defines and identifies such a subspace, called the "likelihood-informed subspace" (LIS), by characterizing the relative influences of the prior and the likelihood over the support of the posterior distribution. This identification enables new and more efficient computational methods for Bayesian inference with nonlinear forward models and Gaussian priors. In particular, we approximate the posterior distribution as the product of a lower-dimensional posterior defined on the LIS and the prior distribution marginalized onto the complementary subspace. Markov chain Monte Carlo sampling can then proceed in lower dimensions, with significant gains in computational efficiency. We also introduce a Rao-Blackwellization strategy that de-randomizes Monte Carlo estimates of posterior expectations for additional variance reduction. We demonstrate the efficiency of our methods using two numerical examples: inference of permeability in a groundwater system governed by an elliptic PDE, and an atmospheric remote sensing problem based on Global Ozone Monitoring by Occultation of Stars (GOMOS) observations.
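
    A minimal sketch of the subspace-identification idea, assuming a linearized forward model G, Gaussian noise covariance Gamma_obs, prior covariance Gamma_pr, and a threshold tau (all illustrative names, not the authors' exact procedure): the eigenvalues of the prior-preconditioned Gauss-Newton Hessian indicate where the likelihood dominates the prior, and the directions above the threshold span a candidate likelihood-informed subspace. In the nonlinear setting described above, this Hessian information would be accumulated over the support of the posterior rather than taken from a single linearization.

```python
# Hedged sketch of LIS identification for a linearized Gaussian problem
#   y = G x + e,  e ~ N(0, Gamma_obs),  x ~ N(0, Gamma_pr).
import numpy as np

def likelihood_informed_subspace(G, Gamma_obs, Gamma_pr, tau=0.1):
    """Basis for parameter directions where the (linearized) likelihood
    dominates the prior: generalized eigenvalues above the threshold tau."""
    L_pr = np.linalg.cholesky(Gamma_pr)              # prior square root
    H = G.T @ np.linalg.solve(Gamma_obs, G)          # Gauss-Newton Hessian of -log likelihood
    Hhat = L_pr.T @ H @ L_pr                         # prior-preconditioned Hessian
    lam, V = np.linalg.eigh(Hhat)
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    r = int(np.sum(lam > tau))                       # retained LIS dimension
    return L_pr @ V[:, :r], lam[:r]                  # basis mapped back to parameter space

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 50, 20
    G = rng.standard_normal((m, n)) @ np.diag(np.exp(-0.3 * np.arange(n)))  # smoothing-type operator
    Gamma_obs, Gamma_pr = 0.1 * np.eye(m), np.eye(n)
    basis, lam = likelihood_informed_subspace(G, Gamma_obs, Gamma_pr)
    print("LIS dimension:", basis.shape[1], "leading eigenvalue:", lam[0])
```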

    Optimal low-rank approximations of Bayesian linear inverse problems

    In the Bayesian approach to inverse problems, data are often informative, relative to the prior, only on a low-dimensional subspace of the parameter space. Significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters. We first investigate approximation of the posterior covariance matrix as a low-rank update of the prior covariance matrix. We prove optimality of a particular update, based on the leading eigendirections of the matrix pencil defined by the Hessian of the negative log-likelihood and the prior precision, for a broad class of loss functions. This class includes the Förstner metric for symmetric positive definite matrices, as well as the Kullback-Leibler divergence and the Hellinger distance between the associated distributions. We also propose two fast approximations of the posterior mean and prove their optimality with respect to a weighted Bayes risk under squared-error loss. These approximations are deployed in an offline-online manner, where a more costly but data-independent offline calculation is followed by fast online evaluations. As a result, these approximations are particularly useful when repeated posterior mean evaluations are required for multiple data sets. We demonstrate our theoretical results with several numerical examples, including high-dimensional X-ray tomography and an inverse heat conduction problem. In both of these examples, the intrinsic low-dimensional structure of the inference problem can be exploited while producing results that are essentially indistinguishable from solutions computed in the full space.
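
    In the linear-Gaussian case the optimal low-rank update has a simple closed form: subtract from the prior covariance a rank-r term built from the leading generalized eigenpairs of the pencil formed by the Hessian of the negative log-likelihood and the prior precision. A hedged sketch of that formula on an illustrative random test problem; the names G, Gamma_obs, Gamma_pr and the dimensions are assumptions.

```python
# Hedged sketch: rank-r approximation of Gamma_pos = (H + Gamma_pr^{-1})^{-1},
# with H = G^T Gamma_obs^{-1} G, as a negative update of the prior covariance.
import numpy as np

def low_rank_posterior_cov(G, Gamma_obs, Gamma_pr, r):
    S = np.linalg.cholesky(Gamma_pr)                 # prior square root
    H = G.T @ np.linalg.solve(Gamma_obs, G)          # Hessian of the negative log-likelihood
    delta, V = np.linalg.eigh(S.T @ H @ S)           # pencil eigenpairs in whitened coordinates
    order = np.argsort(delta)[::-1][:r]              # keep the r leading directions
    delta, V = delta[order], V[:, order]
    W = S @ V                                        # generalized eigenvectors
    return Gamma_pr - W @ np.diag(delta / (1.0 + delta)) @ W.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, r = 40, 15, 10
    G = rng.standard_normal((m, n))
    Gamma_obs, Gamma_pr = 0.5 * np.eye(m), np.eye(n)
    approx = low_rank_posterior_cov(G, Gamma_obs, Gamma_pr, r)
    exact = np.linalg.inv(G.T @ np.linalg.solve(Gamma_obs, G) + np.linalg.inv(Gamma_pr))
    print("relative Frobenius error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

    The update is subtracted from the prior because the data can only reduce variance; the factor delta/(1+delta) damps directions where the likelihood is weak relative to the prior.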

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. Then this subspace is used to reduce the computational complexity of various filtering algorithms - including the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
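
    A rough sketch of the offline-online idea for a linear Kalman filter: offline, build a basis from snapshots of the dynamics (here, plain POD via the SVD); online, project the model and observation operators onto that basis and run the filter in the reduced coordinates. This is a generic Galerkin-projected filter for illustration, not necessarily the paper's exact subspace-constrained prediction-and-update procedure, and all operators and dimensions are made up.

```python
# Hedged sketch: snapshot-based (POD) dimension reduction for a linear Kalman filter.
import numpy as np

def pod_basis(snapshots, r):
    """Offline phase: leading r left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduced_kalman_step(a, P, y, A_r, H_r, Q_r, R):
    """Online phase: one predict/update step in the reduced coordinates."""
    a_pred = A_r @ a
    P_pred = A_r @ P @ A_r.T + Q_r
    S = H_r @ P_pred @ H_r.T + R
    K = P_pred @ H_r.T @ np.linalg.inv(S)
    a_new = a_pred + K @ (y - H_r @ a_pred)
    P_new = (np.eye(a.size) - K @ H_r) @ P_pred
    return a_new, P_new

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, m, r, T = 200, 10, 5, 50
    A = np.eye(n) + 0.01 * rng.standard_normal((n, n))    # toy linear dynamics
    H = rng.standard_normal((m, n))                        # toy observation operator
    # Offline: snapshots collected from a few simulated trajectories.
    snaps = []
    for _ in range(5):
        x = rng.standard_normal(n)
        for _ in range(20):
            x = A @ x
            snaps.append(x)
    Phi = pod_basis(np.column_stack(snaps), r)
    A_r, H_r = Phi.T @ A @ Phi, H @ Phi                    # projected operators
    Q_r, R = 1e-4 * np.eye(r), 0.1 * np.eye(m)
    # Online: filter a simulated trajectory in the r-dimensional subspace.
    a, P, x_true = np.zeros(r), np.eye(r), rng.standard_normal(n)
    for _ in range(T):
        x_true = A @ x_true
        y = H @ x_true + 0.1 * rng.standard_normal(m)      # simulated measurements
        a, P = reduced_kalman_step(a, P, y, A_r, H_r, Q_r, R)
    err = np.linalg.norm(Phi @ a - x_true) / np.linalg.norm(x_true)
    print("relative state error of the reduced filter:", err)
```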

    Undersampled Dynamic X-Ray Tomography With Dimension Reduction Kalman Filter

    In this paper, we propose a prior-based dimension reduction Kalman filter for undersampled dynamic X-ray tomography. With this method, the X-ray reconstructions are parameterized by a low-dimensional basis, so the proposed method is computationally very light and extremely robust, as all computations can be carried out explicitly. With real and simulated measurement data, we show that the method provides accurate reconstructions even with a very limited number of angular directions.
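
    For the prior-based variant described here, the low-dimensional basis can be taken from the leading eigenvectors of the prior covariance (a truncated Karhunen-Loève expansion), and the Kalman update then acts on the basis coefficients of the reconstruction rather than on the full image. A small illustrative sketch only; the exponential-covariance prior and the dimensions are assumptions.

```python
# Hedged sketch: a prior-derived reduced basis for the filter coefficients.
import numpy as np

def prior_basis(Gamma_pr, r):
    """Leading r eigenvectors of the prior covariance (truncated KL basis)."""
    vals, vecs = np.linalg.eigh(Gamma_pr)
    order = np.argsort(vals)[::-1][:r]
    return vecs[:, order]

if __name__ == "__main__":
    n, r = 100, 8
    idx = np.arange(n)
    Gamma_pr = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)  # smooth illustrative prior
    Phi = prior_basis(Gamma_pr, r)
    print("reduced basis shape:", Phi.shape)   # reconstructions are x = Phi @ c
```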

    Stabilized BFGS approximate Kalman filter

    The Kalman filter (KF) and extended Kalman filter (EKF) are well-known tools for assimilating data and model predictions. The filters require storage and multiplication of n × n and n × m matrices and inversion of m × m matrices, where n is the dimension of the state space and m is the dimension of the observation space. Implementation of the KF or EKF therefore becomes impractical as these dimensions grow. Earlier works provide optimization-based, low-memory approximations that enable filtering in high dimensions. However, these versions ignore numerical issues that degrade the performance of the approximations: accumulating errors may cause the covariance approximations to lose non-negative definiteness, and approximate inversion of large, close-to-singular covariances becomes troublesome. Here we introduce a formulation that avoids these problems. We employ the L-BFGS formula to obtain low-memory representations of the large matrices that appear in the EKF, but inject a stabilizing correction to ensure that the resulting approximate representations remain non-negative definite. The correction applies to any symmetric covariance approximation and can be seen as a generalization of the Joseph covariance update. We prove that the stabilizing correction improves the convergence rate of the covariance approximations. Moreover, we generalize the idea by means of Newton-Schulz matrix inversion formulae, which allows them and their generalizations to be employed as stabilizing corrections.
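
    A concrete reference point for the stabilizing correction is the Joseph-form covariance update it generalizes: unlike the standard update P = (I - K H) P_pred, the Joseph form stays symmetric and non-negative definite even when the gain K is only approximate. A short sketch under that reading; the perturbed gain and the dimensions are illustrative.

```python
# Hedged sketch: Joseph-form covariance update with an (intentionally) approximate gain.
import numpy as np

def joseph_update(P_pred, K, H, R):
    """P = (I - K H) P_pred (I - K H)^T + K R K^T, non-negative definite for any K."""
    A = np.eye(P_pred.shape[0]) - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, m = 20, 5
    P_pred = np.eye(n)
    H = rng.standard_normal((m, n))
    R = 0.1 * np.eye(m)
    K_exact = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    K_approx = K_exact + 0.05 * rng.standard_normal(K_exact.shape)   # perturbed gain
    P = joseph_update(P_pred, K_approx, H, R)
    print("min eigenvalue of updated covariance:", np.linalg.eigvalsh(P).min())
```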

    Randomize-Then-Optimize: A Method for Sampling from Posterior Distributions in Nonlinear Inverse Problems

    High-dimensional inverse problems present a challenge for Markov chain Monte Carlo (MCMC)-type sampling schemes. Typically, they rely on finding an efficient proposal distribution, which can be difficult for large-scale problems, even with adaptive approaches. Moreover, the autocorrelations of the samples typically increase with dimension, which leads to the need for long sample chains. We present an alternative method for sampling from posterior distributions in nonlinear inverse problems, when the measurement error and prior are both Gaussian. The approach computes a candidate sample by solving a stochastic optimization problem. In the linear case, these samples are directly from the posterior density, but this is not so in the nonlinear case. We derive the form of the sample density in the nonlinear case, and then show how to use it within both a Metropolis-Hastings and importance sampling framework to obtain samples from the posterior distribution of the parameters. We demonstrate, with various small- and medium-scale problems, that randomize-then-optimize can be efficient compared to standard adaptive MCMC algorithms.
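
    A hedged sketch of the candidate-generation step only: after whitening the prior and noise, each candidate minimizes a randomly perturbed least-squares objective projected onto the Q factor of the stacked Jacobian at the MAP point. The Metropolis-Hastings (or importance-sampling) correction based on the derived sample density, which is what makes the nonlinear samples exact, is omitted here, and the forward model, its Jacobian, and the data are made-up examples.

```python
# Hedged sketch: randomize-then-optimize proposals for a whitened problem
#   y = f(x) + e, e ~ N(0, I_m),  prior x ~ N(0, I_n).
import numpy as np
from scipy.optimize import least_squares

def stacked_residual(x, f, y):
    """Residual whose squared norm is twice the negative log-posterior."""
    return np.concatenate([f(x) - y, x])

def stacked_jacobian(x, jac_f):
    return np.vstack([jac_f(x), np.eye(x.size)])

def rto_candidate(f, jac_f, y, Q, x0, rng):
    """Minimize || Q^T (stacked_residual(x) - eps) ||^2 for a fresh Gaussian eps."""
    eps = rng.standard_normal(Q.shape[0])
    return least_squares(lambda x: Q.T @ (stacked_residual(x, f, y) - eps),
                         x0,
                         jac=lambda x: Q.T @ stacked_jacobian(x, jac_f)).x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    f = lambda x: np.array([x[0] + 0.2 * x[1] ** 2, x[1] - 0.1 * x[0] ** 2])   # toy nonlinear model
    jac_f = lambda x: np.array([[1.0, 0.4 * x[1]], [-0.2 * x[0], 1.0]])
    y = np.array([1.0, 0.5])
    x_map = least_squares(lambda x: stacked_residual(x, f, y), np.zeros(2),
                          jac=lambda x: stacked_jacobian(x, jac_f)).x
    Q, _ = np.linalg.qr(stacked_jacobian(x_map, jac_f))    # thin QR at the MAP point
    candidates = np.array([rto_candidate(f, jac_f, y, Q, x_map, rng) for _ in range(200)])
    print("candidate sample mean:", candidates.mean(axis=0))
```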