
    Hyper-differential sensitivity analysis with respect to model discrepancy: Mathematics and computation

    Model discrepancy, defined as the difference between model predictions and reality, is ubiquitous in computational models for physical systems. It is common to derive partial differential equations (PDEs) from first-principles physics, but make simplifying assumptions to produce tractable expressions for the governing equations or closure models. These PDEs are then used for analysis and design to achieve desirable performance. For instance, the end goal may be to solve a PDE-constrained optimization (PDECO) problem. This article considers the sensitivity of PDECO problems with respect to model discrepancy. We introduce a general representation of the discrepancy and apply post-optimality sensitivity analysis to derive an expression for the sensitivity of the optimal solution with respect to the discrepancy. An efficient algorithm is presented which combines the PDE discretization, post-optimality sensitivity operator, adjoint-based derivatives, and a randomized generalized singular value decomposition to enable scalable computation. Kronecker product structure in the underlying linear algebra and corresponding infrastructure in PDECO are exploited to yield a general-purpose algorithm which is computationally efficient and portable across a range of applications. Known physics and problem-specific characteristics of the discrepancy are imposed through user-specified weighting matrices. We demonstrate our proposed framework on two nonlinear PDECO problems to highlight its computational efficiency and rich insight.
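
    As a point of reference, the post-optimality sensitivity invoked above follows the standard implicit-function-theorem argument. A minimal sketch in generic reduced-space notation (the symbols here are illustrative, not the paper's):

        \min_{z}\; F(z,\delta), \qquad
        \nabla_{z} F\bigl(z^{*}(\delta),\delta\bigr) = 0
        \;\;\Longrightarrow\;\;
        \frac{dz^{*}}{d\delta} \;=\; -\bigl(\nabla_{zz} F\bigr)^{-1}\,\nabla_{z\delta} F,

    where F is the reduced PDECO objective after eliminating the state through the governing equations and \delta parameterizes the discrepancy. The randomized generalized SVD and the Kronecker product structure described in the abstract are what make the action of this operator tractable when \delta is high-dimensional.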

    Hyper-differential sensitivity analysis with respect to model discrepancy: Optimal solution updating

    A common goal throughout science and engineering is to solve optimization problems constrained by computational models. However, in many cases a high-fidelity numerical model of the system cannot be optimized directly because code complexity and computational cost prohibit the use of intrusive, many-query algorithms. Rather, lower-fidelity models are constructed to enable intrusive algorithms for large-scale optimization. As a result of the discrepancy between high- and low-fidelity models, optimal solutions determined using low-fidelity models are frequently far from true optimality. In this article we introduce a novel approach that uses post-optimality sensitivities with respect to model discrepancy to update the optimization solution. Limited high-fidelity data are used to calibrate the model discrepancy in a Bayesian framework, which in turn is propagated through post-optimality sensitivities of the low-fidelity optimization problem. Our formulation exploits structure in the post-optimality sensitivity operator to achieve computational scalability. Numerical results demonstrate how an optimal solution computed using a low-fidelity model may be significantly improved with limited evaluations of a high-fidelity model.
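
    To make the updating step concrete, the following Python sketch shows one plausible realization of the idea under a linear-Gaussian calibration assumption; the names (G, S, delta_map) are hypothetical and not taken from the paper.

        import numpy as np

        def calibrate_discrepancy(G, y_mismatch, C_prior, C_noise):
            # Linear-Gaussian posterior mean of the discrepancy parameters,
            # calibrated from a few high-fidelity evaluations
            # (y_mismatch = high-fidelity output minus low-fidelity output).
            A = G @ C_prior @ G.T + C_noise
            return C_prior @ G.T @ np.linalg.solve(A, y_mismatch)

        def update_optimum(z_lf, S, delta_map):
            # First-order correction of the low-fidelity optimal solution:
            # z_updated ~ z_lf + (post-optimality sensitivity) @ (calibrated discrepancy).
            return z_lf + S @ delta_map

    Here S plays the role of the post-optimality sensitivity operator of the low-fidelity problem; in practice it would be applied matrix-free, which is where the structure exploited in the paper would enter.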

    Enabling and interpreting hyper-differential sensitivity analysis for Bayesian inverse problems

    Inverse problems constrained by partial differential equations (PDEs) play a critical role in model development and calibration. In many applications, there are multiple uncertain parameters in a model which must be estimated. Although the Bayesian formulation is attractive for such problems, computational cost and high dimensionality frequently prohibit a thorough exploration of the parametric uncertainty. A common approach is to reduce the dimension by fixing some parameters (which we will call auxiliary parameters) to a best estimate and using techniques from PDE-constrained optimization to approximate properties of the Bayesian posterior distribution. For instance, the maximum a posteriori probability (MAP) point and the Laplace approximation of the posterior covariance can be computed. In this article, we propose using hyper-differential sensitivity analysis (HDSA) to assess the sensitivity of the MAP point to changes in the auxiliary parameters. We establish an interpretation of HDSA as correlations in the posterior distribution. Foundational assumptions for HDSA require satisfaction of the optimality conditions, which are not always feasible or appropriate as a result of ill-posedness in the inverse problem. We introduce novel theoretical and computational approaches to justify and enable HDSA for ill-posed inverse problems by projecting the sensitivities onto likelihood-informed subspaces and defining a posteriori updates. Our proposed framework is demonstrated on a nonlinear multi-physics inverse problem motivated by estimation of spatially heterogeneous material properties in the presence of spatially distributed parametric modeling uncertainties.
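
    One plausible way to realize the likelihood-informed projection mentioned above is via the generalized eigenproblem between the data-misfit Hessian and the prior precision; the sketch below is an assumption-laden illustration, not the authors' implementation.

        import numpy as np
        from scipy.linalg import eigh

        def lis_projector(H_misfit, Gamma_prior_inv, tol=1.0):
            # Generalized eigenproblem  H_misfit v = lam * Gamma_prior_inv v.
            lam, V = eigh(H_misfit, Gamma_prior_inv)
            V_r = V[:, lam > tol]          # keep data-informed directions
            # Oblique projector onto the likelihood-informed subspace.
            return V_r @ V_r.T @ Gamma_prior_inv

        # Project the MAP-point sensitivities before interpreting them:
        # S_projected = lis_projector(H_misfit, Gamma_prior_inv) @ S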

    Estimating and using information in inverse problems

    In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places. Although referenced in many publications, the "information" that is invoked in such contexts is not a well-understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate its usefulness in one of these areas -- how to choose the discretization mesh for the function to be reconstructed -- using numerical experiments.
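
    As an illustration of how such an information density might drive mesh choice, the toy sketch below flags the cells whose coefficients are most constrained by the data; the variance ratio and refinement fraction are assumptions made here for illustration, not the authors' definition.

        import numpy as np

        def information_density(posterior_variance, prior_variance):
            # Larger value: the data reduced this coefficient's variance more.
            return np.log(prior_variance / posterior_variance)

        def cells_to_refine(density, fraction=0.2):
            # Refine the most information-dense fraction of cells.
            cutoff = np.quantile(density, 1.0 - fraction)
            return np.where(density >= cutoff)[0]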

    A Variational Finite Element Method for Source Inversion for Convective-Diffusive Transport

    We consider the inverse problem of determining an arbitrary source in a time-dependent convective-diffusive transport equation, given a velocity field and pointwise measurements of the concentration. Applications that give rise to such problems include determination of groundwater or airborne pollutant sources from measurements of concentrations, and identification of sources of chemical or biological attacks. To address the ill-posedness of the problem, we employ Tikhonov and total variation regularization. We present a variational formulation of the first-order optimality system, which includes the initial-boundary value state problem, the final-boundary value adjoint problem, and the space-time boundary value source problem. We discretize in the space-time volume using Galerkin finite elements. Several examples demonstrate the influence of the density of the sensor array, the effectiveness of total variation regularization for discontinuous sources, the invertibility of the source as the transport becomes increasingly convection-dominated, the ability of the space-time inversion formulation to track moving sources, and the optimal convergence rate of the finite element approximation.
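
    In generic form (notation assumed here rather than taken from the paper), the regularized source-inversion problem described above reads

        \min_{f}\;
        \frac{1}{2}\sum_{i}\int_{0}^{T}\bigl(u(x_{i},t)-d_{i}(t)\bigr)^{2}\,dt
        \;+\;\frac{\alpha}{2}\,\|f\|_{L^{2}}^{2}
        \;+\;\beta\int_{0}^{T}\!\!\int_{\Omega}|\nabla f|\,dx\,dt
        \quad\text{s.t.}\quad
        u_{t} + v\cdot\nabla u - \kappa\,\Delta u = f,

    with appropriate initial and boundary conditions, where \alpha and \beta weight the Tikhonov and total variation terms. Setting the first variation of the Lagrangian to zero yields the state, adjoint, and source equations referred to in the abstract, which are then discretized together in the space-time volume.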

    Inversion of Airborne Contaminants in a Regional Model

    We are interested in the DDDAS problem of localizing airborne contaminant releases in regional atmospheric transport models from sparse observations. Given measurements of the contaminant over an observation window at a small number of points in space, and a velocity field as predicted, for example, by a mesoscale weather model, we seek an estimate of the state of the contaminant at the beginning of the observation interval that minimizes the least squares misfit between measured and predicted contaminant fields, subject to the convection-diffusion equation for the contaminant. Once the initial conditions are estimated by solution of the inverse problem, we issue predictions of the evolution of the contaminant, the observation window is advanced in time, and the process is repeated to issue a new prediction, in the style of 4D-Var. We design an appropriate numerical strategy that exploits the spectral structure of the inverse operator and leads to efficient and accurate resolution of the inverse problem. Numerical experiments verify that high-resolution inversion can be carried out rapidly for a well-resolved terrain model of the greater Los Angeles area.
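
    A matrix-free sketch of the core inverse solve in such a 4D-Var style formulation is given below; forward and adjoint are placeholders for the linearized observation operator of the convection-diffusion model and its adjoint, and the Tikhonov weight beta is an assumption, not the paper's regularization.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def invert_initial_condition(forward, adjoint, data, beta, n):
            # Solve the regularized normal equations (F*F + beta I) u0 = F* d
            # with conjugate gradients, never forming the operator explicitly.
            def hessian_matvec(v):
                return adjoint(forward(v)) + beta * v
            H = LinearOperator((n, n), matvec=hessian_matvec)
            u0, info = cg(H, adjoint(data))
            return u0

    The spectral structure mentioned in the abstract, typically a rapidly decaying spectrum of the data-misfit operator for such transport problems, is what keeps the number of CG iterations, and hence of forward and adjoint solves, small.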