5 research outputs found

    Tensor train construction from tensor actions, with application to compression of large high order derivative tensors

    We present a method for converting tensors into tensor train format based on actions of the tensor as a vector-valued multilinear function. Existing methods for constructing tensor trains require access to "array entries" of the tensor and are therefore inefficient or computationally prohibitive if the tensor is accessible only through its action, especially for high order tensors. Our method permits efficient tensor train compression of large high order derivative tensors for nonlinear mappings that are implicitly defined through the solution of a system of equations. Array entries of these derivative tensors are not directly accessible, but actions of these tensors can be computed efficiently via a procedure that we discuss. Such tensors are often amenable to tensor train compression in theory, but until now no efficient algorithm existed to convert them into tensor train format. We demonstrate our method by compressing a Hilbert tensor of size 41 × 42 × 43 × 44 × 45, and by forming high order (up to 5th order derivatives / 6th order tensors) Taylor series surrogates of the noise-whitened parameter-to-output map for a stochastic partial differential equation with boundary output.
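    For orientation, the sketch below illustrates the tensor train format itself with a plain TT-SVD compression of a small Hilbert-type tensor. This is not the paper's action-based construction: TT-SVD needs the full array of entries, which is exactly the limitation the paper's method avoids. The entry convention 1/(i+j+k+l+m+1) and the reduced dimensions (versus the 41 × 42 × 43 × 44 × 45 example) are assumptions for illustration only.

        import numpy as np

        def tt_svd(A, tol=1e-10):
            # Compress a full tensor A into tensor train (TT) cores by successive truncated SVDs.
            dims, d = A.shape, A.ndim
            cores, r_prev = [], 1
            M = A.reshape(dims[0], -1)
            for k in range(d - 1):
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                r = max(1, int(np.sum(s > tol * s[0])))        # truncation rank for this unfolding
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))
                M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(M.reshape(r_prev, dims[-1], 1))
            return cores

        # Small Hilbert-type tensor (hypothetical entry convention, smaller than the paper's example).
        dims = (10, 11, 12, 13, 14)
        H = 1.0 / (np.indices(dims).sum(axis=0) + 1.0)
        cores = tt_svd(H)

        # Reconstruct from the cores and check the relative compression error and TT ranks.
        full = cores[0]
        for G in cores[1:]:
            full = np.tensordot(full, G, axes=([-1], [0]))
        print(np.linalg.norm(full.reshape(dims) - H) / np.linalg.norm(H))
        print([G.shape for G in cores])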

    Projected Wasserstein gradient descent for high-dimensional Bayesian inference

    We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems. The underlying density function of a particle system of WGD is approximated by kernel density estimation (KDE), which faces the long-standing curse of dimensionality. We overcome this challenge by exploiting the intrinsic low-rank structure in the difference between the posterior and prior distributions. The parameters are projected into a low-dimensional subspace to alleviate the approximation error of KDE in high dimensions. We formulate a projected Wasserstein gradient flow and analyze its convergence property under mild assumptions. Several numerical experiments illustrate the accuracy, convergence, and complexity scalability of pWGD with respect to parameter dimension, sample size, and processor cores.
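    As a rough illustration (not the paper's pWGD algorithm), the sketch below performs Wasserstein gradient descent steps on a particle ensemble in already-projected low-dimensional coordinates, using a Gaussian KDE for the gradient of the log particle density. The bandwidth rule and the assumption that a projection basis is given in advance are both simplifications.

        import numpy as np

        def wgd_step(Z, grad_neg_log_post, step=0.05, bandwidth=None):
            # One WGD step on particles Z of shape (N, r): move each particle along
            # -(gradient of negative log posterior + gradient of log KDE density).
            N, r = Z.shape
            if bandwidth is None:
                bandwidth = N ** (-1.0 / (r + 4))              # Silverman-style rule of thumb
            diff = Z[:, None, :] - Z[None, :, :]               # pairwise differences, (N, N, r)
            W = np.exp(-0.5 * np.sum(diff ** 2, axis=-1) / bandwidth ** 2)
            W /= W.sum(axis=1, keepdims=True)
            grad_log_kde = -np.einsum('ij,ijk->ik', W, diff) / bandwidth ** 2
            return Z - step * (grad_neg_log_post(Z) + grad_log_kde)

        # Toy usage: standard-normal target in r = 2 projected coordinates
        # (in the projected setting, Z would be V.T @ (x - x_ref) for a subspace basis V).
        rng = np.random.default_rng(0)
        Z = 3.0 * rng.normal(size=(200, 2))
        for _ in range(200):
            Z = wgd_step(Z, lambda z: z)                       # gradient of -log N(0, I) is z
        print(Z.mean(axis=0), Z.std(axis=0))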

    A fast and scalable computational framework for goal-oriented linear Bayesian optimal experimental design: Application to optimal sensor placement

    Optimal experimental design (OED) is a principled framework for maximizing information gained from limited data in inverse problems. Unfortunately, conventional methods for OED are prohibitive when applied to expensive models with high-dimensional parameters, as we target here. We develop a fast and scalable computational framework for goal-oriented OED of large-scale Bayesian linear inverse problems that finds sensor locations to maximize the expected information gain (EIG) for a predicted quantity of interest. By employing low-rank approximations of appropriate operators, an online-offline decomposition, and a new swapping greedy algorithm, we are able to maximize EIG at a cost measured in model solutions that is independent of the problem dimensions. We demonstrate the efficiency, accuracy, and both data- and parameter-dimension independence of the proposed algorithm for a contaminant transport inverse problem with an infinite-dimensional parameter field.
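    A simplified stand-in for the selection step is sketched below: greedy sensor selection followed by a one-swap refinement, scoring candidate sets with the linear-Gaussian information criterion 0.5 * log det(I + G_S G_S^T / noise_var), where the rows of G are assumed to be prior-whitened sensor Jacobians. The paper's goal-oriented criterion, low-rank operator approximations, and offline-online decomposition are not reproduced here.

        import numpy as np

        def eig_linear_gaussian(G_S, noise_var=1.0):
            # Expected information gain for a linear Gaussian problem with prior-whitened
            # sensor rows G_S: 0.5 * log det(I + G_S G_S^T / noise_var).
            k = G_S.shape[0]
            return 0.5 * np.linalg.slogdet(np.eye(k) + G_S @ G_S.T / noise_var)[1]

        def greedy_with_swaps(G, k, noise_var=1.0, max_passes=3):
            # Greedy selection of k rows of G, then local one-swap refinement.
            m = G.shape[0]
            chosen, remaining = [], set(range(m))
            for _ in range(k):
                best = max(remaining, key=lambda i: eig_linear_gaussian(G[chosen + [i]], noise_var))
                chosen.append(best)
                remaining.remove(best)
            for _ in range(max_passes):
                improved = False
                for a in range(k):
                    for j in list(remaining):
                        trial = chosen.copy()
                        trial[a] = j
                        if eig_linear_gaussian(G[trial], noise_var) > eig_linear_gaussian(G[chosen], noise_var) + 1e-12:
                            remaining.add(chosen[a])
                            remaining.discard(j)
                            chosen, improved = trial, True
                if not improved:
                    break
            return chosen

        # Example: pick 5 of 50 candidate sensors for a random (whitened) sensor Jacobian.
        rng = np.random.default_rng(0)
        G = rng.normal(size=(50, 200)) / np.sqrt(200)
        print(greedy_with_swaps(G, 5))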

    Optimal design of acoustic metamaterial cloaks under uncertainty

    In this work, we consider the problem of optimal design of an acoustic cloak under uncertainty and develop scalable approximation and optimization methods to solve this problem. The design variable is taken as an infinite-dimensional spatially-varying field that represents the material property, while an additive infinite-dimensional random field represents the variability of the material property or the manufacturing error. Discretization of this optimal design problem results in high-dimensional design variables and uncertain parameters. To solve this problem, we develop a computational approach based on a Taylor approximation and an approximate Newton method for optimization, using a Hessian evaluated at the mean of the random field. We show our approach is scalable with respect to the dimension of both the design variables and uncertain parameters, in the sense that the necessary number of acoustic wave propagations is essentially independent of these dimensions, for numerical experiments with up to one million design variables and half a million uncertain parameters. We demonstrate that, using our computational approach, an optimal design of the acoustic cloak that is robust to material uncertainty is achieved in a tractable manner. The optimal design under uncertainty problem is posed and solved for the classical circular obstacle surrounded by a ring-shaped cloaking region, subjected to both a single-direction single-frequency incident wave and multiple-direction multiple-frequency incident waves. Finally, we apply the method to a deterministic large-scale optimal cloaking problem with complex geometry, to demonstrate that the approximate Newton method's Hessian computation is viable for large, complex problems.
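    For the mean-value part of such an objective, a quadratic Taylor approximation gives E[Q(m_bar + xi)] ~= Q(m_bar) + 0.5 * tr(C H), since the first-order term vanishes for mean-zero xi with covariance C and H the Hessian of Q at m_bar. The sketch below estimates the trace term with Hutchinson probing from Hessian-vector products and covariance actions; the function names are placeholders, and the paper's full approach (including the approximate Newton optimizer) is not reproduced.

        import numpy as np

        def mean_objective_taylor(Q, hess_vec, cov_action, m_bar, n_probe=20, seed=0):
            # Quadratic Taylor estimate of E[Q(m_bar + xi)] for xi ~ N(0, C):
            #   E[Q] ~= Q(m_bar) + 0.5 * tr(C H),
            # with tr(C H) estimated by Hutchinson probing: E_z[z^T H (C z)] for z ~ N(0, I).
            rng = np.random.default_rng(seed)
            trace_est = 0.0
            for _ in range(n_probe):
                z = rng.standard_normal(m_bar.size)
                trace_est += z @ hess_vec(m_bar, cov_action(z))    # z^T H C z
            return Q(m_bar) + 0.5 * trace_est / n_probe

        # Toy check with a quadratic Q(m) = 0.5 * m^T A m, where E[Q(xi)] = 0.5 * tr(C A) exactly.
        d = 100
        A = np.diag(np.linspace(1.0, 2.0, d))
        C = np.diag(np.linspace(0.5, 1.0, d))
        est = mean_objective_taylor(lambda m: 0.5 * m @ A @ m,
                                    lambda m, v: A @ v,            # Hessian-vector product
                                    lambda v: C @ v,               # covariance action
                                    np.zeros(d), n_probe=200)
        print(est, 0.5 * np.trace(C @ A))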

    Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs

    Many-query problems, arising from uncertainty quantification, Bayesian inversion, Bayesian optimal experimental design, and optimization under uncertainty, require numerous evaluations of a parameter-to-output map. These evaluations become prohibitive if this parametric map is high-dimensional and involves expensive solution of partial differential equations (PDEs). To tackle this challenge, we propose to construct surrogates for high-dimensional PDE-governed parametric maps in the form of projected neural networks that parsimoniously capture the geometry and intrinsic low-dimensionality of these maps. Specifically, we compute Jacobians of these PDE-based maps, and project the high-dimensional parameters onto a low-dimensional derivative-informed active subspace; we also project the possibly high-dimensional outputs onto their principal subspace. This exploits the fact that many high-dimensional PDE-governed parametric maps can be well approximated in low-dimensional parameter and output subspaces. We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively. This frees us to train the weights in only the low-dimensional layers of the neural network. The architecture of the resulting neural network captures, to first order, the low-dimensional structure and geometry of the parametric map. We demonstrate that the proposed projected neural network achieves greater generalization accuracy than a full neural network, especially in the limited training data regime afforded by expensive PDE-based parametric maps. Moreover, we show that the number of degrees of freedom of the inner layers of the projected network is independent of the parameter and output dimensions, and high accuracy can be achieved with weight dimension independent of the discretization dimension.
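    A minimal sketch of the projected-network idea follows, under simplifying assumptions: the input basis is taken from the dominant eigenvectors of the sample average of J^T J (an active-subspace-style surrogate for the derivative-informed basis), the output basis from PCA of output samples, and both are frozen as the first and last linear layers of a small PyTorch MLP so that only the low-dimensional inner layers are trained.

        import numpy as np
        import torch
        import torch.nn as nn

        def derivative_informed_bases(jacobians, outputs, r_in, r_out):
            # Input basis: top eigenvectors of the averaged J^T J; output basis: PCA of outputs.
            H = sum(J.T @ J for J in jacobians) / len(jacobians)
            _, V = np.linalg.eigh(H)                               # eigenvalues in ascending order
            V_in = np.ascontiguousarray(V[:, ::-1][:, :r_in])      # keep the r_in dominant directions
            _, _, Wt = np.linalg.svd(outputs - outputs.mean(axis=0), full_matrices=False)
            return V_in, Wt[:r_out].T

        class ProjectedNet(nn.Module):
            # The first layer projects inputs onto the derivative-informed subspace and the last
            # layer lifts back through the output basis; both are fixed, only the inner layers train.
            def __init__(self, V_in, V_out, width=64):
                super().__init__()
                d, r_in = V_in.shape
                q, r_out = V_out.shape
                self.encode = nn.Linear(d, r_in, bias=False)
                self.encode.weight = nn.Parameter(torch.tensor(V_in.T, dtype=torch.float32),
                                                  requires_grad=False)
                self.inner = nn.Sequential(nn.Linear(r_in, width), nn.Tanh(),
                                           nn.Linear(width, r_out))
                self.decode = nn.Linear(r_out, q, bias=False)
                self.decode.weight = nn.Parameter(torch.tensor(V_out, dtype=torch.float32),
                                                  requires_grad=False)

            def forward(self, x):
                return self.decode(self.inner(self.encode(x)))

        # Synthetic usage: the trainable weight count depends only on r_in, r_out, and width,
        # not on the input dimension d or output dimension q.
        d, q, n = 800, 200, 20
        rng = np.random.default_rng(0)
        jac = [rng.normal(size=(q, d)) for _ in range(n)]
        outs = rng.normal(size=(n, q))
        V_in, V_out = derivative_informed_bases(jac, outs, r_in=20, r_out=10)
        net = ProjectedNet(V_in, V_out)
        print(sum(p.numel() for p in net.parameters() if p.requires_grad))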