A Dynamic Bi-orthogonal Field Equation Approach for Efficient Bayesian Calibration of Large-Scale Systems
This paper proposes a novel computationally efficient dynamic
bi-orthogonality-based approach for calibrating a computer simulator with
high dimensional parametric and model structure uncertainty. The proposed
method is based on a decomposition of the solution into a mean and a random
field using a generic Karhunen-Loève expansion. The random field is represented
as a convolution of separable Hilbert spaces in the stochastic and spatial
dimensions
that are spectrally represented using respective orthogonal bases. In
particular, the present paper investigates generalized polynomial chaos bases
for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic
orthogonality is used to derive closed form equations for the time evolution of
the mean, the spatial bases, and the stochastic field. The resulting system of
equations consists of a partial differential equation (PDE) that defines the
dynamic evolution of the mean, a set of PDEs that define the time evolution of
the eigenfunction bases, and a set of ordinary differential equations (ODEs)
that define the dynamics of the stochastic field. This system of dynamic
evolution equations efficiently
propagates the prior parametric uncertainty to the system response. The
resulting bi-orthogonal expansion of the system response is used to reformulate
the Bayesian inference for efficient exploration of the posterior distribution.
Efficacy of the proposed method is investigated for calibration of a 2D
transient diffusion simulator with uncertain source location and diffusivity.
Computational efficiency of the method is demonstrated against a Monte Carlo
method and a generalized polynomial chaos approach.
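
To make the first building block concrete, here is a minimal, self-contained
sketch of a discrete Karhunen-Loève expansion of a Gaussian random field; the
squared-exponential kernel, grid, and truncation level are illustrative
assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's code): a discrete Karhunen-Loeve
# expansion of a 1D Gaussian random field, the building block the
# abstract's bi-orthogonal decomposition starts from.
import numpy as np

x = np.linspace(0.0, 1.0, 200)                    # spatial grid
ell, sigma2 = 0.2, 1.0                            # assumed kernel hyperparameters
C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# Eigenpairs of the covariance operator (weighted by the grid spacing).
dx = x[1] - x[0]
lam, phi = np.linalg.eigh(C * dx)
idx = np.argsort(lam)[::-1]                       # descending eigenvalues
lam, phi = lam[idx], phi[:, idx] / np.sqrt(dx)    # orthonormal in L2(dx)

# Truncate to the dominant modes and sample the field:
# u(x, omega) ~ mean(x) + sum_i sqrt(lam_i) * phi_i(x) * xi_i(omega).
k = 10
xi = np.random.randn(k)                           # standard normal coefficients
mean = np.zeros_like(x)
u = mean + phi[:, :k] @ (np.sqrt(lam[:k]) * xi)
print("captured variance fraction:", lam[:k].sum() / lam.sum())
```
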
Generalized hybrid iterative methods for large-scale Bayesian inverse problems
We develop a generalized hybrid iterative approach for computing solutions to
large-scale Bayesian inverse problems. We consider a hybrid algorithm based on
the generalized Golub-Kahan bidiagonalization for computing Tikhonov
regularized solutions to problems where explicit computation of the square root
and inverse of the covariance kernel for the prior covariance matrix is not
feasible. This is useful for large-scale problems where covariance kernels are
defined on irregular grids or are only available via matrix-vector
multiplication, e.g., those from the Matérn class. We show that iterates
are equivalent to LSQR iterates applied to a directly regularized Tikhonov
problem, after a transformation of variables, and we provide connections to a
generalized singular value decomposition filtered solution. Our approach shares
many benefits of standard hybrid methods such as avoiding semi-convergence and
automatically estimating the regularization parameter. Numerical examples from
image processing demonstrate the effectiveness of the described approaches.
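
For intuition, a minimal sketch of the standard-form special case: Tikhonov
regularization solved with LSQR through a matrix-free operator. The forward
model and noise level are toy assumptions, and the sketch omits the prior
preconditioning that the paper's generalized Golub-Kahan method supplies.

```python
# Minimal sketch (standard-form stand-in, not the paper's generalized
# algorithm): Tikhonov regularization solved iteratively with LSQR,
# which the paper shows its generalized Golub-Kahan iterates match
# after a change of variables.
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

rng = np.random.default_rng(0)
m, n = 300, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)      # forward operator
x_true = np.zeros(n); x_true[50:80] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy data

# Matrix-free forward operator: only matvecs are needed, which is the
# regime the paper targets (covariances available via matvec only).
Aop = LinearOperator((m, n), matvec=lambda v: A @ v,
                     rmatvec=lambda v: A.T @ v)

lam = 0.05                                        # regularization parameter
# lsqr's `damp` solves min ||A x - b||^2 + damp^2 ||x||^2.
x_reg = lsqr(Aop, b, damp=lam, iter_lim=200)[0]
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```
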
Deep Learning for Classifying and Characterizing Atmospheric Ducting within the Maritime Setting
Real-time characterization of refractivity within the marine atmospheric
boundary layer can provide valuable information that can potentially be used to
mitigate the effects of atmospheric ducting on radar performance. Many duct
characterization models are successful at predicting parameters from a specific
refractivity profile associated with a given type of duct; however, the ability
to classify, and then subsequently characterize, various duct types is an
important step towards a more comprehensive prediction model. We introduce a
two-step approach using deep learning to differentiate sparsely sampled
propagation factor measurements collected under evaporation ducting conditions
from those collected under surface-based duct conditions in order to
subsequently estimate the appropriate refractivity parameters based on that
differentiation. We show that this approach is not only accurate but also
efficient, thus providing a suitable method for real-time applications.
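
A minimal sketch of the two-step classify-then-characterize pattern on
synthetic stand-in data; the network sizes, feature model, and parameter count
below are placeholders, not the paper's architecture or measurements.

```python
# Minimal sketch of the two-step idea: first classify the duct type,
# then apply a per-class regressor to estimate refractivity parameters
# (toy synthetic data, not propagation-factor measurements).
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)
n, d = 2000, 40                                   # samples, sparse measurement size
duct_type = rng.integers(0, 2, n)                 # 0: evaporation, 1: surface-based
params = rng.uniform(0, 1, (n, 2))                # refractivity parameters (toy)
# Toy forward map: features depend on both the class and the parameters.
X = rng.standard_normal((n, d)) * 0.1 \
    + duct_type[:, None] + params @ rng.standard_normal((2, d))

# Step 1: classify the duct type from the sparse measurements.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, duct_type)

# Step 2: a separate regressor per duct type estimates its parameters.
regs = {c: MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(
            X[duct_type == c], params[duct_type == c])
        for c in (0, 1)}

x_new = X[:5]
pred_class = clf.predict(x_new)
pred_params = np.vstack([regs[c].predict(x[None, :])
                         for c, x in zip(pred_class, x_new)])
print(pred_class, pred_params.round(2))
```
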
Bayesian identification of discontinuous fields with an ensemble-based variable separation multiscale method
This work presents a multiscale model reduction approach to discontinuous
field identification problems in the framework of Bayesian inference. An
ensemble-based variable separation (VS) method is proposed to approximate
multiscale basis functions used to build a coarse model. The
variable-separation expression is constructed for stochastic multiscale basis
functions based on the random field, which is treated as a Gaussian process
prior. To this end, multiple local inhomogeneous Dirichlet boundary
condition problems must be solved, and the ensemble-based method is
used to obtain variable separation forms for the corresponding local functions.
The local functions share the same interpolation rule for different physical
basis functions in each coarse block. This approach significantly improves the
efficiency of the computation. Once constructed, the variable-separation
expression of the multiscale basis functions can be reused for models with
different boundary conditions and source terms. The
proposed method is applied to discontinuous field identification problems where
the hybrid of total variation and Gaussian (TG) densities is imposed as the
penalty. We give a convergence analysis of the approximate posterior to the
reference one with respect to the Kullback-Leibler (KL) divergence under the
hybrid prior. The proposed method is applied to identify discontinuous
structures in permeability fields. Two patterns of discontinuous structures are
considered in numerical examples: separated blocks and nested blocks.
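
As a concrete illustration of the penalty, here is one plausible form of a
hybrid total-variation plus Gaussian (TG) log-density in 1D; the weights and
discretization are assumptions for illustration, not the paper's exact
construction.

```python
# Minimal sketch (assumed form, not the paper's code) of a hybrid
# total-variation + Gaussian (TG) log-density used as a penalty for
# discontinuous field identification.
import numpy as np

def tg_log_prior(u, alpha=1.0, beta=1.0):
    """Unnormalized log density: -alpha * TV(u) - beta/2 * ||u||^2.

    The TV term promotes piecewise-constant (discontinuous) fields,
    while the Gaussian term keeps the problem well-posed.
    """
    tv = np.abs(np.diff(u)).sum()                 # discrete 1D total variation
    gauss = 0.5 * np.dot(u, u)
    return -alpha * tv - beta * gauss

u = np.concatenate([np.zeros(50), np.ones(50)])   # a separated-block field
print(tg_log_prior(u))
```
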
Unifying Message Passing Algorithms Under the Framework of Constrained Bethe Free Energy Minimization
Variational message passing (VMP), belief propagation (BP) and expectation
propagation (EP) have found wide application in complex statistical
signal processing problems. In addition to viewing them as a class of
algorithms operating on graphical models, this paper unifies them under an
optimization framework, namely, Bethe free energy minimization with differently
and appropriately imposed constraints. This new perspective in terms of
constraint manipulation can offer additional insights on the connection between
different message passing algorithms and is valid for a generic statistical
model. It also provides a theoretical framework to systematically derive message
passing variants. Taking the sparse signal recovery (SSR) problem as an
example, a low-complexity EP variant can be obtained by simple constraint
reformulation, delivering better estimation performance with lower complexity
than the standard EP algorithm. Furthermore, we can resort to the framework for
the systematic derivation of hybrid message passing for complex inference
tasks. Notably, a hybrid message passing algorithm is exemplarily derived for
joint SSR and statistical model learning with near-optimal inference
performance and scalable complexity.
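
For readers unfamiliar with the algorithms being unified, a minimal sum-product
belief propagation example on a binary chain; the potentials are toy values,
and the paper's contribution is the constrained free-energy view of such
updates, not this particular computation.

```python
# Minimal sketch of sum-product belief propagation on a three-node
# binary chain, the kind of message passing the paper recasts as
# constrained Bethe free energy minimization (toy potentials).
import numpy as np

psi = np.array([[2.0, 1.0], [1.0, 2.0]])          # pairwise potential (favors agreement)
phi = [np.array([0.7, 0.3])] * 3                  # unary potentials for x0, x1, x2

m01 = psi.T @ phi[0]                              # message x0 -> x1
m21 = psi @ phi[2]                                # message x2 -> x1

b1 = phi[1] * m01 * m21                           # belief = unary * incoming messages
b1 /= b1.sum()
print("marginal of x1:", b1)
```
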
Sparse Variational Bayesian Approximations for Nonlinear Inverse Problems: applications in nonlinear elastography
This paper presents an efficient Bayesian framework for solving nonlinear,
high-dimensional model calibration problems. It is based on a Variational
Bayesian formulation that aims at approximating the exact posterior by means of
solving an optimization problem over an appropriately selected family of
distributions. The goal is two-fold. Firstly, to find lower-dimensional
representations of the unknown parameter vector that capture as much as
possible of the associated posterior density, and secondly to enable the
computation of the approximate posterior density with as few forward calls as
possible. We discuss how these objectives can be achieved by using a fully
Bayesian argument and employing the marginal likelihood or evidence as the
ultimate model validation metric for any proposed dimensionality reduction. We
demonstrate the performance of the proposed methodology for problems in
nonlinear elastography where the identification of the mechanical properties of
biological materials can inform non-invasive, medical diagnosis. An Importance
Sampling scheme is finally employed in order to validate the results and assess
the efficacy of the approximations provided.
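
A minimal sketch of the importance-sampling validation step on a 1D toy
target: sample from the variational approximation q, weight by p/q, and
inspect the evidence estimate and effective sample size. The target and q
here are assumptions for illustration, not the elastography posterior.

```python
# Minimal sketch of importance-sampling validation of a variational
# approximation (toy 1D target with known evidence Z = 2).
import numpy as np
from scipy.stats import norm

def log_p(x):                                      # unnormalized target, true Z = 2
    return norm.logpdf(x, loc=1.0, scale=0.5) + np.log(2.0)

q = norm(loc=0.9, scale=0.6)                       # variational approximation

xs = q.rvs(size=20000, random_state=0)
log_w = log_p(xs) - q.logpdf(xs)
w = np.exp(log_w - log_w.max())                    # stabilized weights

Z_hat = np.exp(log_w.max()) * w.mean()             # evidence estimate
ess = w.sum()**2 / (w**2).sum()                    # effective sample size
print(f"Z_hat = {Z_hat:.3f}, ESS = {ess:.0f} / {len(xs)}")
```
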
Using flow models with sensitivities to study cost-efficient monitoring programs for CO2 storage sites
A key part of planning CO2 storage sites is to devise a monitoring strategy.
The aim of this strategy is to fulfill legislative requirements and to lower
the cost of the operation by avoiding operational problems. If CCS is going
to be a widespread technology to deliver energy without CO2 emissions,
cost-efficient monitoring programs will be key to reducing storage costs. A
simulation framework, previously used to estimate flow parameters at Sleipner
Layer 9 [1], is here extended and employed to identify how the number of
measurements can be reduced without significantly reducing the obtained
information. The main part of the methodology is based on well-proven, stable
and robust, simulation technology together with adjoint-based sensitivities and
data mining techniques using singular value decomposition (SVD). In particular
we combine the simulation framework with time-dependent (seismic) measurements
of the migrating plume. We also study how uplift data and gravitational data
give complementary information.
We apply this methodology to the Sleipner project, which provides the most
extensive data for CO2 storage to date. For this study we utilize a
vertical-equilibrium (VE) flow model for computational efficiency as
implemented in the open-source software MRST-co2lab.
However, our methodology for deriving efficient monitoring schemes is not
restricted to VE-type flow models, and at the end, we discuss how the
methodology can be used in the context of full 3D simulations.
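
A minimal sketch of the SVD-based screening idea: treat rows of an
adjoint-derived sensitivity matrix as measurements and rank them by their
leverage on the dominant singular subspace. The sensitivities and thresholds
below are synthetic placeholders, not Sleipner data.

```python
# Minimal sketch: rank measurements by leverage on the dominant
# singular subspace of a sensitivity (Jacobian) matrix, mimicking the
# SVD-based data-mining step (synthetic sensitivities).
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_param = 500, 20
J = rng.standard_normal((n_meas, n_param)) * rng.uniform(0.01, 1, (n_meas, 1))

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1  # 99% energy rank

# Score each measurement by its leverage on the dominant subspace.
leverage = (U[:, :k]**2).sum(axis=1)
keep = np.argsort(leverage)[::-1][:100]            # retain 100 most informative
print(f"effective rank: {k}, kept {len(keep)} of {n_meas} measurements")
```
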
A numerical method for efficient 3D inversions using Richards equation
Fluid flow in the vadose zone is governed by Richards equation; it is
parameterized by hydraulic conductivity, which is a nonlinear function of
pressure head. Investigations in the vadose zone typically require
characterizing distributed hydraulic properties. Saturation or pressure head
data may include direct measurements made from boreholes. Increasingly, proxy
measurements from hydrogeophysics are being used to supply more spatially and
temporally dense data sets. Inferring hydraulic parameters from such datasets
requires the ability to efficiently solve and deterministically optimize the
nonlinear time domain Richards equation. This is particularly important as the
number of parameters to be estimated in a vadose zone inversion continues to
grow. In this paper, we describe an efficient technique to invert for
distributed hydraulic properties in 1D, 2D, and 3D. Our algorithm does not
store the Jacobian, but rather computes its product with a vector, which allows
the inversion problem to grow much larger than is possible with methods such as
finite differences or automatic differentiation, which are constrained by
computation and memory, respectively. We show our algorithm in practice for a
3D inversion of saturated hydraulic conductivity using saturation data through
time. The code to run our examples is open source and the algorithm presented
allows this inversion process to run on modest computational resources.
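
A minimal sketch of the matrix-free Jacobian idea, with a toy forward map
standing in for a Richards-equation solve; a production code would use
adjoints for the transpose action, as the paper describes.

```python
# Minimal sketch: never store the Jacobian J of the forward map, only
# apply it to vectors, here via a directional finite difference.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def forward(m):
    """Placeholder nonlinear forward map from parameters to data."""
    return np.tanh(m) + 0.1 * m**2

m_true = np.linspace(-1.0, 1.0, 1000)
d_obs = forward(m_true)                           # synthetic observations
m0 = np.zeros_like(m_true)                        # current model estimate
f0, eps = forward(m0), 1e-7

def jvp(v):                                       # J(m0) @ v without forming J
    return (forward(m0 + eps * v) - f0) / eps

# Gauss-Newton step: solve J^T J dm = J^T r matrix-free. For this toy
# elementwise map J is symmetric, so jvp doubles as the transpose
# action; a real code would use the adjoint instead.
r = d_obs - f0
JtJ = LinearOperator((m0.size, m0.size), matvec=lambda v: jvp(jvp(v)))
dm, info = cg(JtJ, jvp(r), maxiter=50)
print("CG info:", info, "step norm:", np.linalg.norm(dm))
```
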
Gaussian processes with built-in dimensionality reduction: Applications in high-dimensional uncertainty propagation
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with
a very large number of input parameters can be addressed, if the response
exhibits some special structure that can be discovered and exploited. Several
physical responses exhibit a special structure known as an active subspace
(AS), a linear manifold of the stochastic space characterized by maximal
response variation. The idea is that one should first identify this low
dimensional manifold, project the high-dimensional input onto it, and then link
the projection to the output. In this work, we develop a probabilistic version
of AS which is gradient-free and robust to observational noise. Our approach
relies on a novel Gaussian process regression with built-in dimensionality
reduction with the AS represented as an orthogonal projection matrix that
serves as yet another covariance function hyper-parameter to be estimated from
the data. To train the model, we design a two-step maximum likelihood
optimization procedure that ensures the orthogonality of the projection matrix
by exploiting recent results on the Stiefel manifold. The additional benefit of
our probabilistic formulation is that it allows us to select the dimensionality
of the AS via the Bayesian information criterion. We validate our approach by
showing that it can discover the right AS in synthetic examples without
gradient information using both noiseless and noisy observations. We
demonstrate that our method is able to discover the same AS as the classical
approach in a challenging one-hundred-dimensional problem involving an elliptic
stochastic partial differential equation with random conductivity. Finally, we
use our approach to study the effect of geometric and material uncertainties in
the propagation of solitary waves in a one-dimensional granular system.
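
For contrast, a minimal sketch of the classical, gradient-based active
subspace construction that the paper's gradient-free approach is validated
against; the ridge function and dimensions are toy assumptions.

```python
# Minimal sketch of the classical active subspace construction:
# average the outer products of gradients and keep the dominant
# eigenvectors (toy one-dimensional ridge function, not the PDE).
import numpy as np

rng = np.random.default_rng(4)
d = 10
w = rng.standard_normal((d, 1))                   # hidden active direction

X = rng.standard_normal((d, 5000))                # input samples
G = np.cos(w.T @ X) * w                           # gradients of f(x) = sin(w^T x)
C = G @ G.T / X.shape[1]                          # C = E[grad f grad f^T]

eigval, eigvec = np.linalg.eigh(C)
W1 = eigvec[:, -1:]                               # top eigenvector spans the AS
alignment = abs((W1.T @ w).item()) / np.linalg.norm(w)
print("alignment with true direction:", alignment)
```
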
Regularizing Solutions to the MEG Inverse Problem Using Space-Time Separable Covariance Functions
In magnetoencephalography (MEG) the conventional approach to source
reconstruction is to solve the underdetermined inverse problem independently
over time and space. Here we present how the conventional approach can be
extended by regularizing the solution in space and time by a Gaussian process
(Gaussian random field) model. Assuming a separable covariance function in
space and time, the computational complexity of the proposed model (without
any further assumptions or restrictions) scales polynomially in the number of
time steps, the number of sources, and the number of sensors. We apply the
method to both simulated and
empirical data, and demonstrate the efficiency and generality of our Bayesian
source reconstruction approach which subsumes various classical approaches in
the literature.
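
A minimal sketch of why separability pays off: with a Kronecker-structured
covariance, solves factor through the small temporal and spatial
eigendecompositions rather than the full space-time matrix. The sizes and
symbols below are illustrative, not the paper's notation.

```python
# Minimal sketch of the Kronecker trick behind separable space-time
# covariances: solve (Kt kron Ks) vec(X) = vec(B) without ever forming
# the full (t*n) x (t*n) matrix.
import numpy as np

rng = np.random.default_rng(5)
t, n = 50, 60                                     # time steps, sources (toy sizes)
Mt, Ms = rng.standard_normal((t, t)), rng.standard_normal((n, n))
Kt = Mt @ Mt.T + np.eye(t)                        # temporal covariance
Ks = Ms @ Ms.T + np.eye(n)                        # spatial covariance

lt, Ut = np.linalg.eigh(Kt)                       # O(t^3) instead of O((t n)^3)
ls, Us = np.linalg.eigh(Ks)                       # O(n^3)

B = rng.standard_normal((t, n))
# Kt X Ks = B  =>  X = Ut diag(1/lt) Ut^T B Us diag(1/ls) Us^T.
X = Ut @ ((Ut.T @ B @ Us) / np.outer(lt, ls)) @ Us.T

# Check against the dense Kronecker solve (row-major vec convention).
resid = np.linalg.norm(np.kron(Kt, Ks) @ X.reshape(-1) - B.reshape(-1))
print("residual:", resid)
```
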