Exploiting network topology for large-scale inference of nonlinear reaction models
The development of chemical reaction models aids understanding and prediction
in areas ranging from biology to electrochemistry and combustion. A systematic
approach to building reaction network models uses observational data not only
to estimate unknown parameters, but also to learn model structure. Bayesian
inference provides a natural approach to this data-driven construction of
models. Yet traditional Bayesian model inference methodologies that numerically
evaluate the evidence for each model are often infeasible for nonlinear
reaction network inference, as the number of plausible models can be
combinatorially large. Alternative approaches based on model-space sampling can
enable large-scale network inference, but their realization presents many
challenges. In this paper, we present new computational methods that make
large-scale nonlinear network inference tractable. First, we exploit the
topology of networks describing potential interactions among chemical species
to design improved "between-model" proposals for reversible-jump Markov chain
Monte Carlo. Second, we introduce a sensitivity-based determination of move
types which, when combined with network-aware proposals, yields significant
additional gains in sampling performance. These algorithms are demonstrated on
inference problems drawn from systems biology, with nonlinear differential
equation models of species interactions.
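A minimal sketch of the kind of network-aware "birth" move described above, assuming a toy candidate network and a placeholder within-model score (the reactions, weights, and scoring function below are illustrative, not the authors' implementation; a full reversible-jump sampler would also handle move-type probabilities and within-model rate parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate reaction network: each reaction touches a set of species.
# (Illustrative data; the paper's networks come from systems-biology models.)
candidate_reactions = {
    0: {"A", "B"}, 1: {"B", "C"}, 2: {"C", "D"},
    3: {"A", "D"}, 4: {"D", "E"},
}

def active_species(model):
    """Species touched by the reactions currently in the model."""
    return set().union(*(candidate_reactions[r] for r in model)) if model else set()

def addition_weights(model):
    """Network-aware proposal: favor reactions adjacent to active species."""
    absent = [r for r in candidate_reactions if r not in model]
    act = active_species(model)
    w = np.array([1.0 + 4.0 * bool(candidate_reactions[r] & act) for r in absent])
    return absent, w / w.sum()

def propose_add(model):
    absent, p = addition_weights(model)
    idx = rng.choice(len(absent), p=p)
    return model | {absent[idx]}, p[idx]      # proposed model, forward proposal prob

def log_posterior(model):
    """Placeholder within-model score; a real implementation integrates out
    or samples the rate parameters of the nonlinear kinetics."""
    return -0.5 * abs(len(model) - 3)         # toy score preferring ~3 reactions

# One 'birth' move of a reversible-jump chain (the 'death' move is symmetric).
# Move-type probabilities are assumed equal and omitted for brevity.
model = {0}
proposal, q_fwd = propose_add(model)
q_rev = 1.0 / len(proposal)                   # reverse move: uniform deletion
log_alpha = log_posterior(proposal) - log_posterior(model) + np.log(q_rev / q_fwd)
if np.log(rng.random()) < log_alpha:
    model = proposal
print(model)
```

Weighting additions toward reactions adjacent to already-active species is what makes the proposal "network-aware": proposed models tend to stay connected, which tends to raise between-model acceptance rates.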
MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure
Markov chain Monte Carlo (MCMC) samplers are numerical methods for drawing samples from a given target probability distribution. We discuss one particular MCMC sampler, the MALA-within-Gibbs sampler, from both theoretical and practical perspectives. We first show that the acceptance ratio and step size of this sampler are independent of the overall problem dimension when (i) the target distribution has sparse conditional structure, and (ii) this structure is reflected in the partial updating strategy of MALA-within-Gibbs. If, in addition, the target density is blockwise log-concave, then the sampler's convergence rate is independent of dimension. From a practical perspective, we expect MALA-within-Gibbs to be useful for solving high-dimensional Bayesian inference problems whose posteriors exhibit sparse conditional structure, at least approximately. In this context, a partitioning of the state that correctly reflects the sparse conditional structure must be found, and we illustrate this process in two numerical examples. We also discuss trade-offs between the block size used for partial updating and computational requirements, which may increase with the number of blocks.
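As a concrete illustration, here is a minimal MALA-within-Gibbs sweep on a toy chain-structured target whose conditionals are sparse; the target, block size, and step size are assumptions for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_density(x):
    """Chain-structured target: each x[i] interacts only with its neighbors,
    so the conditionals are sparse. (Toy example.)"""
    return -0.5 * np.sum(x**2) - 0.5 * np.sum((x[1:] - x[:-1])**2)

def grad_log_density(x):
    g = -x.copy()
    d = x[1:] - x[:-1]
    g[1:] -= d
    g[:-1] += d
    return g

def mala_within_gibbs_sweep(x, blocks, step):
    """One sweep: a MALA update of each block, conditioning on the rest.
    With blocks matching the sparse conditional structure, the step size
    can be chosen independently of the overall dimension."""
    for idx in blocks:
        g = grad_log_density(x)[idx]
        prop = x.copy()
        prop[idx] = x[idx] + step * g + np.sqrt(2 * step) * rng.standard_normal(len(idx))
        gp = grad_log_density(prop)[idx]
        # Log MALA proposal densities q(x -> prop) and q(prop -> x), block only.
        fwd = -np.sum((prop[idx] - x[idx] - step * g) ** 2) / (4 * step)
        rev = -np.sum((x[idx] - prop[idx] - step * gp) ** 2) / (4 * step)
        if np.log(rng.random()) < log_density(prop) - log_density(x) + rev - fwd:
            x = prop
    return x

d = 100
# Partition into blocks of 5 consecutive components, reflecting the chain structure.
blocks = [np.arange(i, i + 5) for i in range(0, d, 5)]
x = rng.standard_normal(d)
for _ in range(100):
    x = mala_within_gibbs_sweep(x, blocks, step=0.1)
print(x.mean(), x.std())
```

Because only one block changes per update, the acceptance ratio involves only that block's conditional, which is the mechanism behind the dimension-independence result stated above.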
Efficient Localization of Discontinuities in Complex Computational Simulations
Surrogate models for computational simulations are input-output
approximations that allow computationally intensive analyses, such as
uncertainty propagation and inference, to be performed efficiently. When a
simulation output does not depend smoothly on its inputs, the error and
convergence rate of many approximation methods deteriorate substantially. This
paper details a method for efficiently localizing discontinuities in the input
parameter domain, so that the model output can be approximated as a piecewise
smooth function. The approach comprises an initialization phase, which uses
polynomial annihilation to assign function values to different regions and thus
seed an automated labeling procedure, followed by a refinement phase that
adaptively updates a kernel support vector machine representation of the
separating surface via active learning. The overall approach avoids structured
grids and exploits any available simplicity in the geometry of the separating
surface, thus reducing the number of model evaluations required to localize the
discontinuity. The method is illustrated on examples of up to eleven
dimensions, including algebraic models and ODE/PDE systems, and demonstrates
improved scaling and efficiency over other discontinuity localization
approaches.
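A rough sketch of the refinement phase under stated assumptions: a known jump function stands in for both the expensive simulation and the polynomial-annihilation seeding step, and scikit-learn's SVC plays the role of the kernel support vector machine:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def model_output(x):
    """Stand-in for an expensive simulation with a jump across a curved surface."""
    return np.where(x[:, 0] ** 2 + x[:, 1] ** 2 < 0.5, 0.0, 1.0)

def label(x):
    """Assign region labels from function values. The paper seeds this step with
    polynomial annihilation; thresholding the jump plays that role here."""
    return (model_output(x) > 0.5).astype(int)

# Initialization: a small space-filling sample in the 2D input domain [-1, 1]^2.
X = rng.uniform(-1, 1, size=(40, 2))
y = label(X)

# Refinement: actively query candidates closest to the current separating surface.
for _ in range(10):
    svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X, y)
    cand = rng.uniform(-1, 1, size=(2000, 2))
    nearest = np.argsort(np.abs(svm.decision_function(cand)))[:10]
    X = np.vstack([X, cand[nearest]])        # only these points cost a model run
    y = np.concatenate([y, label(cand[nearest])])

print(f"{len(X)} model evaluations; boundary classifier trained.")
```

Concentrating new queries near the current decision boundary is what keeps the number of expensive model evaluations low while the separating surface is resolved.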
Data-Driven Model Reduction for the Bayesian Solution of Inverse Problems
One of the major challenges in the Bayesian solution of inverse problems
governed by partial differential equations (PDEs) is the computational cost of
repeatedly evaluating numerical PDE models, as required by Markov chain Monte
Carlo (MCMC) methods for posterior sampling. This paper proposes a data-driven
projection-based model reduction technique to reduce this computational cost.
The proposed technique has two distinctive features. First, the model reduction
strategy is tailored to inverse problems: the snapshots used to construct the
reduced-order model are computed adaptively from the posterior distribution.
Posterior exploration and model reduction are thus pursued simultaneously.
Second, to avoid repeated evaluations of the full-scale numerical model as in a
standard MCMC method, we couple the full-scale model and the reduced-order
model together in the MCMC algorithm. This maintains accurate inference while
reducing its overall computational cost. In numerical experiments considering
steady-state flow in a porous medium, the data-driven reduced-order model
achieves better accuracy than a reduced-order model constructed using the
classical approach. It also improves posterior sampling efficiency by several
orders of magnitude compared to a standard MCMC method.
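The paper couples the two models in its own way; as a stand-in, the sketch below uses a standard two-stage delayed-acceptance step (in the style of Christen and Fox) with placeholder densities: a cheap reduced-order surrogate screens proposals, the full model is evaluated only for proposals that survive the screen, and accepted full-model states are collected as posterior-adapted snapshots for rebuilding the reduced basis:

```python
import numpy as np

rng = np.random.default_rng(3)

def full_log_post(theta):
    """Placeholder for the expensive PDE-based log-posterior."""
    return -0.5 * np.sum(theta**2)

snapshots = []  # posterior-adapted snapshots used to (re)build the reduced model

def reduced_log_post(theta):
    """Placeholder surrogate; in the paper this is a projection-based
    reduced-order model whose basis comes from posterior snapshots."""
    return -0.5 * np.sum(theta**2) + 0.05 * np.sin(theta[0])  # cheap, slightly biased

def delayed_acceptance_step(theta, lp_full, step=0.5):
    """Two-stage MH: the reduced model screens the proposal; the full model is
    evaluated only if the first stage accepts, keeping the chain exact."""
    prop = theta + step * rng.standard_normal(theta.shape)
    # Stage 1: cheap screen with the reduced model.
    if np.log(rng.random()) >= reduced_log_post(prop) - reduced_log_post(theta):
        return theta, lp_full                                  # rejected cheaply
    # Stage 2: correct with the full model (the only expensive evaluation).
    lp_prop = full_log_post(prop)
    log_alpha = (lp_prop - lp_full) - (reduced_log_post(prop) - reduced_log_post(theta))
    if np.log(rng.random()) < log_alpha:
        snapshots.append(prop)   # adaptively enrich the ROM basis from the posterior
        return prop, lp_prop
    return theta, lp_full

theta = np.zeros(4)
lp = full_log_post(theta)
for _ in range(1000):
    theta, lp = delayed_acceptance_step(theta, lp)
print(len(snapshots), "full-model snapshots collected")
```

The second-stage correction makes the chain exact with respect to the full posterior regardless of the surrogate's bias, so the reduced model only affects efficiency, not accuracy.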
Localization for MCMC: sampling high-dimensional posterior distributions with local structure
We investigate how ideas from covariance localization in numerical weather
prediction can be used in Markov chain Monte Carlo (MCMC) sampling of
high-dimensional posterior distributions arising in Bayesian inverse problems.
To localize an inverse problem is to enforce an anticipated "local" structure
by (i) neglecting small off-diagonal elements of the prior precision and
covariance matrices; and (ii) restricting the influence of observations to
their neighborhood. For linear problems we can specify the conditions under
which posterior moments of the localized problem are close to those of the
original problem. We explain physical interpretations of our assumptions about
local structure and discuss the notion of high dimensionality in local
problems, which is different from the usual notion of high dimensionality in
function space MCMC. The Gibbs sampler is a natural choice of MCMC algorithm
for localized inverse problems and we demonstrate that its convergence rate is
independent of dimension for localized linear problems. Nonlinear problems can
also be tackled efficiently by localization and, as a simple illustration of
these ideas, we present a localized Metropolis-within-Gibbs sampler. Several
linear and nonlinear numerical examples illustrate localization in the context
of MCMC samplers for inverse problems.
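A minimal localized Metropolis-within-Gibbs sampler on a toy tridiagonal target (the setup is illustrative, not one of the paper's examples); each single-site update touches only the terms in that site's neighborhood, so per-update cost and acceptance do not degrade with dimension:

```python
import numpy as np

rng = np.random.default_rng(4)

d = 60
obs = rng.standard_normal(d)    # toy data: one local observation per component

def local_log_post(x, i):
    """Log-posterior terms involving component i only: a sparse (tridiagonal)
    prior coupling plus the observation at i. Toy setup."""
    lp = -0.5 * x[i] ** 2 - 0.5 * (obs[i] - x[i]) ** 2
    if i > 0:
        lp -= 0.5 * (x[i] - x[i - 1]) ** 2
    if i < d - 1:
        lp -= 0.5 * (x[i + 1] - x[i]) ** 2
    return lp

def localized_mwg_sweep(x, step=0.6):
    """Metropolis-within-Gibbs: each single-site acceptance ratio needs only the
    terms in the site's neighborhood, never the full posterior."""
    for i in range(d):
        old = x[i]
        lp_old = local_log_post(x, i)
        x[i] = old + step * rng.standard_normal()
        if np.log(rng.random()) >= local_log_post(x, i) - lp_old:
            x[i] = old                       # reject: restore the old value
    return x

x = np.zeros(d)
for _ in range(500):
    x = localized_mwg_sweep(x)
print(np.round(x[:5], 2))
```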
A continuous analogue of the tensor-train decomposition
We develop new approximation algorithms and data structures for representing
and computing with multivariate functions using the functional tensor-train
(FT), a continuous extension of the tensor-train (TT) decomposition. The FT
represents functions using a tensor-train ansatz by replacing the
three-dimensional TT cores with univariate matrix-valued functions. The main
contribution of this paper is a framework to compute the FT that employs
adaptive approximations of univariate fibers, and that is not tied to any
tensorized discretization. The algorithm can be coupled with any univariate
linear or nonlinear approximation procedure. We demonstrate that this approach
can generate multivariate function approximations that are several orders of
magnitude more accurate, for the same cost, than those based on the
conventional approach of compressing the coefficient tensor of a tensor-product
basis. Our approach is in the spirit of other continuous computation packages
such as Chebfun, and yields an algorithm which requires the computation of
"continuous" matrix factorizations such as the LU and QR decompositions of
vector-valued functions. To support these developments, we describe continuous
versions of an approximate maximum-volume cross approximation algorithm and of
a rounding algorithm that re-approximates an FT by one of lower ranks. We
demonstrate that our technique improves accuracy and robustness, compared to TT
and quantics-TT approaches with fixed parameterizations, of high-dimensional
integration, differentiation, and approximation of functions with local
features such as discontinuities and other nonlinearities.
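To make the ansatz concrete, here is a minimal sketch of evaluating an FT whose cores are parameterized by Chebyshev coefficients (the ranks, degrees, and random coefficients are illustrative assumptions; the paper's algorithms also construct, cross-approximate, and round such representations):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(5)

d, rank, degree = 4, 3, 5   # dimension, FT rank, polynomial degree per fiber

# Each FT core G_k is a matrix-valued univariate function
# G_k : [-1, 1] -> R^{r_{k-1} x r_k}, parameterized here by Chebyshev
# coefficients of shape (degree + 1, r_{k-1}, r_k).
ranks = [1] + [rank] * (d - 1) + [1]
cores = [0.3 * rng.standard_normal((degree + 1, ranks[k], ranks[k + 1]))
         for k in range(d)]

def ft_eval(x, cores):
    """Evaluate f(x) = G_1(x_1) @ G_2(x_2) @ ... @ G_d(x_d): each core is a
    univariate matrix-valued function, evaluated and chained by matrix products."""
    out = np.eye(1)
    for xk, coef in zip(x, cores):
        # Chebyshev basis values at the scalar xk, contracted against every
        # matrix entry's coefficient vector.
        T = C.chebvander(np.array([xk]), coef.shape[0] - 1)[0]
        Gk = np.einsum("p,pij->ij", T, coef)
        out = out @ Gk
    return out[0, 0]

x = rng.uniform(-1, 1, size=d)
print(ft_eval(x, cores))
```

Because each core is a genuine function rather than a discretized fiber, any univariate approximation scheme (polynomials here, but splines or adaptive piecewise bases equally well) can be swapped in without changing the surrounding tensor-train algebra.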