MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure
Markov chain Monte Carlo (MCMC) samplers are numerical methods for drawing samples from a given target probability distribution. We discuss one particular MCMC sampler, the MALA-within-Gibbs sampler, from both theoretical and practical perspectives. We first show that the acceptance ratio and step size of this sampler are independent of the overall problem dimension when (i) the target distribution has sparse conditional structure, and (ii) this structure is reflected in the partial updating strategy of MALA-within-Gibbs. If, in addition, the target density is blockwise log-concave, then the sampler's convergence rate is independent of dimension. From a practical perspective, we expect MALA-within-Gibbs to be useful for solving high-dimensional Bayesian inference problems whose posteriors exhibit sparse conditional structure, at least approximately. In this context, a partitioning of the state that correctly reflects the sparse conditional structure must be found, and we illustrate this process in two numerical examples. We also discuss trade-offs between the block size used for partial updating and computational requirements, which may increase with the number of blocks.
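To make the partial-updating idea concrete, here is a minimal Python sketch of one MALA-within-Gibbs sweep. It is not the paper's implementation: the target interface (`logpi`, `grad_logpi`), the block partition, and the fixed step size are all assumptions, and for simplicity the sketch calls the full-dimensional density and gradient, whereas with sparse conditional structure each block's acceptance ratio would only involve that block's neighbors.

```python
import numpy as np

def mala_within_gibbs_sweep(x, blocks, logpi, grad_logpi, step, rng):
    """One sweep of MALA-within-Gibbs: for each block of coordinates,
    propose a Langevin move for that block (holding all other blocks
    fixed) and accept or reject with a Metropolis-Hastings test."""
    x = x.copy()
    for idx in blocks:  # idx: array of coordinate indices in one block
        g = grad_logpi(x)[idx]
        prop = x.copy()
        prop[idx] = x[idx] + 0.5 * step * g \
            + np.sqrt(step) * rng.standard_normal(len(idx))
        gp = grad_logpi(prop)[idx]
        # log densities of the forward and reverse block proposals
        fwd = -np.sum((prop[idx] - x[idx] - 0.5 * step * g) ** 2) / (2 * step)
        rev = -np.sum((x[idx] - prop[idx] - 0.5 * step * gp) ** 2) / (2 * step)
        if np.log(rng.uniform()) < logpi(prop) - logpi(x) + rev - fwd:
            x = prop  # accept the block update
    return x
```

For a d-dimensional target, `blocks = np.array_split(np.arange(d), d // 2)` would recover pairwise updates; the paper's point is that, under sparse conditional structure, the acceptance rate of each such block move need not degrade as d grows.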
Exploiting network topology for large-scale inference of nonlinear reaction models
The development of chemical reaction models aids understanding and prediction
in areas ranging from biology to electrochemistry and combustion. A systematic
approach to building reaction network models uses observational data not only
to estimate unknown parameters, but also to learn model structure. Bayesian
inference provides a natural approach to this data-driven construction of
models. Yet traditional Bayesian model inference methodologies that numerically
evaluate the evidence for each model are often infeasible for nonlinear
reaction network inference, as the number of plausible models can be
combinatorially large. Alternative approaches based on model-space sampling can
enable large-scale network inference, but their realization presents many
challenges. In this paper, we present new computational methods that make
large-scale nonlinear network inference tractable. First, we exploit the
topology of networks describing potential interactions among chemical species
to design improved "between-model" proposals for reversible-jump Markov chain
Monte Carlo. Second, we introduce a sensitivity-based determination of move
types which, when combined with network-aware proposals, yields significant
additional gains in sampling performance. These algorithms are demonstrated on
inference problems drawn from systems biology, with nonlinear differential
equation models of species interactions.
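As a rough illustration of what a network-aware between-model proposal might look like (the paper's actual proposals are more elaborate; the `species_of` mapping, the overlap-based weighting, and the birth-only move below are all illustrative assumptions):

```python
import numpy as np

def propose_reaction_addition(active, candidates, species_of, rng):
    """Toy 'birth' proposal for reversible-jump MCMC over reaction
    networks: inactive reactions that share species with the current
    model are proposed with higher probability."""
    inactive = [r for r in candidates if r not in active]
    touched = {s for r in active for s in species_of[r]}
    # weight each candidate by 1 + its species overlap with the model
    w = np.array([1.0 + len(set(species_of[r]) & touched) for r in inactive])
    p = w / w.sum()
    k = rng.choice(len(inactive), p=p)
    return inactive[k], np.log(p[k])  # proposed reaction, log q(birth)
```

The returned log proposal probability would enter the reversible-jump acceptance ratio together with the matching death move and the likelihoods of the two models.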
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
\textit{spectral tensor-train decomposition} combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of Genz functions with dimension up to , and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online.
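A functional TT with polynomial cores can be evaluated as a product of matrix-valued univariate functions. The following sketch shows the basic contraction, assuming cores stored as Legendre coefficient arrays (the storage layout is an assumption, not the paper's data structure):

```python
import numpy as np
from numpy.polynomial import legendre

def eval_functional_tt(coeffs, x):
    """Evaluate f(x) = G1(x1) G2(x2) ... Gd(xd), where core Gk(xk) is an
    (r_{k-1} x r_k) matrix whose entries are Legendre series; coeffs[k]
    has shape (r_{k-1}, n_k, r_k), holding n_k coefficients per entry."""
    result = np.ones((1, 1))
    for k, C in enumerate(coeffs):
        n = C.shape[1]
        # values of the first n Legendre polynomials at x[k]
        basis = legendre.legvander(np.atleast_1d(x[k]), n - 1)[0]
        G = np.einsum('inj,n->ij', C, basis)  # the core matrix at x[k]
        result = result @ G
    return result[0, 0]

# hypothetical rank-(1,3,3,1) decomposition of a trivariate function
rng = np.random.default_rng(0)
coeffs = [rng.normal(size=(1, 4, 3)),
          rng.normal(size=(3, 4, 3)),
          rng.normal(size=(3, 4, 1))]
print(eval_functional_tt(coeffs, [0.1, -0.5, 0.7]))
```

Spectral convergence comes from the Legendre expansions of the cores; the TT ranks (here 3) control the cost of the matrix products, which grows only linearly in dimension.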
Efficient Localization of Discontinuities in Complex Computational Simulations
Surrogate models for computational simulations are input-output
approximations that allow computationally intensive analyses, such as
uncertainty propagation and inference, to be performed efficiently. When a
simulation output does not depend smoothly on its inputs, the error and
convergence rate of many approximation methods deteriorate substantially. This
paper details a method for efficiently localizing discontinuities in the input
parameter domain, so that the model output can be approximated as a piecewise
smooth function. The approach comprises an initialization phase, which uses
polynomial annihilation to assign function values to different regions and thus
seed an automated labeling procedure, followed by a refinement phase that
adaptively updates a kernel support vector machine representation of the
separating surface via active learning. The overall approach avoids structured
grids and exploits any available simplicity in the geometry of the separating
surface, thus reducing the number of model evaluations required to localize the
discontinuity. The method is illustrated on examples of up to eleven
dimensions, including algebraic models and ODE/PDE systems, and demonstrates
improved scaling and efficiency over other discontinuity localization
approaches.
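A minimal sketch of the refinement phase, assuming scikit-learn's `SVC` for the kernel support vector machine and a crude thresholding of the model output in place of the paper's polynomial-annihilation labeling:

```python
import numpy as np
from sklearn.svm import SVC

def localize_discontinuity(f, lb, ub, n_init=40, n_rounds=10, batch=5, seed=0):
    """Active-learning sketch: label samples by which side of the jump the
    model output falls on, fit a kernel SVM, then repeatedly query the
    candidate points closest to the current separating surface."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_init, len(lb)))
    # crude two-region labeling; assumes both regions appear in the
    # initial sample (the paper seeds labels via polynomial annihilation)
    y = np.array([f(x) for x in X]) > 0.0
    svm = SVC(kernel='rbf', C=1e3)
    for _ in range(n_rounds):
        svm.fit(X, y)
        cand = rng.uniform(lb, ub, size=(500, len(lb)))
        # query the candidates nearest the decision boundary
        idx = np.argsort(np.abs(svm.decision_function(cand)))[:batch]
        yn = np.array([f(x) for x in cand[idx]]) > 0.0
        X, y = np.vstack([X, cand[idx]]), np.concatenate([y, yn])
    return svm
```

The fitted decision surface then defines the pieces on which the model output is approximated as a smooth function; concentrating queries near the surface is what keeps the number of model evaluations low.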
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data.
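As a caricature of the local-approximation idea (the paper's refinement criteria, error control, and local quadratic or Gaussian process models are more sophisticated; the decaying refinement rate below is an assumption meant only to suggest how asymptotic exactness can be preserved):

```python
import numpy as np

class LocalApproxLogLike:
    """Caricature of local-approximation MCMC: replace an expensive
    log-likelihood with a local linear fit through the nearest cached
    evaluations, refining the cache at a rate that decays over time."""
    def __init__(self, loglike, X0, seed=0):
        self.loglike = loglike
        self.rng = np.random.default_rng(seed)
        self.X = [np.asarray(x, float) for x in X0]
        self.y = [loglike(x) for x in self.X]
        self.t = 0

    def __call__(self, x, k=10):
        x = np.asarray(x, float)
        self.t += 1
        # occasionally pay for a true evaluation and grow the cache
        if self.rng.random() < 1.0 / np.sqrt(self.t):
            self.X.append(x)
            self.y.append(self.loglike(x))
        X, y = np.asarray(self.X), np.asarray(self.y)
        near = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
        # local linear model centered at x; the intercept is the fit at x
        A = np.hstack([np.ones((len(near), 1)), X[near] - x])
        coef, *_ = np.linalg.lstsq(A, y[near], rcond=None)
        return coef[0]
```

Dropping such an object into a Metropolis-Hastings loop in place of the true log-likelihood is the basic pattern; the paper's contribution is establishing when and how the resulting chain still samples the exact posterior.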
A continuous analogue of the tensor-train decomposition
We develop new approximation algorithms and data structures for representing
and computing with multivariate functions using the functional tensor-train
(FT), a continuous extension of the tensor-train (TT) decomposition. The FT
represents functions using a tensor-train ansatz by replacing the
three-dimensional TT cores with univariate matrix-valued functions. The main
contribution of this paper is a framework to compute the FT that employs
adaptive approximations of univariate fibers, and that is not tied to any
tensorized discretization. The algorithm can be coupled with any univariate
linear or nonlinear approximation procedure. We demonstrate that this approach
can generate multivariate function approximations that are several orders of
magnitude more accurate, for the same cost, than those based on the
conventional approach of compressing the coefficient tensor of a tensor-product
basis. Our approach is in the spirit of other continuous computation packages
such as Chebfun, and yields an algorithm which requires the computation of
"continuous" matrix factorizations such as the LU and QR decompositions of
vector-valued functions. To support these developments, we describe continuous
versions of an approximate maximum-volume cross approximation algorithm and of
a rounding algorithm that re-approximates an FT by one of lower ranks. We
demonstrate that our technique improves accuracy and robustness, compared to TT
and quantics-TT approaches with fixed parameterizations, of high-dimensional
integration, differentiation, and approximation of functions with local
features such as discontinuities and other nonlinearities.
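The "continuous" factorizations can be pictured as ordinary matrix factorizations with the discrete inner product replaced by an L2 one. Below is a sketch of a continuous QR via modified Gram-Schmidt, where the quadrature-based discretization and the `np.interp` reconstruction are simplifying assumptions (the paper works with adaptive univariate approximations instead):

```python
import numpy as np

def continuous_qr(fs, a=-1.0, b=1.0, nquad=64):
    """Orthonormalize univariate functions fs under the L2 inner product
    on [a, b] (modified Gram-Schmidt), with the integrals approximated by
    Gauss-Legendre quadrature. Returns callables q_i and the R factor."""
    xq, wq = np.polynomial.legendre.leggauss(nquad)
    xq = 0.5 * (b - a) * xq + 0.5 * (b + a)  # map nodes to [a, b]
    wq = 0.5 * (b - a) * wq
    F = [np.array([f(t) for t in xq]) for f in fs]  # sampled functions
    Q, R = [], np.zeros((len(fs), len(fs)))
    for i, fi in enumerate(F):
        v = fi.copy()
        for j, qj in enumerate(Q):
            R[j, i] = np.sum(wq * qj * v)  # L2 inner product <q_j, v>
            v -= R[j, i] * qj
        R[i, i] = np.sqrt(np.sum(wq * v * v))
        Q.append(v / R[i, i])
    # crude callable reconstruction by interpolation at quadrature nodes
    return [lambda t, q=q: np.interp(t, xq, q) for q in Q], R
```

Factorizations of this kind, applied to the vector-valued fibers of an FT, are the building blocks of the cross-approximation and rank-rounding algorithms the paper describes.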
