Analysis of the Gibbs sampler for hierarchical inverse problems
Many inverse problems arising in applications come from continuum models
where the unknown parameter is a field. In practice the unknown field is
discretized, resulting in a problem in R^N, with an understanding
that refining the discretization, that is increasing N, will often be
desirable. In the context of Bayesian inversion this situation suggests the
importance of two issues: (i) defining hyper-parameters in such a way that they
are interpretable in the continuum limit and so that their
values may be compared between different discretization levels; (ii)
understanding the efficiency of algorithms for probing the posterior
distribution, as a function of N for N large. Here we address these two issues in
the context of linear inverse problems subject to additive Gaussian noise
within a hierarchical modelling framework based on a Gaussian prior for the
unknown field and an inverse-gamma prior for a hyper-parameter, namely the
amplitude of the prior variance. The structure of the model is such that the
Gibbs sampler can be easily implemented for probing the posterior distribution.
Subscribing to the dogma that one should think infinite-dimensionally before
implementing in finite dimensions, we present function space intuition and
provide rigorous theory showing that as N increases, the component of the
Gibbs sampler for sampling the amplitude of the prior variance becomes
increasingly slow. We discuss a reparametrization of the prior variance that
is robust with respect to the increase in dimension; we give numerical
experiments which exhibit that our reparametrization prevents the slowing down.
Our intuition on the behaviour of the prior hyper-parameter, with and without
reparametrization, is sufficiently general to include a broad class of
nonlinear inverse problems as well as other families of hyper-priors.
Comment: to appear, SIAM/ASA Journal on Uncertainty Quantification
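The conjugate structure the abstract relies on can be sketched in a few lines. The toy implementation below (all dimensions, parameter values, and variable names are illustrative assumptions, not the authors' setup) alternates the two Gibbs steps for a linear model y = A u + noise with prior u | delta ~ N(0, delta^{-1} C) and a gamma prior on the precision delta, which is equivalent to an inverse-gamma prior on the amplitude of the prior variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small setup: A, C, sigma2, alpha, beta are illustrative choices.
N, M = 50, 30                      # discretization level N, number of observations M
A = rng.standard_normal((M, N))    # linear forward operator
C = np.eye(N)                      # prior covariance (identity for simplicity)
sigma2 = 0.1                       # known additive Gaussian noise variance
alpha, beta = 1.0, 1.0             # gamma hyper-prior on the precision delta

u_true = rng.standard_normal(N)
y = A @ u_true + np.sqrt(sigma2) * rng.standard_normal(M)

Cinv = np.linalg.inv(C)
delta = 1.0
samples_delta = []
for _ in range(500):
    # Step 1: u | delta, y is Gaussian with precision A^T A / sigma2 + delta * C^{-1}
    P = A.T @ A / sigma2 + delta * Cinv
    L = np.linalg.cholesky(P)
    m = np.linalg.solve(P, A.T @ y / sigma2)
    u = m + np.linalg.solve(L.T, rng.standard_normal(N))  # exact Gaussian draw
    # Step 2: delta | u is gamma, by conjugacy of the inverse-gamma variance prior
    delta = rng.gamma(alpha + N / 2, 1.0 / (beta + 0.5 * u @ Cinv @ u))
    samples_delta.append(delta)
```

The slowdown the paper analyses shows up in the delta-chain: as N grows, the gamma update concentrates and the chain for the hyper-parameter mixes more and more slowly unless the prior variance is reparametrized.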
Temporal breakdown and Borel resummation in the complex Langevin method
We reexamine the Parisi-Klauder conjecture for complex e^{i\theta/2} \phi^4
measures with a Wick rotation angle 0 <= \theta/2 < \pi/2 interpolating between
Euclidean and Lorentzian signature. Our main result is that the asymptotics for
short stochastic times t encapsulates information also about the equilibrium
aspects. The moments evaluated with the complex measure and with the real
measure defined by the stochastic Langevin equation have the same t -> 0
asymptotic expansion which is shown to be Borel summable. The Borel transform
correctly reproduces the time dependent moments of the complex measure for all
t, including their t -> infinity equilibrium values. On the other hand the
results of a direct numerical simulation of the Langevin moments are found to
disagree with the `correct' result for t larger than a finite t_c. The
breakdown time t_c increases power-like with decreasing strength of the noise's
imaginary part, but cannot be excluded to be finite for purely real noise. To
ascertain the discrepancy we also compute the real equilibrium distribution for
complex noise explicitly and verify that its moments differ from those obtained
with the complex measure.
Comment: title changed, results on parameter dependence of t_c added,
exposition improved. 39 pages, 7 figures
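As a minimal illustration of the method under discussion, here is a zero-dimensional sketch of a complex Langevin simulation for an action S(phi) = phi^2/2 + (g/4) e^{i theta/2} phi^4 with real noise; the coupling, Wick angle, step size, and trajectory count are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Complex action S(phi) = phi^2/2 + (g/4) * exp(i*theta/2) * phi^4 (toy parameters)
g, theta = 0.1, 0.3
def drift(phi):
    # complex Langevin drift -dS/dphi, evaluated on complexified phi
    return -(phi + g * np.exp(1j * theta / 2) * phi**3)

dt, n_steps, n_traj = 1e-3, 1000, 2000
phi = np.zeros(n_traj, dtype=complex)
for _ in range(n_steps):
    # purely real noise: phi still wanders into the complex plane via the drift
    eta = rng.standard_normal(n_traj)
    phi = phi + drift(phi) * dt + np.sqrt(2 * dt) * eta

# Langevin estimate of the second moment at stochastic time t = n_steps * dt
m2 = np.mean(phi**2)
```

Comparing such ensemble averages at increasing stochastic time t against the moments of the complex measure is exactly the kind of direct numerical check whose breakdown beyond a finite t_c the paper reports.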
Towards a unified lattice kinetic scheme for relativistic hydrodynamics
We present a systematic derivation of relativistic lattice kinetic equations
for finite-mass particles, reaching close to the zero-mass ultra-relativistic
regime treated in the previous literature. Starting from an expansion of the
Maxwell-Juettner distribution on orthogonal polynomials, we perform a
Gauss-type quadrature procedure and discretize the relativistic Boltzmann
equation on space-filling Cartesian lattices. The model is validated through
numerical comparison with standard benchmark tests and solvers in relativistic
fluid dynamics such as Boltzmann approach multiparton scattering (BAMPS) and
previous relativistic lattice Boltzmann models. This work provides a
significant step towards the formulation of a unified relativistic lattice
kinetic scheme, covering both the massive and near-massless particle regimes.
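The quadrature step can be illustrated in one dimension, well away from the paper's actual relativistic construction: a Gauss-Laguerre rule with weight e^{-x} (a stand-in for the Maxwell-Juettner exponential) reproduces continuum moments exactly up to the rule's degree, which is the reason a small discrete set of momenta can carry the hydrodynamic moments:

```python
import numpy as np
from math import factorial

# Gauss-Laguerre nodes/weights: exact for polynomials of degree <= 2n-1
# against the weight e^{-x} on [0, inf)
n = 8
x, w = np.polynomial.laguerre.laggauss(n)

# Continuum moments int_0^inf x^k e^{-x} dx = k!; the discrete node set
# reproduces them exactly within the quadrature's degree of exactness
moments = [w @ x**k for k in range(2 * n)]
for k, mk in enumerate(moments):
    assert np.isclose(mk, factorial(k), rtol=1e-8)
```

In the lattice kinetic setting the same idea is applied to the expansion of the Maxwell-Juettner distribution on orthogonal polynomials, with the quadrature nodes chosen to lie on a space-filling Cartesian lattice.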
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
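The two-stage scheme the abstract describes, randomized range finding followed by a deterministic factorization of the compressed matrix, can be sketched as follows; this is a basic variant, and the oversampling p and power-iteration count q are common tuning knobs rather than values fixed by the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_svd(A, k, p=10, q=1):
    """Rank-k randomized SVD: random range finding + small deterministic SVD."""
    m, n = A.shape
    # Stage A: random sampling identifies a subspace capturing most of A's action
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    for _ in range(q):                 # optional power iterations sharpen accuracy
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sampled range
    # Stage B: compress to the subspace, then factor the small matrix deterministically
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: recover a rank-5 factorization of an exactly rank-5 matrix
X, Y = rng.standard_normal((100, 5)), rng.standard_normal((5, 80))
A = X @ Y
U, s, Vt = randomized_svd(A, 5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Because the expensive passes over A are just matrix-matrix products, the same code reorganizes naturally for sparse inputs, multiprocessor execution, or the constant-pass out-of-core regime mentioned in points (ii) and (iii).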