
    Probabilistic Linear Solvers: A Unifying View

    Several recent works have developed a new, probabilistic interpretation of numerical algorithms for solving linear systems, in which the solution is inferred in a Bayesian framework, either directly or by inferring the unknown action of the matrix inverse. These approaches have typically focused on replicating the behavior of the conjugate gradient method as a prototypical iterative method. In this work, surprisingly general conditions for the equivalence of these disparate methods are presented. We also describe connections between probabilistic linear solvers and projection methods for linear systems, providing a probabilistic interpretation of a far more general class of iterative methods; in particular, this provides such an interpretation of the generalised minimum residual method (GMRES). A probabilistic view of preconditioning is also introduced. These developments unify the literature on probabilistic linear solvers and provide foundational connections to the literature on iterative solvers for linear systems.
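To make the shared structure concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) of the Gaussian conditioning step that underlies these solvers: a Gaussian prior on the solution x is conditioned on projected observations S^T A x = S^T b, and the choice of search directions S is what distinguishes one method from another. All names are ours.

```python
import numpy as np

def bayes_linear_solve(A, b, S, x0=None, Sigma0=None):
    """Gaussian posterior over x in Ax = b after observing S.T @ (b - A @ x).

    Prior: x ~ N(x0, Sigma0). The observations y = S.T @ A @ x (with observed
    value S.T @ b) are linear in x, so conditioning is in closed form.
    """
    n = A.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    Sigma0 = np.eye(n) if Sigma0 is None else Sigma0

    Lam = S.T @ A @ Sigma0 @ A.T @ S              # covariance of observations
    gain = Sigma0 @ A.T @ S @ np.linalg.inv(Lam)  # Kalman-style gain
    mean = x0 + gain @ (S.T @ (b - A @ x0))       # posterior mean estimate of x
    cov = Sigma0 - gain @ S.T @ A @ Sigma0        # posterior covariance
    return mean, cov

# With Krylov search directions and a suitable prior covariance, the posterior
# mean reproduces CG iterates; generic directions give a generic projection method.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)                       # symmetric positive definite
b = rng.standard_normal(5)
S = rng.standard_normal((5, 2))                   # two search directions
mean, cov = bayes_linear_solve(A, b, S)
```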

    Probabilistic Gradients for Fast Calibration of Differential Equation Models

    Calibration of large-scale differential equation models to observational or experimental data is a widespread challenge throughout the applied sciences and engineering. A crucial bottleneck in state-of-the-art calibration methods is the calculation of local sensitivities, i.e. derivatives of the loss function with respect to the estimated parameters, which often necessitates several numerical solves of the underlying system of partial or ordinary differential equations. In this paper we present a new probabilistic approach to computing local sensitivities. The proposed method has several advantages over classical methods. First, it operates within a constrained computational budget and provides a probabilistic quantification of the uncertainty incurred in the sensitivities by this constraint. Second, information from previous sensitivity estimates can be recycled in subsequent computations, reducing the overall computational effort of iterative gradient-based calibration methods. The methodology is applied to two challenging test problems and compared against classical methods.
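The paper's construction is not reproduced here, but the flavour of a probabilistic sensitivity can be conveyed with a standard Gaussian process stand-in (our illustration; the kernel, names, and setup are assumptions): place a GP on the loss as a function of one parameter and read off a Gaussian posterior over its derivative from a fixed budget of loss evaluations.

```python
import numpy as np

def gp_gradient_posterior(xs, ys, xstar, ell=0.5, jitter=1e-10):
    """Gaussian posterior over dL/dx at xstar, from loss evaluations ys = L(xs).

    A GP prior with RBF kernel is placed on L; since differentiation is linear,
    the derivative at xstar is Gaussian given the observed values.
    """
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))
    K = k(xs, xs) + jitter * np.eye(len(xs))
    kvec = k(np.array([xstar]), xs)[0]
    dk = -(xstar - xs) / ell ** 2 * kvec                # cov(dL/dx(xstar), L(xs))
    mean = dk @ np.linalg.solve(K, ys)                  # sensitivity estimate
    var = 1.0 / ell ** 2 - dk @ np.linalg.solve(K, dk)  # its remaining uncertainty
    return mean, var

# Example: L(x) = sin(x); the true sensitivity at 0.3 is cos(0.3) ~ 0.955.
xs = np.linspace(-1.0, 1.0, 7)
mean, var = gp_gradient_posterior(xs, np.sin(xs), 0.3)
```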

    Bayesian probabilistic numerical methods

    The increasing complexity of computer models used to solve contemporary inference problems has been set against a decreasing rate of improvement in processor speed in recent years. As a result, numerical error is a challenge for practitioners in many of these problems. However, while there has been a recent push towards rigorous quantification of uncertainty in inference problems based upon computer models, numerical error is still largely required to be driven down to a level at which its impact on inferences is negligible. Probabilistic numerical methods have been proposed to alleviate this; these are a class of numerical methods that return a probabilistic quantification of their own numerical error. The attraction of such methods is clear: if numerical error in the computer model and uncertainty in an inference problem are quantified in a unified framework, then careful tuning of numerical methods to mitigate the impact of numerical error on inferences could become unnecessary. In this thesis we introduce the class of Bayesian probabilistic numerical methods, whose uncertainty quantification has a strict and rigorous Bayesian interpretation. A number of examples of conjugate Bayesian probabilistic numerical methods are presented before we develop analysis and algorithms for the general case, in which the posterior distribution does not possess a closed form. We conclude by studying how these methods can be rigorously composed to yield Bayesian pipelines of computation. Throughout, we present applications of the developed methods to real-world inference problems and indicate that the uncertainty quantification they provide can be of significant practical use.
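As an illustration of the conjugate case (our own standard example, Bayesian quadrature, not text from the thesis): a GP prior with an RBF kernel on the integrand makes the posterior over the integral Gaussian in closed form, so the returned variance quantifies the numerical error of the rule.

```python
import numpy as np
from math import erf, exp, sqrt, pi

def bayes_quadrature(f, a, b, xs, ell=0.3):
    """Gaussian posterior over I = int_a^b f(x) dx, a conjugate Bayesian PN method.

    A GP prior with RBF kernel on f makes the posterior over I Gaussian; the
    posterior variance quantifies the numerical error of the quadrature rule.
    """
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    k = lambda u, v: np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * ell ** 2))
    K = k(xs, xs) + 1e-10 * np.eye(len(xs))
    # Kernel mean z_i = int_a^b k(x, x_i) dx (closed form for the RBF kernel).
    z = sqrt(2 * pi) * ell * np.array(
        [Phi((b - xi) / ell) - Phi((a - xi) / ell) for xi in xs])
    # Prior variance of I: int_a^b int_a^b k(x, x') dx dx'.
    r = b - a
    kk = (2 * r * sqrt(2 * pi) * ell * (Phi(r / ell) - 0.5)
          - 2 * ell ** 2 * (1 - exp(-r ** 2 / (2 * ell ** 2))))
    w = np.linalg.solve(K, z)            # quadrature weights
    return w @ f(xs), kk - w @ z         # posterior mean and variance of I

# Example: int_0^1 sin(x) dx = 1 - cos(1) ~ 0.4597.
mean, var = bayes_quadrature(np.sin, 0.0, 1.0, np.linspace(0.0, 1.0, 8))
```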

    BayesCG As An Uncertainty Aware Version of CG

    The Bayesian Conjugate Gradient method (BayesCG) is a probabilistic generalization of the Conjugate Gradient method (CG) for solving linear systems with real symmetric positive definite coefficient matrices. We present a CG-based implementation of BayesCG with a structure-exploiting prior distribution. The BayesCG output consists of CG iterates and posterior covariances that can be propagated to subsequent computations. The covariances are low-rank and maintained in factored form. This allows easy generation of accurate samples to probe uncertainty in subsequent computations. Numerical experiments confirm the effectiveness of the posteriors and their low-rank approximations.
    Comment: 31 pages including supplementary material (main paper 22 pages, supplement 9 pages). Computer codes are available at https://github.com/treid5/ProbNumCG_Sup
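The payoff of a factored low-rank covariance can be illustrated generically (a sketch under the assumption Sigma = F F^T with a tall-and-skinny factor F; names are ours, not the paper's API):

```python
import numpy as np

def sample_factored_gaussian(mu, F, n_samples, rng):
    """Draw samples from N(mu, F @ F.T) without forming the covariance.

    For an n x r factor F with r << n, each sample costs O(n r), versus the
    O(n^3) Cholesky factorization a dense n x n covariance would require.
    """
    Z = rng.standard_normal((F.shape[1], n_samples))  # r x n_samples
    return mu[:, None] + F @ Z                        # columns are samples

# Example: push posterior samples of x through a downstream functional
# g(x) = c.T @ x to probe the induced uncertainty in g.
rng = np.random.default_rng(0)
n, r = 1000, 5
mu = rng.standard_normal(n)
F = rng.standard_normal((n, r))                       # low-rank posterior factor
c = rng.standard_normal(n)
g_draws = c @ sample_factored_gaussian(mu, F, 100, rng)
```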

    Optimality Criteria for Probabilistic Numerical Methods

    It is well understood that Bayesian decision theory and average-case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that standard approaches from the decision-theoretic framework are neither appropriate nor sufficient. Instead, we consider a particular optimality criterion from Bayesian experimental design and study the optimal information it implies in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.
    Comment: Prepared for the proceedings of the RICAM workshop on Multivariate Algorithms and Information-Based Complexity, November 2018
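For orientation, the two criteria being contrasted can be written schematically (notation ours, not the paper's): average-case analysis ranks methods by expected loss under the prior, while an experimental-design criterion such as expected information gain selects the information itself.

```latex
% Average-case / decision-theoretic optimality: pick the method A that
% minimises expected loss under the prior \mu over problem instances f,
% where Q(f) is the quantity of interest and I(f) the information used.
A^{\star} = \operatorname*{arg\,min}_{A}\;
  \mathbb{E}_{f \sim \mu}\big[\, L\big(Q(f),\, A(I(f))\big) \,\big]

% Experimental-design optimality: pick the information operator I itself,
% e.g. by maximising the expected information gain about Q(f).
I^{\star} = \operatorname*{arg\,max}_{I}\;
  \mathbb{E}_{y}\big[\, \mathrm{KL}\big(\mu_{Q \mid y} \,\big\|\, \mu_{Q}\big) \,\big]
```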