190 research outputs found

    A modern retrospective on probabilistic numerics

    This article attempts to place the emergence of probabilistic numerics as a mathematical–statistical research field within its historical context and to explore how its gradual development can be related both to applications and to a modern formal treatment. We highlight in particular the parallel contributions of Sul′din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future work.

    Bayesian Inference of Log Determinants

    The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. In particular, we combine prior knowledge in the form of bounds from matrix theory and evidence derived from stochastic trace estimation to obtain probabilistic estimates for the log-determinant and its associated uncertainty within a given computational budget. Beyond its novelty and theoretic appeal, the performance of our proposal is competitive with state-of-the-art approaches to approximating the log-determinant, while also quantifying the uncertainty due to budget-constrained evidence.
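
    The paper's full construction combines matrix-theoretic bounds with Bayesian reasoning over stochastic trace estimates. As a rough sketch of the trace-estimation ingredient alone (not the authors' method), one can exploit the identity log det(K) = tr(log K) and estimate the trace with Hutchinson's Rademacher probes, applying a truncated series for the matrix logarithm; the function name, defaults, and scaling strategy below are illustrative assumptions.

        import numpy as np

        def logdet_hutchinson(K, n_probes=30, order=25, seed=None):
            # Estimate log det(K) for symmetric positive-definite K via
            # log det(K) = tr(log K) and Hutchinson's estimator
            # tr(A) ~ mean of z^T A z over random sign vectors z.
            # log(Ks) is expanded as -sum_{j>=1} (I - Ks)^j / j, which is
            # valid once K is scaled so its spectrum lies inside (0, 2).
            rng = np.random.default_rng(seed)
            n = K.shape[0]
            # Cheap Gershgorin-style upper bound on the largest eigenvalue.
            scale = 1.1 * np.abs(K).sum(axis=1).max()
            Ks = K / scale
            samples = np.empty(n_probes)
            for i in range(n_probes):
                z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
                v, acc = z.copy(), 0.0
                for j in range(1, order + 1):
                    v = v - Ks @ v                    # v = (I - Ks)^j z
                    acc -= z @ v / j                  # term of z^T log(Ks) z
                samples[i] = acc
            # Undo the scaling: log det(K) = log det(Ks) + n log(scale).
            mean = samples.mean() + n * np.log(scale)
            stderr = samples.std(ddof=1) / np.sqrt(n_probes)
            return mean, stderr

    Each probe needs only matrix–vector products, so the cost scales with the number of probes and series terms times the cost of a matvec, rather than the cubic cost of an exact factorization; the returned standard error is a crude stand-in for the calibrated posterior uncertainty the paper derives.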

    Convergence Rates of Gaussian ODE Filters

    A recently-introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution $x$ and its first $q$ derivatives a priori as a Gauss–Markov process $\boldsymbol{X}$, which is then iteratively conditioned on information about $\dot{x}$. This article establishes worst-case local convergence rates of order $q+1$ for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order $q$ in the case of $q=1$ and an integrated Brownian motion prior, and analyses how inaccurate information on $\dot{x}$ coming from approximate evaluations of $f$ affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to $q \in \{2,3,\dots\}$.
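
    For concreteness, the globally convergent case analysed here ($q=1$ with an integrated Brownian motion prior) admits a short Kalman-filter implementation for a scalar IVP. The sketch below uses the zeroth-order linearisation sometimes called EK0, i.e. it conditions the predicted state on the "observation" that the derivative component equals $f$ evaluated at the predicted mean; the names, fixed step size, and diffusion parameter are our illustrative choices, not the paper's.

        import numpy as np

        def ode_filter_ibm1(f, t0, x0, t_end, h, sigma2=1.0):
            # Gaussian ODE filter for x' = f(t, x) with state (x, x') and a
            # once-integrated Brownian motion prior (q = 1).
            A = np.array([[1.0, h], [0.0, 1.0]])            # IBM transition
            Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                                   [h**2 / 2, h]])          # process noise
            H = np.array([[0.0, 1.0]])                      # observe x' only
            m = np.array([x0, f(t0, x0)])                   # exact initial state
            P = np.zeros((2, 2))
            t, ts, means, stds = t0, [t0], [x0], [0.0]
            while t < t_end - 1e-12:
                m_pred = A @ m                              # predict under prior
                P_pred = A @ P @ A.T + Q
                t += h
                # Condition on x'(t) = f(t, x(t)), with f evaluated at the
                # predicted mean (EK0: the Jacobian of f is replaced by zero).
                z = f(t, m_pred[0]) - (H @ m_pred)[0]       # innovation
                S = (H @ P_pred @ H.T)[0, 0]                # innovation variance
                K = (P_pred @ H.T)[:, 0] / S                # Kalman gain
                m = m_pred + K * z
                P = P_pred - np.outer(K, K) * S
                ts.append(t); means.append(m[0]); stds.append(np.sqrt(P[0, 0]))
            return np.array(ts), np.array(means), np.array(stds)

        # Logistic growth: the filter mean should track the sigmoid solution,
        # with credible intervals contracting as h shrinks.
        ts, xs, ss = ode_filter_ibm1(lambda t, x: x * (1 - x), 0.0, 0.1, 1.0, 0.05)

    Per the abstract, this $q=1$ filter attains global convergence of order $q$, and in the globally convergent case the posterior standard deviations contract at the same rate as the truncation error, which is what makes the credible intervals well calibrated.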