Black box probabilistic numerics
Peer reviewed
A modern retrospective on probabilistic numerics
This article attempts to place the emergence of probabilistic numerics as a mathematical–statistical research field within its historical context and to explore how its gradual development can be related both to applications and to a modern formal treatment. We highlight in particular the parallel contributions of Sul′din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future work.
Bayesian Inference of Log Determinants
The log-determinant of a kernel matrix appears in a variety of machine
learning problems, ranging from determinantal point processes and generalized
Markov random fields, through to the training of Gaussian processes. Exact
calculation of this term is often intractable when the size of the kernel
matrix exceeds a few thousand. In the spirit of probabilistic numerics, we
reinterpret the problem of computing the log-determinant as a Bayesian
inference problem. In particular, we combine prior knowledge in the form of
bounds from matrix theory and evidence derived from stochastic trace estimation
to obtain probabilistic estimates for the log-determinant and its associated
uncertainty within a given computational budget. Beyond its novelty and
theoretic appeal, the performance of our proposal is competitive with
state-of-the-art approaches to approximating the log-determinant, while also
quantifying the uncertainty due to budget-constrained evidence.
Comment: 12 pages, 3 figures
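The stochastic-trace-estimation evidence mentioned in this abstract can be sketched as a plain Hutchinson estimator of log det A = tr(log A) with Rademacher probe vectors. This is a minimal illustration, not the paper's Bayesian procedure: the test matrix, probe count, and the exact eigendecomposition-based matrix logarithm below are all illustrative assumptions; large-scale methods would replace the eigendecomposition with Lanczos or Chebyshev approximations of log(A) @ z.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric positive-definite "kernel-like" matrix.
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)

# Apply log(A) to a vector exactly via an eigendecomposition.
# (In large-scale settings this matvec is what gets approximated.)
w, V = np.linalg.eigh(A)
def logA_matvec(z):
    return V @ (np.log(w) * (V.T @ z))

# Hutchinson estimator: for Rademacher z, E[z^T log(A) z] = tr(log A).
num_probes = 200
samples = []
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)
    samples.append(z @ logA_matvec(z))

est = np.mean(samples)               # stochastic estimate of log det A
exact = np.sum(np.log(w))            # ground truth for comparison
```

The spread of `samples` around `exact` is the kind of budget-dependent evidence that the abstract's Bayesian treatment combines with matrix-theoretic bounds to produce calibrated uncertainty.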
Convergence Rates of Gaussian ODE Filters
A recently-introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution $x$ and its first $q$ derivatives \emph{a priori} as a Gauss--Markov process $X$, which is then iteratively conditioned on information about $\dot{x}$. This article establishes worst-case local convergence rates of order $q+1$ for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order $q$ in the case of $q=1$ and an integrated Brownian motion prior, and analyses how inaccurate information on $\dot{x}$ coming from approximate evaluations of $f$ affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to $q \geq 2$.
Comment: 26 pages, 5 figures