We present a new algorithm for automatically bounding the Taylor remainder
series. In the special case of a scalar function $f : \mathbb{R} \to \mathbb{R}$, our algorithm takes as input a reference point $x_0 \in \mathbb{R}$, a trust region
$[a, b]$, and an integer $k \ge 1$, and returns an interval $I$ such that $f(x) - \sum_{i=0}^{k-1} \frac{1}{i!} f^{(i)}(x_0) (x - x_0)^i \in I (x - x_0)^k$ for
all $x \in [a, b]$. As in automatic differentiation, the function $f$ is
provided to the algorithm in symbolic form, and must be composed of known
atomic functions.
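To make the form of the bound concrete, here is a minimal sketch of our own (not the paper's algorithm, which derives sharper polynomial bounds): for $f = \exp$, the classical Lagrange remainder already yields a valid interval $I$, since the $k$-th derivative of exp is exp itself and hence monotone on $[a, b]$. Function names are illustrative.

```python
import math

def exp_remainder_interval(a, b, k):
    """Interval I such that exp(x) - P_{k-1}(x) lies in I * (x - x0)^k on [a, b].

    By the Lagrange remainder, the remainder equals exp(xi) / k! * (x - x0)^k
    for some xi between x0 and x; since exp is increasing, its range over
    [a, b] is [e^a, e^b], giving the interval below.
    """
    return math.exp(a) / math.factorial(k), math.exp(b) / math.factorial(k)

def remainder_in_interval(x, x0, a, b, k):
    """Check that the true Taylor remainder of exp at x lies in I * (x - x0)^k."""
    # Degree-(k-1) Taylor polynomial of exp around x0.
    poly = sum(math.exp(x0) * (x - x0) ** i / math.factorial(i) for i in range(k))
    remainder = math.exp(x) - poly
    lo, hi = exp_remainder_interval(a, b, k)
    t = (x - x0) ** k
    # (x - x0)^k may be negative for odd k, which flips the interval endpoints.
    return min(lo * t, hi * t) <= remainder <= max(lo * t, hi * t)
```

The paper's contribution is to replace such hand-derived, function-specific bounds with automatically derived, sharp ones for a library of atomic functions.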
At a high level, our algorithm has two steps. First, for a variety of
commonly-used elementary functions (e.g., exp, log), we use
recently-developed theory to derive sharp polynomial upper and lower bounds on
the Taylor remainder series. We then recursively combine the bounds for the
elementary functions using an interval arithmetic variant of Taylor-mode
automatic differentiation. Our algorithm can make efficient use of machine
learning hardware accelerators, and we provide an open source implementation in
JAX.
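The recursive combination step rests on ordinary interval arithmetic, applied to Taylor coefficients rather than scalars. As a rough sketch of the primitive operations involved (the class and method names are our own, not the open-source library's API):

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic used to combine bounds."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take the min/max over the four endpoint products,
        # since either factor may change sign.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))
```

In an interval-arithmetic variant of Taylor-mode automatic differentiation, compositions of atomic functions propagate coefficients of this kind in place of exact scalar Taylor coefficients.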
We then turn our attention to applications. Most notably, in a companion
paper we use our new machinery to create the first universal
majorization-minimization optimization algorithms: algorithms that iteratively
minimize an arbitrary loss using a majorizer that is derived automatically,
rather than by hand. We also show that our automatically-derived bounds can be
used for verified global optimization and numerical integration, and to prove
sharper versions of Jensen's inequality.

Comment: Previous version has been split into 3 articles: arXiv:2308.00679,
arXiv:2308.00190, and this article.