Automatically Bounding the Taylor Remainder Series: Tighter Bounds and New Applications

Abstract

We present a new algorithm for automatically bounding the Taylor remainder series. In the special case of a scalar function $f: \mathbb{R} \to \mathbb{R}$, our algorithm takes as input a reference point $x_0$, a trust region $[a, b]$, and an integer $k \ge 1$, and returns an interval $I$ such that $f(x) - \sum_{i=0}^{k-1} \frac{1}{i!} f^{(i)}(x_0) (x - x_0)^i \in I (x - x_0)^k$ for all $x \in [a, b]$. As in automatic differentiation, the function $f$ is provided to the algorithm in symbolic form, and must be composed of known atomic functions.

At a high level, our algorithm has two steps. First, for a variety of commonly used elementary functions (e.g., $\exp$, $\log$), we use recently developed theory to derive sharp polynomial upper and lower bounds on the Taylor remainder series. Second, we recursively combine the bounds for the elementary functions using an interval-arithmetic variant of Taylor-mode automatic differentiation. Our algorithm can make efficient use of machine learning hardware accelerators, and we provide an open-source implementation in JAX.

We then turn our attention to applications. Most notably, in a companion paper we use our new machinery to create the first universal majorization-minimization optimization algorithms: algorithms that iteratively minimize an arbitrary loss using a majorizer that is derived automatically, rather than by hand. We also show that our automatically derived bounds can be used for verified global optimization and numerical integration, and to prove sharper versions of Jensen's inequality.

Comment: The previous version has been split into three articles: arXiv:2308.00679, arXiv:2308.00190, and this article.
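To make the scalar special case concrete, the following is a minimal sketch (not the paper's algorithm) of computing such an interval $I$ for $f = \exp$. It relies on the assumption, consistent with the sharp-bound theory the abstract cites for $\exp$, that the remainder ratio $R(x) = (\exp(x) - T_{k-1}(x)) / (x - x_0)^k$ is nondecreasing on the trust region, so its extreme values occur at the endpoints $a$ and $b$. The function name `exp_remainder_interval` is illustrative, not from the paper's JAX library.

```python
import math

def exp_remainder_interval(x0, a, b, k):
    """Return (lo, hi) with exp(x) - T_{k-1}(x) in [lo, hi] * (x - x0)^k
    for all x in [a, b], where T_{k-1} is the degree-(k-1) Taylor
    polynomial of exp at x0.

    Sketch only: assumes the remainder ratio R(x) is nondecreasing on
    [a, b] (a property of exp), so the interval endpoints are R(a), R(b).
    """
    def taylor(x):
        # Degree-(k-1) Taylor polynomial of exp at x0.
        return sum(math.exp(x0) / math.factorial(i) * (x - x0) ** i
                   for i in range(k))

    def ratio(x):
        if x == x0:
            # Limiting value of R(x) as x -> x0 is the k-th Taylor coefficient.
            return math.exp(x0) / math.factorial(k)
        return (math.exp(x) - taylor(x)) / (x - x0) ** k

    lo, hi = ratio(a), ratio(b)
    return min(lo, hi), max(lo, hi)
```

For example, with $x_0 = 0$, $[a, b] = [-1, 1]$, and $k = 2$, this yields $I \approx [0.368, 0.718]$, and one can check numerically that $\exp(x) - (1 + x) \in I x^2$ across the trust region. The paper's contribution is deriving such sharp enclosures for a library of atomic functions and then propagating them through arbitrary compositions via interval Taylor-mode automatic differentiation.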
