The complexity of class polynomial computation via floating point approximations
We analyse the complexity of computing class polynomials, which are an
important ingredient for CM constructions of elliptic curves, via complex
floating point approximations of their roots. The heart of the algorithm is the
evaluation of modular functions at several arguments. The fastest of the
presented approaches uses a technique devised by Dupont to evaluate modular
functions by Newton iterations on an expression involving the
arithmetic-geometric mean. It runs in time $O(|D|^{1+\varepsilon}) \subseteq O(h^{2+\varepsilon})$
for any $\varepsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the
class polynomial. Another fast algorithm uses multipoint evaluation techniques
known from symbolic computation; its asymptotic complexity is worse by a factor
of $\log |D|$. Up to logarithmic factors, this running time matches the size of the
constructed polynomials. The estimate also relies on a new result concerning
the complexity of enumerating the class group of an imaginary-quadratic order
and on a rigorously proven upper bound for the height of class polynomials.
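As a toy illustration of the floating-point approach (not the quasi-linear algorithm analysed in the paper), the sketch below enumerates the reduced quadratic forms of a small discriminant, evaluates the modular j-function at the corresponding CM points with mpmath, and rounds the expanded product to integer coefficients. The precision mp.dps = 60 is an ad-hoc guess, not the rigorous height bound established in the paper.

```python
# Toy sketch: Hilbert class polynomial by floating-point approximation.
# Assumption: mpmath's kleinj returns j/1728; mp.dps = 60 is an ad-hoc
# precision choice, not the rigorous height bound proved in the paper.
from math import gcd, isqrt
from mpmath import mp, mpc, kleinj, nint, sqrt

mp.dps = 60

def reduced_forms(D):
    """Reduced primitive forms (a, b, c) with b^2 - 4ac = D < 0."""
    forms = []
    for a in range(1, isqrt(abs(D) // 3) + 1):   # reduced => a <= sqrt(|D|/3)
        for b in range(-a + 1, a + 1):           # -a < b <= a
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and gcd(gcd(a, b), c) == 1:
                    if not (a == c and b < 0):   # require b >= 0 when a = c
                        forms.append((a, b, c))
    return forms

def class_polynomial(D):
    """Integer coefficients of H_D(X) = prod over forms of (X - j(tau))."""
    # CM point tau = (-b + sqrt(D)) / (2a); rescale kleinj to the usual j.
    roots = [1728 * kleinj((-b + sqrt(mpc(D))) / (2 * a))
             for (a, b, c) in reduced_forms(D)]
    poly = [mpc(1)]                              # leading coefficient first
    for r in roots:                              # multiply by (X - r)
        poly.append(mpc(0))
        for i in range(len(poly) - 1, 0, -1):
            poly[i] -= r * poly[i - 1]
    return [int(nint(c.real)) for c in poly]     # imaginary parts cancel

print(class_polynomial(-23))  # degree 3, since the class number h(-23) = 3
```

The point of the paper is precisely what this sketch sweeps under the rug: choosing the precision rigorously and evaluating j fast enough that the total cost stays quasi-linear in |D|.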
Software-based approximate computing for mathematical functions
The four arithmetic floating-point operations (+, −, ÷ and ×) have been precisely specified in IEEE-754 since 1985, but the situation for floating-point mathematical libraries, and even for some hardware operations such as fused multiply-add, is more nuanced: opinions vary on which standards should be followed, when it is acceptable to allow some error, and when correct rounding is necessary. Deterministic, correctly rounded elementary mathematical functions are important in many applications; others tolerate some level of error and would benefit from less accurate, better-performing approximations. We found that, although IEEE-754 (in its 2008 and 2019 revisions only) specifies that 'recommended functions' such as sin, cos or log should be correctly rounded, the mathematical libraries available through standard interfaces in popular programming languages provide neither correct rounding nor maximally performing approximations, partly because conflicting standards for some languages, such as C, impose differing accuracy requirements on these functions.

This dissertation explores the current methods used to implement mathematical functions, exposes the error present in them, and demonstrates methods to produce both low-cost correctly rounded solutions and better approximations for specific use cases. We first explore the error within existing mathematical libraries and examine how it affects existing applications and the development of programming language standards. We then make two contributions addressing the accuracy and standards-conformance problems found: 1) an approach for a correctly rounded 32-bit implementation of the elementary functions with minimal additional performance cost on modern hardware; and 2) an approach for developing a better-performing, incorrectly rounded solution for use when some error is acceptable and conformance with IEEE-754 is not a requirement. For the latter contribution, we introduce a tool for semi-automated, generic code sensitivity analysis and approximation.

Next, we target approximations of the standard activation functions used in neural networks. Having identified that significant time is spent computing activation functions, we generate approximations with different levels of error and better performance characteristics. These functions are then tested in standard neural networks to determine whether the approximations have any detrimental effect on the output of the network. We show that, for many networks and activation functions, very coarse approximations are suitable replacements that train the networks equally well at a lower overall time cost.

This dissertation makes original contributions to the area of approximate computing. We demonstrate new approaches to safe approximation and justify approximate computation generally: existing mathematical libraries already suffer the downsides of approximation and latent error without fully exploiting the optimisation space that their error tolerance makes available, and correctly rounded solutions are possible without a significant performance impact for many 32-bit mathematical functions.
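To give a flavour of the trade-off studied (this is not the dissertation's tool), the sketch below compares a cheap rational tanh approximation, of the kind often substituted for activation functions, against the standard library implementation; the [3/2] Padé-style form and the output clamp are illustrative choices.

```python
# Sketch: a coarse, fast tanh approximation versus the libm version.
# The [3/2] Pade-style rational form and the output clamp are
# illustrative choices, not the dissertation's generated approximations.
import math

def tanh_approx(x: float) -> float:
    """Rational approximation x(15 + x^2)/(15 + 6x^2), clamped to [-1, 1]."""
    x2 = x * x
    y = x * (15.0 + x2) / (15.0 + 6.0 * x2)
    return max(-1.0, min(1.0, y))

# Worst-case error on a grid: coarse, but the dissertation's observation
# is that approximations of this kind can train many networks as well as
# the exact function, at lower overall cost.
worst = max(abs(tanh_approx(i / 100) - math.tanh(i / 100))
            for i in range(-1000, 1001))
print(f"max abs error on [-10, 10]: {worst:.2e}")
```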
Chebyshev Interpolation Polynomial-based Tools for Rigorous Computing
Performing numerical computations, yet being able to provide rigorous mathematical statements about the obtained result, is required in many domains like global optimization, ODE solving or integration. Taylor models, which associate to a function a pair made of a Taylor approximation polynomial and a rigorous remainder bound, are a widely used rigorous computation tool. This approach benefits from the advantages of numerical methods, but also gives the ability to make reliable statements about the approximated function. Although approximation polynomials based on interpolation at Chebyshev nodes offer a quasi-optimal approximation to a function, together with several other useful features, an analogue of Taylor models based on such polynomials has not yet been well established in the field of validated numerics. This paper presents preliminary work towards obtaining such interpolation polynomials together with validated interval bounds for approximating univariate functions. We propose two methods that make this approach practical: one is based on a representation in the Newton basis and the other uses the Chebyshev polynomial basis. We compare the quality of the obtained remainders and the performance of the approaches to those provided by Taylor models.
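As a purely numerical illustration of the approximation quality (without the validated remainder bounds that are the paper's contribution), numpy can interpolate at Chebyshev nodes directly; the degree-10 example below estimates the remainder by dense sampling, which is exactly the step a rigorous tool must replace with a certified interval bound.

```python
# Sketch: Chebyshev interpolation of exp on [-1, 1] with a sampled
# (non-rigorous) remainder estimate. A validated tool would replace the
# sampling by a certified interval bound, as in the paper.
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp
coeffs = C.chebinterpolate(f, 10)       # interpolant at Chebyshev nodes
xs = np.linspace(-1.0, 1.0, 10_001)
remainder = np.max(np.abs(f(xs) - C.chebval(xs, coeffs)))
print(f"sampled remainder estimate for degree 10: {remainder:.2e}")
```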
Formal Proofs for Nonlinear Optimization
We present a formally verified global optimization framework. Given a
semialgebraic or transcendental function $f$ and a compact semialgebraic domain
$K$, we use the nonlinear maxplus template approximation algorithm to provide a
certified lower bound of $f$ over $K$. This method allows us to bound, in a
modular way, some of the constituents of $f$ from below by suprema of quadratic
forms with a well-chosen curvature. Thus, we reduce the initial goal to a hierarchy of
semialgebraic optimization problems, solved by sums of squares relaxations. Our
implementation tool interleaves semialgebraic approximations with sums of
squares witnesses to form certificates. It is interfaced with Coq and thus
benefits from the trusted arithmetic available inside the proof assistant. This
feature is used to produce, from the certificates, both valid underestimators
and lower bounds for each approximated constituent. The application range for
such a tool is widespread; for instance Hales' proof of Kepler's conjecture
yields thousands of multivariate transcendental inequalities. We illustrate the
performance of our formal framework on some of these inequalities as well as on
examples from the global optimization literature.
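To convey the maxplus template idea numerically (without the SOS relaxations or the Coq-checked certificates), the sketch below underestimates sin on [0, π] by a finite max of tangent parabolas whose curvature −1/2 is justified by the bound |sin''| ≤ 1; the three expansion points are an arbitrary illustrative template.

```python
# Sketch: maxplus underestimator for sin on [0, pi] built from tangent
# parabolas with curvature -1/2 (valid since |sin''| <= 1). The sample
# points form an arbitrary illustrative template; the paper couples such
# templates with SOS relaxations and formally checked certificates.
import math

SAMPLES = [0.5, 1.5, 2.5]   # template expansion points on [0, pi]

def maxplus_lower(x: float) -> float:
    """Max of quadratic underestimators q_a(x) <= sin(x)."""
    return max(math.sin(a) + math.cos(a) * (x - a) - 0.5 * (x - a) ** 2
               for a in SAMPLES)

# Numerical sanity check (a real certificate would prove this in Coq):
xs = [i * math.pi / 1000 for i in range(1001)]
assert all(maxplus_lower(x) <= math.sin(x) + 1e-12 for x in xs)
gap = max(math.sin(x) - maxplus_lower(x) for x in xs)
print(f"worst gap of the 3-parabola underestimator: {gap:.3f}")
```

Refining the template (more expansion points) tightens the lower bound at the cost of a larger semialgebraic problem, which is the trade-off the hierarchy in the paper navigates.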
Fast computation of the matrix exponential for a Toeplitz matrix
The computation of the matrix exponential is a ubiquitous operation in
numerical mathematics, and for a general, unstructured $n \times n$ matrix it
can be computed in $O(n^3)$ operations. An interesting problem arises
if the input matrix is a Toeplitz matrix, for example as the result of
discretizing integral equations with a time invariant kernel. In this case it
is not obvious how to take advantage of the Toeplitz structure, as the
exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself.
The main contributions of this work are fast algorithms for the computation of
the Toeplitz matrix exponential. The algorithms have provable quadratic
complexity if the spectrum is real, or sectorial, or more generally, if the
imaginary parts of the rightmost eigenvalues do not vary too much. They may be
efficient even outside these spectral constraints. They are based on the
scaling and squaring framework, and their analysis connects classical results
from rational approximation theory to matrices of low displacement rank. As an
example, the developed methods are applied to Merton's jump-diffusion model for
option pricing.
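Fast algorithms of this kind rest on structured arithmetic. As a hedged illustration of the basic building block (not of the paper's scaling-and-squaring method itself), the sketch below multiplies a Toeplitz matrix by a vector in O(n log n) time by embedding it in a circulant matrix and using the FFT.

```python
# Sketch: O(n log n) Toeplitz matrix-vector product via circulant
# embedding and the FFT -- the structured building block behind fast
# Toeplitz exponential algorithms, not the paper's full method.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """y = T x, where T has first column `col` and first row `row`."""
    n = len(col)
    # First column of a 2n x 2n circulant matrix whose top-left
    # n x n block equals T.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n)]))
    return y[:n].real

rng = np.random.default_rng(0)
col = rng.standard_normal(6)
row = rng.standard_normal(6)
row[0] = col[0]                     # T[0, 0] must agree in both
x = rng.standard_normal(6)
print(np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x)))
```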
On the use of the Infinity Computer architecture to set up a dynamic precision floating-point arithmetic
We devise a variable precision floating-point arithmetic by exploiting the framework provided by the Infinity Computer. This is a computational platform implementing the Infinity Arithmetic system, a positional numeral system which can handle both infinite and infinitesimal quantities, expressed using positive and negative, finite or infinite, powers of the radix ① (grossone). The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and of floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, without greatly affecting the overall computational effort. An illustrative example about the solution of a nonlinear equation is also presented.
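The Infinity Computer itself is a dedicated platform, but the raise-accuracy-only-when-needed idea can be mimicked in ordinary software. The sketch below is an assumption-laden emulation using mpmath, not Infinity Arithmetic: a Newton iteration for a nonlinear equation whose working precision grows only as digits are actually won.

```python
# Sketch: dynamic-precision root finding emulated with mpmath. This only
# mimics the idea of raising accuracy on demand; it does not implement
# Infinity Arithmetic or the grossone-based number representation.
from mpmath import mp, mpf, cos, sin

def solve_dynamic(x0, target_dps=100):
    """Newton for f(x) = x - cos(x), granting digits only as they are won."""
    mp.dps = 20
    x = mpf(x0)
    while True:
        dx = (x - cos(x)) / (1 + sin(x))    # Newton correction
        x -= dx
        if dx == 0 or abs(dx) < mpf(10) ** (-target_dps):
            return x
        # Newton roughly doubles the correct digits per step, so the
        # next step needs only about twice the current precision.
        mp.dps = min(target_dps + 10, 2 * mp.dps)

print(solve_dynamic(0.7))   # Dottie number to ~100 digits
```

Running every step at the final 100-digit precision would do the same work at each iteration; the schedule above spends high precision only on the last few steps, which is the advantage the abstract describes.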
Efficient implementation of the Hardy-Ramanujan-Rademacher formula
We describe how the Hardy-Ramanujan-Rademacher formula can be implemented to
allow the partition function $p(n)$ to be computed with softly optimal
complexity and very little overhead. A new implementation
based on these techniques achieves speedups in excess of a factor 500 over
previously published software and has been used by the author to calculate
$p(10^{19})$, an exponent twice as large as in previously reported
computations.
We also investigate performance for multi-evaluation of $p(n)$, where our
implementation of the Hardy-Ramanujan-Rademacher formula becomes superior to
power series methods on far denser sets of indices than previous
implementations. As an application, we determine over 22 billion new
congruences for the partition function, extending Weaver's tabulation of 76,065
congruences.
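Both regimes mentioned in the abstract are easy to experiment with from Python: sympy's npartitions is a Hardy-Ramanujan-Rademacher implementation suited to isolated large n, while Euler's pentagonal-number recurrence is the power-series-style method that wins on dense index sets. The sketch below (illustrative, not the paper's implementation) contrasts the two.

```python
# Sketch: two ways to compute the partition function p(n).
# sympy's npartitions uses a Hardy-Ramanujan-Rademacher implementation
# (good for isolated large n); the pentagonal-number recurrence below is
# the power-series-style method preferable on dense sets of indices.
from sympy.ntheory import npartitions

def partitions_upto(N):
    """p(0..N) via Euler's pentagonal number recurrence."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2        # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

dense = partitions_upto(1000)
assert dense[1000] == npartitions(1000)      # cross-check the two methods
print(dense[100])                            # p(100) = 190569292
```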