Computing hypergeometric functions rigorously
We present an efficient implementation of hypergeometric functions in
arbitrary-precision interval arithmetic. The functions 0F1, 1F1, 2F1 and 2F0
(or the Kummer U-function) are supported for unrestricted complex parameters
and argument, and by extension, we cover
exponential and trigonometric integrals, error functions, Fresnel integrals,
incomplete gamma and beta functions, Bessel functions, Airy functions, Legendre
functions, Jacobi polynomials, complete elliptic integrals, and other special
functions. The output can be used directly for interval computations or to
generate provably correct floating-point approximations in any format.
Performance is competitive with earlier arbitrary-precision software, and
sometimes orders of magnitude faster. We also partially cover the generalized
hypergeometric function pFq and computation of high-order parameter
derivatives.
Comment: v2: corrected example in section 3.1; corrected timing data for case
E-G in section 8.5 (table 6, figure 2); adjusted paper size
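The Gauss series underlying such implementations can be sketched in ordinary arbitrary-precision arithmetic. The illustrative Python below uses the stdlib `decimal` module and plain series truncation; unlike the paper's method, it carries no rigorous interval error bounds, and the identity 2F1(1,1;2;z) = -ln(1-z)/z is used only as a sanity check:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # working precision in decimal digits

def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss series for 2F1(a,b;c;z), |z| < 1 (illustrative only:
    no rigorous tail bound, unlike a certified interval implementation)."""
    a, b, c, z = map(Decimal, (a, b, c, z))
    term = Decimal(1)
    total = Decimal(1)
    for n in range(terms):
        # Ratio of consecutive series terms: (a+n)(b+n) / ((c+n)(n+1)) * z
        term *= (a + n) * (b + n) / (c + n) * z / (n + 1)
        total += term
    return total

# Classical identity used as a check: 2F1(1,1;2;z) = -ln(1-z)/z
z = Decimal("0.5")
approx = hyp2f1(1, 1, 2, z)
exact = -((1 - z).ln()) / z
```

At z = 1/2 the terms decay like 2^-n, so 200 terms leave a truncation error far below the 50-digit working precision.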
White Paper from Workshop on Large-scale Parallel Numerical Computing Technology (LSPANC 2020): HPC and Computer Arithmetic toward Minimal-Precision Computing
In numerical computations, the precision of floating-point arithmetic is a key
factor determining both performance (speed and energy efficiency) and
reliability (accuracy and reproducibility). However, precision generally pulls
these two goals in opposite directions. The ultimate concept for maximizing
both at the same time is therefore minimal-precision computing through
precision-tuning, which adjusts the precision of each operation and data item
to the minimum that still meets the accuracy requirement. Several studies have
already addressed this (e.g. Precimonious and Verrou), but their scope is
limited to precision-tuning alone. Hence, we aim to propose a broader concept:
a minimal-precision computing system with precision-tuning, involving both the
hardware and software stack.
In 2019, we started the Minimal-Precision Computing project to realize this
concept. Specifically, our system combines (1) a precision-tuning method based
on Discrete Stochastic Arithmetic (DSA), (2) arbitrary-precision arithmetic
libraries, (3) fast and accurate numerical libraries, and (4)
Field-Programmable Gate Arrays (FPGAs) with High-Level Synthesis (HLS).
In this white paper, we provide an overview of various technologies related to
minimal- and mixed-precision computing, outline the future direction of the
project, and discuss current challenges together with our project members and
guest speakers at the LSPANC 2020 workshop:
https://www.r-ccs.riken.jp/labs/lpnctrt/lspanc2020jan/
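The core precision-tuning idea — find the smallest working precision that still meets an accuracy target against a high-precision reference — can be sketched with Python's stdlib `decimal` module. This is a toy per-computation search over a made-up workload (a harmonic sum), not the per-operation tuning performed by tools like Precimonious or the DSA-based method above:

```python
from decimal import Decimal, localcontext

def harmonic(n, prec):
    """Sum of 1/k for k = 1..n, computed at `prec` decimal digits."""
    with localcontext() as ctx:
        ctx.prec = prec
        s = Decimal(0)
        for k in range(1, n + 1):
            s += 1 / Decimal(k)
        return s

ref = harmonic(1000, 60)   # high-precision reference result
tol = Decimal("1e-12")     # target relative accuracy

# Scan upward for the smallest working precision meeting the target.
minimal = next(p for p in range(2, 60)
               if abs(harmonic(1000, p) - ref) / ref < tol)
```

The found precision sits well below the 60-digit reference: every extra digit beyond it is wasted work, which is exactly the waste minimal-precision computing aims to eliminate.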
High-accuracy numerical integration methods for fractional order derivatives and integrals computations
In this paper the authors present highly accurate and remarkably efficient computational methods for fractional-order derivatives and integrals based on the Riemann-Liouville and Caputo formulae: the Gauss-Jacobi Quadrature with an adapted weight function and the Double Exponential Formula, implemented using two arbitrary-precision mathematical libraries with exact rounding (GNU GMP and GNU MPFR). Example fractional-order derivatives and integrals of some elementary functions are calculated. The resulting accuracy is compared with the accuracy achieved by widely known methods of numerical integration. Finally, the presented methods are applied to solve Abel's integral equation (in the Appendix).
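As a rough illustration of why absorbing the endpoint singularity pays off (the role the Gauss-Jacobi weight plays in the paper), the order-1/2 Caputo derivative of the example function f(t) = t^2 can be computed accurately once a substitution removes the singular factor. This is a simplified double-precision Python sketch, not the authors' arbitrary-precision method:

```python
import math

def caputo_half_t2(t, n=100):
    """Caputo derivative of order 1/2 of f(tau) = tau^2 at t:
    D^{1/2} f(t) = (1/Gamma(1/2)) * int_0^t f'(tau) (t-tau)^{-1/2} dtau.
    Substituting tau = t - s^2 removes the endpoint singularity, leaving
    the smooth integrand 4*(t - s^2) on [0, sqrt(t)] (analogous to how a
    Gauss-Jacobi weight absorbs the singular factor). Composite Simpson
    rule; n must be even."""
    a, b = 0.0, math.sqrt(t)
    h = (b - a) / n
    g = lambda s: 4.0 * (t - s * s)
    total = g(a) + g(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * g(a + k * h)
    return (total * h / 3.0) / math.gamma(0.5)

t = 2.0
approx = caputo_half_t2(t)
# Known closed form: Gamma(3)/Gamma(3 - 1/2) * t^{3/2}
exact = 2.0 * t ** 1.5 / math.gamma(2.5)
```

Because the transformed integrand is a polynomial, Simpson's rule reproduces the closed form to roundoff; applying quadrature to the original singular integrand would converge far more slowly.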
Analysing and bounding numerical error in spiking neural network simulations
This study explores how numerical error occurs in simulations of spiking neural network models, and how this error propagates through the simulation, changing its observed behaviour. The issue of non-reproducibility in parallel spiking neural network simulations is illustrated, and a method to bound all possible trajectories is discussed. The base method used in this study is known as mixed interval and affine arithmetic (mixed IA/AA), but some extra modifications are made to improve the tightness of the error bounds.
I introduce Arpra, new software for arbitrary-precision range analysis based on the GNU MPFR library. It improves on other implementations by enabling computations in custom floating-point precisions, and reduces the overhead rounding error of mixed IA/AA by computing in extended precision internally. It also implements a new error trimming technique, which reduces the error term whilst preserving correct boundaries, and deviation-term condensing functions, which can significantly reduce the number of floating-point operations per function. Arpra is tested by simulating the Hénon map dynamical system, and is found to produce tighter ranges than those of INTLAB, an alternative mixed IA/AA implementation.
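The interval blowup that motivates the move to affine forms can be seen by iterating the Hénon map with a toy interval type. This sketch uses plain floats with no directed rounding (a real implementation such as Arpra rounds outward via MPFR), so it only illustrates the dependency problem, not rigorous bounds:

```python
# Toy intervals as (lo, hi) tuples; no outward rounding (illustrative only).
def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    c = (p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1])
    return (min(c), max(c))

def scale(p, s):
    return (min(s * p[0], s * p[1]), max(s * p[0], s * p[1]))

def henon_step(x, y, a=1.4, b=0.3):
    # Henon map: x' = 1 - a*x^2 + y ;  y' = b*x
    # mul(x, x) treats its two arguments as independent -- the interval
    # dependency problem that affine arithmetic is designed to mitigate.
    xn = add(add((1.0, 1.0), scale(mul(x, x), -a)), y)
    yn = scale(x, b)
    return xn, yn

x = (0.3, 0.3 + 1e-6)   # tiny initial uncertainty
y = (0.3, 0.3 + 1e-6)
w0 = x[1] - x[0]
for _ in range(10):
    x, y = henon_step(x, y)
w10 = x[1] - x[0]       # interval width after 10 iterations
```

Even over ten steps the enclosure width grows by orders of magnitude, which is why tighter representations (affine forms, error trimming, condensing) matter for chaotic systems.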
Arpra is used to bound the trajectories of fan-in spiking neural network simulations. Despite performing better than interval arithmetic, the mixed IA/AA method used by Arpra is shown to be inadequate for bounding the simulation trajectories, due to the highly nonlinear nature of spiking neural networks. A stability analysis of the neural network model is performed, and it is found that error bounds are moderately tight in non-spiking regions of state space, where linear dynamics dominate, but explode in spiking regions of state space, where nonlinear dynamics dominate.
FIESTA 2: parallelizeable multiloop numerical calculations
The program FIESTA has been completely rewritten. It can now be used not only
as a tool to evaluate Feynman integrals numerically, but also to expand Feynman
integrals automatically in limits of momenta and masses with the use of sector
decompositions and Mellin-Barnes representations. Other important improvements
to the code are complete parallelization (even across multiple computers),
high-precision arithmetic (allowing one to calculate integrals that were
previously intractable), new integrators, Speer sectors as a strategy, and the
possibility to evaluate more general parametric integrals.
Comment: 31 pages, 5 figures
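The essence of sector decomposition — splitting the integration region and rescaling so that an integrable corner singularity factorizes into a smooth integrand — can be shown on a hypothetical model integral (not a Feynman integral, and far simpler than FIESTA's machinery):

```python
import math

# Model: I = int_0^1 int_0^1 (x + y)^{-3/2} dx dy = 8 - 4*sqrt(2),
# finite but singular at the corner (0, 0).

def naive_midpoint(n=200):
    """Midpoint rule applied directly to the singular integrand."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            s += (x + y) ** -1.5
    return s * h * h

def simpson(f, a, b, n=200):
    """Composite Simpson rule, n even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

def sector_decomposed():
    # Split at x = y (the two sectors are equal by symmetry). In the
    # sector x <= y, substitute x = t*y and then y = u^2: the integrand
    # becomes the smooth 2*(1+t)^{-3/2}, independent of u, so the
    # u-integral over [0, 1] contributes a factor of 1.
    inner = simpson(lambda t: 2.0 * (1.0 + t) ** -1.5, 0.0, 1.0)
    return 2.0 * inner

exact = 8.0 - 4.0 * math.sqrt(2.0)
err_naive = abs(naive_midpoint() - exact)
err_sector = abs(sector_decomposed() - exact)
```

The decomposed form converges to near machine precision while the naive rule is dominated by the error from the singular corner, mirroring why sector decomposition is the workhorse for singular parametric integrals.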
The effects of finite precision on the simulation of the double pendulum
We use mathematics to study physical problems because abstracting the information allows us to better analyze what could happen given any range and combination of parameters. The problem is that for complicated systems, mathematical analysis becomes extremely cumbersome; the only effective and reasonable way to study the behavior of such systems is to simulate them on a computer. However, the facts that the set of floating-point numbers is finite and that they are unevenly distributed over the real number line raise a number of concerns when trying to simulate systems with chaotic behavior. In this research we seek to gain a better understanding of the effects finite precision has on the solution to a chaotic dynamical system, specifically the double pendulum.
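Both concerns from the abstract can be demonstrated in a few lines of Python: the uneven spacing of floats via `math.ulp`, and the chaotic amplification of a tiny perturbation. The logistic map at r = 4 is used here as a hypothetical stand-in for the double pendulum's chaotic dynamics, since it exhibits the same exponential divergence without needing an ODE integrator:

```python
import math

# (1) Floats are unevenly spaced: the gap to the next representable
#     double (the ulp) grows with magnitude, so absolute resolution
#     degrades for large values.
gaps = {x: math.ulp(x) for x in (1.0, 1e8, 1e16)}

# (2) Chaos amplifies representation error: a perturbation of one part
#     in 10^12 grows to order one within a few dozen iterations.
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-12
max_sep = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))
```

A perturbation at the last-digit level of a double is thus indistinguishable from a rounding error, which is why finite precision alone can change the observed trajectory of a chaotic simulation.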
Analysis and evaluation of Binary Cascade Iterative Refinement and comparison to other iterative refinement algorithms for solving linear systems
Iterative refinement is a widely used method to improve the round-off errors of a solution of a linear system. The cost of the iterative improvement is very low compared to the cost of the factorization of the matrix, yet it results in a solution which can be accurate to machine precision. Many variations on the standard iterative refinement method exist, which use different working precisions to refine the solution. Extra precise iterative refinement uses extended precision to improve the result. Mixed precision iterative refinement tries to exploit the benefits of computing a solution in lower precision, and then uses iterative refinement to achieve higher accuracy in the result.
The focus of this thesis will be binary cascade iterative refinement (BCIR), which chooses the working precisions according to the input data. This algorithm depends on arbitrary-precision arithmetic to support working precisions outside the IEEE standard data types provided by most hardware vendors. The thesis will analyse the properties of BCIR and conduct experiments comparing the algorithm to other iterative refinement methods, with particular attention to numerical accuracy and convergence.
The arbitrary-precision arithmetic will be implemented using the GNU MPFR software library. Since the different precisions are simulated in software, the experiments do not provide meaningful information about the performance gained or lost by using them. Therefore a performance model is introduced in order to compare the performance of the methods and to analyse the possible performance gains.
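The basic mixed-precision refinement loop the thesis builds on — solve in low precision, compute the residual in high precision, correct, repeat — can be sketched with Python's stdlib `decimal` module standing in for MPFR. The matrix, precisions, and iteration count below are illustrative choices, not the thesis's BCIR precision schedule:

```python
from decimal import Decimal, localcontext

def solve_lowprec(A, b, prec):
    """Gaussian elimination without pivoting (adequate for this small,
    well-behaved example), carried out at `prec` decimal digits."""
    with localcontext() as ctx:
        ctx.prec = prec
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for i in range(n):
            for j in range(i + 1, n):
                f = M[j][i] / M[i][i]
                for k in range(i, n + 1):
                    M[j][k] = M[j][k] - f * M[i][k]
        x = [Decimal(0)] * n
        for i in reversed(range(n)):
            s = M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))
            x[i] = s / M[i][i]
        return x

# Hilbert-like matrix with exact decimal entries (mildly ill-conditioned).
H = [[Decimal("1"),          Decimal("0.5"),        Decimal("0.33333333")],
     [Decimal("0.5"),        Decimal("0.33333333"), Decimal("0.25")],
     [Decimal("0.33333333"), Decimal("0.25"),       Decimal("0.2")]]
b = [Decimal(1), Decimal(0), Decimal(0)]

ref = solve_lowprec(H, b, 60)      # high-precision reference solution
x = solve_lowprec(H, b, 8)         # base solve in low working precision
err0 = max(abs(x[i] - ref[i]) for i in range(3))

for _ in range(3):
    with localcontext() as ctx:
        ctx.prec = 40              # residual in high precision
        r = [b[i] - sum(H[i][k] * x[k] for k in range(3)) for i in range(3)]
    d = solve_lowprec(H, r, 8)     # correction solved in low precision
    with localcontext() as ctx:
        ctx.prec = 40
        x = [x[i] + d[i] for i in range(3)]

err_final = max(abs(x[i] - ref[i]) for i in range(3))
```

Each pass multiplies the error by roughly the condition number times the low-precision unit roundoff, so a handful of cheap low-precision solves recovers many more digits than the base solve alone — the effect BCIR exploits by cascading the working precisions.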