Acceleration of generalized hypergeometric functions through precise remainder asymptotics
We express the asymptotics of the remainders of the partial sums {s_n} of the
generalized hypergeometric function _{q+1}F_q through an inverse power series
z^n n^l \sum_k c_k/n^k, where the exponent l and the asymptotic coefficients {c_k}
may be recursively computed to any desired order from the hypergeometric
parameters and argument. From this we derive a new series acceleration
technique that can be applied to any such function, even with complex
parameters and at the branch point z=1. For moderate parameters (up to
approximately ten) a C implementation at fixed precision is very effective at
computing these functions; for larger parameters an implementation in higher
than machine precision would be needed. Even for larger parameters, however,
our C implementation is able to correctly determine whether or not it has
converged; and when it converges, its estimate of its error is accurate.
Comment: 36 pages, 6 figures, LaTeX2e. Fixed sign error in Eq. (2.28), added
several references, added comparison to other methods, and added discussion
of recursion stability
Parallel Algorithms for Summing Floating-Point Numbers
The problem of exactly summing n floating-point numbers is a fundamental
problem that has many applications in large-scale simulations and computational
geometry. Unfortunately, due to the round-off error in standard floating-point
operations, this problem becomes very challenging. Moreover, all existing
solutions rely on sequential algorithms which cannot scale to the huge datasets
that need to be processed.
In this paper, we provide several efficient parallel algorithms for summing n
floating point numbers, so as to produce a faithfully rounded floating-point
representation of the sum. We present algorithms in PRAM, external-memory, and
MapReduce models, and we also provide an experimental analysis of our MapReduce
algorithms, due to their simplicity and practical efficiency.
Comment: Conference version appears in SPAA 201
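The basic primitive underlying faithfully rounded summation is the error-free transformation: TwoSum returns both the rounded sum and its exact rounding error, so the errors can be accumulated separately instead of being lost. The sketch below is a sequential compensated summation built on that primitive, shown as an illustration of the idea; it is not the paper's PRAM, external-memory, or MapReduce algorithm.

```python
def two_sum(a, b):
    """Knuth's TwoSum error-free transformation: returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly, using only six flops."""
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def compensated_sum(xs):
    """Neumaier-style compensated summation: accumulate the exact
    rounding error of each TwoSum and fold it back in at the end."""
    s, err = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e
    return s + err
```

Because each TwoSum is independent of how the inputs were grouped, pairs (s, err) can be merged associatively, which is what makes reduction-tree and MapReduce formulations of this idea natural.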
Linearly scaling direct method for accurately inverting sparse banded matrices
In many problems in Computational Physics and Chemistry, one finds a special
kind of sparse matrices, termed "banded matrices". These matrices, which are
defined as having non-zero entries only within a given distance from the main
diagonal, need often to be inverted in order to solve the associated linear
system of equations. In this work, we introduce a new O(n) algorithm for
solving such a system, where n × n is the size of the matrix. We produce the
analytical recursive expressions that allow one to directly obtain the solution, as
well as the pseudocode for its computer implementation. Moreover, we review the
different options for possibly parallelizing the method, we describe the
extension to deal with matrices that are banded plus a small number of non-zero
entries outside the band, and we use the same ideas to produce a method for
obtaining the full inverse matrix. Finally, we show that the new algorithm is
competitive, both in accuracy and in numerical efficiency, when compared to a
standard method based on Gaussian elimination. We do this using sets of large
random banded matrices, as well as the ones that appear when one tries to solve
the 1D Poisson equation by finite differences.
Comment: 24 pages, 5 figures, submitted to J. Comp. Phys.
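For the narrowest band, bandwidth one (a tridiagonal matrix), the classic O(n) forward-elimination/back-substitution recursion is the Thomas algorithm. The sketch below illustrates that O(n) structure on the 1D Poisson matrix mentioned in the abstract; it is in the same spirit as, but not identical to, the paper's method, and it assumes no pivoting is needed (e.g. diagonal dominance).

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) solve of a tridiagonal system.
    lower: n-1 subdiagonal entries, diag: n diagonal entries,
    upper: n-1 superdiagonal entries, rhs: n right-hand-side entries.
    Assumes the elimination never needs pivoting."""
    n = len(diag)
    c = [0.0] * n   # modified superdiagonal
    d = [0.0] * n   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                      # forward elimination
        denom = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# 1D Poisson stencil: diagonal 2, off-diagonals -1
x = solve_tridiagonal([-1.0, -1.0], [2.0, 2.0, 2.0],
                      [-1.0, -1.0], [0.0, 0.0, 4.0])
```

Each unknown is visited a constant number of times, which is the source of the linear scaling; wider bands generalize this with per-row work proportional to the bandwidth.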