    Bidiagonal decompositions of Vandermonde-type matrices of arbitrary rank

    We present a method to derive new explicit expressions for bidiagonal decompositions of Vandermonde and related matrices, such as the (q-, h-) Bernstein-Vandermonde ones, among others. These results generalize the existing expressions for nonsingular matrices to matrices of arbitrary rank. For totally nonnegative matrices of the above classes, the new decompositions can be computed efficiently and to high relative accuracy componentwise in floating-point arithmetic. In turn, matrix computations (e.g., eigenvalue computation) can also be performed efficiently and to high relative accuracy.
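    As a quick illustration of why high relative accuracy is at stake here (this sketch is ours, not from the paper): a Vandermonde matrix built on increasing positive nodes is totally nonnegative, yet its condition number grows so fast that conventional algorithms lose relative accuracy in its small singular values and eigenvalues.

```python
import numpy as np

# Vandermonde matrix on increasing positive nodes 0 < x_0 < x_1 < ...:
# such matrices are totally nonnegative but rapidly ill-conditioned.
nodes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
V = np.vander(nodes, increasing=True)  # V[i, j] = nodes[i] ** j

# Singular values span several orders of magnitude even for this tiny example,
# so the smallest ones carry few correct digits in a conventional SVD.
sigma = np.linalg.svd(V, compute_uv=False)
print(f"condition number: {sigma[0] / sigma[-1]:.3e}")
```

    Decomposition-based methods of the kind surveyed in the abstract sidestep this by computing the bidiagonal factors, rather than the matrix entries, to high relative accuracy.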

    Toward accurate polynomial evaluation in rounded arithmetic

    Given a multivariate real (or complex) polynomial p and a domain D, we would like to decide whether an algorithm exists to evaluate p(x) accurately for all x ∈ D using rounded real (or complex) arithmetic. Here "accurately" means with relative error less than 1, i.e., with some correct leading digits. The answer depends on the model of rounded arithmetic: we assume that for any arithmetic operation op(a, b), for example a + b or a ⋅ b, its computed value is op(a, b) ⋅ (1 + δ), where |δ| is bounded by some constant ε with 0 < ε ≪ 1, but δ is otherwise arbitrary. This model is the traditional one used to analyze the accuracy of floating-point algorithms. Our ultimate goal is to establish a decision procedure that, for any p and D, either exhibits an accurate algorithm or proves that none exists. In contrast to the case where numbers are stored and manipulated as finite bit strings (e.g., as floating-point numbers or rational numbers), we show that some polynomials p are impossible to evaluate accurately. The existence of an accurate algorithm will depend not just on p and D, but on which arithmetic operations and which constants are available and whether branching is permitted. Toward this goal, we present necessary conditions on p for it to be accurately evaluable on open real or complex domains D. We also give sufficient conditions, and describe progress toward a complete decision procedure. We do present a complete decision procedure for homogeneous polynomials p with integer coefficients, D = Cⁿ, using only the arithmetic operations +, − and ⋅. Comment: 54 pages, 6 figures; refereed version; to appear in Foundations of Computational Mathematics: Santander 2005, Cambridge University Press, March 200
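    The rounding model above can be simulated directly. The toy sketch below (our own construction; the names and the coarse ε are ours) applies the worst-case δ = ε at every operation, which the model permits, and evaluates (x − 1)² near x = 1 by Horner's rule: the exact value is tiny, so the rounded result has relative error far above 1, i.e., no correct digits.

```python
# Toy simulation of the model op(a, b)·(1 + δ), |δ| ≤ ε, with the
# adversarial choice δ = ε at every operation. EPS is deliberately
# coarse so the loss of accuracy is easy to see.
EPS = 2.0 ** -10

def rnd(x):
    """Model rounding: return x·(1 + δ) with the worst-case δ = EPS."""
    return x * (1.0 + EPS)

def horner_rounded(coeffs, x):
    """Horner evaluation of c0 + c1·x + c2·x² + ... where every + and · is rounded."""
    acc = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        acc = rnd(rnd(acc * x) + c)
    return acc

# p(x) = x² − 2x + 1 = (x − 1)² near x = 1: the exact value is tiny, and the
# rounded evaluation loses all correct leading digits.
x = 1.0001
exact = (x - 1.0) ** 2
approx = horner_rounded([1.0, -2.0, 1.0], x)
print(exact, approx, abs(approx - exact) / exact)
```

    This is exactly the kind of polynomial/domain pair for which the choice of available operations decides whether accurate evaluation is possible at all.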

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x + y + z, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings. Comment: 49 pages, 6 figures, 1 table
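    An accurate x + y + z of the kind mentioned as a library operation can be built in software from an error-free transformation. The sketch below (an illustration using the classical TwoSum of Knuth, not the paper's own construction) recovers the rounding errors of the partial sums and folds them back in:

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly (no branch on magnitudes; assumes no overflow)."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def sum3_accurate(x, y, z):
    """x + y + z with the rounding errors of both partial sums recovered:
    an example of an extended accurate operation beyond +, -, *."""
    s1, e1 = two_sum(x, y)
    s2, e2 = two_sum(s1, z)
    return s2 + (e1 + e2)

# Naive left-to-right summation loses the 1.0 entirely;
# the compensated version recovers it.
print((1e16 + 1.0) - 1e16)              # 0.0
print(sum3_accurate(1e16, 1.0, -1e16))  # 1.0
```

    The naive sum has relative error 1 (no correct digits), while the compensated sum is exact here, which is precisely the accuracy gap such library operations are meant to close.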

    Numerical stability of descent methods for solving linear equations

    In this paper we perform a round-off error analysis of descent methods for solving a linear system Ax = b, where A is supposed to be symmetric and positive definite. This leads to a general result on the attainable accuracy of the computed sequence {xᵢ} when the method is performed in floating-point arithmetic. The general theory is applied to the Gauss-Southwell method and the gradient method. Both methods appear to be well-behaved, which means that these methods compute an approximation xᵢ to the exact solution A⁻¹b which is the exact solution of a slightly perturbed linear system, i.e. (A + δA)xᵢ = b with ‖δA‖ of order ɛ‖A‖, where ɛ is the relative machine precision and ‖·‖ denotes the spectral norm.
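    For reference, the gradient method analyzed here is steepest descent with exact line search along the residual; a minimal sketch (ours, using NumPy, with illustrative names and tolerances) is:

```python
import numpy as np

def steepest_descent(A, b, tol=1e-12, max_iter=10_000):
    """Solve A x = b for symmetric positive definite A by the gradient
    (steepest-descent) method: step along the residual r = b - A x with
    the exact line-search step alpha = (r.r) / (r.(A r))."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        rr = r @ r
        if np.sqrt(rr) <= tol * np.linalg.norm(b):
            break
        x = x + (rr / (r @ (A @ r))) * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = steepest_descent(A, b)
print(x, np.linalg.norm(b - A @ x))
```

    The Gauss-Southwell method differs only in the search direction: instead of the full residual, it updates the single coordinate with the largest residual component. The abstract's backward-stability statement says the iterate this loop returns solves (A + δA)x = b exactly for some small δA.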

    On the computation of poisson probabilities

    The Poisson distribution is a distribution commonly used in statistics. It also plays a central role in the analysis of the transient behaviour of continuous-time Markov chains. Several methods have been devised for evaluating, using floating-point arithmetic, the probability mass function (PMF) of the Poisson distribution. Restricting our attention to published methods intended for the computation of a single probability or a few of them, we show that none of them is completely satisfactory in terms of accuracy. With that motivation, we develop a new method for the evaluation of the PMF of the Poisson distribution. The method is intended for the computation of a single probability or a few of them. Numerical experimentation illustrates that the method can be more accurate and slightly faster than the previous methods. Moreover, the method comes with a guaranteed bound on the relative approximation error. Postprint (author's final draft)
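    To see why evaluating the Poisson PMF is delicate: the textbook formula λᵏ e^(−λ)/k! overflows its intermediates for even moderate λ and k. A common workaround (a standard log-space technique, not the paper's method, which comes with its own error guarantee) is:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam), evaluated in log space so that
    neither lam**k nor k! overflows for large k or lam."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

# The naive lam**k * exp(-lam) / factorial(k) overflows its intermediates
# for lam = k = 1000; the log-space form evaluates without trouble.
print(poisson_pmf(1000, 1000.0))
```

    The catch, and part of the paper's motivation, is that the relative error of exp applied to a large-magnitude exponent grows with that magnitude, so even this form does not guarantee a small relative error on its own.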

    The role of the left intraparietal sulcus in the relationship between symbolic number processing and children's arithmetic competence

    The neural foundations of arithmetic learning are not well understood. While behavioral studies have revealed relationships between symbolic number processing and individual differences in children's arithmetic performance, the neurocognitive mechanisms that bind symbolic number processing and arithmetic are unknown. The current fMRI study investigated the relationship between children's brain activation during symbolic number comparison (Arabic digits) and individual differences in arithmetic fluency. A significant correlation was found between the numerical ratio effect on reaction times and accuracy and children's arithmetic scores. Furthermore, children with a stronger neural ratio effect in the left intraparietal sulcus (IPS) during symbolic number processing exhibited higher arithmetic scores. Previous research has demonstrated that activation of the IPS during numerical magnitude processing increases over the course of development, and that the left IPS plays an important role in symbolic number processing. The present findings extend this knowledge to show that children with more mature response modulation of the IPS during symbolic number processing exhibit higher arithmetic competence. These results suggest that the left IPS is a key neural substrate for the relationship between the relative precision of the representation of numerical magnitude and school-level arithmetic competence. © 2012 Elsevier Ltd.

    Sharp error bounds for complex floating-point inversion

    We study the accuracy of the classic algorithm for inverting a complex number given by its real and imaginary parts as floating-point numbers. Our analyses are done in binary floating-point arithmetic, with an unbounded exponent range and in precision p; we also assume that the basic arithmetic operations (+, −, ×, /) are rounded to nearest, so that the roundoff unit is u = 2⁻ᵖ. We bound the largest relative error in the computed inverse either in the componentwise or in the normwise sense. We prove the componentwise relative error bound 3u for the complex inversion algorithm (assuming p ≥ 4), and we show that this bound is asymptotically optimal (as p → ∞) when p is even, and sharp when using one of the basic IEEE 754 binary formats with an odd precision (p = 53, 113). This componentwise bound obviously leads to the same bound 3u for the normwise relative error. However, we prove that the smaller bound 2.707131u holds (assuming p ≥ 24) for the normwise relative error, and we illustrate the sharpness of this bound for the basic IEEE 754 binary formats (p = 24, 53, 113) using numerical examples.
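    The 3u componentwise bound can be checked empirically. The sketch below (our own harness, not the paper's proof) runs the classic inversion algorithm in binary64 (p = 53) and measures the componentwise relative error against an exact rational-arithmetic reference:

```python
from fractions import Fraction

def complex_inverse(a, b):
    """Classic algorithm for 1/(a + ib): d = a*a + b*b, then real part a/d
    and imaginary part -b/d, each operation correctly rounded to nearest."""
    d = a * a + b * b
    return a / d, -b / d

def componentwise_rel_err(a, b):
    """Componentwise relative error of complex_inverse against the exact
    value 1/(a + ib) = (a - ib)/(a² + b²), computed in rational arithmetic."""
    fa, fb = Fraction(a), Fraction(b)
    d = fa * fa + fb * fb
    exact_re, exact_im = fa / d, -fb / d
    got_re, got_im = complex_inverse(a, b)
    return max(abs((Fraction(got_re) - exact_re) / exact_re),
               abs((Fraction(got_im) - exact_im) / exact_im))

u = Fraction(1, 2 ** 53)  # roundoff unit for binary64 (p = 53)
err = componentwise_rel_err(1.0, 3.0)
print(float(err / u))  # measured error in units of u; the paper bounds it by 3
```

    Floats convert to `Fraction` exactly, so the measured error is exact; scanning many inputs this way is how one would probe the sharpness of the bound numerically.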