How do incorrect results change the processing of arithmetic information? Evidence from a divided visual field experiment
Despite several recent important developments in understanding numerical processing of both isolated numbers and numbers in the context of arithmetic equations, the relative impact of congruency on high-level, compared to low-level, processing remains unclear. The current study investigated hemispheric differences in the processing of arithmetic material, as a function of semantic and perceptual congruency, using a delayed answer verification task and a divided visual field paradigm. A total of 37 participants (22 females and 15 males; mean age 30.06 years, SD 9.78) were presented unilaterally or bilaterally with equation results that were either correct or incorrect and had a consistent or inconsistent numerical notation. Statistical analyses showed no visual field differences in a notation consistency task, whereas when judgements had to be made on mathematical accuracy there was a right visual field advantage for incorrect equations that were notation consistent. These results reveal a clear differential processing of arithmetic information by the two cerebral hemispheres, with a special emphasis on erroneous calculations. Faced with incorrect results in a consistent numerical notation, the left hemisphere outperforms its right counterpart in making mathematical accuracy decisions.
Bidiagonal decompositions of Vandermonde-type matrices of arbitrary rank
We present a method to derive new explicit expressions for bidiagonal decompositions of Vandermonde and related matrices, such as the (q-, h-) Bernstein-Vandermonde ones, among others. These results generalize the existing expressions for nonsingular matrices to matrices of arbitrary rank. For totally nonnegative matrices of the above classes, the new decompositions can be computed efficiently and to high componentwise relative accuracy in floating point arithmetic. In turn, matrix computations (e.g., eigenvalue computation) can also be performed efficiently and to high relative accuracy.
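The notion of total nonnegativity used above can be made concrete with a brute-force minor check. This is an illustrative sketch only (exponential cost; `vandermonde` and `is_totally_nonnegative` are hypothetical helper names, not from the paper, whose decompositions exploit structure precisely to avoid this kind of computation):

```python
import numpy as np
from itertools import combinations

def vandermonde(nodes):
    """Square Vandermonde matrix with V[i, j] = nodes[i] ** j."""
    n = len(nodes)
    return np.array([[x ** j for j in range(n)] for x in nodes], dtype=float)

def is_totally_nonnegative(M, tol=1e-9):
    """Brute-force check that every minor of the square matrix M is >= -tol.

    Exponential cost -- for demonstration on tiny matrices only.
    """
    n = M.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(M[np.ix_(rows, cols)]) < -tol:
                    return False
    return True
```

A Vandermonde matrix with increasing positive nodes is totally nonnegative (indeed totally positive), whereas a negative node immediately produces a negative 1-by-1 minor.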
Toward accurate polynomial evaluation in rounded arithmetic
Given a multivariate real (or complex) polynomial $p$ and a domain ${\cal D}$, we would like to decide whether an algorithm exists to evaluate $p(x)$ accurately for all $x \in {\cal D}$ using rounded real (or complex) arithmetic. Here ``accurately'' means with relative error less than 1, i.e., with some correct leading digits. The answer depends on the model of rounded arithmetic: we assume that for any arithmetic operator $op(a,b)$, for example $a+b$ or $a \cdot b$, its computed value is $op(a,b) \cdot (1+\delta)$, where $|\delta|$ is bounded by some constant $\epsilon$ with $0 < \epsilon \ll 1$, but $\delta$ is otherwise arbitrary. This model is the traditional one used to analyze the accuracy of floating point algorithms. Our ultimate goal is to establish a decision procedure that, for any $p$ and ${\cal D}$, either exhibits an accurate algorithm or proves that none exists. In contrast to the case where numbers are stored and manipulated as finite bit strings (e.g., as floating point numbers or rational numbers), we show that some polynomials are impossible to evaluate accurately. The existence of an accurate algorithm will depend not just on $p$ and ${\cal D}$, but on which arithmetic operators and which constants are available and whether branching is permitted. Toward this goal, we present necessary conditions on $p$ for it to be accurately evaluable on open real or complex domains ${\cal D}$. We also give sufficient conditions, and describe progress toward a complete decision procedure. We do present a complete decision procedure for homogeneous polynomials $p$ with integer coefficients, ${\cal D} = \C^n$, and using only the arithmetic operations $+$, $-$ and $\cdot$.
Comment: 54 pages, 6 figures; refereed version; to appear in Foundations of Computational Mathematics: Santander 2005, Cambridge University Press, March 200
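A toy sketch (my illustration, not the paper's) of the rounding model described above: each operation returns its exact result times $(1+\delta)$ with $|\delta| \le \epsilon$. With a worst-case perturbation, a cancellation-prone evaluation of $p(x,y,z) = x+y+z$ as $(x+y)+z$ can lose every correct digit, which is the kind of impossibility phenomenon the abstract alludes to:

```python
# Simulate the traditional rounded-arithmetic model: each operation returns
# the exact result times (1 + delta), with |delta| bounded by EPS.
EPS = 1e-8  # bound on each rounding perturbation

def tm_round(value, delta=EPS):
    """Apply one worst-case perturbation (delta = EPS) to an exact result."""
    return value * (1.0 + delta)

def tm_add(a, b, delta=EPS):
    """Addition in the rounded model."""
    return tm_round(a + b, delta)

# Evaluate p(x, y, z) = x + y + z as (x + y) + z under severe cancellation:
# the perturbation of the large intermediate sum swamps the true answer.
x, y, z = 1.0, 1e20, -1e20
exact = 1.0  # x + y + z in exact arithmetic
computed = tm_add(tm_add(x, y), z)
rel_err = abs(computed - exact) / abs(exact)  # far greater than 1
```

Here the relative error exceeds 1, i.e. the computed value has no correct leading digits, no matter how small `EPS` is chosen relative to the cancellation.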
Accurate and Efficient Expression Evaluation and Linear Algebra
We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results depend strongly on the model of arithmetic: most of our results use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as $x+y+z$, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and the decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
Comment: 49 pages, 6 figures, 1 table
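One well-known building block for such a library of additional accurate operations is an error-free sum: Knuth's TwoSum returns both the rounded sum and its exact rounding error, and can be cascaded into a compensated summation. A minimal sketch (the function names are mine, not from the paper):

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and s + e == a + b
    exactly, for binary floating point and barring overflow."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_round = b - b_virtual
    a_round = a - a_virtual
    return s, a_round + b_round

def accurate_sum(xs):
    """Compensated summation: accumulate the TwoSum error terms separately
    and fold them back in at the end."""
    total = 0.0
    compensation = 0.0
    for x in xs:
        total, err = two_sum(total, x)
        compensation += err
    return total + compensation
```

For example, `accurate_sum([1.0, 1e16, 1.0, -1e16])` recovers the exact answer 2.0, whereas naive left-to-right summation returns 0.0.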
Numerical stability of descent methods for solving linear equations
In this paper we perform a round-off error analysis of descent methods for solving a linear system $Ax=b$, where $A$ is supposed to be symmetric and positive definite. This leads to a general result on the attainable accuracy of the computed sequence $\{x_i\}$ when the method is performed in floating point arithmetic. The general theory is applied to the Gauss-Southwell method and the gradient method. Both methods appear to be well-behaved, which means that these methods compute an approximation $x_i$ to the exact solution $A^{-1}b$ which is the exact solution of a slightly perturbed linear system, i.e. $(A+\delta A)x_i = b$, with $\|\delta A\|$ of order $\epsilon \|A\|$, where $\epsilon$ is the relative machine precision and $\|\cdot\|$ denotes the spectral norm.
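A minimal sketch of the gradient (steepest descent) method analyzed above, assuming NumPy and a symmetric positive definite matrix $A$; the function name and stopping rule are my choices, not the paper's:

```python
import numpy as np

def steepest_descent(A, b, tol=1e-12, max_iter=10_000):
    """Gradient method for SPD A: minimize 0.5 x'Ax - b'x by stepping along
    the residual r = b - Ax with exact line search alpha = (r'r) / (r'Ar)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * b_norm:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)  # exact line search along r
        x = x + alpha * r
        r = r - alpha * Ar          # recurred residual, updated incrementally
    return x
```

Note that the residual is recurred rather than recomputed from $b - Ax$ each step; the attainable-accuracy question studied in the paper is precisely how far such a computed iterate can be trusted in floating point arithmetic.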
On the computation of Poisson probabilities
The Poisson distribution is a distribution commonly used in statistics. It also plays a central role in the analysis of the transient behaviour of continuous-time Markov chains. Several methods have been devised for evaluating, using floating-point arithmetic, the probability mass function (PMF) of the Poisson distribution. Restricting our attention to published methods intended for the computation of a single probability or a few of them, we show that none of them is completely satisfactory in terms of accuracy. With that motivation, we develop a new method for the evaluation of the PMF of the Poisson distribution. The method is intended for the computation of a single probability or a few of them. Numerical experimentation illustrates that the method can be more accurate and slightly faster than the previous methods. Besides, the method comes with a guaranteed approximation relative error.
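As an illustration of the floating-point issue, a common way to evaluate a single Poisson probability without overflowing the factorial or the power is to work in log space (this is a generic textbook approach, not the paper's new method):

```python
import math

def poisson_pmf(k, lam):
    """Poisson PMF P(X = k) = lam**k * exp(-lam) / k!, evaluated in log space
    via lgamma so that large k or lam do not overflow intermediate terms."""
    if k < 0:
        return 0.0
    if lam == 0.0:
        return 1.0 if k == 0 else 0.0
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
```

For instance, `poisson_pmf(1000, 1000.0)` evaluates without trouble, whereas the naive `1000.0**1000 / math.factorial(1000)` overflows in double precision; the cancellation inside the exponent is also why such formulas do not come with a guaranteed relative error of the kind the paper provides.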
The role of the left intraparietal sulcus in the relationship between symbolic number processing and children's arithmetic competence
The neural foundations of arithmetic learning are not well understood. While behavioral studies have revealed relationships between symbolic number processing and individual differences in children's arithmetic performance, the neurocognitive mechanisms that bind symbolic number processing and arithmetic are unknown. The current fMRI study investigated the relationship between children's brain activation during symbolic number comparison (Arabic digits) and individual differences in arithmetic fluency. A significant correlation was found between the numerical ratio effect on reaction times and accuracy and children's arithmetic scores. Furthermore, children with a stronger neural ratio effect in the left intraparietal sulcus (IPS) during symbolic number processing exhibited higher arithmetic scores. Previous research has demonstrated that activation of the IPS during numerical magnitude processing increases over the course of development, and that the left IPS plays an important role in symbolic number processing. The present findings extend this knowledge to show that children with more mature response modulation of the IPS during symbolic number processing exhibit higher arithmetic competence. These results suggest that the left IPS is a key neural substrate for the relationship between the relative precision of the representation of numerical magnitude and school-level arithmetic competence. © 2012 Elsevier Ltd.
Sharp error bounds for complex floating-point inversion
We study the accuracy of the classic algorithm for inverting a complex number given by its real and imaginary parts as floating-point numbers. Our analyses are done in binary floating-point arithmetic, with an unbounded exponent range and in precision $p$; we also assume that the basic arithmetic operations ($+$, $-$, $\times$, $\div$) are rounded to nearest, so that the roundoff unit is $u = 2^{-p}$. We bound the largest relative error in the computed inverse either in the componentwise or in the normwise sense. We prove the componentwise relative error bound $3u$ for the complex inversion algorithm (assuming $p$ is large enough), and we show that this bound is asymptotically optimal (as $p \to \infty$) when $p$ is even, and sharp when using one of the basic IEEE 754 binary formats with an odd precision ($p \in \{53, 113\}$). This componentwise bound obviously leads to the same bound for the normwise relative error. However, we prove that a smaller bound holds for the normwise relative error (again for $p$ large enough), and we illustrate the sharpness of this bound for the basic IEEE 754 binary formats ($p \in \{24, 53, 113\}$) using numerical examples.
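For reference, the classic algorithm analyzed above is the textbook componentwise formula $1/(a+ib) = (a-ib)/(a^2+b^2)$. A minimal sketch (the function name is mine):

```python
def complex_inverse(a, b):
    """Classic componentwise inversion: 1/(a + ib) = (a - ib) / (a*a + b*b),
    computed with real floating-point operations only.

    Caveat: the paper's analysis assumes an unbounded exponent range; with
    IEEE 754 doubles, a*a + b*b can overflow or underflow for extreme inputs.
    """
    d = a * a + b * b
    return a / d, -b / d
```

For example, inverting $3 + 4i$ gives $(3 - 4i)/25 = 0.12 - 0.16i$. The error bounds in the abstract quantify exactly how far the rounded version of this formula can drift from the true inverse, componentwise and in norm.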