
    Compensated evaluation of tensor product surfaces in CAGD

    In computer-aided geometric design, a polynomial surface is usually represented in Bézier form. Such a surface is typically evaluated using an extension of the de Casteljau algorithm. Using error-free transformations, a compensated version of this algorithm is presented, which improves the accuracy of the usual algorithm. A forward error analysis illustrating this fact is developed.
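
    The core of such compensated schemes is to capture the rounding error of every addition and multiplication with error-free transformations (TwoSum and TwoProd) and to carry those errors through the recurrence as a correction term. The sketch below is a minimal illustration of that idea for a Bézier curve, in Python for readability; comp_de_casteljau and the helper names are our own, not the paper's code, which treats the tensor-product surface case.

        def two_sum(a, b):
            # error-free transformation: a + b == s + e exactly (Knuth)
            s = a + b
            bb = s - a
            e = (a - (s - bb)) + (b - bb)
            return s, e

        def split(a):
            # Veltkamp splitting of a 53-bit double into two 26-bit halves
            c = 134217729.0 * a          # 2**27 + 1
            hi = c - (c - a)
            return hi, a - hi

        def two_prod(a, b):
            # error-free transformation: a * b == p + e exactly (Dekker)
            p = a * b
            ah, al = split(a)
            bh, bl = split(b)
            e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
            return p, e

        def comp_de_casteljau(coeffs, t):
            """de Casteljau evaluation of a Bezier curve at t, carrying a
            first-order correction built from every rounding error."""
            u, e_u = two_sum(1.0, -t)     # u = fl(1 - t) and its error
            b, err = list(coeffs), [0.0] * len(coeffs)
            for _ in range(len(coeffs) - 1):
                nb, ne = [], []
                for i in range(len(b) - 1):
                    p1, e1 = two_prod(u, b[i])
                    p2, e2 = two_prod(t, b[i + 1])
                    s, e3 = two_sum(p1, p2)
                    nb.append(s)
                    ne.append(u * err[i] + t * err[i + 1]
                              + e1 + e2 + e3 + e_u * b[i])
                b, err = nb, ne
            return b[0] + err[0]          # compensated result

    For a tensor-product surface, the same scheme would be applied along each row of the control net and then once more to the column of intermediate results.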

    Algorithm 960: Polynomial: An object-oriented Matlab library of fast and efficient algorithms for polynomials

    The design and implementation of a Matlab object-oriented software library for working with polynomials is presented. The construction and evaluation of polynomials in Bernstein form are motivated and justified. Efficient constructions of the coefficients of a polynomial in Bernstein form, when the polynomial is not given in this representation, are provided. The presented adaptive evaluation algorithm uses the VS (Volk and Schumaker) algorithm, the de Casteljau algorithm, and a compensated VS algorithm. In addition, the library is completed with algorithms for other common operations on polynomials in Bernstein form.
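
    One such construction is the classical change of basis from monomial to Bernstein coefficients on [0, 1], given by c_j = sum_{i <= j} (C(j, i) / C(n, i)) a_i. The sketch below is an illustrative Python transcription of that standard formula; the library itself is Matlab, and monomial_to_bernstein is our name, not the library's API.

        from math import comb

        def monomial_to_bernstein(a):
            """Bernstein coefficients on [0, 1] of p(x) = sum_i a[i] * x**i,
            via c_j = sum_{i <= j} (C(j, i) / C(n, i)) * a[i]."""
            n = len(a) - 1
            return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
                    for j in range(n + 1)]

        # quick check: 1 + 2x + 3x**2 has Bernstein coefficients [1, 2, 6],
        # matching p(0) = 1 and p(1) = 6 at the endpoints
        print(monomial_to_bernstein([1.0, 2.0, 3.0]))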

    Computing floating-point logarithms with fixed-point operations

    Elementary functions from the mathematical library input and output floating-point numbers. However, it is possible to implement them purely using integer/fixed-point arithmetic. This option was not attractive between 1985 and 2005, because mainstream processor hardware supported 64-bit floating-point but only 32-bit integers, and conversions between floating-point and integer were costly. This has changed in recent years, in particular with the generalization of native 64-bit integer support. The purpose of this article is therefore to reevaluate the relevance of computing floating-point functions in fixed point. For this, several variants of the double-precision logarithm function are implemented and evaluated. Formulating the problem as a fixed-point one is easy after the range has been (classically) reduced. Then, 64-bit integers provide slightly more accuracy than the 53-bit mantissa, which helps speed up the evaluation. Finally, multi-word arithmetic, critical for accurate implementations, is much faster in fixed point and natively supported by recent compilers. Novel techniques of argument reduction and rounding test are introduced in this context. Thanks to all this, a purely integer implementation of the correctly rounded double-precision logarithm outperforms the previous state of the art, with the worst-case execution time reduced by a factor of 5. This work also introduces variants of the logarithm that input a floating-point number and output the result in fixed point. These are shown to be both more accurate and more efficient than the traditional floating-point functions for some applications.
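
    To make the flavour of this concrete, here is a toy sketch of our own (not the paper's implementation) of a double-precision logarithm computed with integer arithmetic only, written in Python for readability: the exponent and mantissa are extracted from the bit pattern, the mantissa is re-centered around 1, and ln(m) = 2*atanh((m - 1)/(m + 1)) is evaluated by a fixed-point series. Real implementations use minimax polynomials, table-based reduction, and a rounding test; none of that is reproduced here, and fixed_log is only accurate to a few ulps for positive normal inputs.

        import math
        import struct

        FRAC = 62                        # Q2.62 fixed point: value = n / 2**62
        ONE = 1 << FRAC
        LN2 = round(math.log(2) * ONE)   # ln(2) as a fixed-point constant

        def fx_mul(a, b):
            # fixed-point product: (a * b) / 2**FRAC, floor-rounded
            return (a * b) >> FRAC

        def fixed_log(x):
            """Toy fixed-point log for positive normal doubles (not
            correctly rounded; illustrates the structure only)."""
            assert x > 0.0 and math.isfinite(x)
            bits = struct.unpack("<Q", struct.pack("<d", x))[0]
            e = (bits >> 52) - 1023                      # unbiased exponent
            mant = (bits & ((1 << 52) - 1)) | (1 << 52)  # m * 2**52, m in [1, 2)
            # re-center so the reduced mantissa lies in [sqrt(1/2), sqrt(2))
            if mant >= round(math.sqrt(2) * (1 << 52)):
                e += 1
                m = mant << (FRAC - 53)                  # m / 2 in Q2.62
            else:
                m = mant << (FRAC - 52)                  # m in Q2.62
            # ln(m) = 2 * atanh(r) with r = (m - 1) / (m + 1), |r| < 0.172
            r = ((m - ONE) << FRAC) // (m + ONE)
            r2 = fx_mul(r, r)
            term, acc = r, r
            for k in range(3, 20, 2):    # r**3/3 + r**5/5 + ... + r**19/19
                term = fx_mul(term, r2)
                acc += term // k
            return (e * LN2 + 2 * acc) / ONE

    For example, fixed_log(3.0) should agree with math.log(3.0) to within a few ulps; only the final division back to a double involves floating-point arithmetic.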

    Lossy Polynomial Datapath Synthesis

    The design of the compute elements of hardware, its datapath, plays a crucial role in determining the speed, area, and power consumption of a device. The building blocks of a datapath are polynomial in nature. Research into the implementation of adders and multipliers has a long history, and developments in this area will continue. Despite such efficient building-block implementations, correctly determining the necessary precision of each building block within a design is a challenge. Typically, standard or uniform precisions are chosen, such as the IEEE floating-point precisions. The hardware quality of the datapath is inextricably linked to the precisions of which it is composed. There is, however, another essential element that determines hardware quality, namely the accuracy of the components. If one were to implement each of the official IEEE rounding modes, significant differences in hardware quality would be found. But just as standard precisions may be chosen unnecessarily, components are typically constructed to return one of these correctly rounded results even when such accuracy is far from necessary. Unfortunately, if a lesser accuracy is permissible, the existing techniques that reduce hardware implementation cost by exploiting this freedom invariably produce errors whose properties are extremely difficult to determine. This thesis addresses the problem of how to construct hardware that efficiently implements fixed- and floating-point polynomials while exploiting a global error freedom, a form of lossy synthesis. The fixed-point contributions include resource minimisation when implementing mutually exclusive polynomials, the construction of minimal lossy components with guaranteed worst-case error, and a technique for the efficient composition of such components. Contributions are also made to how a floating-point polynomial can be implemented with guaranteed relative error.
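
    A standard example of a lossy component with a guaranteed worst-case error is a truncated multiplier, which discards the low-order columns of the partial-product array to save area while the dropped value stays provably bounded. The Python sketch below is a software model of that idea under our own naming, not the thesis's method (actual lossy synthesis operates on the hardware netlist):

        def truncated_mul(a, b, n, t):
            """n-bit x n-bit unsigned multiply that discards every
            partial-product bit lying in a column below t."""
            acc = 0
            for j in range(n):
                if (b >> j) & 1:
                    acc += (a << j) & ~((1 << t) - 1)   # keep columns >= t
            return acc

        def worst_case_error(n, t):
            # column c of the partial-product array holds
            # min(c + 1, n, 2n - 1 - c) bits, so the dropped value is at
            # most the sum of all bit weights below column t
            return sum(min(c + 1, n, 2 * n - 1 - c) << c for c in range(t))

        # the lossy product never exceeds the exact one, and the gap is
        # bounded by the analytical worst case
        a, b, n, t = 0xBEEF, 0xCAFE, 16, 12
        assert 0 <= a * b - truncated_mul(a, b, n, t) <= worst_case_error(n, t)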

    Impact of Accuracy Optimization on the Convergence of Numerical Iterative Methods

    Among other objectives, rewriting programs is a useful technique to improve numerical accuracy. However, this optimization is not intuitive, which is why we turn to automatic transformation techniques. We are interested in the optimization of numerical programs relying on IEEE754 floating-point arithmetic. In this article, our main contribution is to study the impact of optimizing the numerical accuracy of programs on the time required by numerical iterative methods to converge. To emphasize the usefulness of our tool, we make it optimize several examples of numerical methods, such as Jacobi's method, Newton-Raphson's method, etc. We show that significant speedups are obtained in terms of number of iterations, time, and flops.
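
    The effect being measured can be reproduced in miniature: two algebraically equivalent forms of the same function hand Newton's method residuals of very different quality near a root. The sketch below is our own illustration, not the paper's tool or benchmarks; at a tight tolerance the expanded cubic tends to stall on its roundoff plateau while the factored form still converges.

        def newton(f, df, x0, tol=1e-15, max_iter=200):
            """Newton-Raphson iteration, stopping on a small residual."""
            x = x0
            for it in range(1, max_iter + 1):
                fx = f(x)
                if abs(fx) <= tol:
                    return x, it
                dfx = df(x)
                if dfx == 0.0:
                    return x, it          # derivative lost to roundoff
                x -= fx / dfx
            return x, max_iter

        # two algebraically equal, numerically different forms of (x - 2)**3
        forms = [
            ("expanded", lambda x: x**3 - 6*x**2 + 12*x - 8,
                         lambda x: 3*x**2 - 12*x + 12),
            ("factored", lambda x: (x - 2.0)**3,
                         lambda x: 3.0 * (x - 2.0)**2),
        ]
        for name, f, df in forms:
            root, its = newton(f, df, 3.0)
            print(f"{name}: root = {root:.15f} after {its} iterations")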

    Numerical program optimisation by automatic improvement of the accuracy of computations

    Over the last decade, guaranteeing the accuracy of computations relying on IEEE754 floating-point arithmetic has become increasingly complex. Failures caused by small or large perturbations due to round-off errors have been registered. To cope with this issue, we have developed a tool which corrects these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, operates on pieces of code with assignments, conditionals, and loops. By transforming programs, we can significantly optimize the numerical accuracy of computations by minimizing the error relative to the exact result. In this article, we present two important and desirable side-effects of our transformation. Firstly, we show that our transformed programs, executed in single precision, may compete with untransformed code executed in double precision. Secondly, we show that optimizing the numerical accuracy of programs accelerates the convergence of numerical iterative methods. Both of these properties of our transformation are of great interest for numerical software.
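
    A classic rewrite with exactly this first property is compensated (Kahan) summation: a transformed single-precision loop recovers almost all of the accuracy the naive loop loses. The sketch below, assuming numpy and using names of our own choosing, contrasts the two against a high-accuracy reference; it illustrates the kind of gain involved, not the paper's own tool or benchmarks.

        import math
        import numpy as np

        def naive_sum32(xs):
            # straightforward single-precision accumulation
            s = np.float32(0.0)
            for x in xs:
                s = np.float32(s + x)
            return s

        def kahan_sum32(xs):
            # compensated summation: c carries the rounding error of
            # each addition and feeds it back into the next one
            s = np.float32(0.0)
            c = np.float32(0.0)
            for x in xs:
                y = np.float32(x - c)
                t = np.float32(s + y)
                c = np.float32(np.float32(t - s) - y)
                s = t
            return s

        xs = np.random.default_rng(0).random(100_000, dtype=np.float32) / 7
        ref = math.fsum(float(x) for x in xs)     # high-accuracy reference
        print("naive float32 error:", abs(naive_sum32(xs) - ref))
        print("kahan float32 error:", abs(kahan_sum32(xs) - ref))
        print("plain float64 error:", abs(xs.astype(np.float64).sum() - ref))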

    Proceedings of the 7th Conference on Real Numbers and Computers (RNC'7)

    These are the proceedings of RNC'7.