104 research outputs found

    Salsa: An Automatic Tool to Improve the Numerical Accuracy of Programs

    This article describes Salsa, an automatic tool to improve the accuracy of the floating-point computations done in numerical codes. Based on static analysis methods by abstract interpretation, our tool takes as input an original program, applies to it a set of transformation rules and then generates a transformed program which is more accurate than the initial one. The original and the transformed programs are written in the same imperative language. This article is a concise description of former work on the techniques implemented in Salsa, extended with a presentation of the main software architecture, the inputs and outputs of the tool, as well as experimental results obtained by applying our tool to a set of sample programs coming from embedded systems and numerical analysis.
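
    As a minimal illustration of the kind of rewriting such a tool targets (a hand-written C sketch, not Salsa's actual output), two mathematically equivalent groupings of a sum can round very differently when the operands have very different magnitudes:

        #include <stdio.h>

        /* Illustrative only: two mathematically equivalent evaluation orders of
         * a + b + c, where a and c are tiny compared to b.  Grouping the small
         * terms first loses less precision in single-precision arithmetic. */
        int main(void) {
            float a = 3.0f, b = 1.0e8f, c = 3.0f;
            float naive = (a + b) + c;      /* small terms absorbed by b one at a time */
            float rewritten = (a + c) + b;  /* small terms grouped before the large one */
            printf("naive     = %.9g\n", naive);
            printf("rewritten = %.9g\n", rewritten);
            return 0;
        }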

    Intra-procedural Optimization of the Numerical Accuracy of Programs

    Numerical programs performing floating-point computations are very sensitive to the way formulas are written. In recent years, several techniques have been proposed to transform arithmetic expressions in order to improve their accuracy and, in this article, we go one step further by automatically transforming larger pieces of code containing assignments and control structures. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improve the numerical accuracy of the target variable. We use abstract interpretation-based static analysis techniques to over-approximate the round-off errors in programs and during the transformation of expressions. A prototype has been implemented and experimental results are presented on classical numerical analysis algorithms and algorithms for embedded systems.
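
    A hand-written sketch of the loop case (illustrative only, not a transformation produced by the prototype): grouping the small terms of two consecutive iterations before adding them to the large accumulator changes the rounding behaviour of the loop.

        #include <stdio.h>

        /* Illustrative loop-level rewrite: instead of adding each small element
         * directly to the large running sum, two iterations are combined so the
         * small terms are summed together before being added to the accumulator. */
        int main(void) {
            float small[8] = {3, 3, 3, 3, 3, 3, 3, 3};

            /* original loop: each small term is added to the large sum in turn */
            float s1 = 1.0e8f;
            for (int i = 0; i < 8; i++)
                s1 = s1 + small[i];

            /* transformed loop: pairs of small terms are summed first */
            float s2 = 1.0e8f;
            for (int i = 0; i < 8; i += 2)
                s2 = s2 + (small[i] + small[i + 1]);

            printf("original    = %.9g\n", s1);
            printf("transformed = %.9g\n", s2);
            return 0;
        }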

    On the Impact of Numerical Accuracy Optimization on General Performances of Programs

    The floating-point numbers used in computer programs are a finite approximation of real numbers. In practice, this approximation may introduce round-off errors, which can lead to catastrophic results. In previous work, we have proposed intraprocedural and interprocedural program transformations for numerical accuracy optimization. All these transformations have been implemented in our tool, Salsa. Experimental results on various programs, coming either from embedded systems or from numerical methods, show the efficiency of the transformation in terms of numerical accuracy improvement but also in terms of other criteria such as execution time and code size. This article studies the impact of program transformations for numerical accuracy, especially in embedded systems, on other efficiency parameters such as execution time, code size and the accuracy of the other variables (those which are not chosen for optimization).
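
    A minimal sketch of how such a comparison can be set up, assuming placeholder original/transformed versions of a computation rather than the article's actual benchmarks: both variants are timed and their results are compared against a double-precision reference.

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        /* Placeholder "original" and "transformed" computations; in the article
         * these would be the benchmark programs before and after transformation. */
        static float original(const float *a, int n) {
            float s = 1.0e8f;
            for (int i = 0; i < n; i++) s += a[i];
            return s;
        }

        static float transformed(const float *a, int n) {
            float s = 1.0e8f, acc = 0.0f;
            for (int i = 0; i < n; i++) acc += a[i];
            return s + acc;
        }

        int main(void) {
            enum { N = 1000000 };
            static float a[N];
            for (int i = 0; i < N; i++) a[i] = 0.001f;

            /* double-precision reference used to estimate the round-off error */
            double ref = 1.0e8;
            for (int i = 0; i < N; i++) ref += (double)a[i];

            clock_t t0 = clock();
            float r1 = original(a, N);
            clock_t t1 = clock();
            float r2 = transformed(a, N);
            clock_t t2 = clock();

            printf("original:    result %.9g, error %.3g, time %.3fs\n",
                   r1, fabs((double)r1 - ref), (double)(t1 - t0) / CLOCKS_PER_SEC);
            printf("transformed: result %.9g, error %.3g, time %.3fs\n",
                   r2, fabs((double)r2 - ref), (double)(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }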

    Mixed Precision Tuning with Salsa

    Precision tuning consists of finding the smallest floating-point formats enabling a program to compute some results with a required accuracy. In mixed precision, this problem has a huge combinatorial complexity since any value may have its own format. Precision tuning has given rise to the development of several tools that aim at guaranteeing a desired precision on the outputs of programs performing floating-point computations, by minimizing the initial, over-estimated, precision of the inputs and intermediate results. In this article, we present an extension of our tool for numerical accuracy, Salsa, which performs precision tuning. Originally, Salsa is a program transformation tool based on static analysis which improves the accuracy of floating-point computations. We have extended Salsa with a precision tuning static analysis. We present experimental results showing the efficiency of this new feature as well as the additional gains obtained by performing Salsa’s program transformation before the precision tuning analysis. We evaluate our tool on a set of programs coming from various domains such as embedded systems and numerical analysis.
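
    A hand-written illustration of the idea behind mixed precision (not Salsa's tuning output): only the accumulator is promoted to double precision while the data stays in single precision, and the error of each variant is measured against a reference.

        #include <stdio.h>
        #include <math.h>

        /* Illustrative mixed precision: the accumulator is the variable whose
         * round-off dominates, so giving it a larger format while the inputs
         * remain in single precision may be enough to meet an accuracy target. */
        int main(void) {
            enum { N = 100000 };
            static float x[N];
            for (int i = 0; i < N; i++) x[i] = 0.1f;

            /* uniform single precision */
            float sum_single = 0.0f;
            for (int i = 0; i < N; i++) sum_single += x[i];

            /* mixed precision: data in float, accumulator in double */
            double sum_mixed = 0.0;
            for (int i = 0; i < N; i++) sum_mixed += (double)x[i];

            double reference = (double)N * (double)x[0];  /* exact sum of the stored values */
            printf("uniform single precision error: %.3g\n", fabs((double)sum_single - reference));
            printf("mixed precision error:          %.3g\n", fabs(sum_mixed - reference));
            return 0;
        }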

    Data-types optimization for floating-point formats by program transformation

    In floating-point arithmetic, a desirable property of computations is to be accurate, since in many industrial contexts small or large perturbations due to round-off errors may cause considerable damage. To cope with this issue, we have developed a tool which corrects these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, concerns pieces of code with assignments, conditionals and loops. By transforming programs, we can significantly optimize the numerical accuracy of computations by minimizing the error relative to the exact result. An interesting side-effect of our technique is that more accurate computations may make it possible to use smaller data-types. In this article, we show that our transformed programs, executed in single precision, may compete with the untransformed codes executed in double precision.
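
    A small hand-written example of this effect (illustrative data and rewriting, not the article's benchmarks): for this particular input, the single-precision sum with the small terms grouped first is as accurate as the double-precision original, while the naive single-precision version drifts.

        #include <stdio.h>
        #include <math.h>

        /* original evaluation order, double precision */
        static double original_double(double big, const double *small, int n) {
            double s = big;
            for (int i = 0; i < n; i++) s += small[i];
            return s;
        }

        /* original evaluation order, single precision */
        static float original_float(float big, const float *small, int n) {
            float s = big;
            for (int i = 0; i < n; i++) s += small[i];
            return s;
        }

        /* rewritten order, single precision: small terms grouped first */
        static float transformed_float(float big, const float *small, int n) {
            float acc = 0.0f;
            for (int i = 0; i < n; i++) acc += small[i];
            return big + acc;
        }

        int main(void) {
            enum { N = 1000 };
            static double small_d[N];
            static float  small_f[N];
            for (int i = 0; i < N; i++) { small_d[i] = 0.5; small_f[i] = 0.5f; }

            double exact = 1.0e7 + 0.5 * N;   /* exact value for this data */
            printf("original,    double: error %.3g\n",
                   fabs(original_double(1.0e7, small_d, N) - exact));
            printf("original,    float:  error %.3g\n",
                   fabs((double)original_float(1.0e7f, small_f, N) - exact));
            printf("transformed, float:  error %.3g\n",
                   fabs((double)transformed_float(1.0e7f, small_f, N) - exact));
            return 0;
        }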

    Transformation of a PID Controller for Numerical Accuracy

    Numerical programs performing floating-point computations are very sensitive to the way formulas are written. Several techniques have been proposed concerning the transformation of expressions in order to improve their accuracy and we now aim at going a step further by automatically transforming larger pieces of code containing several assignments and control structures. This article presents a case study in this direction. We consider a PID controller and we transform its code in order to improve its accuracy. The experimental data obtained when comparing the different, mathematically equivalent, versions of the code show that these transformations have a significant impact on the accuracy of the computation.
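
    A schematic PID loop in C (not the controller studied in the article) showing two mathematically equivalent formulations of the output; which one rounds better depends on the ranges of the gains and of the error signal, which is exactly what an accuracy analysis has to determine.

        #include <stdio.h>

        /* Schematic PID controller with illustrative gains and a crude plant
         * model; the two output formulas are mathematically equivalent. */
        int main(void) {
            float kp = 9.4514f, ki = 0.69006f, kd = 2.8454f;
            float dt = 0.2f, setpoint = 5.0f, measure = 8.0f;
            float integral = 0.0f, previous_error = 0.0f;

            for (int step = 0; step < 50; step++) {
                float error = setpoint - measure;
                integral += error * dt;
                float derivative = (error - previous_error) / dt;

                /* original formulation */
                float out1 = kp * error + ki * integral + kd * derivative;

                /* equivalent factored formulation */
                float out2 = kp * (error + (ki / kp) * integral + (kd / kp) * derivative);

                previous_error = error;
                measure += 0.01f * out1;   /* crude plant response for the demo */
                if (step % 10 == 0)
                    printf("step %2d: out1 = %.7g  out2 = %.7g\n", step, out1, out2);
            }
            return 0;
        }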

    Numerical program optimisation by automatic improvement of the accuracy of computations

    Over the last decade, guaranteeing the accuracy of computations relying on the IEEE754 floating-point arithmetic has become increasingly complex. Failures caused by small or large perturbations due to round-off errors have been reported. To cope with this issue, we have developed a tool which corrects these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, operates on pieces of code with assignments, conditionals and loops. By transforming programs, we can significantly optimize the numerical accuracy of computations by minimizing the error relative to the exact result. In this article, we present two important and desirable side-effects of our transformation. Firstly, we show that our transformed programs, executed in single precision, may compete with the untransformed codes executed in double precision. Secondly, we show that optimizing the numerical accuracy of programs accelerates the convergence of numerical iterative methods. Both of these properties of our transformation are of great interest for numerical software.

    Impact of Accuracy Optimization on the Convergence of Numerical Iterative Methods

    Among other objectives, rewriting programs is a useful technique to improve numerical accuracy. However, this optimization is not intuitive, which is why we turn to automatic transformation techniques. We are interested in the optimization of numerical programs relying on the IEEE754 floating-point arithmetic. In this article, our main contribution is to study the impact of optimizing the numerical accuracy of programs on the time required by numerical iterative methods to converge. To emphasize the usefulness of our tool, we use it to optimize several examples of numerical methods such as Jacobi's method, Newton-Raphson's method, etc. We show that significant speedups are obtained in terms of number of iterations, execution time and floating-point operations.
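
    A hedged sketch of this kind of measurement (a toy 2x2 system, not the article's benchmarks): the same Jacobi iteration is run with two mathematically equivalent formulations of the update, and the number of iterations needed to satisfy the stopping criterion is counted for each.

        #include <stdio.h>
        #include <math.h>

        #define EPS 1e-6f
        #define MAX_ITER 10000

        /* Jacobi iteration for 0.9x + 0.1y = 0.7 and 0.2x + 0.8y = 0.6,
         * update written as (b - off-diagonal term) / diagonal. */
        static int jacobi_v1(void) {
            float x = 0.0f, y = 0.0f;
            for (int k = 1; k <= MAX_ITER; k++) {
                float nx = (0.7f - 0.1f * y) / 0.9f;
                float ny = (0.6f - 0.2f * x) / 0.8f;
                if (fabsf(nx - x) < EPS && fabsf(ny - y) < EPS) return k;
                x = nx; y = ny;
            }
            return MAX_ITER;
        }

        /* same iteration with the division distributed over the terms */
        static int jacobi_v2(void) {
            float x = 0.0f, y = 0.0f;
            for (int k = 1; k <= MAX_ITER; k++) {
                float nx = 0.7f / 0.9f - (0.1f / 0.9f) * y;
                float ny = 0.6f / 0.8f - (0.2f / 0.8f) * x;
                if (fabsf(nx - x) < EPS && fabsf(ny - y) < EPS) return k;
                x = nx; y = ny;
            }
            return MAX_ITER;
        }

        int main(void) {
            printf("formulation 1: %d iterations\n", jacobi_v1());
            printf("formulation 2: %d iterations\n", jacobi_v2());
            return 0;
        }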

    Improving the Numerical Accuracy of High Performance Computing Programs by Process Specialization

    In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, the accumulated round-off errors may introduce hazards. These errors may cause damage whose severity varies depending on how critical the application is. To deal with this issue, we have developed a tool which improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, loops, functions and arrays. In this article, we apply our techniques to optimize a parallel program representative of the high performance computing domain. Parallelism introduces new numerical accuracy problems due to the order of the operations on this kind of system. We are also interested in studying the compromise between execution time and numerical accuracy.
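
    A small C sketch of why the order of operations matters here (illustrative data, not the article's parallel benchmark): the block-wise partial sums mimic the order a reduction over four threads would produce, and both orders are compared against a double-precision reference.

        #include <stdio.h>
        #include <math.h>

        /* A parallel reduction adds the same numbers in a different order than
         * the sequential loop; in floating-point arithmetic the two orders may
         * give different results.  The "parallel" order is simulated here with
         * per-block partial sums combined at the end. */
        int main(void) {
            enum { N = 1 << 20, BLOCKS = 4 };
            static float a[N];
            for (int i = 0; i < N; i++)
                a[i] = (i % 2 == 0) ? 1.0e-3f : 1.0f;   /* mixed magnitudes */

            /* sequential left-to-right sum */
            float seq = 0.0f;
            for (int i = 0; i < N; i++) seq += a[i];

            /* block-wise partial sums, then a final reduction of the partials */
            float partial[BLOCKS] = {0};
            int chunk = N / BLOCKS;
            for (int b = 0; b < BLOCKS; b++)
                for (int i = b * chunk; i < (b + 1) * chunk; i++)
                    partial[b] += a[i];
            float par = 0.0f;
            for (int b = 0; b < BLOCKS; b++) par += partial[b];

            /* double-precision reference to estimate both errors */
            double ref = 0.0;
            for (int i = 0; i < N; i++) ref += (double)a[i];

            printf("sequential order: %.9g (error %.3g)\n", seq, fabs((double)seq - ref));
            printf("blocked order:    %.9g (error %.3g)\n", par, fabs((double)par - ref));
            return 0;
        }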
