
    A Hybrid Denotational Semantics for Hybrid Systems -- Extended Version

    27 pages
    In this article, we present a model and a denotational semantics for hybrid systems made of a continuous and a discrete subsystem. Our model is designed so that it may easily be used to model large, existing, critical embedded applications, which is a first step toward their validation. The discrete subsystem is modeled by a program written in an extension of an imperative language, and the continuous subsystem is modeled by differential equations. We give both subsystems a denotational semantics inspired by what is usually done for the semantics of computer programs, and we then show how the semantics of the whole system is deduced from the semantics of its two components. The semantics of the continuous system is computed as the fix-point of a modified Picard operator which increases the information content at each step. This fix-point is computed as the supremum of a sequence of approximations, and we show, using Keye Martin's measurement theory, that this supremum exists and is the solution of the differential equation. The semantics of the discrete system is given as a classical denotational semantics, except that special denotations are given for the actions of sensors and/or actuators.
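The fix-point construction can be sketched from the classical Picard iteration for an initial value problem; the article's operator is a modified, information-increasing variant, so the formulas below are only the textbook starting point. For the system

```latex
\dot{x}(t) = f\bigl(t, x(t)\bigr), \qquad x(t_0) = x_0,
```

the Picard operator builds successive approximations of the solution:

```latex
F_0(t) = x_0, \qquad
F_{n+1}(t) = x_0 + \int_{t_0}^{t} f\bigl(s, F_n(s)\bigr)\,\mathrm{d}s,
```

whose supremum (limit) is the solution of the differential equation under the usual Lipschitz assumptions.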

    Automatic Source-to-Source Error Compensation of Floating-Point Programs

    Numerical programs with IEEE 754 floating-point computations may suffer from inaccuracies, since finite-precision arithmetic is only an approximation of real arithmetic. Solutions that reduce the loss of accuracy are available, for instance compensated algorithms, or more precise computation with double-double or similar libraries. Our objective is to automatically improve the numerical quality of a numerical program with the smallest possible impact on its performance. We define and implement a source code transformation to derive compensated programs automatically. We present several experimental results comparing the transformed programs with existing solutions. The transformed programs are as accurate and as efficient as the implementations of compensated algorithms, when the latter exist.
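To illustrate what a compensated algorithm buys, here is a minimal sketch of Kahan's compensated summation next to naive summation. This is a standard textbook example, not the transformation implemented by the authors' tool:

```python
from fractions import Fraction

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x           # each addition rounds; errors accumulate
    return s

def kahan_sum(xs):
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c        # re-inject the error lost at the previous step
        t = s + y
        c = (t - s) - y  # rounding error of s + y, recovered exactly
        s = t
    return s

xs = [0.1] * 10000
# exact sum of the binary values actually stored, via rationals
exact = float(sum(Fraction(x) for x in xs))
err_naive = abs(naive_sum(xs) - exact)
err_kahan = abs(kahan_sum(xs) - exact)
```

The compensated version tracks, in `c`, the low-order bits that each rounded addition discards, which is exactly the kind of rewriting a source-to-source compensation tool automates.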

    Mixed Precision Tuning with Salsa

    Precision tuning consists of finding the smallest floating-point formats enabling a program to compute its results within a given accuracy requirement. In mixed precision, this problem has a huge combinatorial search space, since any value may have its own format. Precision tuning has given rise to the development of several tools that aim at guaranteeing a desired precision on the outputs of programs performing floating-point computations, by minimizing the initial, over-estimated, precision of the inputs and intermediate results. In this article, we present an extension of our tool for numerical accuracy, Salsa, which performs precision tuning. Originally, Salsa is a program transformation tool, based on static analysis, that improves the accuracy of floating-point computations. We have extended Salsa with a precision tuning static analysis. We present experimental results showing the efficiency of this new feature, as well as the additional gains obtained by performing Salsa’s program transformation before the precision tuning analysis. We evaluate our tool on a set of programs coming from various domains such as embedded systems and numerical analysis.
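The effect precision tuning exploits can be reproduced with a toy experiment: the same accumulation run with every intermediate stored in binary32 versus binary64. Single precision is simulated here with a round-trip through the C `float` type; this is only an illustration of why per-variable format choice matters, not Salsa's analysis:

```python
import struct

def to_f32(x):
    # Round a binary64 value to binary32 and back, simulating a
    # variable declared in single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 100_000
acc32 = acc64 = 0.0
for _ in range(n):
    acc64 = acc64 + 0.1                    # every intermediate in binary64
    acc32 = to_f32(acc32 + to_f32(0.1))    # every intermediate in binary32

err32 = abs(acc32 - 10000.0)
err64 = abs(acc64 - 10000.0)
```

A precision tuner searches for the cheapest assignment of formats (here, one flag per variable) that still meets the requested error bound on the output.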

    Salsa: An Automatic Tool to Improve the Numerical Accuracy of Programs

    This article describes Salsa, an automatic tool to improve the accuracy of the floating-point computations done in numerical codes. Based on static analysis methods by abstract interpretation, our tool takes as input an original program, applies to it a set of transformation rules, and then generates a transformed program which is more accurate than the initial one. The original and the transformed programs are written in the same imperative language. This article is a concise description of former work on the techniques implemented in Salsa, extended with a presentation of the main software architecture, the inputs and outputs of the tool, as well as experimental results obtained by applying our tool to a set of sample programs coming from embedded systems and numerical analysis.
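A one-line example of the kind of rewriting such a tool performs (illustrative only; Salsa's actual rules are driven by its static analysis): floating-point addition is not associative, so reparsing an expression changes its accuracy.

```python
a, b, c = 1e16, -1e16, 0.5   # 0.5 is exactly representable in binary64

left  = (a + b) + c   # a and b cancel exactly first, then 0.5 is added
right = a + (b + c)   # 0.5 is below half an ulp of 1e16, so b + c rounds back to b
```

Here `left` is the exact result 0.5, while `right` is 0.0: the small term is absorbed by the large one before the cancellation. Choosing the better parsing automatically is the core of the transformation.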

    On the Impact of Numerical Accuracy Optimization on General Performances of Programs

    The floating-point numbers used in computer programs are a finite approximation of real numbers. In practice, this approximation may introduce round-off errors, and these can lead to catastrophic results. In previous work, we have proposed intraprocedural and interprocedural program transformations for numerical accuracy optimization. All these transformations have been implemented in our tool, Salsa. Experimental results on various programs, coming either from embedded systems or from numerical methods, show the efficiency of the transformation in terms of numerical accuracy improvement, but also in terms of other criteria such as execution time and code size. This article studies the impact of program transformations for numerical accuracy, especially in embedded systems, on other efficiency parameters such as execution time, code size, and the accuracy of the other variables (those not chosen for optimization).

    Improving the numerical accuracy of programs by automatic transformation

    The dangers of programs performing floating-point computations are well known. This is due to the sensitivity of the results to the way formulæ are written. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this article, we go one step further by automatically transforming larger pieces of code containing assignments and control structures. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improving, by reparsing, the numerical accuracy of the program results. We use abstract-interpretation-based static analysis techniques to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented concerning classical numerical algorithms and algorithms for embedded systems.
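The "limited formal computation" idea can be illustrated on a textbook case (an illustration of the flavor, not one of the article's actual rules): expanding and simplifying an expression removes a catastrophic cancellation.

```python
x = 1e8
xp = x + 1.0                     # exact: 100000001.0 fits in binary64

naive     = xp * xp - x * x      # subtracts two nearly equal values near 1e16
rewritten = 2.0 * x + 1.0        # formally equal: (x+1)^2 - x^2 = 2x + 1
```

The rewritten form computes 200000001.0 exactly, while the naive form loses the low-order bits of the squares before the subtraction, so the two results differ.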

    Numerical Accuracy Improvement of Programs: Principles and Experiments

    In general, the correctness of the numerical computations of programs based on floating-point arithmetic is not intuitive, and developers hope to compute an accurate result without any guarantee. To address this problem, we proceed by automatic source-to-source transformation of programs to improve their numerical accuracy.

    Numerical Accuracy Improvement by Interprocedural Program Transformation

    Floating-point numbers are used to approximate the exact real numbers in a wide range of domains such as numerical simulation, embedded software, etc. However, floating-point numbers are a finite approximation of real numbers. In practice, this approximation may introduce round-off errors, and these can lead to catastrophic results. To cope with this issue, we have developed a tool which partially corrects these round-off errors and consequently improves the numerical accuracy of computations, by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, and loops. In former work, we focused on the intraprocedural transformation of programs; in this article, we introduce the interprocedural transformation to improve accuracy.
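A toy sketch of why crossing function boundaries helps (the `damp` helper is hypothetical, and this is only the flavor of an interprocedural rewrite, not the article's transformation): once the callee is inlined at its call site, a constant factor can be hoisted out of the loop, replacing n rounded multiplications by a single one.

```python
def damp(x):
    return x * 0.1            # helper defined in another procedure

def before(xs):
    s = 0.0
    for x in xs:
        s += damp(x)          # one rounded multiply and one rounded add per element
    return s

def after(xs):
    # Interprocedural view: inline damp, then hoist the constant factor.
    s = 0.0
    for x in xs:
        s += x                # exact here, since the inputs are small integers
    return 0.1 * s            # a single rounded multiplication

xs = [1.0] * 1000             # ideal result: 100.0
err_before = abs(before(xs) - 100.0)
err_after  = abs(after(xs) - 100.0)
```

An intraprocedural tool cannot see this opportunity because the multiplication is hidden inside `damp`; the interprocedural extension is what makes the factoring visible.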

    Numerical program optimisation by automatic improvement of the accuracy of computations

    Over the last decade, guaranteeing the accuracy of computations relying on the IEEE 754 floating-point arithmetic has become increasingly complex. Failures caused by small or large perturbations due to round-off errors have been reported. To cope with this issue, we have developed a tool which corrects these errors by automatically transforming programs in a source-to-source manner. Our transformation, relying on static analysis by abstract interpretation, operates on pieces of code with assignments, conditionals, and loops. By transforming programs, we can significantly optimize the numerical accuracy of computations by minimizing the error relative to the exact result. In this article, we present two important and desirable side effects of our transformation. Firstly, we show that our transformed programs, executed in single precision, may compete with untransformed codes executed in double precision. Secondly, we show that optimizing the numerical accuracy of programs accelerates the convergence of numerical iterative methods. Both of these properties of our transformation are of great interest for numerical software.

    Impact of Accuracy Optimization on the Convergence of Numerical Iterative Methods

    Among other objectives, rewriting programs serves as a useful technique to improve numerical accuracy. However, this optimization is not intuitive, which is why we turn to automatic transformation techniques. We are interested in the optimization of numerical programs relying on the IEEE 754 floating-point arithmetic. In this article, our main contribution is to study the impact of optimizing the numerical accuracy of programs on the time required by numerical iterative methods to converge. To emphasize the usefulness of our tool, we make it optimize several examples of numerical methods such as Jacobi's method, Newton-Raphson's method, etc. We show that significant speedups are obtained in terms of number of iterations, execution time, and flops.
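The link between accuracy and convergence can be seen on Newton's method for √2: the working accuracy bounds how far the residual can drop, and hence when a stopping test can succeed. The sketch below varies the storage precision (binary32 simulated by a `struct` round-trip) rather than rewriting the iteration body, so it is only a stand-in for the article's transformation:

```python
import struct

def to_f32(x):
    # Round a binary64 value to binary32 and back.
    return struct.unpack('f', struct.pack('f', x))[0]

def newton_sqrt2(steps, single):
    x = 1.0
    for _ in range(steps):
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton step for x^2 - 2 = 0
        if single:
            x = to_f32(x)                    # store the iterate in binary32
    return x

x64 = newton_sqrt2(50, single=False)
x32 = newton_sqrt2(50, single=True)

res64 = abs(x64 * x64 - 2.0)   # residual reaches the binary64 noise floor
res32 = abs(x32 * x32 - 2.0)   # residual stalls at the binary32 noise floor
```

With a stopping test like `res < 1e-12`, the low-accuracy iteration never terminates, while the accurate one does: improving the accuracy of the loop body directly changes when, and whether, convergence is detected.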