225 research outputs found

    Numerical Accuracy Improvement by Interprocedural Program Transformation

    Floating-point numbers are used to approximate exact real numbers in a wide range of domains such as numerical simulations and embedded software. However, floating-point numbers are only a finite approximation of the reals. In practice, this approximation may introduce round-off errors, which can lead to catastrophic results. To cope with this issue, we have developed a tool which partly corrects these round-off errors and consequently improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals and loops. In former work, we focused on the intraprocedural transformation of programs; in this article, we introduce an interprocedural transformation to improve accuracy.
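
    As a minimal illustration of the kind of rewriting such a transformation can perform (a toy example of ours, not taken from the article), reassociating a floating-point sum changes its rounding error:

        #include <stdio.h>

        int main(void) {
            /* With |a| much larger than |b| and |c|, the order of
               evaluation changes the rounding error. */
            double a = 1e16, b = -1e16, c = 1.0;

            double left  = (a + b) + c;  /* cancellation happens first: exact result 1.0 */
            double right = a + (b + c);  /* c is absorbed when added to b: result 0.0 */

            printf("(a + b) + c = %g\n", left);   /* prints 1 */
            printf("a + (b + c) = %g\n", right);  /* prints 0 */
            return 0;
        }

    A transformation tool can pick, among such mathematically equal expressions, the one whose error bound, as computed by the static analysis, is smallest.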

    Salsa: An Automatic Tool to Improve the Numerical Accuracy of Programs

    This article describes Salsa, an automatic tool to improve the accuracy of the floating-point computations done in numerical codes. Based on static analysis methods by abstract interpretation, our tool takes as input an original program, applies a set of transformation rules to it and generates a transformed program which is more accurate than the initial one. The original and the transformed programs are written in the same imperative language. This article is a concise description of former work on the techniques implemented in Salsa, extended with a presentation of the main software architecture, the inputs and outputs of the tool, and experimental results obtained by applying the tool to a set of sample programs coming from embedded systems and numerical analysis.
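
    One classical accuracy-improving rewriting of the kind such transformation rules express (our own sketch; we do not claim it is literally one of Salsa's rules) is putting a polynomial into Horner form:

        #include <stdio.h>

        /* Naive evaluation: each power of x is recomputed and rounded separately. */
        double poly_naive(double x) {
            return 1.0 + 2.0 * x + 3.0 * x * x + 4.0 * x * x * x;
        }

        /* Horner form: same polynomial, fewer operations, usually tighter error. */
        double poly_horner(double x) {
            return 1.0 + x * (2.0 + x * (3.0 + x * 4.0));
        }

        int main(void) {
            double x = 1.0000001e5;
            printf("naive : %.17g\n", poly_naive(x));
            printf("horner: %.17g\n", poly_horner(x));
            return 0;
        }

    Both functions are written in the same imperative language, and the rewriting is purely source-to-source, as in the tool's approach.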

    Improving the Numerical Accuracy of High Performance Computing Programs by Process Specialization

    In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, accumulated round-off errors may be hazardous, causing damage whose gravity depends on how critical the application is. To deal with this issue, we have developed a tool which improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, loops, functions and arrays. In this article, we apply our techniques to optimize a parallel program representative of the high performance computing domain. Parallelism introduces new numerical accuracy problems because it changes the order in which operations are performed. We are also interested in studying the compromise between execution time and numerical accuracy.
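
    The order-of-operations problem mentioned above is easy to reproduce: a reduction gives different results depending on how its terms are accumulated. The sketch below (ours, using Kahan compensated summation as the improved version rather than the article's transformation) contrasts naive accumulation with a compensated one:

        #include <stdio.h>

        /* Naive left-to-right accumulation: small terms are absorbed
           once the running sum becomes large. */
        double sum_naive(const double *v, int n) {
            double s = 0.0;
            for (int i = 0; i < n; i++) s += v[i];
            return s;
        }

        /* Kahan compensated summation: tracks the lost low-order bits. */
        double sum_kahan(const double *v, int n) {
            double s = 0.0, c = 0.0;
            for (int i = 0; i < n; i++) {
                double y = v[i] - c;
                double t = s + y;
                c = (t - s) - y;   /* the rounding error of s + y */
                s = t;
            }
            return s;
        }

        int main(void) {
            enum { N = 1000000 };
            static double v[N];
            v[0] = 1e16;
            for (int i = 1; i < N; i++) v[i] = 1.0;
            /* Exact sum: 1e16 + 999999 */
            printf("naive: %.17g\n", sum_naive(v, N));
            printf("kahan: %.17g\n", sum_kahan(v, N));
            return 0;
        }

    The compensated version costs four floating-point operations per element instead of one, which is exactly the execution-time/accuracy compromise the abstract alludes to.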

    On the Impact of Numerical Accuracy Optimization on General Performances of Programs

    The floating-point numbers used in computer programs are a finite approximation of real numbers. In practice, this approximation may introduce round-off errors, which can lead to catastrophic results. In previous work, we have proposed intraprocedural and interprocedural program transformations for numerical accuracy optimization, all implemented in our tool, Salsa. Experimental results on various programs, coming either from embedded systems or from numerical methods, show the efficiency of the transformation in terms of numerical accuracy improvement, but also in terms of other criteria such as execution time and code size. This article studies the impact of program transformations for numerical accuracy, especially in embedded systems, on other efficiency parameters such as execution time, code size and the accuracy of the other variables (those not chosen for optimization).
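
    As a hand-made illustration of why accuracy-oriented rewriting also affects execution time (this example is ours, not from the article), the two expressions below are mathematically equal but use a different mix of operations and round differently:

        #include <stdio.h>

        int main(void) {
            /* x and y are close: x*x - y*y cancels catastrophically,
               while (x - y)*(x + y) computes the small difference x - y exactly. */
            double x = 1.0 + 1e-8, y = 1.0;

            double expanded = x * x - y * y;      /* 2 multiplications, 1 subtraction */
            double factored = (x - y) * (x + y);  /* 1 mul, 1 sub, 1 add */

            printf("x*x - y*y   = %.17g\n", expanded);
            printf("(x-y)*(x+y) = %.17g\n", factored);
            return 0;
        }

    Which form is faster depends on the relative cost of additions and multiplications on the target, so an accuracy optimization may help or hurt the other criteria, which is what the article measures.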

    Affine Disjunctive Invariant Generation with Farkas' Lemma

    Invariant generation is the classical problem of automatically generating assertions that over-approximate the set of reachable program states. We consider the problem of generating affine invariants over affine while loops (i.e., loops with affine loop guards, conditional branches and assignment statements), and explore the automated generation of disjunctive affine invariants. Disjunctive invariants are an important class of invariants that capture disjunctive features in programs, such as multiple phases or transitions between different modes, and are typically more precise than conjunctive invariants over programs with these features. To generate tight affine invariants, existing constraint-solving approaches have investigated the application of Farkas' lemma to conjunctive affine invariant generation, but none of them considers disjunctive affine invariants.
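
    For context, the affine form of Farkas' lemma that these constraint-solving approaches build on can be stated as follows (standard textbook formulation, not notation taken from the paper). Provided the system $Ax + b \ge 0$ is satisfiable,

        \forall x.\; (A x + b \ge 0 \Rightarrow c^{\top} x + d \ge 0)
        \;\Longleftrightarrow\;
        \exists \lambda \ge 0,\ \lambda_0 \ge 0.\;
        c^{\top} = \lambda^{\top} A \;\wedge\; d = \lambda^{\top} b + \lambda_0

    Invariant generation instantiates this with unknown template coefficients c and d, turning the universally quantified preservation condition for a candidate invariant into an existentially quantified (bilinear) constraint system that a solver can handle.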

    Computing Invariants with Transformers: Experimental Scalability and Accuracy

    Using abstract interpretation, invariants are usually obtained by iteratively solving a system of equations linking preconditions according to program statements. However, it is also possible to first abstract the statements as transformers, and then propagate the preconditions using the transformers. The second approach is modular because procedures and loops can be abstracted once and for all, avoiding an iterative resolution over the call graph and all the control flow graphs. However, the transformer approach based on polyhedral abstract domains incurs two penalties: some invariant accuracy may be lost when computing transformers, and the execution time may increase exponentially because the dimension of a transformer is twice the dimension of a precondition. The purposes of this article are 1) to measure the benefits and drawbacks of the modular approach in terms of execution time and accuracy, using significant examples and ALICe, a newly developed benchmark for loop invariant analysis, 2) to present a new technique designed to reduce the accuracy loss when computing transformers, 3) to evaluate experimentally the accuracy gains that this new technique and other previously discussed ones provide on the ALICe test cases, and 4) to compare the execution times and accuracies of different tools: ASPIC, ISL, PAGAI and PIPS. Our results suggest that the transformer-based approach used in PIPS, once improved with transformer lists, is as accurate as the other tools on the ALICe benchmark. Its modularity nevertheless leads to shorter execution times when dealing with the nested loops and procedure calls found in real applications.
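
    As a toy illustration of the transformer approach (our own example, not a test case from ALICe): for the loop while (i < n) i = i + 2; the body is abstracted once and for all as the transformer

        T = \{ (i, i') \mid i' = i + 2 \},
        \qquad
        T^{*} = \{ (i, i') \mid \exists k \ge 0.\ i' = i + 2k \}

    Applying the reflexive-transitive closure T* to the precondition i = 0 directly yields the invariant that i is even and non-negative, with no fixpoint iteration over the loop. Note that T lives in the doubled (i, i') space, which is the source of the dimension penalty mentioned above.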

    Mira: A Framework for Static Performance Analysis

    The performance model of an application can provide understanding of its runtime behavior on particular hardware. Such information can be analyzed by developers for performance tuning. However, model building and analysis are frequently ignored during software development until performance problems arise, because they require significant expertise and can involve many time-consuming application runs. In this paper, we propose a fast, accurate, flexible and user-friendly tool, Mira, for generating performance models by applying static program analysis, targeting scientific applications running on supercomputers. We parse both the source code and the binary to estimate performance attributes with better accuracy than considering just source or just binary code. Because our analysis is static, the target program does not need to be executed on the target architecture, which enables users to perform the analysis on available machines instead of conducting expensive experiments on the target resources. Moreover, statically generated models enable performance prediction on non-existent or unavailable architectures. In addition to this flexibility, because model generation time is significantly reduced compared to dynamic analysis approaches, our method is suitable for rapid application performance analysis and improvement. We present validation results on several small benchmarks and a mini application to demonstrate the current capabilities of our approach.
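
    A degenerate example of what a statically derived performance model looks like (our own sketch, not Mira's actual output format): for a triple-nested matrix-multiply kernel, the loop bounds give the trip count and the loop body gives the per-iteration cost, so the model can be evaluated for any problem size without ever running the kernel:

        #include <stdio.h>

        /* Cost model read off the source statically: the three loops each
           iterate n times, and the innermost statement performs one
           multiply and one add, hence 2*n^3 flops in total. */
        static double matmul_flops(long n) {
            return 2.0 * (double)n * (double)n * (double)n;
        }

        int main(void) {
            /* Predict flop counts, including for machines we never ran on. */
            for (long n = 256; n <= 4096; n *= 2)
                printf("n = %5ld  predicted flops = %.3e\n", n, matmul_flops(n));
            return 0;
        }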

    Mixed Precision Tuning with Salsa

    Precision tuning consists of finding the smallest floating-point formats enabling a program to compute its results within a given accuracy requirement. In mixed precision, this problem has a huge combinatorial search space, since any value may have its own format. Precision tuning has given rise to several tools that aim at guaranteeing a desired precision on the outputs of programs performing floating-point computations, by minimizing the initial, over-estimated precision of the inputs and intermediate results. In this article, we present an extension of our numerical accuracy tool, Salsa, which performs precision tuning. Originally, Salsa is a program transformation tool based on static analysis which improves the accuracy of floating-point computations. We have extended Salsa with a precision tuning static analysis. We present experimental results showing the efficiency of this new feature, as well as the additional gains obtained by performing Salsa's program transformation before the precision tuning analysis. We evaluate our tool on a set of programs coming from various domains such as embedded systems and numerical analysis.
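
    To see why per-variable format choice matters, the sketch below (ours, illustrating the problem precision tuning addresses rather than Salsa's analysis) accumulates the same series in all-double, all-float, and a mixed configuration with float terms but a double accumulator:

        #include <stdio.h>

        int main(void) {
            double acc_d = 0.0;   /* all-double reference */
            float  acc_f = 0.0f;  /* everything demoted to float */
            double acc_m = 0.0;   /* mixed: float terms, double accumulator */

            for (int i = 1; i <= 100000; i++) {
                double term = 1.0 / (double)i;
                acc_d += term;
                acc_f += (float)term;
                acc_m += (double)(float)term;
            }
            printf("double : %.17g\n", acc_d);
            printf("float  : %.17g\n", (double)acc_f);
            printf("mixed  : %.17g\n", acc_m);
            return 0;
        }

    The mixed configuration keeps most of the double-precision accuracy while halving the storage of the terms, which is the kind of trade-off a precision tuner searches for automatically.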