18 research outputs found

    Abstract Acceleration in Linear relation analysis (extended version)

    Linear relation analysis is a classical abstract interpretation that over-approximates the reachable numerical states of a program by convex polyhedra. Since it works with a lattice of infinite height, it relies on a widening operator to force the convergence of fixed-point computations. Abstract acceleration is a method that computes the precise abstract effect of loops wherever possible and falls back on widening in the general case, improving both the precision and the efficiency of the analysis. This research report gives a comprehensive tutorial on abstract acceleration: its origins in Presburger-based acceleration, including new insights w.r.t. the linear accelerability of linear transformations; methods for simple and nested loops; recent extensions, tools and applications; and a detailed discussion of related methods and future perspectives. This is the long version of a paper under submission.
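    As a concrete illustration of the trade-off, here is a minimal sketch on the interval domain rather than polyhedra (all names and the loop are illustrative, not from the report): a plain widening iteration on the loop while (x <= N) x += d; loses the upper bound, while accelerating the translation computes it exactly.

```python
# Minimal sketch: widening vs. abstract acceleration on the interval domain
# for the loop   while (x <= N) x += d;   with d > 0.  Hypothetical names.

def widen(a, b):
    """Standard interval widening: unstable bounds jump to infinity."""
    lo = a[0] if a[0] <= b[0] else float("-inf")
    hi = a[1] if a[1] >= b[1] else float("inf")
    return (lo, hi)

def loop_body(iv, N, d):
    """Abstract effect of one iteration: intersect with guard x <= N, add d."""
    lo, hi = iv
    hi = min(hi, N)                   # guard
    if lo > hi:
        return None                   # empty: loop not entered
    return (lo + d, hi + d)

def widening_invariant(init, N, d):
    """Kleene iteration with widening: converges, but only to (lo, +inf) here.
    A descending (narrowing) phase could recover precision, but is not
    guaranteed to in general."""
    inv = init
    while True:
        post = loop_body(inv, N, d)
        nxt = inv if post is None else (min(inv[0], post[0]), max(inv[1], post[1]))
        nxt = widen(inv, nxt)
        if nxt == inv:
            return inv
        inv = nxt

def accelerated_invariant(init, N, d):
    """Abstract acceleration of a translation: the states reachable by
    iterating x += d under guard x <= N never exceed N + d."""
    lo, hi = init
    if lo > N:
        return init                   # guard never satisfied
    return (lo, max(hi, N + d))

init = (0, 0)
print(widening_invariant(init, N=100, d=2))     # (0, inf) -- precision lost
print(accelerated_invariant(init, N=100, d=2))  # (0, 102) -- exact
```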

    ALICe: A Framework to Improve Affine Loop Invariant Computation

    A crucial point in program analysis is the computation of loop invariants. Accurate invariants are required to prove properties of a program, but they are difficult to compute. Extensive research has been carried out but, to the best of our knowledge, no benchmark has ever been developed to compare algorithms and tools. We present ALICe, a toolset to compare techniques for the automatic computation of affine loop scalar invariants. It comes with a benchmark of 102 test cases that we collected from the loop-invariant bibliography, and it interfaces with three analysis programs that rely on different techniques: Aspic, ISL and PIPS. Conversion tools are provided to handle the format heterogeneity of these programs. Experimental results show the importance of model coding and the poor performance of PIPS on concurrent loops. To tackle these issues, we use two model restructuring techniques, whose correctness is proved in Coq, and discuss the improvements obtained.

    Computing Invariants with Transformers: Experimental Scalability and Accuracy

    Using abstract interpretation, invariants are usually obtained by iteratively solving a system of equations that links preconditions according to program statements. However, it is also possible to first abstract the statements as transformers, and then propagate the preconditions using the transformers. The second approach is modular because procedures and loops can be abstracted once and for all, avoiding an iterative resolution over the call graph and all the control-flow graphs. However, the transformer approach based on polyhedral abstract domains incurs two penalties: some invariant accuracy may be lost when computing transformers, and the execution time may increase exponentially because the dimension of a transformer is twice the dimension of a precondition. The purposes of this article are 1) to measure the benefits of the modular approach and its drawbacks in terms of execution time and accuracy, using significant examples and ALICe, a newly developed benchmark for loop invariant analysis, 2) to present a new technique designed to reduce the accuracy loss when computing transformers, 3) to evaluate experimentally the accuracy gains that this new technique and other previously discussed ones provide on the ALICe test cases, and 4) to compare the execution times and accuracy of different tools: ASPIC, ISL, PAGAI and PIPS. Our results suggest that the transformer-based approach used in PIPS, once improved with transformer lists, is as accurate as the other tools on the ALICe benchmark. Its modularity nevertheless leads to shorter execution times when dealing with the nested loops and procedure calls found in real applications.
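    The modularity argument can be seen in miniature below: a sketch with one scalar variable and affine maps instead of polyhedral relations over input and output states (all names are illustrative). Each statement is abstracted once as a transformer, transformers compose independently of any precondition, and the resulting summary is then reused for several preconditions.

```python
# Minimal sketch of the transformer approach on one variable: a statement is
# abstracted once as an affine transformer x' = a*x + b.  Real analyses such
# as PIPS use polyhedral relations over (x, x'), hence twice the dimensions;
# this sketch keeps only the idea of modular summaries.
from dataclasses import dataclass

@dataclass(frozen=True)
class Affine:
    a: int
    b: int
    def compose(self, first):            # self after first: x -> self(first(x))
        return Affine(self.a * first.a, self.a * first.b + self.b)
    def apply(self, interval):           # propagate a precondition [lo, hi]
        lo, hi = interval
        return tuple(sorted((self.a * lo + self.b, self.a * hi + self.b)))

# Abstract each statement once...
t1 = Affine(1, 5)      # x = x + 5
t2 = Affine(2, 0)      # x = 2 * x
body = t2.compose(t1)  # whole block: x -> 2 * (x + 5), computed once

# ...then reuse the summary for any number of preconditions / call sites.
for pre in [(0, 0), (0, 10), (-3, 3)]:
    print(pre, "->", body.apply(pre))
```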

    Modular termination of C programs

    In this paper we describe a general method to prove termination of C programs in a scalable and modular way. The program to analyse is first reduced to the smallest relevant subset through a termination-specific slicing technique. Then, the program is divided into pieces of code that are analysed separately, thanks to an external engine for termination. The result is implemented in the prototype \stoptool on top of our previous tool suite WTC [compsys-termination-sas10], and preliminary results show the feasibility of the method.
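    As an illustration of what such an external termination engine may check on one piece of code, here is a minimal sketch assuming a standard ranking-function argument (the abstract does not specify the engine's internals, so this is only one plausible instance): Z3 is asked for a counterexample to the claim that f(x) = x is bounded and strictly decreasing across iterations of the loop while (x > 0) x = x - 1.

```python
# Minimal sketch (hypothetical, not the paper's engine): checking a candidate
# linear ranking function for one loop piece with an SMT solver.
from z3 import Int, Solver, And, Not, Implies, sat

x = Int("x")
x1 = x - 1                        # transition relation of the loop body
guard = x > 0
ranking_ok = And(x >= 0, x1 < x)  # f bounded below and strictly decreasing

s = Solver()
s.add(Not(Implies(guard, ranking_ok)))  # search for a counterexample
print("terminates" if s.check() != sat
      else "no proof with this ranking function")
```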

    Differential cost analysis with simultaneous potentials and anti-potentials

    We present a novel approach to differential cost analysis that, given a program revision, attempts to statically bound the difference in resource usage, or cost, between the two program versions. Differential cost analysis is particularly interesting because of its many compelling applications, such as detecting resource-use regressions at code-review time or proving the absence of certain side-channel vulnerabilities. One prior approach to differential cost analysis is to apply relational reasoning that conceptually constructs a product program on which one can over-approximate the difference in costs between the two program versions. However, a significant challenge in any relational approach is effectively aligning the program versions to get precise results. In this paper, our key insight is that we can avoid the need for, and the limitations of, program alignment if we instead bound the difference of two cost-bound summaries rather than directly bounding the concrete cost difference. In particular, our method computes a threshold value for the maximal difference in cost between two program versions simultaneously using two kinds of cost-bound summaries: a potential function that evaluates to an upper bound on the cost incurred in the first program, and an anti-potential function that evaluates to a lower bound on the cost incurred in the second. Our method has a number of desirable properties: it can be fully automated, it allows optimizing the threshold value on relative cost, it is suitable for programs that are not syntactically similar, and it supports non-determinism. We have evaluated an implementation of our approach on a number of program pairs collected from the literature, and we find that our method computes tight threshold values on relative cost in most examples.
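    A minimal sketch of the idea on a toy revision (the programs, names and bounds are illustrative, not taken from the paper): the potential phi upper-bounds the first version's cost, the anti-potential psi lower-bounds the second version's cost, and their difference is then a sound threshold on the concrete cost difference, with no alignment of the two programs required.

```python
# Minimal sketch: bounding the cost difference of two program versions via a
# potential (upper bound on version 1) and an anti-potential (lower bound on
# version 2), then validating the threshold on concrete inputs.

def cost_v1(n):                    # original version: 2 cost units per iteration
    return sum(2 for _ in range(n))

def cost_v2(n):                    # revision: 1 cost unit per iteration
    return sum(1 for _ in range(n))

phi = lambda n: 2 * n              # potential:      cost_v1(n) <= phi(n)
psi = lambda n: n                  # anti-potential: cost_v2(n) >= psi(n)
threshold = lambda n: phi(n) - psi(n)  # bound on cost_v1(n) - cost_v2(n)

for n in range(6):
    assert cost_v1(n) - cost_v2(n) <= threshold(n)
print("cost difference bounded by phi - psi = n on all tested inputs")
```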

    A Study of Transitive Closure based Loop Transformation Techniques of Affine Integer Relations for Coarse Grain Parallelism

    This thesis focuses on the computation of the transitive closure of affine integer tuple relations and its effect on the run time of the resulting parallelized programs; scalability issues of the computation are also discussed. Automatic parallelizing compilers use different strategies to find statements that can be executed in parallel. Most current approaches, like Pluto [1] and Polly [2], rely on linear or integer linear programming techniques. An emerging alternative is to use transitive closure. Transitive-closure-based methods differ in strategy and complexity from the linear-programming-based ones. Traco [3] is a source-to-source transformation tool that uses transitive closure to find slices of a program that can be executed in parallel. Polly [2] is a branch of LLVM that scans the AST to obtain the independent dimensions of the iteration vector. Both Traco and Polly use OpenMP [4] pragmas to express detected parallelism. We carry out a comparative study of Traco and Polly for extracting coarse-grained parallelism. We suggest important modifications to Polly's dependence-extraction algorithm, and we show limitations of the Traco compiler on several fronts: limits in extracting parallelism, and scalability issues due to its dependence on transitive closure.
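    The core operation can be illustrated with a minimal sketch over a finite explicit relation with hypothetical dependences (Traco actually computes closures of affine integer tuple relations symbolically, with isl): the transitive closure of a dependence relation relates every pair of iterations that must stay ordered, and iterations left unrelated by the closure may run in parallel.

```python
# Minimal sketch: transitive closure R+ of a dependence relation, computed as
# a fixed point over an explicit set of pairs.

def transitive_closure(r):
    closure = set(r)
    while True:
        step = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if step <= closure:
            return closure
        closure |= step

# Each iteration i depends on iteration i + 1 having run first, i.e. {(i, i+1)}:
deps = {(i, i + 1) for i in range(4)}
print(sorted(transitive_closure(deps)))
# -> all (i, j) with i < j <= 4: the chained iterations cannot run in
#    parallel; slices unrelated by the closure could.
```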

    Using Bounded Model Checking to Focus Fixpoint Iterations

    Two classical sources of imprecision in static analysis by abstract interpretation are widening and merge operations. Merge operations can be done away with by distinguishing paths, as in trace partitioning, at the expense of enumerating an exponential number of paths. In this article, we describe how to avoid such systematic exploration by focusing on a single path at a time, designated by SMT solving. Our method combines well with acceleration techniques, thus doing away with widenings as well in some cases. We illustrate it over the well-known domain of convex polyhedra.
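    A minimal sketch of the path-designation step on a toy program (illustrative names; in the article's method such models seed polyhedral fixpoint iterations, which are not reproduced here): the SMT solver is queried per path, only feasible paths are explored, and no merge of the two branches is needed.

```python
# Minimal sketch: using an SMT solver to designate feasible paths through
# `if (x > 0) y = x; else y = -x;` instead of merging both branches.
from z3 import Int, Solver, And, sat

x, y = Int("x"), Int("y")
paths = {
    "then": And(x > 0,  y == x),    # path condition + effect of the then-branch
    "else": And(x <= 0, y == -x),   # path condition + effect of the else-branch
}
precondition = And(x >= -5, x <= 5)

for name, path in paths.items():
    s = Solver()
    s.add(precondition, path)
    if s.check() == sat:
        # In the analysis, this model would seed an iteration along this
        # single path; infeasible paths are never explored.
        print(name, "is feasible, e.g.", s.model())
```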

    Analyse statique des systèmes de contrôle-commande : invariants entiers et flottants (Static analysis of control-command systems: integer and floating-point invariants)

    A critical software system is one whose malfunction may result in death or serious injury to people, loss of or severe damage to equipment, or environmental harm. Software engineering for critical systems is particularly difficult, and combines different methods to ensure the quality of the produced software. Among them, formal methods can be used to prove that a software system obeys its specifications. This thesis falls within the context of the validation of safety properties for critical software and, more specifically, of numerical properties for software embedded in control-command systems. The first part of this thesis deals with Lyapunov stability proofs. These proofs rely on computations with real numbers, and do not accurately describe the behavior of a program run on a platform with machine arithmetic. We introduce a generic, theoretical framework to adapt the arguments of Lyapunov stability proofs to machine arithmetic. A tool automatically translates the proof on real numbers into a proof with floating-point numbers. The second part of the thesis focuses on linear relation analysis, using an abstract interpretation based on the approximation by convex polyhedra of the valuations associated with each control point in a program. We present ALICe, a framework to compare different invariant generation techniques. It comes with a collection of test cases taken from the program analysis literature, and interfaces with three tools that rely on different algorithms to compute invariants: Aspic, iscc and PIPS. To refine the results of PIPS, two code restructuring techniques are introduced, and several improvements are made to the invariant generation algorithms and evaluated using ALICe.
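    The first part's concern can be made concrete with a minimal sketch (hypothetical system matrix; this is the kind of floating-point computation whose soundness the thesis addresses, not the thesis's tool): for a linear system x_{k+1} = A x_k, a quadratic function V(x) = x^T P x proves stability when A^T P A - P is negative definite, but checking that condition in floating point does not by itself constitute a proof valid for machine arithmetic.

```python
# Minimal sketch: the discrete-time Lyapunov decrease condition, checked in
# floating point.  The thesis's point is that such a check over the reals must
# be adapted (e.g. with rounding-error margins) to hold on machine arithmetic.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # hypothetical stable system x_{k+1} = A x_k
P = np.eye(2)                # candidate quadratic Lyapunov function V = x^T P x
Q = A.T @ P @ A - P          # V decreases iff Q is negative definite
eig = np.linalg.eigvalsh(Q)
print("decrease condition holds (in floats):", bool(np.all(eig < 0)))
```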