33 research outputs found

    Generating analyzers with PAG

    To produce high quality code, modern compilers use global optimization algorithms based on abstract interpretation. These algorithms are rather complex; their implementation is therefore a non-trivial and error-prone task. However, since they are based on a common theory, they have large similar parts. We conclude that writing analyzers by hand should be replaced by generating them. We present the tool PAG, which offers a high-level functional input language for specifying data flow analyses. It supports the specification of even recursive data structures and is therefore not limited to bit vector problems. PAG generates efficient analyzers which can easily be integrated into existing compilers. The analyzers are interprocedural and can handle recursive procedures with local variables as well as higher-order functions. PAG has been tested successfully by generating several analyzers (e.g. alias analysis, constant propagation, interval analysis) for an industrial-quality ANSI-C and Fortran90 compiler. This technical report consists of two parts: the first introduces the generation system and the second evaluates generated analyzers with respect to their space and time consumption. Keywords: data flow analysis, specification and generation of analyzers, lattice specification, abstract syntax specification, interprocedural analysis, compiler construction
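    The "large similar parts" that such analyses share are essentially the fixpoint engine: only the lattice, its join, and the per-node transfer functions change from one analysis to the next. The sketch below illustrates that shared core in plain Python rather than in PAG's actual specification language; the solver, the example CFG, and the reaching-definitions instantiation are all illustrative assumptions, not PAG's API.

        # Generic worklist solver: the part that stays the same across analyses.
        # Only the lattice (bottom, join) and the transfer functions are plugged in.
        def solve(cfg, entry, init, bottom, join, transfer):
            """cfg: node -> list of successors; init: value at the entry node;
               transfer: (node, in_value) -> out_value."""
            value = {n: bottom for n in cfg}       # current IN value of each node
            value[entry] = init
            worklist = list(cfg)                   # visit every node at least once
            while worklist:
                node = worklist.pop()
                out = transfer(node, value[node])  # apply this node's transfer function
                for succ in cfg[node]:
                    new = join(value[succ], out)   # merge into the successor's IN value
                    if new != value[succ]:         # re-schedule only on a real change
                        value[succ] = new
                        worklist.append(succ)
            return value

        # Plugged-in analysis: reaching definitions over sets of definition labels.
        cfg   = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
        defs  = {"a": {"x@a"}, "b": {"y@b"}, "c": {"x@c"}}
        kills = {"a": {"x@c"}, "c": {"x@a"}}

        result = solve(
            cfg, "entry", init=set(), bottom=set(),
            join=lambda a, b: a | b,
            transfer=lambda n, v: (v - kills.get(n, set())) | defs.get(n, set()),
        )
        print(result["d"])   # the definitions x@a, y@b and x@c all reach the join point d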

    Generating program analyzers

    In this work the automatic generation of program analyzers from concise specifications is presented. It focuses on provably correct and complex interprocedural analyses for real-world-sized imperative programs. Thus, a powerful and flexible specification mechanism is required, enabling both correctness proofs and efficient implementations. The generation process relies on the theory of data flow analysis and on abstract interpretation. The theory of data flow analysis provides methods to implement analyses efficiently. Abstract interpretation provides the relation to the semantics of the programming language. This allows the systematic derivation of efficient, provably correct, and terminating analyses. The approach has been implemented in the program analyzer generator PAG. It addresses analyses ranging from "simple" intraprocedural bit vector frameworks to complex interprocedural alias analyses. A high-level specialized functional language is used as the specification mechanism, enabling elegant and concise specifications even for complex analyses. Additionally, it allows the automatic selection of efficient implementations for the underlying abstract datatypes, such as balanced binary trees, binary decision diagrams, bit vectors, and arrays. For the interprocedural analysis the functional approach, the call string approach, and a novel approach especially targeting the precise analysis of loops can be chosen. In this work the implementation of PAG as well as a large number of applications of PAG are presented.
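    Of the interprocedural strategies listed above, the call string approach is the easiest to sketch: a procedure is re-analyzed once per distinct suffix of the call stack, truncated to some bound k. The small Python illustration below shows only that context-bookkeeping idea; the constant K, the call-site labels and the summaries table are hypothetical and do not reflect PAG's implementation.

        # Call string approach, illustrated: analysis contexts are call-site
        # strings truncated to the last K entries, and results are tabulated
        # per (procedure, call string). Bookkeeping only; no real analysis here.
        K = 2

        def extend(call_string, call_site, k=K):
            """Append a call site and keep only the most recent k entries."""
            return (call_string + (call_site,))[-k:]

        summaries = {}   # (procedure, call string) -> analysis result, filled on demand

        ctx = ()
        ctx = extend(ctx, "main:3")   # main calls f at line 3
        ctx = extend(ctx, "f:7")      # f calls g at line 7
        ctx = extend(ctx, "g:2")      # g calls h at line 2
        print(ctx)                    # ('f:7', 'g:2') -- only the last K call sites remain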

    Enforcing Termination of Interprocedural Analysis

    Interprocedural analysis by means of partial tabulation of summary functions may not terminate when the same procedure is analyzed for infinitely many abstract calling contexts or when the abstract domain has infinite strictly ascending chains. As a remedy, we present a novel local solver for general abstract equation systems, be they monotonic or not, and prove that this solver fails to terminate only when infinitely many variables are encountered. We clarify in which sense the computed results are sound. Moreover, we show that interprocedural analysis performed by this novel local solver is guaranteed to terminate for all non-recursive programs, irrespective of whether the complete lattice is infinite or has infinite strictly ascending or descending chains.
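    A minimal sketch of the local-solving idea described above, in Python: only the unknowns transitively demanded by the queried one are ever evaluated, and dependencies are recorded as they are read, so evaluation stops once the encountered variables reach a fixpoint. This illustrates partial tabulation in general; it is not the authors' solver, and the soundness and non-monotonicity subtleties the paper addresses are omitted.

        # Demand-driven local fixpoint solver (illustrative). rhs[x] is a function
        # that, given a 'get' callback, computes the value of unknown x; calling
        # get(y) inside it both reads y and records the dependency x -> y.
        def local_solve(rhs, query, bottom, join):
            value = {}                 # values of the unknowns encountered so far
            deps = {}                  # y -> unknowns whose right-hand side read y
            dirty = [query]

            def get(y, reader):
                deps.setdefault(y, set()).add(reader)
                if y not in value:     # newly encountered unknown: schedule it
                    value[y] = bottom
                    dirty.append(y)
                return value[y]

            while dirty:
                x = dirty.pop()
                value.setdefault(x, bottom)
                new = join(value[x], rhs[x](lambda y: get(y, x)))
                if new != value[x]:
                    value[x] = new
                    dirty.extend(deps.get(x, ()))   # re-evaluate whoever read x
            return value[query]

        # Tiny equation system over sets; "c" is never demanded, so never touched.
        system = {
            "a": lambda get: get("b") | {1},
            "b": lambda get: get("a") | {2},
            "c": lambda get: {99},
        }
        print(local_solve(system, "a", bottom=set(), join=lambda x, y: x | y))   # {1, 2}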

    Verifying non-functional real-time properties by static analysis

    Static analyzers based on abstract interpretation are tools aiming at the automatic detection of run-time properties by analyzing the source, assembly or binary code of a program. From Airbus' point of view, the first interesting properties covered by static analyzers available on the market, or as prototypes coming from research, are absence of run-time errors, maximum stack usage and Worst-Case Execution Time (WCET). This paper focuses on the latter two.
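    For the maximum stack usage property mentioned above, a common scheme is to combine per-function frame sizes with a longest-path traversal of the call graph. The Python fragment below is a deliberately simplified sketch of that scheme under strong assumptions (acyclic call graph, no function pointers, no interrupt nesting); the function names and frame sizes are invented and it does not represent Airbus' tooling or any commercial analyzer.

        # Worst-case stack usage as the longest frame-size path through an acyclic
        # call graph. Recursion, function pointers and interrupts are assumed away.
        from functools import lru_cache

        frame_size = {"main": 64, "read_sensor": 32, "filter": 48, "log": 16}
        calls = {
            "main": ["read_sensor", "log"],
            "read_sensor": ["filter"],
            "filter": [],
            "log": [],
        }

        @lru_cache(maxsize=None)
        def max_stack(fn):
            """Own frame plus the deepest stack needed by any callee."""
            deepest = max((max_stack(callee) for callee in calls[fn]), default=0)
            return frame_size[fn] + deepest

        print(max_stack("main"))   # 64 + (32 + 48) = 144 bytes on the worst path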

    A Domain-Specific Language for Generating Dataflow Analyzers

    Dataflow analysis is a well-understood and very powerful technique for analyzing programs as part of the compilation process. Virtually all compilers use some sort of dataflow analysis as part of their optimization phase. However, despite being well understood theoretically, such analyses are often difficult to code, making it hard to experiment quickly with variants. To address this, we developed a domain-specific language, Analyzer Generator (AG), that synthesizes dataflow analysis phases for Microsoft's Phoenix compiler framework. AG hides the fussy details needed to make analyses modular, yet generates code that is as efficient as the hand-coded equivalent. One key construct we introduce allows IR object classes to be extended without recompiling. Experimental results on three analyses show that AG code can be one-tenth the size of the equivalent handwritten C++ code with no loss of performance. It is our hope that AG will make developing new dataflow analyses much easier.
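    The idea of extending IR object classes without recompiling can be illustrated, very loosely, by keeping analysis-specific attributes in a side table outside the node classes themselves. The Python sketch below shows only that general pattern; it is not AG's construct nor Phoenix's API, and all class and attribute names are made up.

        # Analysis-specific data attached to IR nodes without touching their classes:
        # each analysis keeps its own side table keyed by node identity.
        class IRNode:                        # stand-in for a framework-owned IR class
            def __init__(self, opcode):
                self.opcode = opcode

        class SideTable:
            """Per-analysis attributes for IR nodes, stored outside the node class."""
            def __init__(self, default):
                self._data, self._default = {}, default
            def get(self, node):
                return self._data.get(id(node), self._default)
            def set(self, node, value):
                self._data[id(node)] = value

        live_out = SideTable(default=frozenset())   # e.g. liveness facts per node
        n = IRNode("add")
        live_out.set(n, frozenset({"x", "y"}))
        print(live_out.get(n))                      # frozenset({'x', 'y'})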

    Timing model derivation : static analysis of hardware description languages

    Safety-critical hard real-time systems are subject to strict timing constraints. In order to derive guarantees on the timing behavior, the worst-case execution time (WCET) of each task comprising the system has to be known. The aiT tool has been developed for computing safe upper bounds on the WCET of a task. Its computation is mainly based on abstract interpretation of timing models of the processor and its periphery. These models are currently hand-crafted by human experts, which is a time-consuming and error-prone process. Modern processors are automatically synthesized from formal hardware specifications, and besides the processor's functional behavior these descriptions also cover timing aspects. This thesis describes a methodology to derive sound timing models from such hardware specifications. To ease the process of timing model derivation, the methodology is embedded into a sound framework. Key parts of this framework are static analyses on hardware specifications. The thesis presents an analysis framework built on the theory of abstract interpretation that allows classical program analyses to be applied to hardware description languages. Its suitability for automating parts of the derivation methodology is shown by different analyses. Practical experiments demonstrate the applicability of the approach for deriving timing models, and the soundness of the analyses and of their results is proved.
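    To give a flavor of what running classical program analyses on hardware descriptions can look like, the sketch below applies ordinary constant propagation to a toy netlist-style description: signals are either a known bit or TOP (statically unknown). This is a hypothetical illustration in Python, not the thesis' analysis framework, and it is not tied to any real hardware description language.

        # Constant propagation over a tiny, acyclic netlist-style description.
        # Abstract values: 0, 1, or TOP ("unknown at analysis time").
        TOP = "T"

        def a_not(a):
            return TOP if a == TOP else 1 - a

        def a_and(a, b):
            if a == 0 or b == 0:
                return 0                      # 0 AND anything is 0, even if unknown
            return TOP if TOP in (a, b) else a & b

        inputs = {"en": 0, "d": TOP}          # primary inputs; 'd' is unknown
        gates = {"nd": ("not", ["d"]), "q": ("and", ["en", "nd"])}

        def eval_signal(name, env):
            if name not in env:
                op, args = gates[name]
                vals = [eval_signal(a, env) for a in args]
                env[name] = a_not(*vals) if op == "not" else a_and(*vals)
            return env[name]

        env = dict(inputs)
        for signal in gates:
            eval_signal(signal, env)
        print(env)   # 'q' is the constant 0 although input 'd' is unknown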

    Function pointer analysis for C programs

    Function pointers are a feature of the C programming language whose use obscures the control flow of a program and makes programs hard to analyze. Existing pointer analyses are able to resolve function pointers, but lack the capability to precisely distinguish function pointer variables within complex data structures. The aim of this work is to develop a function pointer analysis which achieves this precision. It thereby allows a more precise analysis of programs that make intensive use of function pointers, as is quite common in automotive software.
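    The precision gap described above can be shown with a toy example: when function pointers live in struct fields, a field-insensitive analysis merges all fields of an object and over-approximates the possible callees at each call site, whereas a field-sensitive one keeps them apart. The Python fragment below is purely illustrative (invented struct, fields and callbacks) and is not the analysis developed in this work.

        # Resolving function pointers stored in struct fields, with and without
        # field sensitivity. Assignments record which callback each field may hold.
        assignments = [                      # (object, field, assigned function)
            ("driver", "init", "can_init"),
            ("driver", "send", "can_send"),
        ]

        def resolve(obj, field, field_sensitive=True):
            if field_sensitive:
                return {f for (o, fld, f) in assignments if (o, fld) == (obj, field)}
            return {f for (o, _fld, f) in assignments if o == obj}   # fields merged

        print(resolve("driver", "send", field_sensitive=True))    # {'can_send'}
        print(resolve("driver", "send", field_sensitive=False))   # both callbacks reported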

    A demand-driven solver for constraint-based control flow analysis

    This thesis develops a demand-driven solver for constraint-based control flow analysis. Our approach is modular, flow-sensitive and scalable. It allows the interprocedural control flow graph (ICFG) for object-oriented languages to be constructed efficiently. The analysis is based on the formal semantics of a Java-like language and is proven correct with respect to this semantics. The base algorithms are given and we evaluate the applicability of our approach to real-world programs. Construction of the ICFG is a key problem for the translation and optimization of object-oriented languages. The more accurate these graphs are, the more applicable, precise and faster the analyses built on them become. While most present techniques are flow-insensitive, we present a flow-sensitive approach that is scalable. The analysis result is twofold. On the one hand, it allows uncallable methods to be identified and deleted, thus minimizing the program's footprint. This is especially important in the setting of embedded systems, where memory resources are usually quite expensive. On the other hand, the interprocedural control flow graph that is generated is much more precise than those generated with present techniques, which allows for increased accuracy when performing data flow analyses. This aspect, too, is important for embedded systems, as more precise analyses allow the compiler to apply better optimizations, resulting in smaller and/or faster programs. Experimental results are given that demonstrate the applicability and scalability of the analysis.
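    As background for the precision claims above, the sketch below shows the crude baseline that flow-sensitive, constraint-based analyses improve on: resolving virtual calls against the class hierarchy and the set of instantiated classes (in the style of rapid type analysis), then pruning methods that are never reachable. It is written in Python with invented classes and methods and is not the solver developed in this thesis.

        # Baseline call graph construction for an object-oriented program: virtual
        # calls are resolved to overriders in instantiated subtypes, and methods
        # unreachable from the entry point can be pruned.
        subclasses = {"Shape": ["Circle", "Rect"], "Circle": [], "Rect": []}
        methods = {                       # (class, method) -> virtual call sites in its body
            ("Main", "run"):    [("Shape", "area")],
            ("Circle", "area"): [],
            ("Rect", "area"):   [],
            ("Rect", "debug"):  [("Shape", "area")],
        }
        instantiated = {"Circle"}         # only Circle objects are ever created

        def all_subtypes(cls):
            out = {cls}
            for sub in subclasses.get(cls, []):
                out |= all_subtypes(sub)
            return out

        def reachable(entry):
            seen, work = set(), [entry]
            while work:
                m = work.pop()
                if m in seen:
                    continue
                seen.add(m)
                for recv_cls, name in methods.get(m, []):
                    work.extend((c, name) for c in all_subtypes(recv_cls) & instantiated
                                if (c, name) in methods)
            return seen

        dead = set(methods) - reachable(("Main", "run"))
        print(sorted(dead))   # [('Rect', 'area'), ('Rect', 'debug')] can be removed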

    Applying compiler technology to solve generic

    Compilers are tools that transform high-level programming languages into assembly or binary code. The essence of the process lies in the interpretation and code generation steps, but nowadays most compilers also have a strong code optimization component that exploits as far as possible the potential of the computer architectures for which the compiler generates code. These optimizations are based on information provided by several analysis processes. This paper presents some of these code analyses and optimizations, and shows how they can be used to solve problems or improve the quality of solutions in areas such as industrial engineering and planning.