    A Model-Derivation Framework for Software Analysis

    Model-based verification makes it possible to express behavioral correctness conditions, such as the validity of execution states, bounds on variables, or timing, at a high level of abstraction and to check that they are satisfied by a software system. However, this requires expressive models, which are difficult and cumbersome to create and maintain by hand. This paper presents a framework that automatically derives behavioral models from real-sized Java programs. Our framework builds on the EMF/ECore technology and provides a tool that creates an initial model from Java bytecode, as well as a series of transformations that simplify the model and eventually output a timed-automata model that can be processed by a model checker such as UPPAAL. The framework has the following properties: (1) consistency of models with software, (2) extensibility of the model derivation process, (3) scalability, and (4) expressiveness of models. We report several case studies to validate how our framework satisfies these properties. (Comment: In Proceedings MARS 2017, arXiv:1703.0581)
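
    As a rough, hedged illustration of the kind of output such a pipeline targets, the sketch below builds a tiny timed-automaton-like model in plain Java and prints it in a UPPAAL-flavoured textual form. The TinyTimedAutomaton class, its location names, and the guard strings are hypothetical stand-ins for the EMF/ECore models and the UPPAAL XML that the framework itself produces.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical, minimal stand-in for the timed-automata model derived from
        // bytecode; the real framework works with EMF/ECore models and UPPAAL XML.
        public class TinyTimedAutomaton {
            record Transition(String from, String to, String guard, String reset) {}

            private final List<String> locations = new ArrayList<>();
            private final List<Transition> transitions = new ArrayList<>();

            void addLocation(String name) { locations.add(name); }

            void addTransition(String from, String to, String guard, String reset) {
                transitions.add(new Transition(from, to, guard, reset));
            }

            // Print a UPPAAL-flavoured textual summary (not actual UPPAAL XML).
            void dump() {
                System.out.println("locations: " + String.join(", ", locations));
                for (Transition t : transitions) {
                    System.out.printf("%s -> %s [guard: %s, reset: %s]%n",
                            t.from(), t.to(), t.guard(), t.reset());
                }
            }

            public static void main(String[] args) {
                // Model a method entry, a loop body bounded by clock x, and the exit;
                // a model checker would then verify queries such as "A[] not deadlock".
                TinyTimedAutomaton a = new TinyTimedAutomaton();
                a.addLocation("entry");
                a.addLocation("loopBody");
                a.addLocation("exit");
                a.addTransition("entry", "loopBody", "true", "x := 0");
                a.addTransition("loopBody", "loopBody", "x <= 10", "x := 0");
                a.addTransition("loopBody", "exit", "x > 10", "-");
                a.dump();
            }
        }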

    Normalisation of Loops with Covariant Variables

    Temporal property verification is essential to ensure the safety of critical real-time systems. A main component of this verification is the computation of the Worst Case Execution Time (WCET), which requires, in turn, the determination of loop bounds. Although a lot of effort has been devoted to this domain, relatively common cases remain unsolved. For example, to our knowledge, no fast automatic method can compute the loop bound of a simple binary search look-up. In this paper, we present an approach that solves such loops by using arithmetico-geometric series, that is, loops with arithmetic and/or geometric incrementation over several variables. We have implemented and experimented with this approach in our tool oRange.
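
    For concreteness, a binary-search-style loop of the kind mentioned above is sketched below (illustrative only, not taken from oRange). The search window is at least halved on every iteration, so the loop executes at most floor(log2(n)) + 1 times for an array of length n, which is exactly the kind of bound that arithmetico-geometric reasoning is meant to recover automatically.

        // Illustrative binary search: the window size (hi - lo + 1) is at least
        // halved each iteration, so the loop runs at most floor(log2(n)) + 1 times.
        // This geometric decrease is what an arithmetico-geometric bound captures.
        public class BinarySearchBound {
            static int search(int[] a, int key) {
                int lo = 0, hi = a.length - 1;
                while (lo <= hi) {                  // window: hi - lo + 1 elements
                    int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi) / 2
                    if (a[mid] == key) return mid;
                    if (a[mid] < key) lo = mid + 1; // window shrinks to at most half
                    else hi = mid - 1;
                }
                return -1;
            }

            public static void main(String[] args) {
                int[] a = {1, 3, 5, 7, 9, 11};
                // Returns 3; takes 3 iterations, matching floor(log2(6)) + 1 = 3.
                System.out.println(search(a, 7));
            }
        }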

    Symbolic and analytic techniques for resource analysis of Java bytecode

    Recent work in resource analysis has translated the idea of amortised resource analysis to imperative languages, using a program logic that allows mixing assertions about heap shapes, in the tradition of separation logic, with assertions about consumable resources. Separately, polyhedral methods have been used to calculate bounds on the number of iterations of loop-based programs. We are attempting to combine these ideas to deal with Java programs involving both data structures and loops, focusing on the bytecode level rather than on source code.
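
    As a hedged illustration of the program shape this combination targets, the snippet below mixes a data-structure traversal with a counted loop: a separation-logic-style argument bounds the outer traversal by the length of an acyclic list, a polyhedral argument bounds the inner counted loop, and composing the two yields the overall resource bound. The Node class and the process method are hypothetical.

        // Hypothetical example of the program shape the combined analysis targets:
        // an outer traversal bounded by the length of an acyclic list (a heap-shape
        // fact in the style of separation logic) and an inner loop bounded by a
        // linear (polyhedral) relation on integer variables.
        public class ListAndLoop {
            static final class Node {
                int value;
                Node next;
                Node(int value, Node next) { this.value = value; this.next = next; }
            }

            // Total work: O(length(list) * k), assuming the list is acyclic.
            static int process(Node list, int k) {
                int work = 0;
                for (Node n = list; n != null; n = n.next) {   // bounded by list length
                    for (int i = 0; i < k; i++) {              // bounded by k
                        work += n.value;                       // stands in for a resource use
                    }
                }
                return work;
            }

            public static void main(String[] args) {
                Node list = new Node(1, new Node(2, new Node(3, null)));
                System.out.println(process(list, 4)); // (1 + 2 + 3) * 4 = 24
            }
        }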

    Improving WCET Evaluation using Linear Relation Analysis

    The precision of a worst-case execution time (WCET) evaluation tool on a given program depends heavily on how well the tool detects and discards semantically infeasible executions of the program. In this paper, we propose to use the classical abstract-interpretation-based method of linear relation analysis to discover and exploit relations between execution paths. For this purpose, we add auxiliary variables (counters) to the program to trace its execution paths. The results are easily incorporated into the classical workflow of a WCET evaluator when the evaluator is based on the popular implicit path enumeration technique (IPET). We use existing tools (a WCET evaluator and a linear relation analyzer) to build and experiment with a prototype implementation of this idea. This work is supported by the French National Research Agency (ANR) as part of the W-SEPT project (ANR-12-INSE-0001).
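
    A minimal sketch of the counter idea, under stated assumptions and not taken from the paper's toolchain: auxiliary counters are added on the branches of interest, a linear relation analysis over the instrumented program can then infer an invariant such as c1 + c2 == n and c2 <= 1, and that relation becomes an extra constraint on basic-block execution counts in the IPET integer linear program.

        // Illustrative only: auxiliary counters c1 and c2 trace which branch is taken.
        // A linear relation analysis over this instrumented program can infer, e.g.,
        // c1 + c2 == n and c2 <= 1 (the expensive branch runs at most once), and that
        // invariant becomes an extra ILP constraint in an IPET-based WCET computation.
        public class CounterInstrumentation {
            static int c1 = 0, c2 = 0;   // auxiliary path counters (instrumentation only)

            static void compute(int[] data) {
                boolean initialised = false;
                for (int x : data) {             // executes n = data.length times
                    if (initialised) {
                        c1++;                    // cheap branch
                        // fast update path
                    } else {
                        c2++;                    // expensive branch, feasible at most once
                        initialised = true;
                        // slow initialisation path
                    }
                }
            }

            public static void main(String[] args) {
                compute(new int[]{4, 8, 15, 16, 23, 42});
                System.out.println("c1 = " + c1 + ", c2 = " + c2); // c1 = 5, c2 = 1
            }
        }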

    On Accelerating Source Code Analysis At Massive Scale

    Encouraged by the success of data-driven software engineering (SE) techniques that have found numerous applications, e.g., in defect prediction and specification inference, the demand for mining and analyzing source code repositories at scale has increased significantly. However, analyzing source code at scale remains so expensive that data-driven solutions to certain SE problems are beyond our reach today. Extant techniques have focused on leveraging distributed computing to solve this problem, but with a concomitant increase in computational resource needs. This work proposes a technique that reduces the amount of computation performed by ultra-large-scale source code mining tasks. Our key idea is to analyze the mining task to identify and remove the irrelevant portions of the source code before running the mining task. We show a realization of our insight for mining and analyzing massive collections of control flow graphs (CFGs) of source code. Our evaluation, using 16 classical control-/data-flow analyses that are typical components of mining tasks and 7 million CFGs, shows that our technique achieves on average a 40% reduction in task computation time. Our case studies demonstrate the applicability of our technique to massive-scale source code mining tasks.
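
    A rough sketch of the pruning idea under simplifying assumptions: if a mining task only inspects nodes of certain kinds (say, branch and call nodes), all other nodes can be dropped before the task runs. The CfgNode representation and the relevance predicate are hypothetical and far simpler than the paper's technique, which compacts the graph while preserving control-flow structure.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Predicate;

        // Hypothetical sketch: drop CFG nodes that a given mining task never inspects.
        // The graph is simplified to a node list here; the real technique keeps the
        // control-flow structure intact while removing irrelevant parts.
        public class CfgPruning {
            record CfgNode(int id, String kind) {}   // kind: "branch", "call", "assign", ...

            static List<CfgNode> prune(List<CfgNode> cfg, Predicate<CfgNode> relevant) {
                List<CfgNode> reduced = new ArrayList<>();
                for (CfgNode n : cfg) {
                    if (relevant.test(n)) reduced.add(n);   // keep only what the task reads
                }
                return reduced;
            }

            public static void main(String[] args) {
                List<CfgNode> cfg = List.of(
                        new CfgNode(0, "assign"), new CfgNode(1, "branch"),
                        new CfgNode(2, "call"),   new CfgNode(3, "assign"));
                // A call-site mining task only looks at branch and call nodes.
                List<CfgNode> reduced =
                        prune(cfg, n -> n.kind().equals("branch") || n.kind().equals("call"));
                System.out.println(reduced.size() + " of " + cfg.size() + " nodes kept");
            }
        }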

    Novel solution for compiler infrastructure for embedded processors

    This PhD thesis describes and analyses an approach to the development of a C language compiler for embedded processors. Such a compiler requires the implementation of special techniques and algorithms, mostly specific to irregular processor architectures, in order to generate efficient code, while still meeting industrial-strength standards of robustness, code understandability, maintainability, and extensibility. For this purpose, a new compiler infrastructure is proposed, and on top of it a compiler for the Cirrus Logic Coyote 32 DSP is built. The quality of the code generated by this compiler is compared with that of the code generated by the previous compiler for the same processor architecture. Some elements of the compiler design are also compared with the popular open-source compilers GCC and LLVM.

    Human-centric verification for software safety and security

    Software forms a critical part of our lives today. Verifying software to avoid violations of safety and security properties is a necessary task. It is also imperative to have assurance that the verification process was correct. We propose a human-centric approach to software verification, which involves enabling human-machine collaboration to detect vulnerabilities and to prove the correctness of the verification. We discuss two classes of vulnerabilities. The first class is Algorithmic Complexity Vulnerabilities (ACVs). ACVs are a class of software security vulnerabilities that cause denial-of-service attacks. The description of an ACV is not known a priori, so the problem is equivalent to searching for a needle in a haystack when we don't know what the needle looks like. We present a novel approach to detect ACVs in web applications, and we present a case study audit from DARPA's Space/Time Analysis for Cybersecurity (STAC) program to illustrate the approach. The second class of vulnerabilities is memory leaks. Although the description of the memory leak (ML) problem is known, a proof of the correctness of the verification is needed to establish trust in the results. We present an approach, inspired by the work of Alan Perlis, to compute evidence of the verification that can be scrutinized by a human to prove the correctness of the verification. We present a novel abstraction, the Evidence Graph, that succinctly captures the verification evidence, and we show how to compute it. We evaluate our approach against ML instances in the Linux kernel and report an improvement over state-of-the-art results. We also present two case studies to illustrate how the Evidence Graph can be used to prove the correctness of the verification.
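
    For illustration only (not taken from the STAC case study), the snippet below shows a textbook algorithmic complexity pattern of the kind such an audit looks for: repeated string concatenation over attacker-controlled input makes the handler's work grow quadratically with the request size, which an attacker can exploit for denial of service.

        // Illustrative algorithmic-complexity pattern (not from the STAC audit):
        // the loop copies the partial result on every append, so the total work
        // grows quadratically in the length of the attacker-controlled input.
        public class QuadraticConcat {
            static String render(String[] userSuppliedLines) {
                String page = "";
                for (String line : userSuppliedLines) {
                    page = page + line + "\n";   // O(|page|) copy on each iteration
                }
                return page;                      // total work: O(n^2) in input size
            }

            // Using a StringBuilder keeps the work linear and removes the vulnerability.
            static String renderLinear(String[] userSuppliedLines) {
                StringBuilder page = new StringBuilder();
                for (String line : userSuppliedLines) {
                    page.append(line).append('\n');
                }
                return page.toString();
            }

            public static void main(String[] args) {
                String[] lines = {"a", "b", "c"};
                System.out.println(render(lines).equals(renderLinear(lines))); // true
            }
        }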

    Collective program analysis

    Encouraged by the success of data-driven software engineering (SE) techniques that have found numerous applications, e.g., in defect prediction and specification inference, the demand for mining and analyzing source code repositories at scale has increased significantly. However, analyzing source code at scale remains so expensive that data-driven solutions to certain SE problems are beyond our reach today. Extant techniques have focused on leveraging distributed computing to solve this problem, but with a concomitant increase in computational resource needs. In this thesis, we propose collective program analysis (CPA), a technique to accelerate ultra-large-scale source code mining without demanding more computational resources, by utilizing the similarity between millions of source code artifacts. First, we describe the general concept of collective program analysis: given a mining task that must be run on thousands of artifacts, the artifacts with similar interactions are clustered together, so that the mining task needs to be run on only one candidate from each cluster, and the results for the other candidates in the same cluster can be produced by extrapolation. The two technical innovations of collective program analysis are mining-task-specific similarity and the interaction pattern graph. Mining-task-specific similarity captures whether two or more artifacts can be considered similar for a given mining task. An interaction pattern graph represents the interaction between the mining task and an artifact when the task is run on that artifact, and it is used to determine mining-task-specific similarity between artifacts. Producing an interaction pattern graph soundly and efficiently for a given mining task and artifact can be very challenging; we propose a pre-analysis and program compaction technique to achieve this. Given a source code mining task and thousands of input programs on which it needs to be run, our technique first extracts information about which parts of an input program are relevant for the mining task and then removes the irrelevant parts from the input programs before running the mining task on them. Our key technical contributions are a static analysis that extracts information about the parts of a program that are relevant for a mining task, and a sound program compaction technique that produces a reduced program on which the mining task has output similar to that on the original program. Once interaction pattern graphs have been produced for thousands of artifacts, they have to be clustered, and the mining task results have to be reused between similar artifacts to achieve acceleration. In the final part of this thesis, we describe collective program analysis in full and illustrate it by mining millions of control flow graphs (CFGs) and clustering similar CFGs.
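
    A minimal sketch of the reuse scheme, with a deliberately crude stand-in for the interaction pattern graph: each artifact is reduced to a task-specific fingerprint, the mining task runs once per distinct fingerprint, and the cached result is reused for every other artifact in the same cluster. The runCollectively method, the fingerprint, and the toy task are hypothetical placeholders.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Function;

        // Sketch of the reuse scheme: artifacts whose task-specific fingerprints match
        // fall into the same cluster, the mining task runs once per cluster, and the
        // result is extrapolated to the rest. The fingerprint is a crude stand-in for
        // the interaction pattern graph described in the thesis.
        public class CollectiveAnalysis {
            static <A, K, R> Map<A, R> runCollectively(List<A> artifacts,
                                                       Function<A, K> fingerprint,
                                                       Function<A, R> miningTask) {
                Map<K, R> cache = new HashMap<>();   // one task run per cluster
                Map<A, R> results = new HashMap<>();
                for (A artifact : artifacts) {
                    K key = fingerprint.apply(artifact);
                    R result = cache.computeIfAbsent(key, k -> miningTask.apply(artifact));
                    results.put(artifact, result);   // reuse within the cluster
                }
                return results;
            }

            public static void main(String[] args) {
                // Toy "artifacts": strings standing in for CFGs; '?' marks a branch.
                List<String> cfgs = List.of("a?b", "a?c", "x??y");
                Map<String, String> out = runCollectively(
                        cfgs,
                        cfg -> cfg.chars().filter(c -> c == '?').count(),               // fingerprint
                        cfg -> "branches=" + cfg.chars().filter(c -> c == '?').count()); // mining task
                System.out.println(out);  // the task body runs once per cluster (twice here)
            }
        }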