15 research outputs found

    Analyzing the Implicit Computational Complexity of object-oriented programs

    A sup-interpretation is a tool that provides upper bounds on the size of the values computed by the function symbols of a program. Sup-interpretations have proven useful for dealing with the complexity of first-order functional programs. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of object-oriented programs, including loop and while constructs and methods with side effects. We give a criterion, called the brotherly criterion, which uses the notion of sup-interpretation to ensure that each brotherly program computes objects whose size is polynomially bounded by the sizes of the inputs. Moreover, we give some heuristics for computing the sup-interpretation of a given method.
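
    As a rough illustration of the underlying idea, the toy Python sketch below (not the paper's formal machinery; the `append` method and the size measure are hypothetical) assigns to a method a monotone polynomial bounding the size of its result in terms of the sizes of its arguments, which is the role a sup-interpretation plays.

```python
# Toy illustration of a sup-interpretation: a monotone polynomial bound on the
# size of a method's result in terms of the sizes of its arguments.

def size_of_list(xs):
    """Size measure used here: the number of elements."""
    return len(xs)

def append(xs, ys):
    """Hypothetical method whose output size we want to bound."""
    return xs + ys

def theta_append(x_size, y_size):
    """Candidate sup-interpretation for append: theta(append)(x, y) = x + y."""
    return x_size + y_size

# Spot-check the bound on a few inputs; a real criterion is established
# symbolically per method, not by testing.
for xs, ys in [([], [1]), ([1, 2], [3]), ([1] * 5, [2] * 7)]:
    assert size_of_list(append(xs, ys)) <= theta_append(size_of_list(xs), size_of_list(ys))
print("candidate sup-interpretation holds on the sampled inputs")
```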

    Expression-based aliasing for OO-languages

    Alias analysis has been an interesting research topic in the verification and optimization of programs. The undecidability of determining whether two expressions in a program may reference the same object is the main source of the challenges raised in alias analysis. In this paper we propose an extension of a previously introduced alias calculus, based on program expressions, to the setting of unbounded program executions such as infinite loops and recursive calls. Moreover, we devise a corresponding executable specification in the K-framework. An important property of our extension is that, in a non-concurrent setting, the corresponding alias expressions can be over-approximated in terms of a notion of regular expressions. This further enables us to show that the associated K-machinery implements an algorithm that always terminates and provides a sound over-approximation of the "may aliasing" information, where soundness stands for the absence of false negatives. As a case study, we analyze the integration and further applications of the alias calculus in SCOOP, an object-oriented programming model for concurrency recently formalized in Maude; K-definitions can be compiled into Maude for execution.
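
    The calculus in the paper works directly on program expressions; as a much simpler stand-in, the Python sketch below uses points-to sets over abstract allocation sites to illustrate what a sound "may alias" answer (no false negatives) looks like for plain variable copies. All variable and site names are made up.

```python
# Simplified points-to-style stand-in for a may-alias analysis: every variable
# is mapped to the set of abstract allocation sites it may point to.

def alloc(pts, x, site):
    """x := new  --  x may point to the given allocation site."""
    pts.setdefault(x, set()).add(site)

def copy(pts, x, y):
    """x := y  --  x may point to everything y may point to (weak update)."""
    pts.setdefault(x, set()).update(pts.get(y, set()))

def may_alias(pts, x, y):
    """Two variables may alias if their points-to sets intersect."""
    return bool(pts.get(x, set()) & pts.get(y, set()))

pts = {}
alloc(pts, 'v', 'site1')   # v := new
copy(pts, 'u', 'v')        # u := v
copy(pts, 'w', 'u')        # w := u
alloc(pts, 'z', 'site2')   # z := new
print(may_alias(pts, 'w', 'v'))  # True: w may reach the object allocated for v
print(may_alias(pts, 'w', 'z'))  # False: their points-to sets are disjoint
```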

    Thread-Modular Static Analysis for Relaxed Memory Models

    We propose a memory-model-aware static program analysis method for accurately analyzing the behavior of concurrent software running on processors with weak consistency models such as x86-TSO, SPARC-PSO, and SPARC-RMO. At the center of our method is a unified framework for deciding the feasibility of inter-thread interferences, so as to avoid propagating spurious data flows during static analysis and thus boost the performance of the static analyzer. We formulate the checking of interference feasibility as a set of Datalog rules which are both efficiently solvable and general enough to capture a range of hardware-level memory models. Compared to existing techniques, our method can significantly reduce the number of bogus alarms as well as unsound proofs. We implemented the method and evaluated it on a large set of multithreaded C programs. Our experiments show that the method significantly outperforms state-of-the-art techniques in terms of accuracy with only moderate run-time overhead. Comment: revised version of the ESEC/FSE 2017 paper.
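
    As a hypothetical illustration of phrasing a memory-model constraint as a Datalog rule (this is not one of the paper's actual rules), the snippet below naively evaluates a single rule stating that, under TSO-like store buffering, a store may be delayed past a later load of a different address.

```python
# One Datalog-style rule, evaluated naively over explicit fact sets:
#   may_reorder(S, L) :- store(S, X), load(L, Y), program_order(S, L), X != Y.
# Event and address names are illustrative.

stores = {('s1', 'a')}                        # store(s1, a)
loads = {('l1', 'b'), ('l2', 'a')}            # load(l1, b), load(l2, a)
program_order = {('s1', 'l1'), ('s1', 'l2')}  # program order within one thread

may_reorder = {
    (s, l)
    for (s, x) in stores
    for (l, y) in loads
    if (s, l) in program_order and x != y
}

print(may_reorder)  # {('s1', 'l1')}: the store to a may be delayed past the load of b
```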

    From Object Fields to Local Variables: A Practical Approach to Field-Sensitive Analysis

    Static analysis which takes into account the values of data stored in the heap is typically considered complex and computationally intractable in practice. Thus, most static analyzers do not keep track of object fields (or fields for short), i.e., they are field-insensitive. In this paper, we propose locality conditions for soundly converting fields into local variables. This way, a field-insensitive analysis over the transformed program can infer information on the original fields. Our notion of locality is context-sensitive and can be applied both to numeric and reference fields. We then propose a polyvariant transformation which converts object fields meeting the locality condition into variables and which is able to generate multiple versions of code when this increases the number of fields that satisfy the locality conditions. We have implemented our analysis within a termination analyzer for Java bytecode.
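
    The transformation itself operates on Java bytecode; the Python sketch below is only a before/after picture of the idea: when a field meets the locality condition inside a method, its reads and writes can be routed through a local variable that a field-insensitive analysis can track. Class and field names are made up.

```python
# Before/after picture of turning a "local" field into a local variable.

class Counter:
    def __init__(self):
        self.n = 0

def count_down_field(c):
    # Before: the loop counter lives in a heap field, which a field-insensitive
    # analysis cannot see decreasing.
    while c.n > 0:
        c.n = c.n - 1

def count_down_local(c):
    # After: the field is loaded into a local on entry and written back on exit;
    # inside the loop only the local variable n is read and updated.
    n = c.n
    while n > 0:
        n = n - 1
    c.n = n

c = Counter()
c.n = 3
count_down_local(c)
print(c.n)  # 0
```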

    Field-Sensitive Value Analysis by Field-Insensitive Analysis

    Shared and mutable data structures pose major problems in static analysis, and most analyzers are unable to keep track of the values of numeric variables stored in the heap. In this paper, we first identify sufficient conditions under which heap-allocated numeric variables in object-oriented programs (i.e., numeric fields) can be handled as non-heap-allocated variables. Then, we present a static analysis to infer which numeric fields satisfy these conditions at the level of (sequential) bytecode. This allows instrumenting the code with ghost variables which make such numeric fields observable to any field-insensitive value analysis. Our experimental results in termination analysis show that we greatly enlarge the class of analyzable programs with a reasonable overhead.
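
    The instrumentation in the paper happens at the bytecode level; the Python sketch below only illustrates the ghost-variable idea on a made-up example: a ghost local shadows a qualifying numeric field so that a field-insensitive value analysis can observe its evolution.

```python
# Illustrative ghost-variable instrumentation for a numeric field.

class Buffer:
    def __init__(self, size):
        self.free = size  # numeric field assumed to meet the sufficient conditions

def fill(buf):
    ghost_free = buf.free            # ghost variable introduced by the instrumentation
    while ghost_free > 0:            # condition read from the ghost, kept in sync below
        buf.free = buf.free - 1
        ghost_free = ghost_free - 1  # mirrors every write to the field
    # A field-insensitive value analysis now sees ghost_free strictly decreasing.

b = Buffer(4)
fill(b)
print(b.free)  # 0
```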

    Generic Combination of Heap and Value Analyses in Abstract Interpretation

    Abstract interpretation has been widely applied to approximate data structures and (usually numerical) value information. The two need to be combined to apply static analysis effectively to real software; nevertheless, they have mainly been studied as orthogonal problems so far. In this context, we introduce a generic framework that, given a heap and a value analysis, combines them, and we formally prove its soundness. The heap analysis approximates concrete locations with heap identifiers, which can be materialized or merged. Meanwhile, the value analysis tracks information on both variables and heap identifiers, taking into account when heap identifiers are merged or materialized. We show how existing pointer and shape analyses, as well as numerical domains, can be plugged into our framework. As far as we know, this is the first sound, generic, automatic framework combining heap and value analyses that allows heap identifiers to be managed freely.
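
    The following Python sketch uses hypothetical interfaces (not the paper's formalization) to show the shape of the combination: the heap analysis owns the heap identifiers and notifies the value analysis whenever identifiers are merged or materialized, so that value information stays consistent with the heap abstraction.

```python
# Schematic combination of a heap domain and a value domain via merge/materialize events.

class IntervalValueDomain:
    """Tracks an interval per variable or heap identifier."""
    def __init__(self):
        self.env = {}  # name -> (low, high)

    def assign(self, name, low, high):
        self.env[name] = (low, high)

    def merge_ids(self, kept, dropped):
        """`dropped` is summarized into `kept`: join their intervals onto `kept`."""
        if dropped in self.env:
            lo1, hi1 = self.env.pop(dropped)
            lo2, hi2 = self.env.get(kept, (lo1, hi1))
            self.env[kept] = (min(lo1, lo2), max(hi1, hi2))

    def materialize_id(self, summary, fresh):
        """A concrete node `fresh` split out of `summary` inherits its interval."""
        if summary in self.env:
            self.env[fresh] = self.env[summary]

class CombinedDomain:
    """Glues a heap domain (left abstract here) to the value domain by forwarding events."""
    def __init__(self, value_domain):
        self.values = value_domain

    def on_merge(self, kept, dropped):
        self.values.merge_ids(kept, dropped)

    def on_materialize(self, summary, fresh):
        self.values.materialize_id(summary, fresh)

values = IntervalValueDomain()
combined = CombinedDomain(values)
values.assign('id1.f', 0, 5)
values.assign('id2.f', 10, 10)
combined.on_merge('id1.f', 'id2.f')  # id2 gets summarized into id1
print(values.env['id1.f'])           # (0, 10)
```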

    Sharing Ghost Variables in a Collection of Abstract Domains

    We propose a framework in which ghost variables are shared across a collection of abstract domains, allowing precise proofs of complex properties. In abstract interpretation, it is often necessary to express complex properties while performing a precise analysis. One way to achieve that is to combine a collection of domains, each handling some kind of property, using a reduced product. Separating domains allows an easier and more modular implementation, and eases soundness and termination proofs. This way, a domain can be added for any kind of property of interest. The reduced product, or an approximation of it, is in charge of refining abstract states, making the analysis precise. In program verification, ghost variables can be used to ease proofs of properties by storing intermediate values that do not appear directly in the execution. We propose a reduced product of abstract domains that allows domains to use ghost variables to ease the representation of their internal state. Domains must be totally agnostic with respect to the other existing domains; in particular, the handling of ghost variables must be entirely decentralized while still ensuring soundness and termination of the analysis.
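
    As a hypothetical sketch of the sharing mechanism (not the paper's framework), the snippet below keys ghost variables on the expression they stand for in a shared registry, so two domains that each need a ghost for the same expression end up constraining the same variable without knowing about each other.

```python
# Shared registry of ghost variables, keyed by the expression they stand for.

class GhostRegistry:
    def __init__(self):
        self._by_expr = {}

    def ghost_for(self, expr):
        """Return a stable ghost-variable name for `expr`, creating it on demand."""
        if expr not in self._by_expr:
            self._by_expr[expr] = f"__ghost_{len(self._by_expr)}"
        return self._by_expr[expr]

registry = GhostRegistry()

# A string domain and an interval domain independently ask for the ghost standing
# for len(s); they obtain the same name, so the reduced product can exchange
# constraints about it without the domains knowing each other.
g1 = registry.ghost_for("len(s)")  # requested by the string domain
g2 = registry.ghost_for("len(s)")  # requested by the interval domain
assert g1 == g2
print(g1)  # __ghost_0
```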