
    Live Heap Space Analysis for Languages with Garbage Collection

    The peak heap consumption of a program is the maximum size of the live data on the heap during the execution of the program, i.e., the minimum amount of heap space needed to run the program without exhausting the memory. It is well known that garbage collection (GC) makes the problem of predicting the memory required to run a program difficult. This paper presents, to the best of our knowledge, the first live heap space analysis for garbage-collected languages which infers accurate upper bounds on the peak heap usage of a program's execution that are not restricted to any complexity class, i.e., we can infer exponential, logarithmic, polynomial, etc., bounds. Our analysis is developed for a (sequential) object-oriented bytecode language with a scoped-memory manager that reclaims unreachable memory when methods return. We also show how our analysis can accommodate other GC schemes which are closer to the ideal GC that collects objects as soon as they become unreachable. The practicality of our approach is experimentally evaluated on a prototype implementation. We demonstrate that it is fully automatic, reasonably accurate and efficient by inferring live heap space bounds for a standardized set of benchmarks, the JOlden suite.
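
    To make the peak-versus-total distinction concrete, the following small sketch (our own illustration, not the paper's analysis) replays a toy allocation trace under a scoped-memory manager that reclaims a method's temporary data when it returns; the inferred bound is the peak of the live heap, which here is strictly smaller than the total number of cells allocated.

```python
# Toy trace: each call allocates cells; cells tagged "temp" become unreachable
# when the allocating method returns, so the scoped-memory manager frees them
# there, while "escaping" cells survive into the caller. Sizes are invented.
TRACE = [
    ("call", "helper"),
    ("alloc", 2000, "temp"),       # scratch array, dead at return
    ("alloc", 1000, "escaping"),   # result, survives the call
    ("return",),
    ("call", "helper"),
    ("alloc", 2000, "temp"),
    ("alloc", 1000, "escaping"),
    ("return",),
]

def replay(trace):
    live = peak = total = 0
    frame_temps = []                       # temp cells of each active frame
    for event in trace:
        if event[0] == "call":
            frame_temps.append(0)
        elif event[0] == "alloc":
            _, size, kind = event
            live, total = live + size, total + size
            peak = max(peak, live)
            if kind == "temp":
                frame_temps[-1] += size
        else:                              # return: scoped GC frees the temps
            live -= frame_temps.pop()
    return peak, total

peak, total = replay(TRACE)
print(f"peak live heap = {peak}, total allocation = {total}")   # 4000 vs 6000
```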

    Heap Abstractions for Static Analysis

    Heap data is potentially unbounded and seemingly arbitrary. As a consequence, unlike stack and static memory, heap memory cannot be abstracted directly in terms of a fixed set of source variable names appearing in the program being analysed. This makes it an interesting topic of study, and there is an abundance of literature employing heap abstractions. Although most studies have addressed similar concerns, their formulations and formalisms often seem dissimilar and sometimes even unrelated. Thus, the insights gained in one description of heap abstraction may not directly carry over to some other description. This survey is the result of our quest for a unifying theme in the existing descriptions of heap abstractions. In particular, our interest lies in the abstractions and not in the algorithms that construct them. In our search for a unifying theme, we view a heap abstraction as consisting of two features: a heap model to represent the heap memory and a summarization technique for bounding the heap representation. We classify the models as storeless, store based, and hybrid. We describe various summarization techniques based on k-limiting, allocation sites, patterns, variables, other generic instrumentation predicates, and higher-order logics. This approach allows us to compare the insights of a large number of seemingly dissimilar heap abstractions and also paves the way for creating new abstractions by mix-and-match of models and summarization techniques. Comment: 49 pages, 20 figures.
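
    As a concrete instance of one cell in this classification (our own toy example, not taken from the survey), the sketch below builds a store-based heap model summarized by allocation sites: all objects created at the same syntactic allocation instruction are merged into one summary node, so the abstract heap stays bounded even when a loop allocates an unbounded list.

```python
# Store-based heap graph with allocation-site summarization: nodes are
# allocation sites, edges are labelled by field names. The loop below would
# build an unbounded list in the concrete heap, but the abstraction is finite.
from collections import defaultdict

points_to = defaultdict(set)    # (summary node, field) -> set of summary nodes
var_env   = defaultdict(set)    # variable -> set of summary nodes

def alloc(var, site):           # x = new T()  at allocation site 'site'
    var_env[var] = {site}

def store(var, field, src):     # x.f = y
    for node in var_env[var]:
        points_to[(node, field)] |= var_env[src]

def load(var, src, field):      # x = y.f
    var_env[var] = set()
    for node in var_env[src]:
        var_env[var] |= points_to[(node, field)]

# while (...) { t = new Node(); t.next = h; h = t; }
# Every iteration allocates at the same site, so all list cells collapse
# into the single summary node "S1":
for _ in range(3):              # three unrolled iterations, same abstraction
    alloc("t", "S1")
    store("t", "next", "h")
    var_env["h"] = set(var_env["t"])

print(dict(points_to))          # {('S1', 'next'): {'S1'}} -- bounded self-loop
```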

    Static Analysis for Ruby in the Presence of Gradual Typing

    Dynamic languages provide new challenges to traditional static analysis techniques, leaving most errors to be detected at runtime and making many properties of code difficult to infer. Ruby code usually takes advantage of both dynamic typing and metaprogramming to produce elegant yet difficult-to-analyze programs. Function eval() and its variants, which usually foil static analysis, are used frequently as a primitive runtime macro system. The goal of this thesis is to answer the question: what useful information about real-world Ruby programs can be determined statically with a high degree of accuracy? Two observations lead to a number of statically discoverable errors and properties in parseable Ruby programs. The first is that many interesting properties of a program can be discovered through traditional static analysis techniques despite the presence of dynamic typing. The second is that most metaprogramming occurs when the program files are loaded and not during the execution of the main program. Traditional techniques, such as flow analysis and Static Single Assignment transformations, aid the extraction of program invariants, including both explicitly programmed constants and those implicitly defined by Ruby's semantics. A meaningful, well-defined distinction between load time and run time in Ruby is developed and addresses the second observation. This distinction allows us to statically discern properties of a Ruby program despite many idioms that require dynamic evaluation of code. Lastly, gradual typing through optional annotations improves the quality of error discovery and other statically inferred properties.
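
    The load-time/run-time split can be pictured with a small sketch (ours, not the thesis's implementation): metaprogramming such as eval-defined methods executes once when the files are loaded, producing an ordinary method table against which the call sites of the main program can then be checked statically.

```python
# A toy model of the load-time / run-time split. Definitions, including those
# produced by eval-style macros, are executed once at load time; afterwards
# the method table is fixed and call sites can be checked statically.
LOAD_TIME_DEFS = [
    ("def", "area"),             # ordinary method definition
    ("eval_def", "perimeter"),   # method generated by an eval-based macro
]
RUN_TIME_CALLS = ["area", "perimeter", "volume"]

def load(defs):
    """Execute load-time statements, building the method table."""
    table = set()
    for kind, name in defs:
        table.add(name)          # eval-produced defs become ordinary entries
    return table

methods = load(LOAD_TIME_DEFS)
for call in RUN_TIME_CALLS:
    status = "ok" if call in methods else "possible NoMethodError"
    print(f"{call}: {status}")   # 'volume' is flagged before the program runs
```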

    Parametric Inference of Memory Requirements for Garbage Collected Languages

    The accurate prediction of a program's memory requirements is a critical component in software development. Existing heap space analyses either do not take deallocation into account or adopt specific models of garbage collectors which do not necessarily correspond to the actual memory usage. We present a novel approach to inferring upper bounds on the memory requirements of Java-like programs which is parametric on the notion of object lifetime, i.e., on when objects become collectible. If object lifetimes are inferred by a reachability analysis, then our analysis infers accurate upper bounds on the memory consumption for a reachability-based garbage collector. Interestingly, if object lifetimes are inferred by a heap liveness analysis, then we approximate the program's minimal memory requirement, i.e., the peak memory usage when using an optimal garbage collector which frees objects as soon as they become dead. The key idea is to integrate information on object lifetimes into the process of generating the recurrence equations which capture the memory usage at the different program states. If the heap size limit is set to the memory requirement inferred by our analysis, execution is guaranteed not to exceed the memory limit, under the sole assumption that garbage collection works when the limit is reached. Experiments on Java bytecode programs provide evidence of the feasibility and accuracy of our analysis.
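
    As an illustration of the kind of recurrence equation involved (our simplified rendering, not the paper's exact formulation), consider a method m that allocates A(m) cells and then calls m1 followed by m2:

\[
  \mathrm{peak}(m) \;=\; A(m) \;+\; \max\bigl(\mathrm{peak}(m_1),\; \mathrm{live}(m_1) + \mathrm{peak}(m_2)\bigr)
\]

    Here \(\mathrm{live}(m_1)\) denotes the cells allocated by \(m_1\) that are still considered alive when \(m_2\) runs; instantiating this notion with reachability yields bounds for a reachability-based collector, while instantiating it with true liveness approximates the minimal memory requirement described above.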

    Predicate Abstraction with Under-approximation Refinement

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated; instead, it executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by the abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. Comment: 22 pages, 3 figures, accepted for publication in the Logical Methods in Computer Science journal (special issue CAV 2005).
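
    A minimal sketch of the exploration loop (our reconstruction under simplifying assumptions, with the theorem-prover call stubbed out): concrete successors are computed exactly, but visited states are stored abstractly as bit-vectors of predicate values, and a per-transition precision check decides whether to keep exploring or to request new predicates.

```python
def explore(init, successors, predicates, is_error, check_precise):
    """Explore concrete states, hashing them by their predicate abstraction."""
    def alpha(s):
        return tuple(p(s) for p in predicates)   # abstract version of state s

    seen, frontier = {alpha(init)}, [init]
    while frontier:
        s = frontier.pop()
        if is_error(s):
            return ("error", s)          # feasible by construction: real error
        for t in successors(s):          # concrete, executable transitions only
            if not check_precise(s, t, predicates):
                # the prover detected precision loss on this transition:
                # refine by generating new abstraction predicates
                return ("refine", (s, t))
            a = alpha(t)
            if a not in seen:
                seen.add(a)
                frontier.append(t)
    return ("done", None)

# Toy instance: a counter modulo 4, abstracted by a single parity predicate.
result = explore(
    init=0,
    successors=lambda s: [(s + 1) % 4],
    predicates=[lambda s: s % 2 == 0],
    is_error=lambda s: False,
    check_precise=lambda s, t, ps: True,         # stub for the theorem prover
)
print(result)   # ('done', None): a finite abstract quotient was explored
```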

    Observation and abstract behaviour in specification and implementation of state-based systems

    Classical algebraic specification is an accepted framework for specification. A common criticism is that it is functional, i.e., not based on a notion of state, as most software development and implementation languages are. We formalise the idea of a state-based object or abstract machine using algebraic means. In contrast to similar approaches, we consider dynamic logic instead of equational logic as the framework for specification and implementation. The advantage is a more expressive language allowing us to specify safety and liveness conditions. It also allows a clearer distinction between functional and state-based parts, which require different treatment in order to achieve behavioural abstraction when necessary. We focus in particular on abstract behaviour and observation. A behavioural notion of satisfaction for state elements is needed in order to abstract from irrelevant details of the state realisation.
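
    For instance (our illustration of the added expressiveness, not a formula from the paper), dynamic logic can state both a safety and a liveness condition about a state-changing operation op directly:

\[
  [\mathit{op}^{*}]\,\varphi \qquad\qquad \langle \mathit{op}^{*} \rangle\,\psi
\]

    The box formula says that \(\varphi\) holds after every finite iteration of op (safety), and the diamond formula says that some iteration of op can reach a state satisfying \(\psi\) (liveness); equational logic, by contrast, can only constrain the values that operations compute.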

    Web service interfaces for inter-organisational business processes: an infrastructure for automated reconciliation

    For the majority of front-end e-business systems, the assumption of a coherent and homogeneous set of interfaces is highly unrealistic. Problems start in the back-end, with systems characterised by a heterogeneous mix of applications and business processes. Integration can be complex and expensive, as systems evolve more in accordance with business needs than with technical architectures. E-business systems are thus faced with the challenge of presenting a coherent image of a diversified reality. Web services make business interfaces more efficient, but effectiveness is a business requirement of at least comparable importance. We propose a technique for the automatic reconciliation of the Web service interfaces involved in inter-organisational business processes. The working assumption is that the Web service front-end of each company is represented by a set of WSDL and WSCL interfaces. The result of our reconciliation method is a common interface that all the parties can effectively enforce. Indications are also given on ways to adapt individual interfaces to the common one. The technique is embodied in a prototype that we also present.
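
    The flavour of the reconciliation step can be sketched as follows (a simplification of ours: real WSDL/WSCL interfaces carry message schemas and conversation orderings, which we collapse here to operation names and a successor relation); the common interface keeps only the behaviour that every party supports.

```python
# Each party's front-end, reduced to: supported operations and the allowed
# conversation transitions between them (a WSCL-like successor relation).
party_a = {"ops": {"quote", "order", "cancel"},
           "next": {("quote", "order"), ("order", "cancel")}}
party_b = {"ops": {"quote", "order"},
           "next": {("quote", "order")}}

def reconcile(parties):
    ops = set.intersection(*(p["ops"] for p in parties))
    nxt = set.intersection(*(p["next"] for p in parties))
    # keep only transitions whose endpoints survived the intersection
    nxt = {(a, b) for (a, b) in nxt if a in ops and b in ops}
    return {"ops": ops, "next": nxt}

common = reconcile([party_a, party_b])
print(common)   # ops {'quote', 'order'}, next {('quote', 'order')}
# Each party then adapts its own interface to the common one: here party A
# must hide 'cancel' (or handle it locally) when talking to party B.
```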

    A Top-Down Approach to Managing Variability in Robotics Algorithms

    One of the defining features of the field of robotics is its breadth and heterogeneity. Unfortunately, despite the availability of several robotics middleware services, robotics software still fails to smoothly handle at least two kinds of variability: algorithmic variability and lower-level variability. The consequence is that implementations of algorithms are hard to understand and are impacted by changes to lower-level details such as the choice or configuration of sensors or actuators. Moreover, when several algorithms or algorithmic variants are available, it is difficult to compare and combine them. In order to alleviate these problems, we propose a top-down approach to expressing and implementing robotics algorithms and families of algorithms so that they are both less dependent on lower-level details and easier to understand and combine. This approach goes top-down from the algorithms and shields them from lower-level details by introducing very high-level abstractions atop the intermediate abstractions of robotics middleware. The approach is illustrated on 7 variants of the Bug family that were implemented using both laser and infra-red sensors. Comment: 6 pages, 5 figures, presented at DSLRob 2013 (arXiv:cs/1312.5952).
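
    The proposed layering can be pictured as follows (a sketch of ours, not the paper's framework): a Bug-style decision is written against a very high-level obstacle-detection abstraction, and only the bottom layer knows whether the readings come from a laser scanner or from infrared rangers.

```python
# High-level abstraction: the algorithm only asks "is the way blocked?".
class RangeSensor:
    def obstacle_ahead(self, threshold_m: float) -> bool:
        raise NotImplementedError

class LaserSensor(RangeSensor):
    def __init__(self, scan):            # scan: list of ranges in metres
        self.scan = scan
    def obstacle_ahead(self, threshold_m):
        front = self.scan[len(self.scan) // 2]   # central beam of the scan
        return front < threshold_m

class InfraRedSensor(RangeSensor):
    def __init__(self, reading):         # single forward-facing ranger
        self.reading = reading
    def obstacle_ahead(self, threshold_m):
        return self.reading < threshold_m

def bug_step(sensor: RangeSensor) -> str:
    """One decision of a Bug-style navigator, independent of the hardware."""
    return "follow-boundary" if sensor.obstacle_ahead(0.5) else "go-to-goal"

print(bug_step(LaserSensor([2.0, 0.3, 2.0])))   # follow-boundary
print(bug_step(InfraRedSensor(1.2)))            # go-to-goal
```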