7,582 research outputs found

    Liveness-Based Garbage Collection for Lazy Languages

    We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data---a subset of reachable data---during garbage collection. The result is an increase in the garbage reclaimed and a reduction in the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated objects (i.e., those in WHNF) and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness- and reachability-based collectors remain comparable.
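    The gap the abstract relies on---data that is reachable but not live---can be pictured with a toy lazy program. The sketch below is not taken from the paper and ignores compiler optimisations that might evaluate things earlier; it only illustrates how a closure can keep a whole structure reachable even though almost none of it will ever be used, which is exactly what a liveness-based collector exploits.

        -- Hypothetical illustration: after `sum xs` forces the list, the closure
        -- for `h = head xs` keeps every cell of xs reachable, yet only the first
        -- cons cell is live, since `head` never inspects the tail. A
        -- reachability-based collector must retain the whole list until h is
        -- forced; a liveness-based collector may reclaim the tail earlier.
        main :: IO ()
        main = do
          let xs = [1 .. 1000000] :: [Int]
          print (sum xs)     -- forces and heap-allocates the whole list
          let h = head xs    -- unevaluated closure referring to xs
          print h            -- only now is h forced; the tail was never needed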

    An Axiomatic Approach to Liveness for Differential Equations

    This paper presents an approach for deductive liveness verification for ordinary differential equations (ODEs) with differential dynamic logic. Numerous subtleties complicate the generalization of well-known discrete liveness verification techniques, such as loop variants, to the continuous setting. For example, ODE solutions may blow up in finite time or their progress towards the goal may converge to zero. Our approach handles these subtleties by successively refining ODE liveness properties using ODE invariance properties, which have a well-understood deductive proof theory. This approach is widely applicable: we survey several liveness arguments in the literature and derive them all as special instances of our axiomatic refinement approach. We also correct several soundness errors in the surveyed arguments, which further highlights the subtlety of ODE liveness reasoning and the utility of our deductive approach. The library of common refinement steps identified through our approach enables both the sound development and justification of new ODE liveness proof rules from our axioms.
    Comment: FM 2019: 23rd International Symposium on Formal Methods, Porto, Portugal, October 9-11, 2019
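    To make the kind of argument being refined concrete, here is a simplified rendering of one classical liveness argument of the sort the paper surveys (a differential-variant-style rule). The notation is schematic, side conditions such as sufficient existence of solutions are elided, and the rule below is an illustration rather than the paper's axiom.

        % Diamond modality: <x' = f(x)> P reads "the ODE eventually reaches P".
        % If p increases along the ODE at rate at least eps > 0 wherever the goal
        % p >= 0 has not yet been reached (L_f(p) is the Lie derivative of p along
        % f), and solutions exist for long enough, then the goal is eventually reached:
        \[
          \frac{\;\varepsilon > 0 \qquad p < 0 \;\vdash\; \mathcal{L}_{f}(p) \ge \varepsilon\;}
               {\vdash\; \langle x' = f(x) \rangle\, p \ge 0}
        \]
        % The finite-time blow-up subtlety mentioned in the abstract is precisely
        % why the elided existence assumption (or a refinement step discharging it)
        % is needed for a rule of this shape to be sound.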

    Lower Bounds for Symbolic Computation on Graphs: Strongly Connected Components, Liveness, Safety, and Diameter

    A model of computation that is widely used in the formal analysis of reactive systems is that of symbolic algorithms. In this model, access to the input graph is restricted to symbolic operations, which are expensive in comparison to standard RAM operations. We give lower bounds on the number of symbolic operations for basic graph problems such as the computation of the strongly connected components and of the approximate diameter, as well as for fundamental problems in model checking such as safety, liveness, and co-liveness. Our lower bounds are linear in the number of vertices of the graph, even for constant-diameter graphs. For none of these problems were lower bounds on the number of symbolic operations known before. The lower bounds show an interesting separation of these problems from the reachability problem, which can be solved with $O(D)$ symbolic operations, where $D$ is the diameter of the graph. Additionally, we present an approximation algorithm for the graph diameter which requires $\tilde{O}(n\sqrt{D})$ symbolic steps to achieve a $(1+\epsilon)$-approximation for any constant $\epsilon > 0$. This compares to $O(n \cdot D)$ symbolic steps for the (naive) exact algorithm and $O(D)$ symbolic steps for a 2-approximation. Finally, we also give a refined analysis of the strongly connected components algorithm of Gentilini et al., showing that it uses an optimal number of symbolic steps, proportional to the sum of the diameters of the strongly connected components.
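    A minimal sketch, not from the paper, of the symbolic model it assumes: the algorithm may touch the graph only through one-step image operations on vertex sets (here a concrete Data.Set stands in for the usual BDD representation), and each image or set operation counts as one symbolic step. Forward reachability then needs one image computation per BFS level, i.e. $O(D)$ steps, which is the baseline the lower bounds are contrasted with.

        import qualified Data.Map as M
        import qualified Data.Set as S

        type V     = Int
        type Graph = M.Map V [V]

        -- One symbolic operation: the successor set of a set of vertices.
        post :: Graph -> S.Set V -> S.Set V
        post g xs = S.fromList [w | v <- S.toList xs, w <- M.findWithDefault [] v g]

        -- Symbolic forward reachability: one `post` per level, i.e. O(D) steps.
        reach :: Graph -> S.Set V -> S.Set V
        reach g start = go start start
          where
            go acc front
              | S.null new = acc
              | otherwise  = go (acc `S.union` new) new
              where new = post g front `S.difference` acc

        main :: IO ()
        main = print (reach g (S.singleton 1))   -- fromList [1,2,3,4]
          where g = M.fromList [(1,[2]), (2,[3]), (3,[1,4]), (4,[])]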

    Heap Reference Analysis Using Access Graphs

    Despite significant progress in the theory and practice of program analysis, analysing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood, while that of heap data seems arbitrary and is unbounded. We devise bounded representations which summarize properties of the heap data. This summarization is based on the structure of the program which manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations to manipulate them make it possible to compute them through data flow analysis. An important application which benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact which has been confirmed by several empirical studies. We propose the first ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we discover four properties of heap data: liveness, aliasing, availability, and anticipability. Together, they cover all combinations of directions of analysis (i.e. forward and backward) and confluence of information (i.e. union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ languages.
    Comment: Accepted for printing by ACM TOPLAS. This version incorporates referees' comments
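    The transformation described above (making dead objects unreachable by modifying the program) can be pictured with a tiny mutable-heap example. The sketch below is hypothetical and uses Haskell IORefs rather than the imperative programs the analysis targets; it only shows the effect of an inserted "nullifying" write once a heap reference is dead, which lets an ordinary reachability-based collector reclaim the object.

        import Data.IORef

        data Node = Node { value :: Int, next :: IORef (Maybe Node) }

        main :: IO ()
        main = do
          leaf <- Node 2 <$> newIORef Nothing
          link <- newIORef (Just leaf)
          let node = Node 1 link
          Just n <- readIORef (next node)
          print (value n)                  -- last use of the reference node.next
          writeIORef (next node) Nothing   -- inserted write: leaf is now dead and unreachable
          print (value node)               -- node itself remains live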

    Analysing Mutual Exclusion using Process Algebra with Signals

    In contrast to common belief, the Calculus of Communicating Systems (CCS) and similar process algebras lack the expressive power to accurately capture mutual exclusion protocols without enriching the language with fairness assumptions. Adding a fairness assumption to implement a mutual exclusion protocol seems counter-intuitive. We employ a signalling operator, which can be combined with CCS, or other process calculi, and show that this minimal extension is expressive enough to model mutual exclusion: we confirm the correctness of Peterson's mutual exclusion algorithm for two processes, as well as Lamport's bakery algorithm, under reasonable assumptions on the underlying memory model. The correctness of Peterson's algorithm for more than two processes requires stronger, less realistic assumptions on the underlying memory model.
    Comment: In Proceedings EXPRESS/SOS 2017, arXiv:1709.0004
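    For reference, the property being established for Peterson's two-process algorithm can also be checked by brute force. The sketch below is not the paper's process-algebra (CCS-with-signals) encoding; it models the algorithm as interleaved atomic steps over shared variables, assumes a sequentially consistent memory model, and exhaustively verifies that no reachable state has both processes in the critical section.

        import qualified Data.Set as S

        data Shared = Shared { flag0, flag1 :: Bool, turn :: Int }
          deriving (Eq, Ord, Show)

        -- Program counters: 0 = raise flag, 1 = yield turn, 2 = wait, 3 = critical section
        type Conf = (Int, Int, Shared)

        step :: Int -> Conf -> [Conf]
        step 0 (p0, p1, s) = case p0 of
          0 -> [(1, p1, s { flag0 = True })]
          1 -> [(2, p1, s { turn = 1 })]
          2 | not (flag1 s) || turn s == 0 -> [(3, p1, s)]   -- may enter
            | otherwise                    -> [(2, p1, s)]   -- busy wait
          3 -> [(0, p1, s { flag0 = False })]                -- leave and restart
          _ -> []
        step 1 (p0, p1, s) = case p1 of
          0 -> [(p0, 1, s { flag1 = True })]
          1 -> [(p0, 2, s { turn = 0 })]
          2 | not (flag0 s) || turn s == 1 -> [(p0, 3, s)]
            | otherwise                    -> [(p0, 2, s)]
          3 -> [(p0, 0, s { flag1 = False })]
          _ -> []
        step _ _ = []

        reachable :: S.Set Conf -> [Conf] -> S.Set Conf
        reachable seen []     = seen
        reachable seen (c:cs)
          | c `S.member` seen = reachable seen cs
          | otherwise         = reachable (S.insert c seen) (step 0 c ++ step 1 c ++ cs)

        main :: IO ()
        main = print (all ok (S.toList states))   -- True: mutual exclusion holds
          where states       = reachable S.empty [(0, 0, Shared False False 0)]
                ok (a, b, _) = not (a == 3 && b == 3)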