
    Bounded Expectations: Resource Analysis for Probabilistic Programs

    This paper presents a new static analysis for deriving upper bounds on the expected resource consumption of probabilistic programs. The analysis is fully automatic and derives symbolic bounds that are multivariate polynomials in the inputs. The new technique combines state-of-the-art manual reasoning techniques for probabilistic programs with an effective method for automatic resource-bound analysis of deterministic programs. It can be seen both as an extension of automatic amortized resource analysis (AARA) to probabilistic programs and as an automation of manual reasoning for probabilistic programs based on weakest preconditions. As a result, bound inference can be reduced to off-the-shelf LP solving in many cases, and automatically derived bounds can be interactively extended with standard program logics if the automation fails. Building on existing work, the soundness of the analysis is proved with respect to an operational semantics based on Markov decision processes. The effectiveness of the technique is demonstrated with a prototype implementation that is used to automatically analyze 39 challenging probabilistic programs and randomized algorithms. Experimental results indicate that the derived constant factors in the bounds are very precise and even optimal for many programs.
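
    To make the flavour of such expected-cost bounds concrete, the sketch below uses a textbook-style probabilistic loop (not claimed to be one of the paper's 39 benchmarks): each iteration costs one tick and x decreases with probability 1/2, so the expected tick count is bounded by the symbolic expression 2*x. A Monte Carlo estimate is compared against that bound.

```python
import random

def prob_loop(x):
    """Each iteration costs one tick; x decreases with probability 1/2."""
    ticks = 0
    while x > 0:
        ticks += 1                     # tick(1): the resource being bounded
        if random.random() < 0.5:
            x -= 1
    return ticks

x0, runs = 20, 10_000
avg = sum(prob_loop(x0) for _ in range(runs)) / runs
print(f"empirical expected cost ~ {avg:.1f}; symbolic bound 2*x = {2 * x0}")
```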

    Towards an Abstract Domain for Resource Analysis of Logic Programs Using Sized Types

    We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions of input term (and subterm) sizes, overcoming limitations of existing analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems. Comment: Part of WLPE 2013 proceedings (arXiv:1308.2055).
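
    The recurrence-solving step can be illustrated with a hedged, stand-alone sketch (a hypothetical append/3-style cost recurrence, solved here with sympy rather than CiaoPP's own machinery):

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
C = Function('C')

# Hypothetical cost recurrence for an append/3-style predicate: traversing the
# first argument of length n costs one resolution step per element.
recurrence = C(n) - C(n - 1) - 1
closed_form = rsolve(recurrence, C(n), {C(0): 0})
print(closed_form)   # -> n, i.e. cost is linear in the input list length
```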

    Synthesis for Polynomial Lasso Programs

    We present a method for the synthesis of polynomial lasso programs. These programs consist of a program stem, a set of transitions, and an exit condition, all in the form of algebraic assertions (conjunctions of polynomial equalities). Central to this approach is the discovery of non-linear (algebraic) loop invariants. We extend Sankaranarayanan, Sipma, and Manna's template-based approach and prove a completeness criterion. We perform program synthesis by generating a constraint whose solution is a synthesized program together with a loop invariant that proves the program's correctness. This constraint is non-linear and is passed to an SMT solver. Moreover, we can enforce the termination of the synthesized program with the support of test cases. Comment: Paper at VMCAI'14, including appendix.
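
    As a hedged, much-simplified sketch of the template-based idea (not the paper's implementation): fix an invariant template c1*x + c2*y + c3 = 0 for the loop body x, y := x+1, y+1 with stem x = y = 0, reduce the quantified initiation and consecution conditions to constraints on the template coefficients by matching polynomial coefficients, and discharge them with an SMT solver such as Z3.

```python
from z3 import Reals, Solver, sat

c1, c2, c3 = Reals('c1 c2 c3')
s = Solver()
s.add(c3 == 0)        # initiation: the invariant holds at the stem x = y = 0
s.add(c1 + c2 == 0)   # consecution: the +1/+1 update must not change c1*x + c2*y + c3
s.add(c1 == 1)        # normalization to exclude the trivial all-zero template
if s.check() == sat:
    print(s.model())  # c1 = 1, c2 = -1, c3 = 0, i.e. the invariant x - y = 0
```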

    On Verifying Resource Contracts using Code Contracts

    In this paper we present an approach to check resource consumption contracts using an off-the-shelf static analyzer. We propose a set of annotations to support resource usage specifications, in particular, dynamic memory consumption constraints. Since dynamic memory may be recycled by a memory manager, the consumption of this resource is not monotone. The specification language can express both memory consumption and lifetime properties in a modular fashion. We develop a proof-of-concept implementation by extending Code Contracts' specification language. To verify the correctness of these annotations we rely on the Code Contracts static verifier and a points-to analysis. We also briefly discuss possible extensions of our approach to deal with non-linear expressions.Comment: In Proceedings LAFM 2013, arXiv:1401.056
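
    To make the shape of such a contract concrete, here is a hedged, dynamic analogue in Python (the paper's annotations extend Code Contracts for .NET and are verified statically, not executed): a hypothetical decorator that asserts a peak-memory bound at run time.

```python
import tracemalloc
from functools import wraps

def memory_contract(max_bytes):
    """Hypothetical annotation: the callee may use at most max_bytes of peak heap."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            tracemalloc.start()
            try:
                result = fn(*args, **kwargs)
                _, peak = tracemalloc.get_traced_memory()
            finally:
                tracemalloc.stop()
            assert peak <= max_bytes, f"memory contract violated: {peak} > {max_bytes}"
            return result
        return wrapper
    return decorate

@memory_contract(max_bytes=1_000_000)
def build_buffer(n):
    return bytes(n)

build_buffer(10_000)   # within the contract; a much larger n would trip the assertion
```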

    Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS

    Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes and parameterized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In the absence of singularities, we show that the asymptotic complexity is always polynomial and of the form Θ(n^k), for some integer k ≤ d, where d is the dimension of the VASS. We present a polynomial-time algorithm computing the optimal k. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e., a k such that the termination complexity is Ω(n^k). Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis. Comment: arXiv admin note: text overlap with arXiv:1708.0925
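
    A hedged sketch of one standard ingredient (not the paper's complete algorithm): certifying a linear termination-time bound via a linear ranking function found by linear programming. A nonnegative weight vector w with w·d ≤ -1 for every transition effect d makes the weighted counter sum drop by at least one per step, so run length is at most w·(initial configuration), i.e. O(n). The 2-dimensional VASS below is a made-up example.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-dimensional VASS: each row is the effect vector of one transition.
effects = np.array([[-1,  1],    # decrement counter 0, increment counter 1
                    [ 0, -1]])   # decrement counter 1

# Feasibility LP: find w >= 0 with effects @ w <= -1 (componentwise).
res = linprog(c=np.zeros(2),
              A_ub=effects, b_ub=-np.ones(len(effects)),
              bounds=[(0, None)] * 2, method="highs")
if res.success:
    print("linear bound witnessed by w =", res.x)   # e.g. w = [2, 1]
else:
    print("no weight vector of this form; this simple linear certificate does not apply")
```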

    Live Heap Space Analysis for Languages with Garbage Collection

    The peak heap consumption of a program is the maximum size of the live data on the heap during the execution of the program, i.e., the minimum amount of heap space needed to run the program without exhausting the memory. It is well known that garbage collection (GC) makes the problem of predicting the memory required to run a program difficult. This paper presents, to the best of our knowledge, the first live heap space analysis for garbage-collected languages which infers accurate upper bounds on the peak heap usage of a program's execution that are not restricted to any complexity class, i.e., we can infer exponential, logarithmic, polynomial, etc., bounds. Our analysis is developed for a (sequential) object-oriented bytecode language with a scoped-memory manager that reclaims unreachable memory when methods return. We also show how our analysis can accommodate other GC schemes which are closer to the ideal GC which collects objects as soon as they become unreachable. The practicality of our approach is experimentally evaluated on a prototype implementation. We demonstrate that it is fully automatic, reasonably accurate and efficient by inferring live heap space bounds for a standardized set of benchmarks, the JOlden suite.
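
    A hedged toy illustration (not the paper's bytecode analysis) of why peak live heap under scoped reclamation can be far below total allocation; the paper infers such peaks statically, whereas this snippet only simulates them to make the gap concrete.

```python
class ScopedHeap:
    """Tracks live, peak and total heap usage under scoped reclamation."""
    def __init__(self):
        self.live = self.peak = self.total = 0
    def alloc(self, size):
        self.live += size
        self.total += size
        self.peak = max(self.peak, self.live)
    def reclaim(self, size):     # models scoped GC at method return
        self.live -= size

def helper(heap):
    heap.alloc(100)              # scratch data, unreachable once helper returns
    heap.reclaim(100)

def summarize(heap, n):
    heap.alloc(1)                # the only allocation that stays live
    for _ in range(n):
        helper(heap)

h = ScopedHeap()
summarize(h, 1000)
print(h.total, h.peak)           # total allocation 100001, but peak live heap only 101
```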

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing such overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, these remaining checks still may introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice used for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising. Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as a technical communication.
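
    A hedged sketch of the comparison being described (the cost functions and the admissible-overhead bound below are assumed placeholders, not output of the CiaoPP-based analysis): both the inferred overhead and the admissible overhead are functions of the input size n, so conformance can be checked symbolically.

```python
from sympy import symbols, simplify, limit, oo

n = symbols('n', positive=True)
cost_plain   = 3*n            # assumed cost of the program without checks
cost_checked = 3*n + 7*n      # assumed cost with run-time check instrumentation
overhead     = simplify(cost_checked - cost_plain)

admissible = 10*n             # hypothetical "admissible overhead" from an assertion

print(simplify(admissible - overhead))       # 3*n: nonnegative for all n > 0, so the spec holds
print(limit(overhead / cost_plain, n, oo))   # 7/3: the checks cost a constant factor asymptotically
```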