11 research outputs found

    Proving Non-Termination via Loop Acceleration

    Get PDF
    We present the first approach for proving non-termination of integer programs that is based on loop acceleration. If our technique cannot show non-termination of a loop, it instead tries to accelerate the loop in order to automatically find paths to other non-terminating loops. The prerequisites of our novel loop acceleration technique generalize a simple yet effective non-termination criterion. Thus, we can use the same program transformations to facilitate both non-termination proving and loop acceleration. In particular, we present a novel invariant inference technique that is tailored to our approach. An extensive evaluation of our fully automated tool LoAT shows that it is competitive with the state of the art.
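
    To make this concrete, here is a minimal sketch of one instance of such a simple non-termination criterion, not the LoAT implementation itself: if a loop's guard is invariant under the loop's update, then every state satisfying the guard runs forever. The check below uses the Z3 SMT solver (an assumed dependency, not a tool named in the abstract) on the toy loop "while (x >= y) x = x + 1".

```python
# Minimal sketch of a simple non-termination criterion (assumed example, not
# the paper's LoAT tool): if the guard of a loop is invariant under its update,
# every state that satisfies the guard stays in the loop forever.
# Toy loop:  while (x >= y) { x = x + 1; }
from z3 import Ints, Solver, Implies, Not, sat, unsat

x, y = Ints("x y")
guard = x >= y
guard_after_update = (x + 1) >= y   # guard with the update x' = x + 1 applied

# "guard implies guard-after-update" is valid iff its negation is unsatisfiable.
s = Solver()
s.add(Not(Implies(guard, guard_after_update)))
invariant = s.check() == unsat

# The guard must also be satisfiable, i.e. some state actually enters the loop.
s2 = Solver()
s2.add(guard)
reachable = s2.check() == sat

if invariant and reachable:
    print("non-terminating from any state with x >= y, e.g.", s2.model())
```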

    A Calculus for Modular Loop Acceleration

    Get PDF
    Loop acceleration can be used to prove safety, reachability, runtime bounds, and (non-)termination of programs operating on integers. To this end, a variety of acceleration techniques have been proposed. However, all of them are monolithic: either they accelerate a loop successfully, or they fail completely. In contrast, we present a calculus that allows for combining acceleration techniques in a modular way, and we show how to integrate many existing acceleration techniques into our calculus. Moreover, we propose two novel acceleration techniques that can be incorporated into our calculus seamlessly. An empirical evaluation demonstrates the applicability of our approach.
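
    To illustrate what acceleration means here, the following sketch (my own example, not the calculus from the paper) accelerates the loop "while (x < y) x = x + 1": its single-step transition relation is replaced by a closed form covering n >= 1 iterations at once, namely x' = x + n together with x + n - 1 < y. The script cross-checks this closed form against plain simulation on small inputs.

```python
# Minimal sketch of loop acceleration (my own example, not the paper's calculus)
# for the loop  while (x < y) { x = x + 1; }.
# Accelerated relation for n >= 1 iterations:  x' = x + n  and  x + n - 1 < y.

def simulate(x, y):
    """All states reachable by running the loop for at least one iteration."""
    states = []
    while x < y:
        x += 1
        states.append((x, y))
    return states

def accelerated(x, y, max_n=50):
    """States predicted by the accelerated (closed-form) relation."""
    states = []
    for n in range(1, max_n + 1):
        if x + n - 1 < y:            # the guard held before each of the n steps
            states.append((x + n, y))
    return states

# The closed form agrees with step-by-step simulation on a range of inputs.
for x0 in range(-3, 4):
    for y0 in range(-3, 4):
        assert simulate(x0, y0) == accelerated(x0, y0)
print("accelerated relation matches simulation")
```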

    On Complexity Bounds and Confluence of Parallel Term Rewriting

    Full text link
    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity that is parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. Our approach to finding lower bounds requires confluence of the parallel-innermost rewrite relation; we therefore also provide effective sufficient criteria for proving confluence. The applicability and the precision of the method are demonstrated by the relatively light effort required to extend the program analysis tool AProVE and by experiments on numerous benchmarks from the literature.
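
    As an illustration of the rewriting model and the two cost measures being contrasted, the following sketch (my own toy code, unrelated to the AProVE implementation) performs parallel-innermost rewriting on terms encoded as nested tuples and reports both the number of parallel steps and the total number of rule applications, i.e., the sequential innermost cost.

```python
# Toy parallel-innermost rewriting (my own illustration, not AProVE's machinery).
# Terms are nested tuples; the TRS evaluates a tree of boolean 'and's over the
# constructors 'tt' and 'ff'. We count parallel steps (all innermost redexes
# rewritten at once) and total rule applications (the sequential innermost cost).

RULES = {
    ("and", ("tt",), ("tt",)): ("tt",),
    ("and", ("tt",), ("ff",)): ("ff",),
    ("and", ("ff",), ("tt",)): ("ff",),
    ("and", ("ff",), ("ff",)): ("ff",),
}

def parallel_innermost_step(term):
    """Rewrite all innermost redexes simultaneously; return (term', #redexes)."""
    if len(term) == 1:                      # constructor constant, no redex
        return term, 0
    new_args, redexes = [], 0
    for arg in term[1:]:
        new_arg, r = parallel_innermost_step(arg)
        new_args.append(new_arg)
        redexes += r
    if redexes == 0 and term in RULES:      # all arguments are normal forms,
        return RULES[term], 1               # so this occurrence is innermost
    return (term[0], *new_args), redexes

def normalize(term):
    parallel_steps = rule_applications = 0
    while True:
        term, redexes = parallel_innermost_step(term)
        if redexes == 0:
            return term, parallel_steps, rule_applications
        parallel_steps += 1
        rule_applications += redexes

tt, ff = ("tt",), ("ff",)
term = ("and", ("and", tt, tt), ("and", tt, ff))
print(normalize(term))   # (('ff',), 2, 3): 2 parallel steps, 3 rule applications
```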

    Machine learning for function synthesis

    Get PDF
    Function synthesis is the process of automatically constructing functions that satisfy a given specification. The space of functions as well as the format of the specifications vary greatly with each area of application. In this thesis, we consider synthesis in the context of satisfiability modulo theories. Within this domain, the goal is to synthesise mathematical expressions that adhere to abstract logical formulas. These types of synthesis problems find many applications in the field of computer-aided verification. One of the main challenges of function synthesis arises from the combinatorial explosion in the number of potential candidates of a given size. The hypothesis of this thesis is that machine learning methods can be applied to make function synthesis more tractable.

    The first contribution of this thesis is a Monte Carlo-based search method for function synthesis. The search algorithm uses machine-learned heuristics to guide the search, as part of a reinforcement learning loop that trains the machine learning models with data generated from previous search attempts. To increase the set of benchmark problems for training and testing synthesis methods, we also present a technique for generating synthesis problems from pre-existing satisfiability modulo theories problems. We implement the Monte Carlo-based synthesis algorithm and evaluate it on standard synthesis benchmarks as well as our newly generated benchmarks. An experimental evaluation shows that the learned heuristics greatly improve on the baseline without trained models. Furthermore, the machine-learned guidance demonstrates performance comparable to CVC5 and, in some experiments, even surpasses it.

    Next, this thesis explores the application of machine learning to more restricted function synthesis domains. We hypothesise that narrowing the scope enables the use of machine learning techniques that are not possible in the general setting. We test this hypothesis on the problem of ranking function synthesis. Ranking functions are used in program analysis to prove termination of programs by mapping consecutive program states to decreasing elements of a well-founded set. The second contribution of this dissertation is a novel technique for synthesising ranking functions with neural networks. The key insight is that instead of synthesising a mathematical expression that represents a ranking function, we can train a neural network to act as the ranking function itself; the synthesis procedure is thus replaced by neural network training. We introduce Neural Termination Analysis as a framework that leverages this idea: we train neural networks from sampled execution traces of the program we want to prove terminating, and we enforce the specification of a ranking function through the loss function and the network design. After training, we use symbolic reasoning to formally verify that the resulting function is indeed a correct ranking function for the target program. We demonstrate that our method succeeds in synthesising ranking functions for programs that are beyond the reach of state-of-the-art tools, including programs with disjunctions and non-linear expressions in their loop guards.
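
    The core idea of the second contribution can be sketched as follows. This is a hypothetical minimal example under my own assumptions (a hand-picked loop, a linear model, PyTorch as the training framework), not the thesis' Neural Termination Analysis implementation, and the final symbolic verification step is omitted.

```python
# Hypothetical sketch of training a model to act as a ranking function
# (my own toy setup, not the thesis' Neural Termination Analysis tool).
# Loop under analysis:  while x + y > 0: if x > 0 then x -= 1 else y -= 1
import random
import torch

def sample_trace(x, y):
    states = [(x, y)]
    while x + y > 0:
        if x > 0:
            x -= 1
        else:
            y -= 1
        states.append((x, y))
    return states

# Collect consecutive (state, successor) pairs from sampled executions.
pairs = []
for _ in range(200):
    trace = sample_trace(random.randint(0, 20), random.randint(0, 20))
    pairs += list(zip(trace, trace[1:]))

s  = torch.tensor([p[0] for p in pairs], dtype=torch.float32)
s_ = torch.tensor([p[1] for p in pairs], dtype=torch.float32)

f = torch.nn.Linear(2, 1)                      # candidate ranking function f(x, y)
opt = torch.optim.Adam(f.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    decrease = torch.relu(f(s_) - f(s) + 1.0)  # penalize unless f(s') <= f(s) - 1
    bounded  = torch.relu(-f(s))               # penalize unless f(s) >= 0
    loss = decrease.mean() + bounded.mean()
    loss.backward()
    opt.step()

print("learned f(x, y) =", f.weight.data.tolist(), "* [x, y] +", f.bias.data.tolist())
```

    The two hinge terms mirror the usual ranking-function conditions (bounded from below and strictly decreasing on loop transitions); as the abstract describes, a separate symbolic check would still be needed to certify the trained function.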

    Superposition for Higher-Order Logic

    Get PDF