    Verification of loop parallelisations

    Writing correct parallel programs becomes increasingly difficult as the complexity and heterogeneity of processors increase. Parallelising compilers address this issue, and various compiler directives can be used to tell them where to parallelise. This paper addresses the correctness of such compiler directives for loop parallelisation. Specifically, we propose a technique based on separation logic to verify whether a loop can be parallelised. Our approach requires each loop iteration to be specified with the locations that are read and written in that iteration. If the specifications are correct, they can be used to draw conclusions about loop (in)dependences. Moreover, they reveal where synchronisation is needed in the parallelised program. The loop iteration specifications can be verified using permission-based separation logic and integrate seamlessly with functional behaviour specifications. We formally prove the correctness of our approach and discuss automated tool support for our technique. Additionally, we discuss how the loop iteration contracts can be compiled into specifications for the code produced by the parallelising compiler.
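
    To make the idea of an iteration contract concrete, here is a minimal C sketch. The annotations in comments are only loosely modelled on VerCors-style specifications and are illustrative, not the paper's exact specification language.

```c
#include <stddef.h>

/* Sketch of a loop with an iteration contract. Each iteration declares
 * which locations it reads and writes; since no two iterations touch
 * the same location, the loop has no cross-iteration dependences and
 * can be parallelised without synchronisation. */
void vec_add(int *a, const int *b, const int *c, size_t n) {
    for (size_t i = 0; i < n; i++)
    /*@ context Perm(a[i], write) ** Perm(b[i], read) ** Perm(c[i], read);
        ensures a[i] == b[i] + c[i];
    @*/
    {
        a[i] = b[i] + c[i];
    }
}
```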

    Coeffects: Unified static analysis of context-dependence

    Monadic effect systems provide a unified way of tracking the effects of computations, but there is no unified mechanism for tracking how computations rely on the environment in which they are executed. This is becoming an important problem for modern software: we need to track where distributed computations run, which resources a program uses, and how it uses other capabilities of the environment. We consider three examples of context-dependence analysis: liveness analysis, tracking the use of implicit parameters (similar to tracking resource usage in distributed computation), and calculating caching requirements for dataflow programs. Informed by these cases, we present a unified calculus for tracking context dependence in functional languages, together with a categorical semantics based on indexed comonads. We believe that indexed comonads are the right foundation for constructing context-aware languages and type systems, and that following an approach akin to monads can lead to widespread use of the concept.
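
    As a rough illustration of the shape such a calculus takes (a simplified sketch, not the paper's exact notation), a coeffect system annotates the typing context with a requirement r drawn from some coeffect algebra:

```latex
% Illustrative judgement form only: the annotation r on the context
% records what the expression e requires of its environment.
\Gamma \,@\, r \vdash e : \tau
% Possible instantiations of r, matching the three analyses above:
%   liveness            -- a vector of live/dead flags, one per variable
%   implicit parameters -- the set of implicit parameters the term accesses
%   dataflow caching    -- how many past values each input must retain
```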

    Specification and verification of atomic operations in GPGPU programs

    We propose a specification and verification technique based on separation logic to reason about data race freedom and functional correctness of GPU kernels that use atomic operations as a synchronisation mechanism. Our approach exploits the notion of resource invariant from Concurrent Separation Logic (CSL) to capture the behaviour of atomic operations. However, because of the different memory levels in the GPU architecture, we adapt this notion of resource invariant to these memory levels: group resource invariants capture the behaviour of atomic operations that access locations in local memory, while kernel resource invariants capture the behaviour of atomic operations that access locations in global memory. We show soundness of our approach and provide tool support that enables us to verify kernels from standard benchmark suites.
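
    The sketch below shows the flavour of the approach in plain C11, with stdatomic standing in for GPU atomics; the resource invariant is written informally as a comment, not in the tool's specification language.

```c
#include <stdatomic.h>

/* Kernel resource invariant (informal): each atomic location hist[b]
 * carries write permission to itself, so concurrent increments from
 * different threads are race-free by construction. */
void histogram_update(atomic_int *hist, const int *data, int tid) {
    unsigned b = (unsigned)data[tid] % 16u;  /* bucket chosen by this thread */
    atomic_fetch_add(&hist[b], 1);           /* permission to hist[b] is held
                                                by the invariant, not the thread */
}
```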

    On Models and Code: A Unified Approach to Support Large-Scale Deductive Program Verification

    Despite the substantial progress in the area of deductive program verification over recent years, it remains a challenge to use deductive verification on large-scale industrial applications. In this abstract, I analyse why this is the case, and I argue that to solve it, we need to soften the border between models and code. This has two important advantages: (1) it would make it easier to reason about the high-level behaviour of programs using deductive verification, and (2) it would allow reasoning about incomplete applications during the development process. I discuss how the first steps towards this goal are supported by verification techniques within the VerCors project, and I sketch the future steps that are necessary to realise this goal.

    Verifying Class Invariants in Concurrent Programs

    Class invariants are a highly useful feature for the verification of object-oriented programs, because they can be used to capture all valid object states. In a sequential setting, the validity of class invariants is typically described in terms of a visible state semantics: invariants only have to hold whenever a method begins or ends execution, and they may be broken inside a method body. In a concurrent setting, however, this semantics is no longer usable, because due to thread interleavings any program state is potentially a visible state. In this paper we present a new approach for reasoning about class invariants in multithreaded programs. We allow a thread to explicitly break an invariant at specific program locations, while ensuring that no other thread can observe the broken invariant. We develop our technique in a permission-based separation logic environment. However, we deviate from separation logic's standard rules and allow a class invariant to express properties over shared memory locations (the invariant footprint), independently of the permissions on these locations. In this way, a thread may break or reestablish an invariant without holding permissions to all locations in its footprint. To enable modular verification, we adopt the restrictions of Müller's ownership-based type system.
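
    A lock-based C analogue conveys the intuition (the paper itself targets object-oriented programs and, unlike this sketch, does not require holding permissions to the whole invariant footprint; the comments are informal, not its annotation syntax).

```c
#include <pthread.h>

/* Informal class invariant: lo <= hi. */
typedef struct {
    pthread_mutex_t m;
    int lo, hi;
} range_t;

void adjust_upper(range_t *r, int d) {
    pthread_mutex_lock(&r->m);
    r->hi += d;            /* the invariant may be broken here (if d < 0)... */
    if (r->hi < r->lo)     /* ...but no other thread can observe the broken  */
        r->hi = r->lo;     /* state, and it is reestablished before unlock.  */
    pthread_mutex_unlock(&r->m);
}
```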

    Automating Deductive Verification for Weak-Memory Programs

    Writing correct programs for weak memory models such as the C11 memory model is challenging because of the weak consistency guarantees these models provide. The first program logics for the verification of such programs have recently been proposed, but their use has so far been limited to manual proofs. Automating proofs in these logics via first-order solvers is non-trivial, due to reasoning features such as higher-order assertions, modalities and rich permission resources. In this paper, we provide the first implementation of a weak memory program logic using existing deductive verification tools. We tackle three recent program logics: Relaxed Separation Logic and two forms of Fenced Separation Logic, and show how these can be encoded using the Viper verification infrastructure. In doing so, we illustrate several novel encoding techniques which could be employed for other logics. Our work is implemented, and has been evaluated on examples from existing papers as well as the Facebook open-source Folly library. (Extended version of a TACAS 2018 publication.)
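
    A classic C11 message-passing idiom of the kind these logics are designed to verify is sketched below (the variable names are ours, for illustration only).

```c
#include <stdatomic.h>

int        data = 0;                      /* plain, non-atomic location */
atomic_int flag = ATOMIC_VAR_INIT(0);     /* release/acquire atomic     */

void producer(void) {
    data = 42;                                              /* plain write */
    atomic_store_explicit(&flag, 1, memory_order_release);  /* publish it  */
}

void consumer(void) {
    if (atomic_load_explicit(&flag, memory_order_acquire) == 1) {
        /* The acquire load synchronises with the release store, so this
         * plain read is race-free and must observe 42. RSL/FSL-style
         * logics prove this by attaching an ownership assertion to flag. */
        int x = data;
        (void)x;
    }
}
```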

    Towards Scientific Incident Response

    A scientific incident analysis is one with a methodical, justifiable approach to the human decision-making process. Incident analysis is a good target for additional rigor because it is the most human-intensive part of incident response. Our goal is to provide the tools necessary for specifying precisely the reasoning process in incident analysis. Such tools are lacking, and are a necessary (though not sufficient) component of a more scientific analysis process. To reach this goal, we adapt tools from program verification that can capture and test abductive reasoning. As Charles Peirce coined the term in 1900, “Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea.” We reference canonical examples as paradigms of decision-making during analysis. With these examples in mind, we design a logic capable of expressing decision-making during incident analysis. As a result, we can express, in machine-readable and precise language, the abductive hypotheses that an analyst makes, and the results of evaluating them. This is beneficial because it opens up the possibility of genuinely comparing analyst processes without revealing sensitive system details, as well as a path towards improved decision support via limited automation.

    Lambda Definability with Sums via Grothendieck Logical Relations

    We introduce a notion of Grothendieck logical relation and use it to characterise the definability of morphisms in stable bicartesian closed categories by terms of the simply-typed lambda calculus with finite products and finite sums. Our techniques are based on concepts from topos theory; however, our exposition is elementary. Introduction. The use of logical relations as a tool for characterising the λ-definable elements in a model of the simply-typed λ-calculus originated in the work of Plotkin [10], who obtained such a characterisation of the definable elements in the full type hierarchy using a notion of Kripke logical relation. Subsequently, the more general notion of a Kripke logical relation of varying arity was developed by Jung and Tiuryn, and shown to characterise the definable elements in any Henkin model [4]. Although not emphasised in [4], relations of varying arity are powerful enough to characterise relative definability with respect to any given set of elements consider…
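
    For orientation, the clause of an ordinary Kripke logical relation at function type has roughly the following shape (a generic sketch, not the Grothendieck refinement developed in the paper):

```latex
% Generic sketch: a world-indexed relation R, with the usual clause at
% function type -- f is related at world w iff it maps related arguments
% to related results at every future world w'.
R^{\sigma \to \tau}_{w}(f)
  \iff
  \forall w' \geq w.\ \forall a.\ R^{\sigma}_{w'}(a) \Rightarrow R^{\tau}_{w'}(f\,a)
```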

    Local reasoning about the presence of bugs: Incorrectness Separation Logic

    There has been a large body of work on local reasoning for proving the absence of bugs, but none for proving their presence. We present a new formal framework for local reasoning about the presence of bugs, building on two complementary foundations: (1) separation logic and (2) incorrectness logic. We explore the theory of this new incorrectness separation logic (ISL), and use it to derive a begin-anywhere, intra-procedural symbolic execution analysis that has no false positives by construction. In so doing, we take a step towards transferring modular, scalable techniques from the world of program verification to bug catching.
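
    The sketch below shows the kind of bug whose presence such an analysis can prove; the triple in the comment uses informal notation rather than the paper's exact syntax.

```c
#include <stdlib.h>

int use_after_free(int *p) {
    free(p);
    /* Informal ISL-style under-approximate triple:
     *   [ p deallocated ]  *p  [ err: p deallocated ]
     * i.e. from any state in which p has been freed, executing the read
     * genuinely reaches an error state -- a true positive by construction. */
    return *p;   /* use-after-free */
}
```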