29,879 research outputs found

    Toward Scalable Verification for Safety-Critical Deep Networks

    The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state of the art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that make deep learning systems more amenable to verification.

    SafeDeep: A Scalable Robustness Verification Framework for Deep Neural Networks

    State-of-the-art machine learning techniques come with limited formal correctness guarantees, if any at all, as adversarial examples in the deep learning domain have demonstrated. To address this challenge, we propose a scalable robustness verification framework for Deep Neural Networks (DNNs). The framework relies on Linear Programming (LP) engines and builds on decades of advances in the field for analyzing convex approximations of the original network. The key insight is the on-demand, incremental refinement of these convex approximations. This refinement can be parallelized, making the framework even more scalable. We have implemented a prototype tool to verify the robustness of a large number of DNNs used in epileptic seizure detection, and we have compared the results with those obtained by two state-of-the-art tools for the verification of DNNs. We show that our framework is consistently more precise than the over-approximation-based tool ERAN and more scalable than the SMT-based tool Reluplex.
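    A rough sketch of the LP-based convex-approximation idea appears below: it lower-bounds the output of a tiny ReLU network over an L-infinity input box using SciPy's LP solver and the standard "triangle" relaxation of ReLU. The network weights, the input region, and the assumption of nonnegative output weights are illustrative; this is not the SafeDeep tool itself, and it omits the paper's incremental refinement step.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: y = W2 @ relu(W1 @ x + b1) + b2 (2 inputs, 2 hidden units, 1 output).
# W2 is nonnegative here, so lower facets of the ReLU relaxation suffice for a
# sound lower bound on y.
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([1.0, 1.0])
b2 = 0.3

x0, eps = np.array([0.5, 0.5]), 0.1        # centre and L-inf radius of the input box
xl, xu = x0 - eps, x0 + eps

# Interval arithmetic gives pre-activation bounds l <= W1 @ x + b1 <= u.
l = W1 @ x0 + b1 - np.abs(W1) @ np.full(2, eps)
u = W1 @ x0 + b1 + np.abs(W1) @ np.full(2, eps)

# LP variables: [x_0, x_1, h_0, h_1], where h over-approximates relu(W1 @ x + b1)
# via the "triangle" relaxation of each ReLU.
A_ub, b_ub = [], []
for i in range(2):
    # h_i >= (W1 @ x + b1)_i, rewritten as (W1 @ x)_i - h_i <= -b1_i
    row = np.zeros(4); row[:2] = W1[i]; row[2 + i] = -1.0
    A_ub.append(row); b_ub.append(-b1[i])
    if l[i] < 0 < u[i]:                    # unstable neuron: add the upper facet
        s = u[i] / (u[i] - l[i])
        # h_i <= s * ((W1 @ x + b1)_i - l_i)
        row = np.zeros(4); row[:2] = -s * W1[i]; row[2 + i] = 1.0
        A_ub.append(row); b_ub.append(s * (b1[i] - l[i]))

bounds = [(xl[0], xu[0]), (xl[1], xu[1]), (0, None), (0, None)]
c = np.concatenate([np.zeros(2), W2])      # minimise W2 @ h (y minus the bias)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
y_min = res.fun + b2
print("verified: y > 0" if y_min > 0 else f"inconclusive (lower bound {y_min:.3f})")
```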

    Automating Verification of State Machines with Reactive Designs and Isabelle/UTP

    State-machine based notations are ubiquitous in the description of component systems, particularly in the robotics domain. To ensure these systems are safe and predictable, formal verification techniques are important, and can be cost-effective if they are both automated and scalable. In this paper, we present a verification approach for a diagrammatic state machine language that utilises theorem proving and a denotational semantics based on Unifying Theories of Programming (UTP). We provide the necessary theory to underpin state machines (including induction theorems for iterative processes), mechanise an action language for states and transitions, and use these to formalise the semantics. We then describe the verification approach, which supports infinite-state systems, and exemplify it with a fully automated deadlock-freedom check. The work has been mechanised in our proof tool, Isabelle/UTP, and so also illustrates the use of UTP to build practical verification tools. Comment: 18 pages, 16th Intl. Conf. on Formal Aspects of Component Software (FACS 2018), October 2018, Pohang, South Korea.
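    The paper's verification is deductive, carried out by theorem proving in Isabelle/UTP and applicable to infinite-state systems. Purely as an illustration of the property being checked, the sketch below performs a deadlock-freedom check (every reachable state enables at least one transition) by explicit-state search over a small, hypothetical finite state machine.

```python
from collections import deque

# Hypothetical transition relation: state -> list of (event, successor state).
transitions = {
    "Idle":     [("start", "Moving")],
    "Moving":   [("obstacle", "Avoiding"), ("stop", "Idle")],
    "Avoiding": [("clear", "Moving")],
}

def deadlock_free(initial):
    """Check that every state reachable from `initial` enables a transition."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        succs = transitions.get(s, [])
        if not succs:
            return False, s              # reachable state with no outgoing event
        for _, t in succs:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None

ok, witness = deadlock_free("Idle")
print("deadlock-free" if ok else f"deadlock reachable at state {witness}")
```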

    Verification of Multi-Agent Properties in Electronic Voting: A Case Study

    Formal verification of multi-agent systems is hard, both theoretically and in practice. In particular, studies that use a single verification technique typically show limited efficiency and can handle only toy examples. Here, we propose some new techniques and combine them with several recently developed ones to see what progress can be achieved for a real-life scenario. Namely, we use fixpoint approximation, domination-based strategy search, partial order reduction, and parallelization to verify heterogeneous scalable models of the Selene e-voting protocol. The experimental results show that the combination allows us to verify requirements for much more sophisticated models than previously possible.
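    As a toy illustration of the fixpoint-approximation technique mentioned above, the sketch below iterates a one-step "can enforce" operator to a least fixpoint, computing the states from which a coalition has a strategy to reach a goal. The game model is hypothetical and unrelated to Selene; it only shows the shape of the computation.

```python
# moves[state] maps a coalition action to the set of possible successor
# states (the nondeterminism is resolved by the opponent/environment).
moves = {
    0: {"a": {1, 2}, "b": {0}},
    1: {"a": {3},    "b": {2}},
    2: {"a": {3},    "b": {0}},
    3: {},                                   # goal state (terminal)
}
goal = {3}

def can_enforce(target):
    """States with some action whose every possible outcome lands in `target`."""
    return {s for s, acts in moves.items()
            if any(succ <= target for succ in acts.values())}

# Least fixpoint: win = goal U can_enforce(win), iterated to stability.
win = set(goal)
while True:
    new = win | can_enforce(win)
    if new == win:
        break
    win = new
print("coalition can enforce the goal from:", sorted(win))   # [0, 1, 2, 3]
```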

    Compositional Verification for Autonomous Systems with Deep Learning Components

    As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments and require data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assume-guarantee reasoning, whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.
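    The sketch below renders the assume-guarantee idea as executable input/output predicates. The component names and the predicates are hypothetical, and the paper's contracts are discharged by formal reasoning rather than by runtime checks as here; the point is only that a component's guarantee is claimed conditionally on its assumption, and one component's guarantee discharges the next one's assumption.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assume: Callable[[dict], bool]            # what the component expects of its inputs
    guarantee: Callable[[dict, dict], bool]   # what it promises about its outputs

# Hypothetical perception component: on a sufficiently bright image, its
# distance estimate is within 0.5 m of the true distance.
perception = Contract(
    assume=lambda i: i["brightness"] > 0.2,
    guarantee=lambda i, o: abs(o["dist_est"] - i["dist_true"]) <= 0.5,
)

# Hypothetical controller: given a nonnegative distance estimate, it commands
# zero speed unless the estimated obstacle distance exceeds 1 m.
controller = Contract(
    assume=lambda i: i["dist_est"] >= 0.0,
    guarantee=lambda i, o: o["speed"] == 0.0 or i["dist_est"] > 1.0,
)

def check(contract, inputs, outputs):
    if not contract.assume(inputs):
        return "assumption violated (discharge against the upstream guarantee)"
    return "ok" if contract.guarantee(inputs, outputs) else "guarantee violated"

print(check(perception, {"brightness": 0.8, "dist_true": 3.0}, {"dist_est": 3.2}))
print(check(controller, {"dist_est": 3.2}, {"speed": 2.0}))
```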

    Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory

    Guaranteeing safe behaviour of reinforcement learning (RL) policies poses significant challenges for safety-critical applications, despite RL's generality and scalability. To address this, we propose a new approach that applies verification methods from control theory to learned value functions. By analyzing task structures for safety preservation, we formalize original theorems that establish links between value functions and control barrier functions. Further, we propose novel metrics for verifying value functions in safe control tasks, together with practical implementation details that improve learning. Our work presents a novel method for certificate learning, which unlocks a diversity of verification techniques from control theory for RL policies, and marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
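    As a numerical illustration of the value-function/barrier-function link (not the paper's formal theorems), the sketch below samples states and checks a standard discrete-time barrier condition, h(x') >= (1 - alpha) h(x), for a toy system with a toy candidate barrier h. The dynamics, policy, and h are all assumptions, and sampling can only falsify the condition; it cannot prove safety.

```python
import numpy as np

alpha = 0.1                                   # decay gain in the barrier condition

def h(x):                                     # candidate barrier: h(x) >= 0 <=> safe
    return 1.0 - np.dot(x, x)                 # safe set = unit disc

def policy(x):                                # toy stabilising controller
    return -0.5 * x

def step(x, u):                               # toy discrete-time dynamics
    return x + 0.1 * u

rng = np.random.default_rng(0)
violations = 0
for _ in range(10_000):
    x = rng.uniform(-1.0, 1.0, size=2)
    if h(x) < 0:                              # only check states inside the safe set
        continue
    xn = step(x, policy(x))
    # Discrete-time barrier condition: h(x') >= (1 - alpha) * h(x).
    if h(xn) < (1 - alpha) * h(x):
        violations += 1
print("barrier-condition violations found:", violations)   # 0 for this toy system
```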

    Local reasoning about the presence of bugs: Incorrectness Separation Logic

    There has been a large body of work on local reasoning for proving the absence of bugs, but none for proving their presence. We present a new formal framework for local reasoning about the presence of bugs, building on two complementary foundations: 1) separation logic and 2) incorrectness logic. We explore the theory of this new incorrectness separation logic (ISL), and use it to derive a begin-anywhere, intra-procedural symbolic execution analysis that has no false positives by construction. In so doing, we take a step towards transferring modular, scalable techniques from the world of program verification to bug catching.
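    The "no false positives by construction" property can be illustrated by a simple discipline: a bug is reported only together with a concrete input that actually triggers it. The toy enumerative checker below follows that discipline; ISL itself achieves the same guarantee symbolically, through under-approximate reasoning rather than testing, and the program and inputs here are hypothetical.

```python
def program(x):
    table = [1, 2, 3]
    if x > 2:
        return table[x]              # latent out-of-bounds access for x >= 3
    return table[0]

def find_bug(fn, candidate_inputs):
    """Report a bug only with a concrete input that actually triggers it."""
    for x in candidate_inputs:
        try:
            fn(x)
        except Exception as err:     # concrete witness: cannot be a false positive
            return x, err
    return None

witness = find_bug(program, range(-5, 6))
if witness:
    x, err = witness
    print(f"bug witnessed at input {x}: {err!r}")
else:
    print("no bug found on the tried inputs")
```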

    Hoare-style Specifications as Correctness Conditions for Non-linearizable Concurrent Objects

    Designing scalable concurrent objects, which can be efficiently used on multicore processors, often requires one to abandon standard specification techniques, such as linearizability, in favor of more relaxed consistency requirements. However, the variety of alternative correctness conditions makes it difficult to choose which one to employ in a particular case, and to compose them when using objects whose behaviors are specified via different criteria. The lack of syntactic verification methods for most of these criteria poses challenges in their systematic adoption and application. In this paper, we argue for using Hoare-style program logics as an alternative and uniform approach for specification and compositional formal verification of safety properties for concurrent objects and their client programs. Through a series of case studies, we demonstrate how an existing program logic for concurrency can be employed off-the-shelf to capture important state and history invariants, allowing one to explicitly quantify over interference of environment threads and provide intuitive and expressive Hoare-style specifications for several non-linearizable concurrent objects that were previously specified only via dedicated correctness criteria. We illustrate the adequacy of our specifications by verifying a number of concurrent client scenarios that make use of the previously specified concurrent objects, capturing the essence of such correctness conditions as concurrency-aware linearizability, quiescent consistency, and quantitative quiescent consistency. All examples described in this paper are verified mechanically in Coq. Comment: 18 pages.
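    As a dynamic illustration of the interval-style, history-based bounds such specifications capture, the sketch below asserts at runtime that every read of a toy concurrent counter returns a value between the number of increments completed before the read began and the number started before it ended. The counter and the assertion are illustrative stand-ins; the paper's specifications are Hoare-style and verified in Coq, not checked by testing.

```python
import threading

class Counter:
    """Toy counter whose read is unsynchronised, so reads may be 'stale'."""
    def __init__(self):
        self.v = 0
        self.started = 0
        self.completed = 0
        self.lock = threading.Lock()

    def incr(self):
        with self.lock:
            self.started += 1
            self.v += 1
            self.completed += 1

    def read(self):
        lo = self.completed          # increments completed before the read began
        val = self.v                 # unsynchronised snapshot of the value
        hi = self.started            # increments started by the time the read ends
        # History-based spec: every read must satisfy lo <= val <= hi.
        assert lo <= val <= hi, (lo, val, hi)
        return val

c = Counter()
writers = [threading.Thread(target=lambda: [c.incr() for _ in range(1000)])
           for _ in range(4)]
readers = [threading.Thread(target=lambda: [c.read() for _ in range(1000)])
           for _ in range(2)]
for t in writers + readers: t.start()
for t in writers + readers: t.join()
print("final value:", c.read())      # 4000: all increments observed at quiescence
```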