
    Computer-aided verification in mechanism design

    In mechanism design, the gold standard solution concepts are dominant strategy incentive compatibility and Bayesian incentive compatibility. These solution concepts relieve the (possibly unsophisticated) bidders from the need to engage in complicated strategizing. While incentive properties are simple to state, their proofs are specific to the mechanism and can be quite complex. This raises two concerns. From a practical perspective, checking a complex proof can be a tedious process, often requiring experts knowledgeable in mechanism design. Furthermore, from a modeling perspective, if unsophisticated agents are unconvinced of incentive properties, they may strategize in unpredictable ways. To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including essential use of randomness from both the execution of the mechanism and from the prior type distributions. As an immediate consequence, our work also formalizes Bayesian incentive compatibility for the entire family of mechanisms derived via this reduction. Finally, as an intermediate step in our formalization, we provide the first formal verification of incentive compatibility for the celebrated Vickrey-Clarke-Groves mechanism.
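
    As a rough illustration of the incentive property being verified (a toy sketch, not the authors' formal proof or the Hartline-Kleinberg-Malekian reduction), the following Python snippet implements the Vickrey-Clarke-Groves mechanism for a single-item auction, where it coincides with the second-price auction, and exhaustively checks dominant strategy incentive compatibility on a small grid of valuations and unilateral deviations.

        from itertools import product

        def vcg_single_item(bids):
            # VCG for one item: allocate to the highest bid and charge the
            # externality imposed on the others, i.e. the highest competing bid.
            winner = max(range(len(bids)), key=lambda i: bids[i])
            others = [b for i, b in enumerate(bids) if i != winner]
            payment = max(others) if others else 0
            return winner, payment

        def utility(valuation, bidder, bids):
            winner, payment = vcg_single_item(bids)
            return valuation - payment if winner == bidder else 0

        # Exhaustive check of dominant strategy incentive compatibility on a
        # small discrete grid: for every valuation profile and every unilateral
        # deviation, bidding truthfully is never worse for the deviating bidder.
        grid = range(5)
        for values in product(grid, repeat=3):
            for bidder, deviation in product(range(3), grid):
                truthful = list(values)
                deviated = list(values)
                deviated[bidder] = deviation
                assert utility(values[bidder], bidder, truthful) >= \
                       utility(values[bidder], bidder, deviated)
        print("truthful bidding is a dominant strategy on the sampled grid")

    An exhaustive check over a finite grid is, of course, exactly the kind of argument the paper replaces with a machine-checked proof that covers all type profiles.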

    Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems

    Due to the increasing usage of machine learning (ML) techniques in security- and safety-critical domains, such as autonomous systems and medical diagnosis, ensuring correct behavior of ML systems, especially for different corner cases, is of growing importance. In this paper, we propose a generic framework for evaluating security and robustness of ML systems using different real-world safety properties. We further design, implement and evaluate VeriVis, a scalable methodology that can verify a diverse set of safety properties for state-of-the-art computer vision systems with only blackbox access. VeriVis leverages different input space reduction techniques for efficient verification of different safety properties. VeriVis is able to find thousands of safety violations in fifteen state-of-the-art computer vision systems, including ten Deep Neural Networks (DNNs) such as Inception-v3 and Nvidia's Dave self-driving system with thousands of neurons, as well as five commercial third-party vision APIs including Google vision and Clarifai, for twelve different safety properties. Furthermore, VeriVis can successfully verify local safety properties, on average, for around 31.7% of the test images. VeriVis finds up to 64.8x more violations than existing gradient-based methods that, unlike VeriVis, cannot ensure non-existence of any violations. Finally, we show that retraining using the safety violations detected by VeriVis can reduce the average number of violations by up to 60.2%. Comment: 16 pages, 11 tables, 11 figures.
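
    The abstract describes VeriVis only at a high level; as a hedged sketch of the general idea (the classifier, the transformation, and all names below are illustrative assumptions, not the paper's API), verifying a safety property with blackbox access amounts to enumerating a finite, discretized transformation space and querying the model on every transformed input.

        def check_safety_property(model, image, transform, params):
            # Exhaustive check over a finite transformation space: the blackbox
            # prediction must not change under any parameter value. An empty
            # result means the property holds for this image and discretization.
            reference = model(image)
            return [p for p in params if model(transform(image, p)) != reference]

        # Illustrative stand-ins (not the paper's models, datasets or properties)
        def brightness_shift(image, delta):
            # grayscale image as a list of rows; clamp each pixel to [0, 255]
            return [[min(255, max(0, px + delta)) for px in row] for row in image]

        def toy_classifier(image):
            pixels = [px for row in image for px in row]
            return "bright" if sum(pixels) / len(pixels) > 128 else "dark"

        image = [[120, 130], [125, 135]]   # a 2x2 grayscale "image"
        shifts = range(-10, 11)            # discretized brightness parameters
        print(check_safety_property(toy_classifier, image, brightness_shift, shifts))

    Because the parameter space is finite after discretization, an empty violation list is a verification result rather than a sampled estimate, which is the contrast the abstract draws with gradient-based methods.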

    Metamodel Instance Generation: A systematic literature review

    Modelling and thus metamodelling have become increasingly important in Software Engineering through the use of Model Driven Engineering. In this paper we present a systematic literature review of instance generation techniques for metamodels, i.e. the process of automatically generating models from a given metamodel. We start by presenting a set of research questions that our review is intended to answer. We then identify the main topics that are related to metamodel instance generation techniques, and use these to initiate our literature search. This search resulted in the identification of 34 key papers in the area, and each of these is reviewed here and discussed in detail. The outcome is that we are able to identify a knowledge gap in this field, and we offer suggestions as to some potential directions for future research. Comment: 25 pages.

    Formal Verification of Security Protocol Implementations: A Survey

    Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. In these approaches, the cryptographic libraries are assumed to correctly implement their abstract models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.

    Verification of Imperative Programs by Constraint Logic Program Transformation

    We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
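
    The method itself works by transforming CLP clauses, which is not reproduced here; the following Python sketch only illustrates the least-model reading of the encoding, namely that 'incorrect' holds exactly when an error configuration is reachable from an initial one. The toy imperative program, its transition relation and the error condition are invented for the example.

        def incorrect(initial, step, is_error):
            # Least-fixpoint reachability: the analogue of the CLP predicate
            # 'incorrect' holding in the least model of the encoding T.
            seen, frontier = set(), [initial]
            while frontier:
                conf = frontier.pop()
                if conf in seen:
                    continue
                seen.add(conf)
                if is_error(conf):
                    return True            # prog reaches an error configuration
                frontier.extend(step(conf))
            return False                   # no error configuration is reachable

        # Toy program 'prog' (an assumption for illustration):
        #   pc 0: x := 0    pc 1: while x < 5 do x := x + 1    pc 2: halt
        def step(conf):
            pc, x = conf
            if pc == 0:
                return [(1, 0)]
            if pc == 1:
                return [(1, x + 1)] if x < 5 else [(2, x)]
            return []                      # halted: no successors

        # Error configuration: halting with x != 5 (the property to verify)
        print(incorrect((0, 0), step, lambda c: c[0] == 2 and c[1] != 5))

    This naive search only terminates because the toy state space is finite; in general it need not, which is why the authors instead work with symbolic transformations such as unfolding, folding, specialization and generalization.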

    A graph-based aspect interference detection approach for UML-based aspect-oriented models

    Aspect Oriented Modeling (AOM) techniques facilitate separate modeling of concerns and allow for a more flexible composition of these than traditional modeling techniques. While this improves the understandability of each submodel, automated tool support is required in order to reason about the behavior of the composed system and to detect conflicts among submodels. Current techniques for conflict detection among aspects generally have at least one of the following weaknesses: they require the abstract semantics to be modeled manually for each system, or they derive the system semantics from code, assuming one specific aspect-oriented language. Defining an extra semantics model for verification bears the risk of inconsistencies between the actual and the verified design; verifying only at the implementation level hinders fixing errors in earlier phases. We propose a technique for fully automatic detection of conflicts between aspects at the model level; more specifically, our approach works on UML models with an extension for modeling pointcuts and advice. As back-end we use a graph-based model checker, for which we have defined an operational semantics of UML diagrams, pointcuts and advice. In order to simulate the system, we automatically derive a graph model from the diagrams. The result is another graph, which represents all possible program executions, and which can be verified against a declarative specification of invariants. To demonstrate our approach, we discuss a UML-based AOM model of the "Crisis Management System" and a possible design and evolution scenario. The complexity of the system makes conflicts among composed aspects hard to detect: already in the case of two simulated aspects, the state space contains 623 different states and 9 different execution paths. Nevertheless, when the right pruning methods are used, the state space grows only linearly with the number of aspects; therefore, the automatic analysis scales.
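
    As a tool-independent illustration of the idea (the shared state, the two pieces of advice and the invariant below are invented, and the sketch does not use the graph-based back-end mentioned above), conflict detection amounts to exploring every interleaving of the composed advice and checking a declarative invariant in each reachable state.

        from itertools import permutations

        def advice_log(state):
            new = dict(state)
            new["logged"] = True
            return new

        def advice_encrypt(state):
            new = dict(state)
            new["encrypted"] = True
            new["logged"] = False          # clears the flag: potential conflict
            return new

        def invariant(state):
            # declarative specification: encrypted reports must stay logged
            return not state["encrypted"] or state["logged"]

        advices = {"log": advice_log, "encrypt": advice_encrypt}
        initial = {"logged": False, "encrypted": False}

        # Explore every execution path of the composed aspects and check the
        # invariant in each intermediate and final state.
        for order in permutations(advices):
            state, trace = initial, []
            for name in order:
                state = advices[name](state)
                trace.append(name)
                if not invariant(state):
                    print("conflict on path", " -> ".join(trace), state)

    Real state spaces are much larger (623 states for only two aspects in the case study), which is why the pruning methods mentioned above matter for scalability.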

    Sharpening the Cutting Edge: Corporate Action for a Strong, Low-Carbon Economy

    Outlines lessons learned from early efforts to create a low-carbon economy, current and emerging best practices, and next steps, including climate change metrics, greenhouse gas reporting, effective climate policy, and long-term investment choices.

    Verifying Temporal Properties of Reactive Systems by Transformation

    We show how program transformation techniques can be used for the verification of both safety and liveness properties of reactive systems. In particular, we show how the program transformation technique distillation can be used to transform reactive systems specified in a functional language into a simplified form that can subsequently be analysed to verify temporal properties of the systems. Example systems which are intended to model mutual exclusion are analysed using these techniques with respect to both safety (mutual exclusion) and liveness (non-starvation), with the errors they contain being correctly identified. Comment: In Proceedings VPT 2015, arXiv:1512.02215. This work was supported, in part, by Science Foundation Ireland grant 10/CE/I1855 to Lero - the Irish Software Engineering Research Centre (www.lero.ie), and by the School of Computing, Dublin City University.
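
    Distillation itself transforms the functional specification and is not reproduced here; as a much simpler stand-in, the Python sketch below exhaustively explores the state space of a deliberately broken two-process mutual exclusion attempt (check the other flag, then raise your own: an invented example) and finds the state violating the safety property, mirroring the kind of error the paper's analysis identifies.

        from collections import deque

        # Program counters of a naive, broken lock for two processes:
        #   0: wait until the other flag is down     1: raise own flag
        #   2: critical section                      3: lower flag, restart
        def successors(state):
            pcs, flags = state
            for i in (0, 1):
                if pcs[i] == 0 and not flags[1 - i]:
                    yield set_at(pcs, i, 1), flags
                elif pcs[i] == 1:
                    yield set_at(pcs, i, 2), set_at(flags, i, True)
                elif pcs[i] == 2:
                    yield set_at(pcs, i, 3), flags
                elif pcs[i] == 3:
                    yield set_at(pcs, i, 0), set_at(flags, i, False)

        def set_at(t, i, v):
            return tuple(v if j == i else x for j, x in enumerate(t))

        def find_safety_violation(initial):
            # Breadth-first search over all interleavings; the safety property
            # is that the two processes are never both at pc 2 (critical section).
            seen, queue = {initial}, deque([initial])
            while queue:
                state = queue.popleft()
                if state[0] == (2, 2):
                    return state               # counterexample: both critical
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return None                        # safety property holds

        print(find_safety_violation(((0, 0), (False, False))))

    Liveness properties such as non-starvation additionally need fairness assumptions and cycle detection, which this toy sketch does not attempt.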