
    Abstraction in Model Checking Multi-Agent Systems

    This thesis presents existential abstraction techniques for multi-agent systems that preserve temporal-epistemic specifications. Multi-agent systems, defined in the interpreted systems framework, are abstracted by collapsing the local states and actions of each agent. The goal of abstraction is to reduce the state space of the system under investigation in order to cope with the state-explosion problem that impedes the verification of very large systems. Theoretical results show that the resulting abstract system simulates the concrete one. Preservation and correctness theorems are proved in this thesis, assuring that if a temporal-epistemic formula holds on the abstract system, then it also holds on the concrete one. These results make it possible to verify temporal-epistemic formulas on abstract systems instead of the concrete ones, thereby saving time and space in the verification process. To assess the applicability and effectiveness of the abstraction method, two implementations are presented: a tool for data-abstraction and one for variable-abstraction. The first technique achieves a state-space reduction by collapsing the values of the domains of the system variables. The second technique reduces the size of the model by collapsing groups of two or more variables, so the abstract system has fewer variables; each new variable takes values in a new domain built automatically by the tool. Both implementations perform abstraction fully automatically. They operate on multi-agent models specified in a formal language called ISPL (Interpreted System Programming Language), the input language for MCMAS, a model checker for multi-agent systems. The output is again an ISPL file, with a reduced state space. The thesis also presents several temporal-epistemic examples to evaluate both techniques. The experiments show good results and point to the attractiveness of the temporal-epistemic abstraction techniques developed in this thesis. In particular, the contributions of the thesis are the following:
    • We produced theoretical correctness and preservation results for existential abstraction.
    • We introduced two algorithms to perform data-abstraction and variable-abstraction on multi-agent systems.
    • We developed two software toolkits for automatic abstraction of multi-agent scenarios: one performing data-abstraction and the other performing variable-abstraction.
    • We evaluated the methodologies introduced in this thesis by running experiments on several multi-agent system examples.
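    A minimal sketch of the data-abstraction idea described above, written in Python rather than ISPL: concrete domain values are collapsed by a surjection h into a small abstract domain, and transitions are lifted existentially, so an abstract transition exists whenever some concrete transition maps onto it. The counter example and the three-valued sign domain are illustrative assumptions, not taken from the thesis.

```python
# Illustrative data-abstraction: collapse a concrete integer domain into a
# small abstract domain via a surjection h, then lift transitions existentially.

def h(value: int) -> str:
    """Abstraction map: collapse concrete values into sign classes (assumed example)."""
    if value < 0:
        return "neg"
    if value == 0:
        return "zero"
    return "pos"

def abstract_transitions(concrete_transitions):
    """Existential lifting: an abstract transition exists iff some concrete one maps onto it."""
    return {(h(s), h(t)) for (s, t) in concrete_transitions}

# Concrete system: a counter decremented towards zero.
concrete = {(v, v - 1) for v in range(-2, 3)}
print(abstract_transitions(concrete))
# e.g. {('pos', 'pos'), ('pos', 'zero'), ('zero', 'neg'), ('neg', 'neg')}
```

    By construction the abstract system simulates the concrete one, which is what makes universally quantified properties that hold on the abstraction carry over to the concrete model.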

    Computer-aided verification in mechanism design

    In mechanism design, the gold-standard solution concepts are dominant-strategy incentive compatibility and Bayesian incentive compatibility. These solution concepts relieve the (possibly unsophisticated) bidders of the need to engage in complicated strategizing. While incentive properties are simple to state, their proofs are specific to the mechanism and can be quite complex. This raises two concerns. From a practical perspective, checking a complex proof can be a tedious process, often requiring experts knowledgeable in mechanism design. Furthermore, from a modeling perspective, if unsophisticated agents are unconvinced of incentive properties, they may strategize in unpredictable ways. To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive-compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including the essential use of randomness from both the execution of the mechanism and the prior type distributions. As an immediate consequence, our work also formalizes Bayesian incentive compatibility for the entire family of mechanisms derived via this reduction. Finally, as an intermediate step in our formalization, we provide the first formal verification of incentive compatibility for the celebrated Vickrey-Clarke-Groves mechanism.
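    As background for the Vickrey-Clarke-Groves mechanism mentioned at the end of the abstract, here is a minimal Python sketch of the standard VCG rule for a single indivisible item (where it coincides with the second-price auction), not the formally verified development from the paper: the item goes to the highest bidder, who pays the externality imposed on the others.

```python
def vcg_single_item(bids):
    """Standard VCG rule for one item: winner is the highest bidder,
    payment is the highest competing bid (the externality imposed on others)."""
    winner = max(bids, key=bids.get)
    others = [b for agent, b in bids.items() if agent != winner]
    payment = max(others) if others else 0.0
    return winner, payment

# Truthful bidding is a dominant strategy here: misreporting can change the
# outcome only in cases where it makes the bidder worse off.
print(vcg_single_item({"alice": 10.0, "bob": 7.0, "carol": 4.0}))  # ('alice', 7.0)
```

    The incentive-compatibility proofs formalized in the paper establish exactly this kind of property, but mechanically and for far more general mechanisms.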

    Design and Optimisation of the FlyFast Front-end for Attribute-based Coordination

    Collective Adaptive Systems (CAS) consist of a large number of interacting objects. The design of such systems requires scalable analysis tools and methods, which necessarily have to rely on some form of approximation of the system's actual behaviour. Promising techniques are those based on mean-field approximation. The FlyFast model checker uses an on-the-fly algorithm for bounded PCTL model checking of selected individual(s) in the context of very large populations whose global behaviour is approximated using deterministic-limit mean-field techniques. Recently, a front-end for FlyFast has been proposed that provides a modelling language for predicate-based interaction, called PiFF (Predicate-based Interaction for FlyFast) in the sequel. In this paper we present details of the design of PiFF and an approach to state-space reduction based on probabilistic bisimulation for inhomogeneous DTMCs.
    Comment: In Proceedings QAPL 2017, arXiv:1707.0366
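    For orientation, the state-space reduction mentioned above rests on probabilistic bisimulation. A standard textbook formulation, stated for a time-inhomogeneous DTMC with step-indexed transition matrices, is sketched below; this is background notation rather than the paper's own definitions, and the paper's reduction may differ in detail.

```latex
% Standard probabilistic bisimulation for a time-inhomogeneous DTMC with
% step-indexed transition matrices P_k (background sketch, not the paper's text).
An equivalence relation $R \subseteq S \times S$ is a probabilistic bisimulation iff,
for all $(s,t) \in R$, all time steps $k$, and every equivalence class $C \in S/R$:
\[
  \sum_{s' \in C} P_k(s, s') \;=\; \sum_{t' \in C} P_k(t, t').
\]
States related by $R$ can then be lumped into a quotient DTMC over $S/R$, reducing
the state space while preserving the probabilities of bounded path properties.
```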

    Model checking learning agent systems using Promela with embedded C code and abstraction

    As autonomous systems become more prevalent, methods for their verification will become more widely used. Model checking is a formal verification technique that can help ensure the safety of autonomous systems, but in most cases it cannot be applied by novices, or in its straight "off-the-shelf" form. In order to be more widely applicable it is crucial that more sophisticated techniques are used, and are presented in a way that is reproducible by engineers and verifiers alike. In this paper we demonstrate in detail two techniques that are used to increase the power of model checking using the model checker SPIN. The first of these is the use of embedded C code within Promela specifications, in order to accurately reflect robot movement. The second is to use abstraction together with a simulation relation to allow us to verify multiple environments simultaneously. We apply these techniques to a fairly simple system in which a robot moves about a fixed circular environment and learns to avoid obstacles. The learning algorithm is inspired by the way that insects learn to avoid obstacles in response to pain signals received from their antennae. Crucially, we prove that our abstraction is sound for our example system, a step that is often omitted but is vital if formal verification is to be widely accepted as a useful and meaningful approach.
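    The simulation-relation argument sketched in the abstract can be checked mechanically on small finite models. The following Python sketch is an illustrative assumption rather than the paper's robot model: it checks that an abstraction function alpha induces a simulation, i.e. every concrete transition is matched by an abstract transition between the corresponding abstract states.

```python
def is_simulation(concrete_trans, abstract_trans, alpha):
    """Check that alpha : concrete state -> abstract state induces a simulation:
    for every concrete step s -> t, the abstraction has alpha(s) -> alpha(t)."""
    return all((alpha[s], alpha[t]) in abstract_trans for (s, t) in concrete_trans)

# Toy example: concrete positions 0..3 on a line, abstracted to {"near", "far"}.
alpha = {0: "near", 1: "near", 2: "far", 3: "far"}
concrete = {(0, 1), (1, 2), (2, 3), (3, 2)}
abstract = {("near", "near"), ("near", "far"), ("far", "far")}
print(is_simulation(concrete, abstract, alpha))  # True
```

    Proving such a relation sound is what licenses verifying one abstract model in place of many concrete environments.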

    Complexity and Unwinding for Intransitive Noninterference

    The paper considers several definitions of information flow security for intransitive policies from the point of view of the complexity of verifying whether a finite-state system is secure. The results are as follows: checking (i) P-security (Goguen and Meseguer), (ii) IP-security (Haigh and Young), and (iii) TA-security (van der Meyden) is in PTIME in each case, while checking TO-security (van der Meyden) and ITO-security (van der Meyden) is undecidable. The most important ingredients in the proofs of the PTIME upper bounds are new characterizations of the respective security notions; these also lead to new unwinding proof techniques that are shown to be sound and complete for these notions of security, and enable the algorithms to return simple counterexamples demonstrating insecurity. Our results for IP-security improve a previous doubly exponential bound of Hadj-Alouane et al.
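    As background for the unwinding techniques mentioned above, the classical Rushby-style unwinding conditions for intransitive noninterference can be stated as follows, for a machine with a step function, an output function, a policy relation and domain-indexed equivalences on states. This is the textbook formulation only; the sound and complete unwindings developed in the paper for P-, IP- and TA-security refine it for those specific notions.

```latex
% Classical Rushby-style unwinding conditions (background sketch): \sim_u is an
% equivalence on states for each security domain u, dom(a) is the domain of action a,
% and \rightsquigarrow is the (possibly intransitive) information-flow policy.
\begin{align*}
\text{(OC)}\quad  & s \sim_{\mathrm{dom}(a)} t \implies \mathrm{output}(s,a) = \mathrm{output}(t,a) \\
\text{(WSC)}\quad & s \sim_u t \;\wedge\; s \sim_{\mathrm{dom}(a)} t \implies \mathrm{step}(s,a) \sim_u \mathrm{step}(t,a) \\
\text{(LR)}\quad  & \mathrm{dom}(a) \not\rightsquigarrow u \implies s \sim_u \mathrm{step}(s,a)
\end{align*}
```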

    Symmetry Reduction Enables Model Checking of More Complex Emergent Behaviours of Swarm Navigation Algorithms

    The emergent global behaviours of robotic swarms are important for achieving their navigation task goals. These emergent behaviours can be verified for correctness through techniques like model checking, which exhaustively explores all possible behaviours based on a discrete model of the system, such as a swarm in a grid. A common problem in model checking is the state-space explosion that arises when the states of the model are numerous. We propose a novel implementation of symmetry reduction, encoding navigation algorithms relative to a reference, based on the symmetrical properties of swarms in grids. We applied the relative encoding to a swarm navigation algorithm, Alpha, modelled for the NuSMV model checker. A comparison of the state-space and verification results between an absolute (or global) and a relative encoding of the Alpha algorithm highlights the advantages of our approach, allowing model checking of larger grid sizes and numbers of robots and, consequently, verification of more complex emergent behaviours. For example, a property was verified for 3 robots with a maximum allowed grid size of 8x8 cells in the global encoding, whereas this size increased to 16x16 with the relative encoding. Also, the time to verify a property for a swarm of 3 robots in a 6x6 grid was reduced from almost 10 hours to only 7 minutes. Our approach is transferable to other swarm navigation algorithms.
    Comment: Accepted for presentation in Towards Autonomous Robotic Systems (TAROS) 2015, Liverpool, U
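    A minimal Python sketch of the relative-encoding idea: instead of storing absolute grid coordinates for every robot, positions are stored as offsets from a reference robot, so configurations that differ only by a translation collapse into one state. The function name and the translation-only symmetry are illustrative assumptions; the paper's NuSMV encoding of the Alpha algorithm handles the details differently.

```python
def relative_encoding(positions):
    """Encode robot positions as offsets from the first robot (the reference),
    so configurations differing only by a translation map to the same state."""
    ref_x, ref_y = positions[0]
    return tuple((x - ref_x, y - ref_y) for x, y in positions)

# Two absolute configurations that are translations of each other...
a = [(1, 1), (2, 3), (4, 1)]
b = [(5, 2), (6, 4), (8, 2)]
# ...collapse to a single abstract state under the relative encoding.
print(relative_encoding(a) == relative_encoding(b))  # True
```

    Collapsing such symmetric configurations is what shrinks the reachable state space and allows the larger grids and robot counts reported above.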