
    Model checking medium access control for sensor networks

    We describe verification of S-MAC, a medium access control protocol designed for wireless sensor networks, by means of the PRISM model checker. The S-MAC protocol is built on top of the IEEE 802.11 standard for wireless ad hoc networks and, as such, it uses the same randomised backoff procedure as a means to avoid collision. In order to minimise energy consumption, S-MAC periodically puts nodes into a sleep state. Synchronisation of the sleeping schedules is necessary for the nodes to be able to communicate. Intuitively, the energy saving obtained through a periodic sleep mechanism comes at the expense of performance. In previous work on S-MAC verification, a combination of analytical techniques and simulation was used to confirm the correctness of this intuition for a simplified (abstract) version of the protocol in which the initial schedule coordination phase is assumed correct. We show how we have used the PRISM model checker to verify the behaviour of S-MAC and compare it to that of IEEE 802.11.
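
    A minimal Python sketch of the shared randomised backoff idea: each node draws a backoff slot uniformly from a contention window, and a collision occurs when the earliest slot is drawn by more than one node. The window sizes and node count below are illustrative assumptions, not parameters from the paper, and this is a plain Monte Carlo estimate rather than the PRISM model the authors build.

        import random

        def collision_probability(contention_window=16, nodes=2, trials=100_000, seed=0):
            """Monte Carlo estimate of the chance that two or more nodes pick the
            same backoff slot, i.e. a collision in an 802.11-style randomised
            backoff of the kind S-MAC reuses. Parameters are illustrative only."""
            rng = random.Random(seed)
            collisions = 0
            for _ in range(trials):
                slots = [rng.randrange(contention_window) for _ in range(nodes)]
                # A collision occurs when the earliest slot is chosen by more than one node.
                if slots.count(min(slots)) > 1:
                    collisions += 1
            return collisions / trials

        if __name__ == "__main__":
            for cw in (8, 16, 32):
                print(f"window {cw}: estimated collision probability "
                      f"{collision_probability(contention_window=cw):.3f}")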

    Practical applications of probabilistic model checking to communication protocols

    Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA/CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.
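
    As a hedged illustration of the two implementation approaches the chapter contrasts, the sketch below computes the same reachability probability on a toy discrete-time Markov chain both numerically (by iterating the fixed-point equations, as a numerical engine such as PRISM would) and statistically (by sampling runs, as a simulation-based tool such as APMC would). The chain and its probabilities are made up for illustration and are not taken from the CSMA/CD case study.

        import random

        # Toy discrete-time Markov chain: states 0 (start), 1 (retry), 2 (success), 3 (failure).
        # Transition probabilities are illustrative assumptions only.
        P = {
            0: [(0.7, 2), (0.3, 1)],
            1: [(0.5, 2), (0.5, 3)],
            2: [(1.0, 2)],
            3: [(1.0, 3)],
        }
        TARGET = 2  # "success"

        def numerical(iters=1000):
            """Numerical approach: iterate the fixed-point equations for reachability."""
            x = {s: 0.0 for s in P}
            x[TARGET] = 1.0
            for _ in range(iters):
                x = {s: (1.0 if s == TARGET else sum(p * x[t] for p, t in P[s])) for s in P}
            return x[0]

        def simulation(samples=100_000, seed=0):
            """Statistical approach: estimate the same probability by sampling runs."""
            rng = random.Random(seed)
            hits = 0
            for _ in range(samples):
                s = 0
                while s not in (2, 3):
                    r, acc = rng.random(), 0.0
                    for p, t in P[s]:
                        acc += p
                        if r <= acc:
                            s = t
                            break
                hits += (s == TARGET)
            return hits / samples

        if __name__ == "__main__":
            print("numerical :", numerical())    # exact value is 0.7 + 0.3 * 0.5 = 0.85
            print("simulation:", simulation())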

    Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage

    The applicability of model checking is hindered by the state space explosion problem in combination with limited amounts of main memory. To extend its reach, the large available capacities of secondary storage such as hard disks can be exploited. Due to the specific performance characteristics of secondary storage technologies, specialised algorithms are required. In this paper, we present a technique to use secondary storage for probabilistic model checking of Markov decision processes. It combines state space exploration based on partitioning with a block-iterative variant of value iteration over the same partitions for the analysis of probabilistic reachability and expected-reward properties. A sparse matrix-like representation is used to store partitions on secondary storage in a compact format. All file accesses are sequential, and compression can be used without affecting runtime. The technique has been implemented within the Modest Toolset. We evaluate its performance on several benchmark models of up to 3.5 billion states. In the analysis of time-bounded properties on real-time models, our method neutralises the state space explosion induced by the time bound in its entirety.
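
    The following Python sketch illustrates the block-iterative flavour of value iteration described above on a toy in-memory MDP: each partition is iterated until it stabilises before moving to the next, and outer sweeps over the partitions repeat until nothing changes. The MDP, the partitioning and the convergence threshold are assumptions for illustration; the actual implementation streams sparse partitions sequentially from disk inside the Modest Toolset.

        # Toy MDP: state -> list of actions, each action a list of (probability, successor).
        # States, transitions and partitions are illustrative assumptions only.
        MDP = {
            0: [[(0.5, 1), (0.5, 2)], [(1.0, 3)]],
            1: [[(1.0, 4)]],
            2: [[(0.9, 4), (0.1, 0)]],
            3: [[(1.0, 3)]],          # absorbing "failure"
            4: [[(1.0, 4)]],          # absorbing "goal"
        }
        GOAL = {4}
        PARTITIONS = [[0, 1], [2, 3, 4]]   # in the paper these blocks live on secondary storage

        def block_value_iteration(eps=1e-8):
            """Maximum reachability probability via a block-iterative sweep: each block
            is iterated until it stabilises before moving on, and outer sweeps repeat
            until no block changes by more than eps."""
            v = {s: (1.0 if s in GOAL else 0.0) for s in MDP}
            changed = True
            while changed:
                changed = False
                for block in PARTITIONS:          # sequential pass over the partitions
                    while True:                    # inner iterations stay within the block
                        delta = 0.0
                        for s in block:
                            if s in GOAL:
                                continue
                            new = max(sum(p * v[t] for p, t in act) for act in MDP[s])
                            delta = max(delta, abs(new - v[s]))
                            v[s] = new
                        if delta <= eps:
                            break
                        changed = True
            return v

        if __name__ == "__main__":
            print(block_value_iteration())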

    Scalable Verification of Markov Decision Processes

    Markov decision processes (MDPs) are useful for modelling concurrent process optimisation problems, but verifying them with numerical methods is often intractable. Existing approximate approaches do not scale well and are limited to memoryless schedulers. Here we present the basis of scalable verification for MDPs, using an O(1) memory representation of history-dependent schedulers. We thus facilitate scalable learning techniques and the use of massively parallel verification.
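
    A minimal sketch of how a history-dependent scheduler can be represented in O(1) memory: identify the scheduler with a single integer seed and derive its choice in any state by hashing the seed together with the history seen so far, so no per-history table is ever stored. The hash construction below is an illustrative assumption, not necessarily the exact encoding used in the paper.

        import hashlib

        def scheduler_choice(seed: int, history: tuple, num_actions: int) -> int:
            """Deterministic action choice from the full history, using only a fixed-size
            integer seed to identify the scheduler: the history is hashed together with
            the seed, so the scheduler needs O(1) memory regardless of history length."""
            digest = hashlib.sha256(f"{seed}:{history}".encode()).digest()
            return int.from_bytes(digest[:8], "big") % num_actions

        if __name__ == "__main__":
            # Two different seeds denote two different history-dependent schedulers.
            history = (0, "send", 1, "ack", 1)
            print(scheduler_choice(seed=42, history=history, num_actions=3))
            print(scheduler_choice(seed=7,  history=history, num_actions=3))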

    Smart Sampling for Lightweight Verification of Markov Decision Processes

    Markov decision processes (MDPs) are useful for modelling optimisation problems in concurrent systems. Verifying MDPs with efficient Monte Carlo techniques requires that their nondeterminism be resolved by a scheduler. Recent work has introduced the elements of lightweight techniques to sample directly from scheduler space, but finding optimal schedulers by simple sampling may be inefficient. Here we describe "smart" sampling algorithms that can make substantial improvements in performance.
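
    The sketch below illustrates the general shape of such a smart sampling loop under stated assumptions: start with many candidate schedulers (identified by integer seeds, as in the lightweight approach) and few simulations each, then repeatedly keep the better-performing half and double the simulation budget per survivor. The toy model, the hash-based scheduler encoding and the budget schedule are illustrative, not the paper's actual algorithms.

        import hashlib
        import random

        def simulate(seed: int, rng: random.Random) -> bool:
            """Run one bounded trace under the scheduler identified by `seed`; return
            True if the goal is reached. The scheduler's choice in each step is a hash
            of (seed, history). The toy model's probabilities are made up."""
            state, history = 0, []
            for _ in range(10):
                h = hashlib.sha256(f"{seed}:{history}".encode()).digest()
                action = h[0] % 2
                history.append((state, action))
                if state == 0:
                    state = 1 if rng.random() < (0.9 if action == 0 else 0.3) else 0
                if state == 1:                     # goal reached
                    return True
            return False

        def smart_sampling(num_schedulers=64, budget=256, seed=0):
            """Smart sampling: many candidate schedulers with few runs each, then keep
            the best-performing half and double the runs per scheduler each round."""
            rng = random.Random(seed)
            candidates = [rng.randrange(2**31) for _ in range(num_schedulers)]
            per_sched = max(1, budget // num_schedulers)
            while len(candidates) > 1:
                scores = {s: sum(simulate(s, rng) for _ in range(per_sched)) / per_sched
                          for s in candidates}
                candidates = sorted(candidates, key=scores.get, reverse=True)[:len(candidates) // 2]
                per_sched *= 2
            return candidates[0]

        if __name__ == "__main__":
            print("best scheduler seed:", smart_sampling())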

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
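
    As a worked illustration of the second example property, assuming two sensors that fail independently with a made-up per-mission failure probability of 0.03 each, the probability of both failing simultaneously is 0.03 * 0.03 = 0.0009, which satisfies the requirement of being below 0.001:

        # Illustrative arithmetic only: the failure probability and the independence
        # assumption are invented for this example, not taken from the report.
        p_single = 0.03
        p_both = p_single * p_single
        assert p_both < 0.001
        print(f"P(both sensors fail simultaneously) = {p_both}")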

    Scalable Multi-Hop Data Dissemination in Vehicular Ad Hoc Networks

    Vehicular Ad hoc Networks (VANETs) aim at improving road safety and travel comfort by providing self-organizing environments to disseminate traffic data, without requiring fixed infrastructure or centralized administration. Since traffic data is of public interest and usually benefits a group of users rather than a specific individual, it is more appropriate to rely on broadcasting for data dissemination in VANETs. However, broadcasting in dense networks suffers from a high percentage of data redundancy that wastes the limited radio channel bandwidth. Moreover, packet collisions may lead to the broadcast storm problem when a large number of vehicles in the same vicinity rebroadcast nearly simultaneously. The broadcast storm problem remains challenging in the context of VANETs, due to the rapid changes in the network topology, which are difficult to predict and manage. Existing solutions either do not scale well under high-density scenarios, or require extra communication overhead to estimate traffic density so as to manage data dissemination accordingly. In this dissertation, we specifically aim at providing an efficient solution for the broadcast storm problem in VANETs, in order to support different types of applications. A novel approach is developed to provide scalable broadcast without extra communication overhead, by relying on traffic regime estimation using speed data. We theoretically validate the use of speed instead of density to estimate traffic flow. The results of simulating our approach under different density scenarios show its efficiency in providing scalable multi-hop data dissemination for VANETs.
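
    A heavily hedged sketch of the kind of speed-based adaptation the dissertation argues for: locally observed average speed stands in for density (low speed suggests congestion, and hence many potential relays), and the rebroadcast probability is scaled accordingly. The mapping and parameters below are invented placeholders, not the dissertation's actual estimator or dissemination protocol.

        import random

        def rebroadcast_probability(avg_speed_kmh: float,
                                    free_flow_speed_kmh: float = 100.0,
                                    p_min: float = 0.1) -> float:
            """Illustrative assumption: map observed average speed to a rebroadcast
            probability. Low speeds suggest dense (congested) traffic with many
            neighbours able to relay, so the probability is lowered to curb the
            broadcast storm; near free-flow speed the network is sparse, so
            rebroadcast is more likely."""
            ratio = max(0.0, min(1.0, avg_speed_kmh / free_flow_speed_kmh))
            return p_min + (1.0 - p_min) * ratio

        def should_rebroadcast(avg_speed_kmh: float, rng=random) -> bool:
            """Probabilistic rebroadcast decision for a received message."""
            return rng.random() < rebroadcast_probability(avg_speed_kmh)

        if __name__ == "__main__":
            for speed in (10, 40, 90):
                print(speed, "km/h ->", round(rebroadcast_probability(speed), 2))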

    Analysis of a gossip protocol in PRISM
