Model checking medium access control for sensor networks
We describe the verification of S-MAC, a medium access control protocol designed for wireless sensor networks, by means of the PRISM model checker. The S-MAC protocol is built on top of the IEEE 802.11 standard for wireless ad hoc networks and, as such, uses the same randomised backoff procedure to avoid collisions. To minimise energy consumption, S-MAC periodically puts nodes into a sleep state. Synchronisation of the sleeping schedules is necessary for the nodes to be able to communicate. Intuitively, the energy saving obtained through a periodic sleep mechanism comes at the expense of performance. In previous work on S-MAC verification, a combination of analytical techniques and simulation was used to confirm this intuition for a simplified (abstract) version of the protocol in which the initial schedule-coordination phase is assumed correct. We show how we have used the PRISM model checker to verify the behaviour of S-MAC and compare it to that of IEEE 802.11.
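The randomised backoff procedure this abstract refers to can be sketched as follows; the slot counts and contention-window bounds below are illustrative assumptions, not values from the S-MAC or 802.11 specifications.

```python
import random

def backoff_slots(attempt, cw_min=16, cw_max=1024):
    """Pick a random backoff delay, doubling the contention window on each
    failed attempt. A generic sketch of IEEE 802.11-style randomised
    backoff; cw_min/cw_max are illustrative, not spec values."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    return random.randrange(cw)  # number of idle slots to wait before sending

# Two contending nodes rarely pick the same slot, which is how the
# procedure reduces (but does not eliminate) collisions.
slots = [backoff_slots(attempt=0) for _ in range(5)]
assert all(0 <= s < 16 for s in slots)
```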
Graph models for reachability analysis of concurrent programs
Reachability analysis is an attractive technique for the analysis of concurrent programs because it is simple and relatively straightforward to automate, and can be used in conjunction with model-checking procedures to check for application-specific as well as general properties. Several techniques have been proposed, differing mainly in the model used; some propose flowgraph-based models, others Petri nets. This paper addresses the question: What essential difference does it make, if any, what sort of finite-state model we extract from program texts for purposes of reachability analysis? How do they differ in expressive power, decision power, or accuracy? Since each is intended to model synchronization structure while abstracting away other features, one would expect them to be roughly equivalent. We confirm that there is no essential semantic difference between the most well-known models proposed in the literature by providing algorithms for translation among these models. This implies that the choice of model rests on other factors, including convenience and efficiency. Since combinatorial explosion is the primary impediment to the application of reachability analysis, a particular concern in choosing a model is facilitating divide-and-conquer analysis of large programs. Recently, much interest in finite-state verification systems has centered on algebraic theories of concurrency. Yeh and Young have exploited algebraic structure to decompose reachability analysis based on a flowgraph model. The semantic equivalence of graph and Petri net based models suggests that one ought to be able to apply a similar strategy for decomposing Petri nets. We show this is indeed possible through application of category theory.
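Whichever model is extracted, the core step is the same: enumerate the reachable joint states of the concurrent program. A minimal sketch, with a toy two-process state space standing in for a real program model:

```python
from collections import deque

def reachable(initial, transitions):
    """Breadth-first enumeration of the reachable joint states, the core
    step shared by all the models the paper compares.
    `transitions(state)` yields the successor states."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy example: two processes that each take one internal step; the joint
# state space is the product of their local states.
def step(state):
    a, b = state
    if a < 1:
        yield (a + 1, b)
    if b < 1:
        yield (a, b + 1)

assert reachable((0, 0), step) == {(0, 0), (0, 1), (1, 0), (1, 1)}
```

The combinatorial explosion the abstract mentions is visible even here: the joint space is the product of the local spaces, which is what motivates the divide-and-conquer decompositions discussed.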
The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms
Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst- or best-case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for the automatic synthesis of worst- and best-case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. The algorithms used in our method employ implicit backward search with branch-and-bound techniques, starting from given target events, which aims to reduce the search complexity drastically. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way to synthesize worst-case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average-case analyses. We hope that our method can serve as a model for applying systematic scenario generation to other multicast protocols.
Comment: 24 pages, 10 figures, IEEE/ACM Transactions on Networking (ToN) [To appear]
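The backward-search idea can be illustrated on a toy FSM: walk the transition relation in reverse from a target event to find the states that can lead to it. This is a hypothetical stand-in for the FOTG-style search described above; the real algorithm also prunes with branch and bound and carries timing semantics.

```python
from collections import deque

def backward_reach(targets, transitions, states):
    """Implicit backward search: starting from given target events
    (states), walk the transition relation in reverse to find every
    state from which a target is reachable."""
    # Invert the forward relation once.
    preds = {s: set() for s in states}
    for s in states:
        for nxt in transitions(s):
            preds[nxt].add(s)
    seen = set(targets)
    queue = deque(targets)
    while queue:
        state = queue.popleft()
        for p in preds[state]:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

# Hypothetical timer-suppression states: a "duplicate" response is the
# unwanted target event whose causes we want to synthesize.
states = ["idle", "sent", "suppressed", "duplicate"]
def step(s):
    return {"idle": ["sent"], "sent": ["suppressed", "duplicate"],
            "suppressed": [], "duplicate": []}[s]

assert backward_reach(["duplicate"], step, states) == {"idle", "sent", "duplicate"}
```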
Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage
The applicability of model checking is hindered by the state space explosion
problem in combination with limited amounts of main memory. To extend its
reach, the large available capacities of secondary storage such as hard disks
can be exploited. Due to the specific performance characteristics of secondary
storage technologies, specialised algorithms are required. In this paper, we
present a technique to use secondary storage for probabilistic model checking
of Markov decision processes. It combines state space exploration based on
partitioning with a block-iterative variant of value iteration over the same
partitions for the analysis of probabilistic reachability and expected-reward
properties. A sparse matrix-like representation is used to store partitions on
secondary storage in a compact format. All file accesses are sequential, and
compression can be used without affecting runtime. The technique has been
implemented within the Modest Toolset. We evaluate its performance on several
benchmark models of up to 3.5 billion states. In the analysis of time-bounded
properties on real-time models, our method neutralises the state space
explosion induced by the time bound in its entirety.
Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24953-7_1
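The block-iterative variant of value iteration can be sketched in miniature: sweep one partition of states at a time until the values converge. This is a toy in-memory sketch of the idea only; the paper's contribution is performing these per-partition sweeps against a compact sparse representation on secondary storage.

```python
def partitioned_value_iteration(blocks, actions, goal, eps=1e-8):
    """Block-iterative value iteration for maximum probabilistic
    reachability on an MDP. `blocks` is a list of state lists;
    `actions[s]` is a list of distributions, each a list of
    (prob, successor) pairs."""
    v = {s: 1.0 if s in goal else 0.0 for block in blocks for s in block}
    changed = True
    while changed:
        changed = False
        for block in blocks:            # sweep one partition at a time
            for s in block:
                if s in goal:
                    continue
                best = max((sum(p * v[t] for p, t in dist)
                            for dist in actions[s]), default=0.0)
                if abs(best - v[s]) > eps:
                    v[s] = best
                    changed = True
    return v

# Tiny MDP: from s0 the single action reaches the goal with prob 0.5,
# otherwise loops, so the max reachability probability is 1.
actions = {"s0": [[(0.5, "goal"), (0.5, "s0")]], "goal": []}
v = partitioned_value_iteration([["s0"], ["goal"]], actions, {"goal"})
assert abs(v["s0"] - 1.0) < 1e-6
```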
Key Substitution in the Symbolic Analysis of Cryptographic Protocols (extended version)
Key-substitution-vulnerable signature schemes are signature schemes that permit an intruder, given a public verification key and a signed message, to compute a pair of signature and verification keys such that the message appears to be signed with the new signature key. A digital signature scheme is said to be vulnerable to the destructive exclusive ownership (DEO) property if it is computationally feasible for an intruder, given a public verification key and a pair of a message and its valid signature relative to that public key, to compute a pair of signature and verification keys and a new message such that the given signature appears to be valid for the new message relative to the new verification key. In this paper, we prove decidability of the insecurity problem for cryptographic protocols where the signature schemes employed in the concrete realisation have these two properties.
Efficient security for IPv6 multihoming
In this note, we propose a security mechanism for protecting IPv6
networks from possible abuses caused by the malicious usage of a
multihoming protocol. In the presented approach, each
multihomed node is assigned multiple prefixes from its upstream
providers, and it creates the interface identifier part of its
addresses by incorporating a cryptographic one-way hash of the
available prefix set. The result is that the addresses of each
multihomed node form an unalterable set of intrinsically bound
IPv6 addresses. This allows any node that is communicating with
the multihomed node to securely verify that all the alternative
addresses proposed through the multihoming protocol are
associated with the address used for establishing the communication.
The verification process is extremely efficient because it only
involves hash operations.
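The binding can be sketched as follows; the exact input encoding and identifier format below are illustrative assumptions, not the note's concrete construction.

```python
import hashlib

def interface_identifier(prefixes, nbytes=8):
    """Derive a 64-bit interface identifier from a one-way hash of the
    node's full prefix set, so that all addresses built from that set
    are intrinsically bound together."""
    canonical = "|".join(sorted(prefixes)).encode()  # order-independent
    return hashlib.sha256(canonical).digest()[:nbytes]

prefixes = ["2001:db8:a::/48", "2001:db8:b::/48"]
iid = interface_identifier(prefixes)

# A peer re-computes the hash from the advertised prefix set and checks
# it against the interface identifier of the address already in use, so
# an attacker cannot inject a prefix outside the original set.
assert interface_identifier(list(reversed(prefixes))) == iid
assert interface_identifier(["2001:db8:evil::/48"]) != iid
```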
SDN-based parallel link discovery in optical transport networks
This is the peer-reviewed version of the following article: Montero R, Agraz F, Pagès A, Perelló J, Spadaro S. SDN-based parallel link discovery in optical transport networks. Trans Emerging Tel Tech. 2018;e3512. https://doi-org.recursos.biblioteca.upc.edu/10.1002/ett.3512, which has been published in final form at https://doi-org.recursos.biblioteca.upc.edu/10.1002/ett.3512. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
The use of optical technologies in modern network scenarios has increased in the last decade, mostly due to their support for crucial networking requirements (i.e., bandwidth and scalability). In parallel, these scenarios have also seen the emergence of a new paradigm known as software-defined networking (SDN), which is based on the decoupling of forwarding and control functions and aims to provide a more efficient way to manage network resources than legacy networking architectures. As both SDN and optical technologies are constantly being introduced in different networking scenarios (e.g., data centers, metro, and access networks), their coexistence becomes a must. In this respect, it is important to note that SDN was initially designed for electronic-based networks; hence, its support for optical technologies is still at an early stage. Consequently, the integration of both solutions still requires research efforts by the community. In this paper, we present a mechanism to address topology discovery in wavelength-switching optical transport networks (OTNs). In particular, we discuss the importance of the topology discovery function and analyse the proposed mechanism, which relies on wavelength-specific signalling tones as link-binding data to provide preservice parallel link discovery in OTNs.
Furthermore, we validate the method experimentally against an emulated OTN testbed with two different setups and compare the results to our previous work on this subject, achieving substantial reductions in the total topology discovery time.
Improvements on handling design errors in communication protocols.
With the rapid development of the Internet and distributed systems, communication protocols play an increasingly important role. The correctness of the design of these communication protocols becomes crucial, especially when critical applications are concerned. Common logical design errors in communication protocols include deadlock states, unspecified receptions, channel overflow, non-executable transitions, etc. Such design errors can be removed via protocol synthesis, or detected through reachability analysis. The former may introduce more states and transitions than needed, and the latter suffers from the state-space explosion problem. Here we present an improvement on an existing technique to transform a protocol design into a deadlock-free one, in which the number of introduced new states and transitions can be considerably reduced. We also propose a sound reduction technique for a class of protocol designs that significantly reduces their sizes in order to perform reachability analysis.
Thesis (M.Sc.), University of Windsor (Canada), 2005.
Quantitative Analysis for Authentication of Low-cost RFID Tags
Formal analysis techniques are widely used today to verify and analyse communication protocols. In this work, we carry out a quantitative verification analysis of the low-cost Radio Frequency Identification (RFID) protocol proposed by Song and Mitchell. The analysis exploits a Discrete-Time Markov Chain (DTMC) using the well-known PRISM model checker. We have managed to represent up to 100 RFID tags communicating with a reader and to quantify each RFID session according to the protocol's computation and transmission cost requirements. As a consequence, not only does the proposed analysis provide quantitative verification results, but it also constitutes a methodology for RFID designers who want to validate their products under specific cost requirements.
Comment: To appear in the 36th IEEE Conference on Local Computer Networks (LCN 2011).
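The kind of cost property such a DTMC analysis checks can be sketched as an expected-reward computation: the cost accumulated before a session completes. This is a toy fixed-point sketch, not the PRISM engine, and the session model below is hypothetical.

```python
def expected_cost(transitions, cost, goal, iters=10000):
    """Expected cost accumulated before reaching `goal` in a DTMC.
    `transitions[s]` is a list of (prob, successor) pairs and
    `cost[s]` is the per-visit cost; solved by fixed-point iteration."""
    v = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s in transitions:
            if s == goal:
                continue
            v[s] = cost[s] + sum(p * v[t] for p, t in transitions[s])
    return v

# Hypothetical tag session: each query costs 1 unit, succeeds with
# probability 0.9, and is otherwise retried, so the expected total
# cost is 1/0.9.
transitions = {"query": [(0.9, "done"), (0.1, "query")], "done": []}
cost = {"query": 1.0, "done": 0.0}
v = expected_cost(transitions, cost, "done")
assert abs(v["query"] - 1.0 / 0.9) < 1e-6
```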