Practical applications of probabilistic model checking to communication protocols
Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA/CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.
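The simulation-based approach mentioned above can be illustrated with a minimal Monte Carlo sketch. This is not PRISM or APMC code; the model (two stations, binary exponential backoff, a delivery deadline) and all names and parameters are illustrative only:

```python
import random

def csma_cd_trial(deadline, max_backoff_exp=10, rng=random):
    """One simulated run: two stations contend on a shared channel with
    binary exponential backoff. Returns True if both frames are
    delivered within `deadline` slots."""
    next_slot = [0, 0]    # slot of each station's next transmission attempt
    collisions = [0, 0]   # per-station collision count
    done = [False, False]
    for slot in range(deadline):
        attempting = [i for i in (0, 1) if not done[i] and next_slot[i] == slot]
        if len(attempting) == 1:
            done[attempting[0]] = True       # lone transmitter succeeds
        elif len(attempting) == 2:
            for i in attempting:             # collision: random backoff
                collisions[i] += 1
                k = min(collisions[i], max_backoff_exp)
                next_slot[i] = slot + 1 + rng.randrange(2 ** k)
    return all(done)

def estimate_delivery_prob(deadline, trials=20000, rng=random):
    """Monte Carlo estimate of P(both frames delivered by `deadline`)."""
    return sum(csma_cd_trial(deadline, rng=rng) for _ in range(trials)) / trials
```

A numerical engine would instead compute this probability exactly from the underlying Markov model; the simulation trades exactness for scalability, which is precisely the trade-off the chapter examines.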
Uniform Random Sampling of Traces in Very Large Models
This paper presents first results on how to perform uniform random walks (where every trace has the same probability of occurring) in very large models. The models considered here are described succinctly as a set of communicating reactive modules. The method relies on techniques for counting and drawing words uniformly at random from regular languages. Each module is considered as an automaton defining such a language. It is shown how it is possible to combine local uniform drawings of traces to obtain a global uniform random sampling, without constructing the global model.
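The counting-and-drawing technique for a single automaton can be sketched as follows. This is a hypothetical helper, not the paper's tool: count the accepted words of each length per state, then pick each symbol with probability proportional to the number of accepting suffixes it leads to, which yields an exactly uniform draw:

```python
import random

def uniform_word_sampler(transitions, accepting, start, length, rng=random):
    """Draw a word of the given length uniformly at random from the
    language of a DFA. `transitions[q][a]` is the successor state."""
    # count[l][q] = number of accepted words of length l readable from state q
    count = [{q: (1 if q in accepting else 0) for q in transitions}]
    for l in range(1, length + 1):
        count.append({q: sum(count[l - 1][t] for t in transitions[q].values())
                      for q in transitions})
    if count[length][start] == 0:
        raise ValueError("no accepted word of that length")
    word, state = [], start
    for l in range(length, 0, -1):
        # choose a symbol with probability proportional to its suffix count
        r = rng.randrange(count[l][state])
        for a, t in transitions[state].items():
            if r < count[l - 1][t]:
                word.append(a)
                state = t
                break
            r -= count[l - 1][t]
    return "".join(word)
```

The paper's contribution is combining such local drawings across communicating modules without ever building the product automaton; the sketch above shows only the single-automaton building block.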
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
Genet: A Quickly Scalable Fat-Tree Overlay for Personal Volunteer Computing using WebRTC
WebRTC enables browsers to exchange data directly but the number of possible
concurrent connections to a single source is limited. We overcome the
limitation by organizing participants in a fat-tree overlay: when the maximum
number of connections of a tree node is reached, the new participants connect
to the node's children. Our design quickly scales when a large number of
participants join in a short amount of time, by relying on a novel scheme that
only requires local information to route connection messages: the destination
is derived from the hash value of the combined identifiers of the message's
source and of the node that is holding the message. The scheme provides
deterministic routing of a sequence of connection messages from a single source
and probabilistic balancing of newer connections among the leaves. We show that
this design puts at least 83% of nodes at the same depth as a deterministic
algorithm, can connect a thousand browser windows in 21-55 seconds in a local
network, and can be deployed for volunteer computing to tap into 320 cores in
less than 30 seconds on a local network to increase the total throughput on the
Collatz application by two orders of magnitude compared to a single core.
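The local routing rule described above can be sketched in a few lines. Genet itself runs in the browser over WebRTC; the Python below is only a model of the idea, and the class and function names are ours, not the project's API:

```python
import hashlib

class Node:
    def __init__(self, node_id, fanout=2):
        self.id = node_id
        self.fanout = fanout      # max concurrent connections per node
        self.children = []

def next_hop(source_id, holder_id, fanout):
    """Deterministically pick a child index from the hash of the message
    source's id combined with the id of the node holding the message."""
    digest = hashlib.sha256(f"{source_id}:{holder_id}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % fanout

def join(root, new_id, source_id):
    """Route a connection message down the tree until a node with a free
    slot is found, then attach the newcomer there. Returns its depth."""
    node, depth = root, 0
    while len(node.children) >= node.fanout:
        node = node.children[next_hop(source_id, node.id, node.fanout)]
        depth += 1
    node.children.append(Node(new_id, node.fanout))
    return depth + 1
```

Because the next hop depends only on the source's id and the current holder's id, a sequence of messages from one source follows a deterministic path, while hashes from different sources spread newcomers probabilistically across the leaves, which is the balancing property the abstract claims.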
Smart Sampling for Lightweight Verification of Markov Decision Processes
Markov decision processes (MDP) are useful to model optimisation problems in
concurrent systems. To verify MDPs with efficient Monte Carlo techniques
requires that their nondeterminism be resolved by a scheduler. Recent work has
introduced the elements of lightweight techniques to sample directly from
scheduler space, but finding optimal schedulers by simple sampling may be
inefficient. Here we describe "smart" sampling algorithms that can make
substantial improvements in performance.
Comment: IEEE conference style, 11 pages, 5 algorithms, 11 figures, 1 table
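The lightweight idea, representing an entire scheduler as one integer seed whose hash with the current state resolves each nondeterministic choice, combined with a keep-the-best-half refinement loop, can be sketched as below. The toy MDP encoding and all names are illustrative, not the paper's algorithms:

```python
import random, hashlib

def scheduler_action(sigma, state, actions):
    """A whole (memoryless) scheduler is one integer seed: the action
    taken in each state is fixed by hashing the seed with the state."""
    h = hashlib.sha256(f"{sigma}:{state}".encode()).digest()
    return actions[int.from_bytes(h[:4], "big") % len(actions)]

def simulate(mdp, sigma, start, goal, horizon, rng):
    """One run under scheduler `sigma`; True iff `goal` is reached."""
    state = start
    for _ in range(horizon):
        if state == goal:
            return True
        dist = mdp[state][scheduler_action(sigma, state, sorted(mdp[state]))]
        r, acc = rng.random(), 0.0
        for nxt, p in dist:          # sample the successor distribution
            acc += p
            if r < acc:
                state = nxt
                break
    return state == goal

def smart_sample(mdp, start, goal, horizon, n_sched=64, budget=32, rng=None):
    """Sample schedulers, then iteratively keep the best-scoring half and
    double the per-scheduler simulation budget until one remains."""
    rng = rng or random.Random(0)
    cand = [rng.getrandbits(32) for _ in range(n_sched)]
    while True:
        score = {s: sum(simulate(mdp, s, start, goal, horizon, rng)
                        for _ in range(budget)) / budget for s in cand}
        if len(cand) == 1:
            return cand[0], score[cand[0]]
        cand = sorted(cand, key=score.get, reverse=True)[:len(cand) // 2]
        budget *= 2
```

Simple sampling would spend the whole budget uniformly over all candidate schedulers; the halving schedule instead concentrates simulation effort on the promising ones, which is the kind of improvement the paper quantifies.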
The complexity of resolving conflicts on MAC
We consider the fundamental problem of multiple stations competing to transmit on a multiple access channel (MAC). We are given n stations, out of which at most k are active and intend to transmit a message to the other stations using the MAC. All stations are assumed to be synchronized according to a time clock. If m stations transmit in the same round, then the MAC provides the feedback whether m = 0, m >= 2 (collision occurred) or m = 1. When m = 1, a single station is indeed able to successfully transmit a message, which is received by all other nodes. For the above problem, the active stations have to schedule their transmissions so that they can singly transmit their messages on the MAC, based only on the feedback received from the MAC in previous rounds.
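The ternary feedback model can be made concrete with a short simulation. Note that the paper studies deterministic adaptive algorithms; the randomized ALOHA-style strategy below is only an illustrative way to exercise the feedback, and all names are ours:

```python
import random

def feedback(transmitters):
    """The MAC's ternary feedback for one round."""
    m = len(transmitters)
    return "silence" if m == 0 else ("success" if m == 1 else "collision")

def resolve_randomized(active, rng, max_rounds=100000):
    """Each still-pending station transmits with probability 1/|pending|
    per round; a station leaves once its message gets through. Returns
    the number of rounds used to let every station transmit singly."""
    pending = set(active)
    rounds = 0
    while pending and rounds < max_rounds:
        rounds += 1
        tx = [s for s in pending if rng.random() < 1.0 / len(pending)]
        if feedback(tx) == "success":
            pending.discard(tx[0])
    return rounds
```

With transmission probability 1/|pending|, each round succeeds with probability roughly 1/e, so about e*k rounds suffice in expectation; the deterministic algorithms the paper analyzes must instead achieve a worst-case guarantee from the feedback alone.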
For the above problem it was shown in [Greenberg, Winograd, "A Lower Bound on the Time Needed in the Worst Case to Resolve Conflicts Deterministically in Multiple Access Channels", Journal of the ACM, 1985] that every deterministic adaptive algorithm takes Ω(k (log n)/(log k)) rounds in the worst case. The fastest known deterministic adaptive algorithm requires O(k log(n/k)) rounds. The gap between the upper and lower bounds is substantial for most values of k: when k is a constant and when k = n^ε (for any constant ε > 0), the lower bound is respectively Ω(log n) and Ω(k), which is trivial in both cases. Nevertheless, the above lower bound is interesting when k is polylogarithmic in n. In this work, we present a novel counting argument to prove a tight lower bound of Ω(k log(n/k)) rounds for all deterministic adaptive algorithms, closing this long-standing open question.
Comment: Xerox internal report, 27th July; 7 pages