
    Design and Analysis of Transport Protocols for Reliable High-Speed Communications

    The design and analysis of transport protocols for reliable communications constitutes the topic of this dissertation. These transport protocols guarantee the sequenced and complete delivery of user data over networks which may lose, duplicate and reorder packets. Reliable transport services are required by a wide range of applications such as the World-Wide Web, remote network access, and distributed computing. The design of these protocols is heavily influenced by the parameters of the underlying network infrastructure and by the assumptions made about the host computers and applications. The recent advances in optical transmission and computer technologies have therefore stimulated the design of several novel transport protocols, many of which use similar or at least related techniques. Our goal in this thesis is to improve the understanding of reliable communications by analyzing the protocols that implement this service, and to contribute to the design of reliable transport protocols.

    The basis of our analysis is the formal specification and verification of the protocol mechanisms under investigation. The behavior of a protocol is captured by a state-transition system, and its properties are established using assertional reasoning. The framework can handle unbounded and modulo-N state variables and capture real-time aspects of the protocols, which is essential for modeling realistic systems. Practical protocols of considerable complexity are specified and verified in the thesis. One advantage of formal verification is that it increases our confidence in the correctness of these protocols: the formalism forces us to clarify all the details of how a protocol works and to state explicitly every assumption about the protocol and its environment. During the verification one also gains insight into the mechanisms of the protocol. Probably the most important result, however, is that the verification yields conditions for the correctness of the protocol in the form of inequalities on some protocol parameters. These conditions allow the comparison of different protocol mechanisms and can be used to judge the suitability of a protocol for a certain environment. The functionality of transport protocols can be naturally divided into data transfer and connection management. Data transfer deals with the sequenced delivery of user data, while connection management is concerned with the orderly setup and release of connections.

    In the thesis we study three different data transfer protocols. The use of timestamps in data transfer protocols is analyzed in detail through the example of the PAWS mechanism, which was proposed as an extension to TCP. The analysis reveals that timestamps increase the functionality of the transport protocol by facilitating the simple measurement of round-trip delays, but they also reduce the maximum allowable transmission rate compared to the plain sliding-window protocol. Another data transfer protocol, SNR, which is based on the idea of periodic state exchange, is analyzed as well. We start from an earlier specification of SNR and compare it to the plain sliding-window protocol. The analysis reveals that the maximum transmission speed achievable by that SNR specification is higher than that of the plain sliding-window protocol, but this comes with a serious limitation: the SNR specification assumes that no duplicates are generated by either the network or the transport protocol itself. This assumption may seriously limit the effective performance of the protocol in case of losses in the network, and it demonstrates the importance of considering all the assumptions when selecting a protocol for a certain environment.

    The use of timestamps is also investigated in the context of connection management protocols. A detailed analysis is presented of the connection setup protocol SCMP, which is based on the assumption that the clocks of computers can be synchronized relatively cheaply even in a large network. Our verification proves that the safety of the protocol does not depend on the synchronization assumption, so the protocol can be used safely even when there are no absolute guarantees that the clocks are synchronized. Since practical clock synchronization algorithms give only probabilistic guarantees, our result provides important theoretical support for the applicability of the protocol in practical environments. Based on earlier work by others, a family of connection management protocols is analyzed that use a cache to store the information needed to shorten the connection setup latency. We contribute to this work by proposing improvements that considerably reduce the memory usage of these protocols. Furthermore, we show that the correctness of the protocol can be assured without assuming an upper bound on the incarnation lifetime, i.e., the maximum duration of a connection. This result greatly improves the practical applicability of the protocol.
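    As a rough illustration of the kind of data-transfer mechanism analyzed above, the Python sketch below shows a receiver-side acceptance test that combines a modulo-N sliding window with a PAWS-style timestamp check. It is a simplified sketch under assumed field and state names (seq, ts, rcv_nxt, rcv_wnd, ts_recent), not the formal state-transition specification used in the thesis.

        # Sketch of a receiver-side acceptance test: modulo-N sliding window
        # plus a PAWS-style timestamp check against old duplicates.
        # Simplified and illustrative; field and state names are hypothetical.

        SEQ_MOD = 2**32          # sequence-number space (modulo-N)
        TS_MOD = 2**32           # timestamp space, which also wraps around

        def in_window(seq, rcv_nxt, rcv_wnd):
            """True if seq lies in [rcv_nxt, rcv_nxt + rcv_wnd) modulo SEQ_MOD."""
            return (seq - rcv_nxt) % SEQ_MOD < rcv_wnd

        def ts_not_older(ts, ts_recent):
            """Timestamp is not older than the newest in-window timestamp seen,
            compared in modulo arithmetic (so timestamps must not wrap within
            the maximum segment lifetime)."""
            return (ts - ts_recent) % TS_MOD < TS_MOD // 2

        def accept(segment, state):
            if not in_window(segment["seq"], state["rcv_nxt"], state["rcv_wnd"]):
                return False                      # outside the sliding window
            if not ts_not_older(segment["ts"], state["ts_recent"]):
                return False                      # old duplicate caught by the timestamp test
            state["ts_recent"] = segment["ts"]    # remember newest valid timestamp
            return True

    Correctness conditions for such mechanisms typically take exactly the form the abstract describes: inequalities relating window size, sequence-number space, segment lifetime, and timestamp clock rate.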

    Model checking medium access control for sensor networks

    We describe the verification of S-MAC, a medium access control protocol designed for wireless sensor networks, by means of the PRISM model checker. The S-MAC protocol is built on top of the IEEE 802.11 standard for wireless ad hoc networks and, as such, uses the same randomised backoff procedure to avoid collisions. To minimise energy consumption, S-MAC periodically puts nodes into a sleep state; synchronisation of the sleeping schedules is necessary for the nodes to be able to communicate. Intuitively, the energy saving obtained through a periodic sleep mechanism comes at the expense of performance. In previous work on S-MAC verification, a combination of analytical techniques and simulation was used to confirm this intuition for a simplified (abstract) version of the protocol in which the initial schedule-coordination phase is assumed correct. We show how we have used the PRISM model checker to verify the behaviour of S-MAC and compare it to that of IEEE 802.11.
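    To make the energy/latency trade-off concrete, here is a toy Monte Carlo estimate (not the PRISM model used in the paper; the frame length and duty cycles are made-up numbers) of how long a packet arriving at a random time must wait for the next listen period under an S-MAC-like periodic listen/sleep schedule.

        # Toy illustration of the duty-cycle trade-off in a periodic
        # listen/sleep schedule; not the PRISM model from the paper.
        import random

        def mean_extra_wait(frame_ms, duty_cycle, samples=100_000):
            """Average wait until the next listen period for a packet arriving
            uniformly at random within a frame (transmission is assumed to be
            possible only while the radio is listening)."""
            listen = frame_ms * duty_cycle
            total = 0.0
            for _ in range(samples):
                t = random.uniform(0.0, frame_ms)          # arrival time within the frame
                total += 0.0 if t < listen else frame_ms - t
            return total / samples

        for dc in (0.1, 0.25, 0.5, 1.0):
            # radio-on energy is roughly proportional to the duty cycle
            print(f"duty cycle {dc:4.2f}: mean extra latency {mean_extra_wait(1000, dc):6.1f} ms")

    Lower duty cycles save energy but lengthen the wait, which is the intuition the paper's PRISM models quantify and verify.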

    Consensus in multi-agent systems with non-periodic sampled-data exchange and uncertain network topology

    In this paper, consensus in second-order multi-agent systems with a non-periodic sampled-data exchange among agents is investigated. The sampling is random, with bounded inter-sampling intervals. It is assumed that each agent has exact knowledge of its own state at any time instant. The considered local interaction rule is PD-type. Sufficient conditions for stability of the consensus protocol to a time-invariant value are derived based on LMIs; these conditions require only knowledge of the connectivity of the graph modeling the network topology. Numerical simulations are presented to corroborate the theoretical results.
    Comment: arXiv admin note: substantial text overlap with arXiv:1407.300
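    A minimal simulation of such a protocol can be sketched as follows (the gains, graph, and sampling bounds below are illustrative choices, not the values or LMI conditions analyzed in the paper): each agent is a double integrator, and a PD-type input is recomputed only at random sampling instants and held constant in between.

        # Second-order consensus with a PD-type rule under non-periodic,
        # bounded random sampling (zero-order hold between samples).
        # Gains and graph are illustrative; this is not the paper's LMI analysis.
        import random

        A = [[0, 1, 0, 0],             # adjacency matrix of an undirected path graph
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]
        N = len(A)
        KP, KD = 1.0, 1.0              # PD gains (assumed)
        T_MIN, T_MAX = 0.02, 0.15      # bounds on the random inter-sampling interval
        DT = 0.001                     # Euler integration step

        x = [0.0, 2.0, -1.0, 4.0]      # positions
        v = [0.5, -0.5, 1.0, -1.0]     # velocities (zero average)
        u = [0.0] * N                  # inputs, held constant between samples

        t, next_sample = 0.0, 0.0
        while t < 20.0:
            if t >= next_sample:       # sampling instant: exchange states, update inputs
                u = [-sum(A[i][j] * (KP * (x[i] - x[j]) + KD * (v[i] - v[j]))
                          for j in range(N)) for i in range(N)]
                next_sample = t + random.uniform(T_MIN, T_MAX)
            for i in range(N):         # double-integrator dynamics
                x[i] += v[i] * DT
                v[i] += u[i] * DT
            t += DT

        print("positions :", [round(xi, 3) for xi in x])
        print("velocities:", [round(vi, 3) for vi in v])

    With a connected graph and sufficiently frequent sampling, the simulated positions settle to a common constant and the velocities to zero (helped here by the zero-average initial velocities); the paper derives LMI-based sufficient conditions under which consensus to a time-invariant value is guaranteed.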

    Consensus in multi-agent systems with second-order dynamics and non-periodic sampled-data exchange

    In this paper, consensus in second-order multi-agent systems with a non-periodic sampled-data exchange among agents is investigated. The sampling is random, with bounded inter-sampling intervals. It is assumed that each agent has exact knowledge of its own state at all times. The considered local interaction rule is PD-type. The characterization of the convergence properties exploits a Lyapunov-Krasovskii functional method, and sufficient conditions for stability of the consensus protocol to a time-invariant value are derived. Numerical simulations are presented to corroborate the theoretical results.
    Comment: The 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'2014), Barcelona (Spain)
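    Written out, a generic form of such a PD-type sampled-data rule for double-integrator agents is shown below in LaTeX; the gain symbols k_p, k_d and the exact form are our shorthand for the kind of rule described, not necessarily the precise protocol or conditions analyzed in the paper.

        % PD-type input computed only at the random sampling instants t_k and
        % held constant in between (zero-order hold), with bounded intervals.
        \begin{aligned}
          \dot{x}_i(t) &= v_i(t), \qquad \dot{v}_i(t) = u_i(t), \\
          u_i(t) &= -\sum_{j} a_{ij}\Bigl[ k_p\bigl(x_i(t_k) - x_j(t_k)\bigr)
                     + k_d\bigl(v_i(t_k) - v_j(t_k)\bigr) \Bigr],
                     \quad t \in [t_k, t_{k+1}), \\
          0 &< t_{k+1} - t_k \le \bar{T}.
        \end{aligned}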

    Interactive Channel Capacity Revisited

    We provide the first capacity-approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any $\epsilon$ fraction of the transmitted symbols. Our coding schemes achieve a communication rate of $1 - O(\sqrt{\epsilon \log \log 1/\epsilon})$ over any adversarial channel. This can be improved to $1 - O(\sqrt{\epsilon})$ for random, oblivious, and computationally bounded channels, or if the parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the $1 - \Omega(\sqrt{H(\epsilon)}) = 1 - \Omega(\sqrt{\epsilon \log 1/\epsilon})$ interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture $1 - \Theta(\sqrt{\epsilon \log \log 1/\epsilon})$ and $1 - \Theta(\sqrt{\epsilon})$ to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors. In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity-approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, as we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible.
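    The backtracking idea is simple enough to caricature in a few lines of Python. The sketch below is a toy single-process simulation (block size, hash length, and the channel model are illustrative; the actual scheme's carefully tuned parameters are what achieve the stated rates): the parties run the original protocol block by block, exchange short hashes of their transcripts, and rewind the last block whenever the hashes disagree.

        # Toy "run as-is, hash, backtrack on mismatch" simulation; parameters
        # and channel model are illustrative, not those of the actual scheme.
        import hashlib, random

        EPS = 0.05                 # fraction of corrupted symbols
        BLOCK = 8                  # protocol rounds between hash exchanges
        TARGET = [random.randint(0, 1) for _ in range(200)]   # "correct" transcript

        def short_hash(transcript):
            return hashlib.sha256(bytes(transcript)).digest()[:4]

        alice, bob = [], []        # each party's view of the transcript
        backtracks = 0
        while len(alice) < len(TARGET):
            for _ in range(BLOCK):                 # run the original protocol as-is
                if len(alice) == len(TARGET):
                    break
                bit = TARGET[len(alice)]
                alice.append(bit)
                # noisy channel: Bob occasionally receives a flipped bit
                bob.append(bit ^ 1 if random.random() < EPS else bit)
            if short_hash(alice) != short_hash(bob):
                keep = max(0, len(alice) - BLOCK)  # hashes disagree:
                del alice[keep:], bob[keep:]       # both parties rewind the block
                backtracks += 1

        print("transcripts agree:", alice == bob == TARGET)
        print("blocks backtracked:", backtracks)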

    Work statistics in stochastically driven systems

    We identify the conditions under which a stochastic driving that induces energy changes on a system coupled to a thermal bath can be treated as a work source. When these conditions are met, the work statistics satisfies the Crooks fluctuation theorem traditionally derived for deterministic drivings. We illustrate this fact by calculating and comparing the work statistics for a two-level system driven respectively by a stochastic and by a deterministic piecewise-constant protocol.
    Comment: 19 pages, 7 figures, 2 tables
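    For the deterministic piecewise-constant case mentioned at the end of the abstract, the Crooks relation $P_F(W)/P_R(-W) = e^{\beta(W - \Delta F)}$ can be checked directly for a single quench of a two-level system. The short Python check below uses illustrative numbers and standard textbook formulas; it is not the paper's stochastic-driving calculation.

        # Crooks check for a two-level system (levels 0 and eps) driven by one
        # deterministic quench eps0 -> eps1, starting in thermal equilibrium.
        # Illustrative numbers; not the paper's stochastic-protocol calculation.
        import math

        beta, eps0, eps1 = 1.0, 0.5, 2.0                 # inverse temperature, gaps
        Z0 = 1 + math.exp(-beta * eps0)                  # partition functions
        Z1 = 1 + math.exp(-beta * eps1)
        dF = -(1 / beta) * math.log(Z1 / Z0)             # free-energy difference

        # Forward: equilibrium at eps0, quench to eps1; work is eps1 - eps0 if
        # the excited level is occupied at the quench, and 0 otherwise.
        P_F = {eps1 - eps0: math.exp(-beta * eps0) / Z0, 0.0: 1 / Z0}
        # Reverse: equilibrium at eps1, quench back to eps0.
        P_R = {eps0 - eps1: math.exp(-beta * eps1) / Z1, 0.0: 1 / Z1}

        for W, p in P_F.items():
            lhs = p / P_R[-W]
            rhs = math.exp(beta * (W - dF))
            print(f"W = {W:5.2f}:  P_F(W)/P_R(-W) = {lhs:.6f}   exp(beta*(W - dF)) = {rhs:.6f}")

    Both columns agree for every work value, as the theorem requires for a deterministic driving.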

    A Model-based transformation process to validate and implement high-integrity systems

    Despite numerous advances, building high-integrity embedded systems remains a complex task. Such systems come with strong requirements to ensure safety, schedulability, or security properties; one needs to combine multiple analyses to validate each of them. Model-Based Engineering is an accepted solution to address such complexity: analytical models are derived from an abstraction of the system to be built. Yet ensuring that all abstractions are semantically consistent remains an issue, e.g. when model checking is performed to assess safety, then schedulability is analysed using timed automata, and then code is generated. The complexity stems from the high-level view of the model compared to the low-level mechanisms used. In this paper, we present our approach, based on AADL and its behavioral annex, to iteratively refine an architecture description. Both application and runtime components are transformed into basic AADL constructs that have a strict counterpart in classical programming languages or in patterns for verification. We detail the benefits of this process for enhancing analysis and code generation. This work has been integrated into the AADL tool support OSATE2.
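    As a loose, language-agnostic illustration of the iterative refinement described above (the construct names and expansion rules below are invented; this is not OSATE2's API or the actual AADL transformation), a refinement pass can be pictured as repeatedly rewriting high-level constructs into lower-level ones until only basic constructs, with known verification and code-generation counterparts, remain.

        # Toy refinement pass: expand high-level constructs until only "basic"
        # constructs remain. Names and rules are invented for illustration.
        BASIC = {"thread", "timer", "data_port"}

        RULES = {                                  # one construct -> lower-level ones
            "periodic_task":  ["thread", "timer"],
            "message_buffer": ["data_port", "mutex"],
            "mutex":          ["thread"],
        }

        def refine(model):
            """Iteratively rewrite constructs until only basic ones are left."""
            while any(c not in BASIC for c in model):
                expanded = []
                for c in model:
                    expanded.extend(RULES[c] if c in RULES and c not in BASIC else [c])
                if expanded == model:              # nothing left we know how to expand
                    break
                model = expanded
            return model

        print(refine(["periodic_task", "message_buffer"]))
        # -> ['thread', 'timer', 'data_port', 'thread']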