
    A New View on Worst-Case to Average-Case Reductions for NP Problems

    We study the result by Bogdanov and Trevisan (FOCS, 2003), who show that, under reasonable assumptions, there is no non-adaptive worst-case to average-case reduction that bases the average-case hardness of an NP problem on the worst-case complexity of an NP-complete problem. We replace the hiding and the heavy-samples protocols of [BT03] by employing the histogram verification protocol of Haitner, Mahmoody and Xiao (CCC, 2010), which proves to be very useful in this context. Once the histogram is verified, our hiding protocol is directly public-coin, whereas the intuition behind the original protocol inherently relies on private coins.

    TURTLE: Four Weddings and a Tutorial

    The paper discusses an educational case study of protocol modelling in TURTLE, a real-time UML profile supported by the open-source toolkit TTool. The method associated with TURTLE is illustrated step by step with the connection set-up and handover procedures defined for the Future Air Navigation Systems. The paper covers the following methodological stages: requirement modelling, use-case-driven and scenario-based analysis, object-oriented design, and rapid prototyping in Java. Emphasis is laid on the formal verification of analysis and design diagrams.

    Parameterized Verification of Algorithms for Oblivious Robots on a Ring

    We study verification problems for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. In particular, we focus in this paper on the model proposed by Suzuki and Yamashita of anonymous robots evolving in a discrete space with a finite number of locations (here, a ring). A large number of algorithms have been proposed that work for rings whose size is not fixed a priori and can hence be considered a parameter. Handmade correctness proofs of these algorithms have been shown to be error-prone, and recent attention has been given to the application of formal methods to prove them automatically. Our work is the first to study the verification problem for such algorithms in the parameterized case. We show that safety and reachability problems are undecidable for robots evolving asynchronously. On the positive side, we show that safety properties are decidable in the synchronous case, as well as in the asynchronous case for a particular class of algorithms. Several properties of the protocol can be decided as well. The decision procedures rely on an encoding into Presburger arithmetic formulae that can be verified by an SMT solver. The feasibility of our approach is demonstrated by the encoding of several case studies.
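
    The abstract only names the encoding target; as a rough illustration of the flavour of such a check (and not the authors' construction), the Python sketch below uses the z3 SMT solver, an assumed tool choice, to decide a toy parameterized safety property: three robots moving synchronously on a ring of symbolic size n never end up on the same node. The property is stated in plain linear integer (Presburger-style) arithmetic.

```python
# Toy illustration only, not the paper's encoding: a parameterized safety
# check for three robots on a ring of symbolic size n, decided by z3.
from z3 import Int, Ints, Solver, And, Not, If, unsat

n = Int('n')                       # parametric ring size
x1, x2, x3 = Ints('x1 x2 x3')      # robot positions before the move

def on_ring(p):
    return And(0 <= p, p < n)

def step(p):
    # one synchronous step clockwise, wrapping at the end of the ring
    return If(p + 1 == n, 0, p + 1)

distinct_before = And(x1 != x2, x2 != x3, x1 != x3)
y1, y2, y3 = step(x1), step(x2), step(x3)
distinct_after = And(y1 != y2, y2 != y3, y1 != y3)

s = Solver()
# Ask for a counterexample: a ring size and configuration where two robots
# end up on the same node after the synchronous move.
s.add(n >= 3, on_ring(x1), on_ring(x2), on_ring(x3),
      distinct_before, Not(distinct_after))
print("distinctness preserved for every ring size"
      if s.check() == unsat else "counterexample found")
```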

    Formal verification of safety protocol in train control system

    In order to satisfy safety-critical requirements, the train control system (TCS) often employs a layered safety communication protocol to provide reliable services. However, both the description and the verification of these safety protocols may be formidable due to the system's complexity. In this paper, interface automata (IA) are used to describe the safety service interface behaviors of the safety communication protocol. A formal verification method is proposed that describes the safety communication protocols with IA and translates the IA model into a PROMELA model, so that the protocols can be verified by the model checker SPIN. A case study of using this method to describe and verify a safety communication protocol is included. The verification results illustrate that the proposed method is effective for describing the safety protocols and for verifying deadlocks, livelocks and several mandatory consistency properties. A prototype of the safety protocols is also developed based on the presented formal verification method.
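
    As a minimal sketch of the underlying idea, and not the paper's IA-to-PROMELA toolchain, the following Python fragment composes two invented toy interface automata on their shared actions and searches the product state space for deadlocks; all names and transitions are illustrative assumptions.

```python
# Minimal sketch, not the paper's method: two tiny interface automata for a
# request/acknowledge exchange are composed on their shared actions, and the
# composite state space is searched for deadlocks (reachable states with no
# outgoing transition). The real approach translates IA to PROMELA for SPIN.
from collections import deque

# Each automaton: {state: {action: next_state}}; shared actions synchronize.
sender   = {"idle": {"req": "wait"}, "wait": {"ack": "idle"}}
receiver = {"ready": {"req": "busy"}, "busy": {"ack": "ready"}}
shared   = {"req", "ack"}

def compose_step(s_state, r_state):
    """Yield joint successor states of the synchronous product."""
    for act, s_next in sender.get(s_state, {}).items():
        if act in shared:
            r_next = receiver.get(r_state, {}).get(act)
            if r_next is not None:            # both sides must offer the action
                yield (s_next, r_next)
        else:
            yield (s_next, r_state)           # local action of the sender
    for act, r_next in receiver.get(r_state, {}).items():
        if act not in shared:
            yield (s_state, r_next)           # local action of the receiver

def find_deadlocks(init=("idle", "ready")):
    seen, frontier, deadlocks = {init}, deque([init]), []
    while frontier:
        state = frontier.popleft()
        succs = list(compose_step(*state))
        if not succs:
            deadlocks.append(state)
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(find_deadlocks())   # [] here: the toy protocol is deadlock-free
```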

    A Domain Specific Language Based Approach for Generating Deadlock-Free Parallel Load Scheduling Protocols for Distributed Systems

    In this dissertation, the concept of using a domain-specific language to develop error-free parallel asynchronous load scheduling protocols for distributed systems is studied. The motivation of this study is rooted in addressing the high cost of verifying parallel asynchronous load scheduling protocols. Asynchronous parallel applications are prone to subtle bugs such as deadlocks and race conditions due to the possibility of non-determinism. Because of this non-deterministic behavior, traditional testing methods are less effective at finding software faults. One approach that can eliminate these software bugs is to employ model checking techniques that can verify that non-determinism will not cause software faults in parallel programs. Unfortunately, model checking requires the development of a verification model of the program in a separate verification language, which can be an error-prone procedure and may not properly represent the semantics of the original system. The model checking approach can provide true positive results only if the semantics of the implementation code and the verification model are represented under a single framework, such that the verification model closely represents the implementation and the automation of the verification process is natural. In this dissertation, a domain-specific-language-based verification framework is developed to design parallel load scheduling protocols and automatically verify their behavioral properties through model checking. A specification language, LBDSL, is introduced that facilitates the development of parallel load scheduling protocols. The LBDSL verification framework uses model checking techniques to verify the asynchronous behavior of a protocol, and it allows the same protocol specification to be used for both verification and code generation. The support for automatic verification during protocol development reduces the verification cost after development. The applicability of the LBDSL verification framework is illustrated by case studies on three different types of load scheduling protocols. The studies show that the LBDSL-based verification approach removes the need to debug deadlocks and race bugs, which has the potential to significantly lower software development costs.
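
    A minimal sketch of the single-specification idea follows; it is not LBDSL, and the protocol below is invented for illustration: a load-transfer protocol is written once as a set of guarded transitions, and the same definition drives an exhaustive exploration of its asynchronous interleavings to find states with no enabled move.

```python
# Illustrative only, not LBDSL: one transition function serves both as an
# executable step rule and as the input to an exhaustive state-space search.
from itertools import product

# Node loads plus a channel of in-flight (destination, amount) messages.
INIT = {"nodes": {"a": 4, "b": 0}, "chan": ()}

def transitions(state):
    """All enabled moves; each move yields a new state (no mutation)."""
    nodes, chan = state["nodes"], state["chan"]
    moves = []
    # An overloaded node offers one unit of work to an underloaded peer.
    for src, dst in product(nodes, nodes):
        if src != dst and nodes[src] - nodes[dst] >= 2:
            new_nodes = dict(nodes, **{src: nodes[src] - 1})
            moves.append({"nodes": new_nodes, "chan": chan + ((dst, 1),)})
    # A node asynchronously receives an offered unit from the channel.
    for i, (dst, amount) in enumerate(chan):
        new_nodes = dict(nodes, **{dst: nodes[dst] + amount})
        moves.append({"nodes": new_nodes, "chan": chan[:i] + chan[i + 1:]})
    return moves

def explore(init):
    """Enumerate all reachable states; report those with no enabled move."""
    key = lambda s: (tuple(sorted(s["nodes"].items())), tuple(sorted(s["chan"])))
    seen, stack, quiescent = {key(init)}, [init], []
    while stack:
        state = stack.pop()
        succs = transitions(state)
        if not succs:
            quiescent.append(state)   # terminal: balanced, or a genuine deadlock
        for nxt in succs:
            if key(nxt) not in seen:
                seen.add(key(nxt))
                stack.append(nxt)
    return quiescent

print(explore(INIT))   # the toy protocol always ends balanced, never deadlocked
```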

    Design and Analysis of Transport Protocols for Reliable High-Speed Communications

    The design and analysis of transport protocols for reliable communications constitutes the topic of this dissertation. These transport protocols guarantee the sequenced and complete delivery of user data over networks which may lose, duplicate and reorder packets. Reliable transport services are required by a wide range of applications such as the World-Wide Web, remote network access, and distributed computing. The design of these protocols is heavily influenced by the parameters of the underlying network infrastructure and by the assumptions about the host computers and applications. Therefore, recent advances in optical transmission and computer technologies have stimulated the design of several novel transport protocols. Many of the proposed protocols use similar or at least related techniques. Our goal with this thesis is to improve the understanding of reliable communications by analyzing the protocols that implement this service and to contribute to the design of reliable transport protocols. The basis of our analysis is the formal specification and verification of the protocol mechanisms under investigation. The behavior of a protocol is captured by a state-transition system, and properties are established using assertional reasoning. The framework is capable of handling unbounded and modulo-N state variables and of capturing real-time aspects of the protocols, which is essential for the modeling of realistic systems. Practical protocols of considerable complexity are specified and verified in the thesis. One advantage of the formal verification is that it increases our confidence in the correctness of these protocols. The formalism forces us to clarify all the details of the working of a protocol and to state explicitly every assumption about the protocol and its environment. During the process of verification one also gains insight into the mechanisms of the protocol. But probably the most important result is that during the verification we obtain conditions for the correctness of the protocol in the form of inequalities on some protocol parameters. These conditions allow the comparison of different protocol mechanisms and can be used to judge the suitability of a protocol for a certain environment. The functionality of transport protocols can be naturally divided into data transfer and connection management. Data transfer deals with the sequenced delivery of user data, while connection management is concerned with the orderly setup and release of connections.

    In the thesis we study three different data transfer protocols. The usage of timestamps in data transfer protocols is analyzed in detail through the example of the PAWS mechanism, which was proposed as an extension to TCP. The analysis reveals that the use of timestamps increases the functionality of the transport protocol by facilitating the simple measurement of round-trip delays, but it also reduces the maximum allowable transmission rate as compared to the plain sliding-window protocol. Another data transfer protocol called SNR is analyzed, which is based on the idea of periodic state exchange. We start from an earlier specification of SNR and compare it to the plain sliding-window protocol. The analysis reveals that the maximum transmission speed achievable by that SNR specification is higher than that of the plain sliding-window protocol, but it comes with a serious limitation: the SNR specification assumes that no duplicates are generated by either the network or the transport protocol itself. This assumption may seriously limit the effective performance of the protocol in case of losses in the network, and it demonstrates the importance of considering all the assumptions when selecting a protocol for a certain environment.

    The use of timestamps is also investigated in the context of connection management protocols. A detailed analysis of the connection setup protocol SCMP is presented, which is based on the assumption that the clocks of computers can be synchronized relatively cheaply even in a large network. In our verification it is proven that the safety of the protocol does not depend on the synchronization assumption, therefore the protocol can be used safely in cases where there are no absolute guarantees of the clocks being synchronized. Since practical clock synchronization algorithms give only probabilistic guarantees, our result provides important theoretical support for the applicability of the protocol in practical environments. Based on earlier work by others, a family of connection management protocols is analyzed that use a cache to store the information needed to shorten the connection setup latency. We contribute to this work by proposing improvements which considerably reduce the memory usage of these protocols. Furthermore, we show that the correctness of the protocol can be assured without assuming an upper bound on the incarnation lifetime, i.e., the maximum duration of a connection. This result greatly improves the practical applicability of the protocol.
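
    To make the PAWS discussion concrete, here is a simplified, illustrative Python sketch of a PAWS-style timestamp check (not the thesis's formal model; the class and helper names are invented): timestamps that wrap modulo 2^32 are compared with serial-number arithmetic, and a segment carrying a timestamp older than the newest one already accepted is rejected, which is what protects against stale segments from a wrapped sequence-number space.

```python
# Simplified, illustrative sketch of a PAWS-style check (not the thesis's
# model): both sequence numbers and timestamps wrap modulo 2**32, and a
# segment whose timestamp is older than the newest accepted one is dropped.
MOD = 2 ** 32

def newer(a, b, mod=MOD):
    """True if a is 'after' b under wrap-around (serial-number) comparison."""
    return (a - b) % mod < mod // 2 and a != b

class Receiver:
    def __init__(self):
        self.ts_recent = 0          # newest timestamp seen on an accepted segment
        self.rcv_nxt = 0            # next expected sequence number (mod 2**32)

    def accept(self, seq, tsval):
        # PAWS-style test: drop segments carrying a timestamp older than
        # ts_recent, even if their (wrapped) sequence number looks acceptable.
        if newer(self.ts_recent, tsval):
            return False
        if seq != self.rcv_nxt:     # toy in-order check; real TCP allows a window
            return False
        self.ts_recent = tsval
        self.rcv_nxt = (seq + 1) % MOD
        return True

r = Receiver()
print(r.accept(seq=0, tsval=100))   # True : in order, fresh timestamp
print(r.accept(seq=1, tsval=90))    # False: timestamp older than ts_recent
```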

    On formalising and analysing the tweetchain protocol

    Distributed Ledger Technology is demonstrating its capability to provide flexible frameworks for information assurance that can resist Byzantine failures and multiple-target attacks. The availability of development frameworks allows the definition of many applications using such a technology. On the other hand, the verification of such applications is far from easy, since testing is not enough to guarantee the absence of security problems. The paper describes an experience in the modelling and security analysis of one of these applications by means of formal methods: in particular, we consider the Tweetchain protocol as a case study and we use the Tamarin Prover tool, which supports the modelling of a protocol as a multiset rewriting system and its analysis with respect to temporal first-order properties. With the aim of making the modelling and verification process reproducible and independent of the specific protocol, we present a general structure of the Tamarin Prover model and of the properties to be verified. Finally, we discuss the strengths and limitations of the Tamarin Prover approach, considering three aspects: modelling, analysis and the verification process.
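
    For readers unfamiliar with the formalism, the toy Python sketch below illustrates the multiset-rewriting view that Tamarin builds on; it is not Tamarin, and the facts and rules are invented for the example: the global state is a multiset of facts, each rule consumes and produces facts, and a depth-bounded search asks whether a given fact is reachable.

```python
# Toy illustration of multiset rewriting (not Tamarin; facts/rules invented):
# the state is a multiset of string facts, rules consume and produce facts,
# and a bounded search checks reachability of a target fact.
from collections import Counter

# Rules: (name, consumed facts, produced facts)
RULES = [
    ("publish", Counter({"Fresh(k)": 1}),     Counter({"Out(hash(k))": 1, "Key(k)": 1})),
    ("confirm", Counter({"Out(hash(k))": 1}), Counter({"Confirmed(k)": 1})),
]

def enabled(state, consumed):
    return all(state[f] >= n for f, n in consumed.items())

def apply_rule(state, consumed, produced):
    return (state - consumed) + produced

def reachable(init, goal_fact, max_depth=10):
    """Depth-bounded breadth-first search for a state containing goal_fact."""
    frontier = [init]
    for _ in range(max_depth):
        nxt = []
        for state in frontier:
            if state[goal_fact] > 0:
                return True
            for _, consumed, produced in RULES:
                if enabled(state, consumed):
                    nxt.append(apply_rule(state, consumed, produced))
        frontier = nxt
    return False

init = Counter({"Fresh(k)": 1})
print(reachable(init, "Confirmed(k)"))   # True : publish, then confirm
print(reachable(init, "Key(leaked)"))    # False: never produced by these rules
```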

    Utilization of timed automata as a verification tool for real-time security protocols

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2010. Includes bibliographical references (leaves: 85-92). Text in English; abstract in Turkish and English. xi, 92 leaves.
    Timed automata extend the classical automata-theoretic approach to the modeling of real-time systems by introducing time into classical automata. Since the model was first proposed by Alur and Dill in the early nineties, it has become an important research area and has been widely studied both in the context of formal languages and in the modeling and verification of real-time systems. Timed automata use dense time modeling, allowing efficient model checking of time-sensitive systems whose correct functioning depends on timing properties. One of these application areas is the verification of security protocols. This thesis aims to study the timed automata model and utilize it as a verification tool for security protocols. As a case study, the Neuman-Stubblebine Repeated Authentication Protocol is modeled and verified, employing the time-sensitive properties in the model. The flaws of the protocol are analyzed, and the benefits and challenges of the model are discussed.
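
    As a minimal, hand-rolled illustration of the timing aspect (not the thesis's model, which uses the standard Alur-Dill formalism and a dedicated model checker), the Python sketch below replays timed traces against a two-step challenge-response automaton whose single clock guards the freshness window of the response; all names and the 5-unit window are assumptions.

```python
# Hand-rolled timed-automaton sketch, illustrative only: one clock x guards a
# challenge-response step so a reply is accepted only within WINDOW time units.
WINDOW = 5.0

# Edges: (source, action, guard(x), reset, target)
EDGES = [
    ("idle",    "send_challenge", lambda x: True,        True,  "waiting"),
    ("waiting", "recv_response",  lambda x: x <= WINDOW, False, "accepted"),
    ("waiting", "timeout",        lambda x: x > WINDOW,  False, "rejected"),
]

def run(timed_trace, start="idle"):
    """Replay a trace of (delay, action) pairs; return the final location."""
    loc, clock = start, 0.0
    for delay, action in timed_trace:
        clock += delay                      # time elapses in the current location
        for src, act, guard, reset, dst in EDGES:
            if src == loc and act == action and guard(clock):
                loc = dst
                if reset:
                    clock = 0.0
                break
        else:
            return "stuck"                  # no enabled edge: the trace is rejected
    return loc

print(run([(0.0, "send_challenge"), (3.0, "recv_response")]))   # accepted
print(run([(0.0, "send_challenge"), (9.0, "recv_response")]))   # stuck (too late)
```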