40 research outputs found

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Byzantine Failures and Security: Arbitrary is not (always) Random

    The Byzantine failure model allows arbitrary behavior of a certain fraction of network nodes in a distributed system. It was introduced to model and analyze the effects of very severe hardware faults in aircraft control systems. Lately, the Byzantine failure model has been used in the area of network security, where Byzantine tolerance is equated with resilience against malicious attackers. We discuss two reasons why one should be careful in doing so. First, Byzantine tolerance is not concerned with secrecy, so special means have to be employed if secrecy is a desired system property. Second, in contrast to the domain of hardware faults, in a security setting it is difficult to compute the assumption coverage of the Byzantine failure model, i.e., the probability that the failure assumption holds in practice. To address this latter point, we develop a methodology that allows one to estimate the reliability of a Byzantine-tolerant solution exposed to attackers of different strengths.
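    The assumption coverage discussed in this abstract can be illustrated with a small numerical sketch. The per-node compromise probability `p`, the independence assumption, and the function name are illustrative choices, not taken from the paper; indeed, the independence assumption is exactly what is hard to justify against a coordinated attacker:

    ```python
    from math import comb

    def assumption_coverage(n: int, p: float) -> float:
        """Probability that at most floor((n-1)/3) of n nodes are compromised,
        assuming each node is compromised independently with probability p.
        (Independence is an idealization; an adaptive attacker violates it.)"""
        t = (n - 1) // 3  # classic Byzantine resilience bound: n >= 3t + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

    # With 4 nodes (tolerating t = 1) and a 5% per-node compromise rate,
    # the failure assumption holds with probability ~0.986.
    print(round(assumption_coverage(4, 0.05), 3))
    ```

    Against a stronger attacker (larger `p`), the coverage, and hence the reliability one may claim for the Byzantine-tolerant solution, drops sharply.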

    Transformational approaches to the specification and verification of fault-tolerant systems: Formal Background and classification

    Proving that a program meets its specification and can thus be called correct has been a research subject for many years, resulting in a wide range of methods and formalisms. However, it is a common experience that even systems which have been proven correct can fail due to physical faults occurring in the system. As computer programs control an increasing part of today's critical infrastructure, the notion of correctness has been extended to fault tolerance, meaning correctness in the presence of a certain amount of faulty behavior of the environment. Formalisms to verify fault-tolerant systems must model faults and faulty behavior in some form or another. Common ways to do this are based on a notion of transformation, either at the program or the specification level. We survey the wide range of formal methods for verifying fault-tolerant systems that are based on some form of transformation. Our aim is to classify these methods, relate them to one another and thus structure the area. We hope that this might facilitate the entry of researchers into this interesting field of computer science.

    An exercise in systematically deriving fault-tolerance specifications

    To rigorously prove that a system is correct under normal operation requires a formal correctness specification. In the context of fault tolerance, correctness means that a system must be correct even if some specified faults occur. The correctness conditions in the former and in the latter case are, however, not necessarily the same. This is because correctness specifications for fault tolerance must often take the behavior of faulty components into account. In this paper we perform a case study on the interrelations between problem specifications in ideal environments and in faulty ones. The problem considered is consensus, and the failure model used is crash. The goal of this research is to uncover the influence that specific failure models have on problem specifications, so that fault-tolerance specifications can be systematically derived. As this is work in progress, the ideas herein are partly half-baked and deserve some additional discussion.
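    The shift the abstract describes, from a specification over all processes to one quantified only over non-crashed processes, can be made concrete with a small run-checker. The encoding (dictionaries of proposals and decisions, a set of crashed processes) and all names are illustrative, not the paper's formalism:

    ```python
    def satisfies_consensus(proposals, decisions, crashed):
        """Check a finished run against crash-tolerant consensus properties.
        proposals/decisions map process id -> value (crashed processes may
        not decide); crashed is the set of processes that failed."""
        correct = set(proposals) - set(crashed)
        # Termination: every correct process decides.
        if not all(p in decisions for p in correct):
            return False
        decided = [decisions[p] for p in correct]
        # Agreement: no two correct processes decide differently.
        if len(set(decided)) > 1:
            return False
        # Validity: every decided value was proposed by some process.
        return all(v in proposals.values() for v in decided)

    # Process 3 crashed before deciding; the run is still correct.
    print(satisfies_consensus({1: 'a', 2: 'a', 3: 'b'}, {1: 'a', 2: 'a'}, {3}))
    ```

    With `crashed` empty, the conditions collapse to the specification for the ideal environment, which is exactly the kind of interrelation between the two settings the paper studies.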

    Specifications for Fault Tolerance: A Comedy of Failures

    A substantial difficulty in rigorously reasoning about fault-tolerant distributed algorithms is the necessity to formally describe faulty behavior. In this paper, we present a unified and formal approach to specifying such behavior. It is based on the observation that faulty behavior can be regarded as a special form of (programmable) system behavior. Consequently, a failure model is defined to be a program transformation, which can be used to evaluate the correctness properties of fault-tolerant algorithms. We reformulate several failure models which are pervasive in the literature in terms of our approach and show some interesting relations between them. To show the feasibility of this approach, we apply our methodology to the problem of reliable broadcast. Categories and Subject Descriptors: C.4 [Performance of Systems]: fault tolerance; modeling techniques; F.3.1 [Specifying and Verifying and Reasoning about Programs]: mechanical verification; specification techniques Gener..
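    The idea of a failure model as a program transformation can be sketched for the crash model: the transformation wraps a process's step function so that the process may permanently halt. This is an illustrative sketch under assumed names, not the paper's formalism:

    ```python
    import random

    def crash_transform(step, crash_prob=0.0, rng=random):
        """Failure model as a program transformation (illustrative sketch):
        wrap a process's step function so the process may crash, i.e.
        permanently stop taking steps, at any point during execution."""
        status = {"crashed": False}

        def faulty_step(local_state):
            if status["crashed"]:
                return local_state          # a crashed process takes no steps
            if rng.random() < crash_prob:
                status["crashed"] = True    # the crash action the model adds
                return local_state
            return step(local_state)        # otherwise behave as programmed

        return faulty_step
    ```

    With `crash_prob=0.0` the transformed program behaves exactly like the original one, which reflects the observation that fault-free behavior is a special case of the failure model.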

    A Survey of Self-Stabilizing Spanning-Tree Construction Algorithms

    Self-stabilizing systems can automatically recover from arbitrary state perturbations in finite time. They are therefore well suited for dynamic, failure-prone environments. Spanning-tree construction in distributed systems is a fundamental task which forms the basis for many other network algorithms (such as token circulation or routing). This paper surveys self-stabilizing algorithms that construct a spanning tree within a network of processing entities. Lower bounds and related work are also discussed.
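    The flavor of algorithm surveyed here can be sketched with one classic rule, in the spirit of BFS-tree stabilization: each node repeatedly recomputes its distance and parent from its neighbors' current values, so any corrupted initial state is repaired in finitely many rounds. This simplified synchronous sketch is illustrative and not taken from the survey:

    ```python
    def stabilize_bfs_tree(adj, root, dist, parent, rounds=None):
        """Repeatedly apply the self-stabilizing BFS rule: the root pins its
        distance to 0; every other node adopts (min neighbor distance) + 1
        and points its parent at that neighbor. Starting from ANY dist/parent
        values, n rounds suffice to reach a correct BFS spanning tree."""
        n = len(adj)
        for _ in range(rounds or n):
            for v in adj:
                if v == root:
                    dist[v], parent[v] = 0, None
                else:
                    u = min(adj[v], key=lambda w: dist[w])
                    dist[v], parent[v] = dist[u] + 1, u
        return dist, parent

    # A 3-node path with deliberately corrupted initial state self-repairs:
    adj = {0: [1], 1: [0, 2], 2: [1]}
    print(stabilize_bfs_tree(adj, 0, {0: 7, 1: 0, 2: 9}, {0: 2, 1: 2, 2: 0}))
    ```

    The point of the self-stabilization guarantee is that no initialization or reset protocol is needed: transient faults that scramble `dist` and `parent` are tolerated by the normal operation of the rule itself.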