12 research outputs found

    Self-Stabilizing Repeated Balls-into-Bins

    We study the following synchronous process that we call "repeated balls-into-bins". The process is started by assigning n balls to n bins in an arbitrary way. In every subsequent round, from each non-empty bin one ball is chosen according to some fixed strategy (random, FIFO, etc.), and re-assigned to one of the n bins uniformly at random. We define a configuration "legitimate" if its maximum load is O(log n). We prove that, starting from any configuration, the process will converge to a legitimate configuration in linear time and then it will only take on legitimate configurations over a period of length bounded by any polynomial in n, with high probability (w.h.p.). This implies that the process is self-stabilizing and that every ball traverses all bins in O(n log² n) rounds, w.h.p.
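
    As a quick illustration of the process, here is a minimal simulation sketch; the function name, parameter values, and the all-balls-in-one-bin starting configuration are our own illustrative choices, not taken from the paper.

        import random

        def repeated_balls_into_bins(n, rounds, seed=0):
            """Simulate the repeated balls-into-bins process, recording the maximum load per round."""
            rng = random.Random(seed)
            load = [0] * n
            load[0] = n                      # an arbitrary (worst-looking) initial assignment
            max_loads = []
            for _ in range(rounds):
                # Synchronous round: one ball leaves every non-empty bin and is
                # re-assigned uniformly at random.  Because balls are treated as
                # indistinguishable here, the choice strategy (random, FIFO, ...)
                # does not affect the load dynamics.
                movers = sum(1 for b in load if b > 0)
                for i in range(n):
                    if load[i] > 0:
                        load[i] -= 1
                for _ in range(movers):
                    load[rng.randrange(n)] += 1
                max_loads.append(max(load))
            return max_loads

        if __name__ == "__main__":
            # The maximum load should settle near the O(log n) regime.
            print(repeated_balls_into_bins(n=100, rounds=200)[-5:])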

    Hijack Vertical Federated Learning Models As One Party

    Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion. In general, these parties have a group of users in common but own different features. Existing VFL frameworks use cryptographic techniques to provide data privacy and security guarantees, leading to a line of work studying computing efficiency and fast implementation. However, the security of VFL's model remains underexplored.
    Comment: https://doi.ieeecomputersociety.org/10.1109/TDSC.2024.335808
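
    For readers unfamiliar with the setting, the sketch below shows the vertical data partition that VFL assumes: the parties share user IDs but hold disjoint feature columns, and only one party holds the label. The party roles, column names, and the in-the-clear join are illustrative assumptions for exposition; a real VFL protocol exchanges only protected intermediate results.

        # Toy vertical partition: two parties share users (rows) but own different features.
        # Party names, features, and the plaintext join are illustrative only; real VFL
        # frameworks align IDs privately and never pool raw features like this.
        party_a = {  # e.g. a bank holding financial features
            "u1": {"income": 52_000, "balance": 1_200},
            "u2": {"income": 67_000, "balance": 300},
            "u3": {"income": 48_000, "balance": 2_500},
        }
        party_b = {  # e.g. a retailer holding behavioural features and the training label
            "u1": {"purchases": 14, "label": 1},
            "u2": {"purchases": 3, "label": 0},
            "u3": {"purchases": 9, "label": 1},
        }

        shared_users = party_a.keys() & party_b.keys()   # the common user group
        joint_view = {u: {**party_a[u], **party_b[u]} for u in sorted(shared_users)}
        print(joint_view["u1"])   # {'income': 52000, 'balance': 1200, 'purchases': 14, 'label': 1}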

    Proving the Herman-Protocol Conjecture

    Herman's self-stabilisation algorithm, introduced 25 years ago, is a well-studied synchronous randomised protocol for enabling a ring of N processes collectively holding any odd number of tokens to reach a stable state in which a single token remains. Determining the worst-case expected time to stabilisation is the central outstanding open problem about this protocol. It is known that there is a constant h such that any initial configuration has expected stabilisation time at most hN². Ten years ago, McIver and Morgan established a lower bound of 4/27 ≈ 0.148 for h, achieved with three equally-spaced tokens, and conjectured this to be the optimal value of h. A series of papers over the last decade gradually reduced the upper bound on h, with the present record (achieved in 2014) standing at approximately 0.156. In this paper, we prove McIver and Morgan's conjecture and establish that h = 4/27 is indeed optimal.
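
    The protocol is easy to simulate in its standard token-level formulation: in each synchronous round every token independently moves one position clockwise with probability 1/2, and tokens that meet annihilate in pairs. The sketch below estimates the expected stabilisation time for three equally-spaced tokens and compares it with (4/27)N²; the ring size and trial count are arbitrary illustrative choices.

        import random

        def herman_stabilisation_time(token_positions, ring_size, rng):
            """Token-level Herman's protocol: each token moves clockwise with
            probability 1/2 per synchronous round; tokens meeting at the same
            node annihilate in pairs.  Returns rounds until one token is left."""
            tokens = list(token_positions)
            rounds = 0
            while len(tokens) > 1:
                moved = [(p + (1 if rng.random() < 0.5 else 0)) % ring_size for p in tokens]
                counts = {}
                for p in moved:
                    counts[p] = counts.get(p, 0) + 1
                tokens = [p for p, c in counts.items() for _ in range(c % 2)]
                rounds += 1
            return rounds

        if __name__ == "__main__":
            rng = random.Random(1)
            N = 15                                  # ring size (divisible by 3)
            start = [0, N // 3, 2 * N // 3]         # three equally-spaced tokens
            trials = 20_000
            avg = sum(herman_stabilisation_time(start, N, rng) for _ in range(trials)) / trials
            print(f"estimated E[T] = {avg:.1f}, (4/27) N^2 = {4 * N * N / 27:.1f}")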

    Self-stabilizing distributed algorithms for graph alliances

    Graph alliances are recently developed global properties of any symmetric graph. Our purpose in the present paper is to design self-stabilizing, fault-tolerant distributed algorithms for the global offensive and the global defensive alliance in a given arbitrary graph. We also provide a complete analysis of the convergence time of both algorithms.
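
    As a point of reference for the two properties, the checker below uses what we take to be the standard definitions of global defensive and global offensive alliances; it is a sequential sketch over an adjacency-set encoding of our own choosing, not the paper's distributed algorithm.

        def is_global_defensive_alliance(adj, S):
            """adj maps each vertex to the set of its neighbors; S is the candidate set.
            S must dominate the graph, and every v in S needs at least as many
            neighbors inside S (counting v itself) as outside S."""
            S = set(S)
            dominating = all(v in S or adj[v] & S for v in adj)
            defensive = all(len(adj[v] & S) + 1 >= len(adj[v] - S) for v in S)
            return dominating and defensive

        def is_global_offensive_alliance(adj, S):
            """Every vertex outside S needs strictly more neighbors inside S than
            outside it, i.e. |N(v) ∩ S| >= |N(v) - S| + 1; this also forces domination."""
            S = set(S)
            return all(len(adj[v] & S) >= len(adj[v] - S) + 1 for v in adj if v not in S)

        # On the 4-cycle a-b-c-d, {a, b} is a global defensive alliance (but not
        # offensive), while {a, c} is a global offensive alliance (but not defensive).
        cycle4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
        print(is_global_defensive_alliance(cycle4, {"a", "b"}),
              is_global_offensive_alliance(cycle4, {"a", "c"}))     # True True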

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization of the phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
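
    As a rough illustration of the two stated requirements, the sketch below checks a configuration against them: neighboring processes must be on the same task instance, and may act simultaneously only on different steps. The state encoding (per-process "instance" and "step" fields) is our own illustrative assumption, not the thesis's actual variables.

        # A toy legitimacy check for the two NTS requirements described above.  The
        # configuration encoding (per-process "instance" and "step" fields, with
        # None meaning "idle") is an illustrative assumption only.

        def nts_legitimate(adj, config):
            """adj: process -> set of neighbors; config: process -> {"instance", "step"}."""
            for u in adj:
                for v in adj[u]:
                    # Every neighborhood executes the same task instance.
                    if config[u]["instance"] != config[v]["instance"]:
                        return False
                    # Neighbors may act simultaneously only on *different* steps.
                    su, sv = config[u]["step"], config[v]["step"]
                    if su is not None and su == sv:
                        return False
            return True

        line = {"p1": {"p2"}, "p2": {"p1", "p3"}, "p3": {"p2"}}
        ok = {"p1": {"instance": 4, "step": "a"},
              "p2": {"instance": 4, "step": "b"},
              "p3": {"instance": 4, "step": "a"}}   # p1 and p3 are not neighbors
        bad = {"p1": {"instance": 4, "step": "a"},
               "p2": {"instance": 5, "step": "b"},
               "p3": {"instance": 5, "step": "b"}}  # wrong instance at p2; p2/p3 share a step
        print(nts_legitimate(line, ok), nts_legitimate(line, bad))   # True False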

    Solved problems, unsolved problems and non-problems in concurrency

    No full text

    Files as first-class objects in fault-tolerant concurrent systems

    Concurrent systems are used in applications where multiple processors are needed to complete tasks within a reasonable amount of time, or where the data sets involved will not fit within the main memory of a single computer. Because of their reliance on multiple machines, such systems are proportionally more vulnerable to both hardware- and software-induced failures. Fault-tolerance schemes are used to recover some earlier consistent state of the system after such a failure. One important technique used to achieve fault-tolerance is checkpointing and rollback-recovery. In this thesis, we present a method for efficiently and transparently incorporating the part of the process state contained in the file system into process checkpoints, and we show how consistent versions of the file system and processes may be recovered after a failure. We present the details of a prototype system which implements our method. We show that by using the special properties of the log-structured file system, the class of programs which are amenable to checkpointing and rollback-recovery schemes can be expanded to include those that use files. We impose no a priori restriction on the types of file system operations that can be done, and we demonstrate that our scheme does not impose significant failure-free overhead on the computation.
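
    The sketch below illustrates the basic idea that makes a log-structured (append-only) file system attractive here: the file-system part of a checkpoint can be as small as the current end-of-log offset, and rollback amounts to truncating the log back to that offset. It is a toy illustration of the general idea under that append-only assumption, with names of our own choosing; it is not the prototype described in the thesis.

        import os
        import pickle
        import tempfile

        # Toy illustration: with append-only writes, the file part of a checkpoint is
        # just the current end-of-log offset, and rollback truncates back to it.

        def checkpoint(process_state, log_path):
            """Capture the process state together with the current end-of-log offset."""
            return {"process": pickle.dumps(process_state),
                    "log_offset": os.path.getsize(log_path)}

        def rollback(ckpt, log_path):
            """Truncate the log to the checkpointed offset and restore the process state."""
            os.truncate(log_path, ckpt["log_offset"])
            return pickle.loads(ckpt["process"])

        log_path = os.path.join(tempfile.mkdtemp(), "app.log")
        open(log_path, "wb").close()                       # start with an empty log
        with open(log_path, "ab") as f:
            f.write(b"records written before the checkpoint\n")

        state = {"counter": 3}
        ckpt = checkpoint(state, log_path)

        with open(log_path, "ab") as f:
            f.write(b"records that the failure should undo\n")
        state["counter"] = 99                              # in-memory state diverges too

        state = rollback(ckpt, log_path)                   # process and file state agree again
        print(state, os.path.getsize(log_path) == ckpt["log_offset"])   # {'counter': 3} True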

    A framework for automated concurrency verification

    Reasoning systems based on Concurrent Separation Logic make verifying complex concurrent algorithms readily possible. Such algorithms contain subtle protocols of permission and resource transfer between threads; to cope with these intricacies, modern concurrent separation logics contain many moving parts and integrate many bespoke logical components. Verifying concurrent algorithms by hand consumes much time, effort, and expertise. As a result, computer-assisted verification is a fertile research topic, and fully automated verification is a popular research goal. Unfortunately, the complexity of modern concurrent separation logics makes them hard to automate, and the proliferation and fast turnover of such logics create downward pressure against building tools for new logics. As a result, many such logics lack tooling. This dissertation proposes Starling: a scheme for creating concurrent program logics that are automatable by construction. Starling adapts the existing Concurrent Views Framework for sound concurrent reasoning systems, overlaying a framework for reducing concurrent proof outlines to verification conditions in existing theories (such as those accepted by off-the-shelf sequential solvers). This dissertation describes Starling in a bottom-up, modular manner. First, it shows the derivation of a series of general concurrency proof rules from the Views framework. Next, it shows how one such rule leads to the Starling framework itself. From there, it outlines a series of increasingly elaborate frontends: ways of decomposing individual Hoare triples over atomic actions into verification conditions suitable for encoding into backend theories. Each frontend leads to a concurrent program logic. Finally, the dissertation presents a tool for verifying C-style concurrent proof outlines, based on one of the above frontends. It gives examples of such outlines, covering a variety of algorithms, backend solvers, and proof techniques.
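
    The flavour of the reduction can be shown with a deliberately tiny example: in a Views-style proof rule, an atomic triple {p} c {q} is accepted if, for every frame view r in a chosen set, running c from any state satisfying both p and r ends in a state satisfying both q and r. In the sketch below, views are plain predicates over a bounded shared counter and composition is conjunction, which is a simplification of our own for exposition and not Starling's actual encoding; a real backend would discharge such conditions with a sequential or SMT solver rather than by enumeration.

        # Tiny rendition of per-frame verification conditions for an atomic triple.
        # Views are predicates over a bounded counter; composition is conjunction.
        # This is an illustrative simplification, not Starling's encoding.

        STATES = range(6)                           # shared counter values 0..5

        def verification_conditions_hold(p, c, q, frames):
            """Brute-force the per-frame obligations over the finite state space."""
            return all(q(c(s)) and r(c(s))
                       for r in frames
                       for s in STATES
                       if p(s) and r(s))

        inc = lambda s: s + 1                       # atomic command: counter := counter + 1
        pre = lambda s: s < 5
        post = lambda s: s <= 5
        frames = [lambda s: True, lambda s: s >= 1, lambda s: s >= 3]   # views preserved by inc

        print(verification_conditions_hold(pre, inc, post, frames))             # True
        print(verification_conditions_hold(pre, inc, lambda s: s < 5, frames))  # False: q too strong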

    Petri Nets for the Design of Self-Stabilizing Algorithms (original title: Petrinetze zum Entwurf selbststabilisierender Algorithmen)

    In 1974, Edsger W. Dijkstra coined the term self-stabilization. A system is self-stabilizing if, regardless of its initial state, it reaches a stable behaviour after a finite number of actions. This thesis focuses on the design of self-stabilizing algorithms. We introduce a new Petri net based method for the design of self-stabilizing algorithms and validate it on several case studies. In each of the case studies, our stepwise design starts from the algorithmic idea of an existing algorithm and leads to a new self-stabilizing algorithm. One of these algorithms is a new randomized self-stabilizing algorithm for leader election in a ring of processors. This algorithm is derived from a published algorithm which we show, for the first time, to be incorrect. We prove that our algorithm is space-minimal. A further result is the first algorithm for self-stabilizing token-passing in an asynchronous environment that works without time-out actions. Petri nets form a uniform formal framework for the modelling and verification of these algorithms.
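
    As a reminder of the underlying formalism, the sketch below implements a minimal place/transition net with the standard enabling and firing rules; the example net, a two-process ring passing a single token, is our own illustration rather than one of the thesis's case studies.

        from collections import Counter

        # A minimal place/transition net with the standard enabling and firing rules.

        class PetriNet:
            def __init__(self, transitions):
                # transitions: name -> (places consumed, places produced), as multisets
                self.transitions = {t: (Counter(pre), Counter(post))
                                    for t, (pre, post) in transitions.items()}

            def enabled(self, marking, t):
                pre, _ = self.transitions[t]
                return all(marking[p] >= n for p, n in pre.items())

            def fire(self, marking, t):
                if not self.enabled(marking, t):
                    raise ValueError(f"transition {t} is not enabled")
                pre, post = self.transitions[t]
                return Counter(marking) - pre + post

        ring = PetriNet({
            "pass_1_to_2": (["token_at_1"], ["token_at_2"]),
            "pass_2_to_1": (["token_at_2"], ["token_at_1"]),
        })
        marking = Counter({"token_at_1": 1})
        marking = ring.fire(marking, "pass_1_to_2")
        print(dict(marking), ring.enabled(marking, "pass_2_to_1"))   # {'token_at_2': 1} True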