
    Self-stabilizing TDMA Algorithms for Wireless Ad-hoc Networks without External Reference

    Time division multiple access (TDMA) is a method for sharing communication media. In wireless communications, TDMA algorithms often divide the radio time into timeslots of uniform size ξ and then combine them into frames of uniform size τ. We consider TDMA algorithms that allocate at least one timeslot in every frame to every node. Given a maximal node degree δ, and no access to external references for collision detection, time, or position, we consider the problem of collision-free self-stabilizing TDMA algorithms that use a constant frame size. We demonstrate that this problem has no solution when the frame size is τ < max{2δ, χ₂}, where χ₂ is the chromatic number for distance-2 vertex coloring. As a complement to this lower bound, we focus on proving the existence of collision-free self-stabilizing TDMA algorithms that use a constant frame size τ. We consider basic settings (no hardware support for collision detection and no prior clock synchronization) and assume that concurrent transmissions from transmitters at most two hops apart collide. In the context of self-stabilizing systems that have no external reference, we are, to the best of our knowledge, the first to study this problem, and we use simulations to show convergence even in the presence of computation time uncertainties.
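
    As a rough illustration of the slot-selection idea (a sketch, not the paper's algorithm), the following Python fragment lets each conflicted node repick a timeslot unused in its two-hop neighborhood; with τ ≥ max{2δ, χ₂} a free slot always exists, which is the intuition behind the frame-size bound. The topology, frame size, and the centralized random scheduler are assumptions made only for this illustration.

        import random

        def two_hop_neighbors(adj, v):
            # Nodes within distance 1 or 2 of v, excluding v itself.
            near = set(adj[v])
            for u in adj[v]:
                near.update(adj[u])
            near.discard(v)
            return near

        def stabilize_tdma(adj, tau, rounds=1000, seed=0):
            rng = random.Random(seed)
            # Arbitrary (possibly corrupted) initial slot assignment.
            slot = {v: rng.randrange(tau) for v in adj}
            for _ in range(rounds):
                conflicted = [v for v in adj
                              if any(slot[u] == slot[v] for u in two_hop_neighbors(adj, v))]
                if not conflicted:
                    return slot  # collision-free: no two-hop neighbors share a slot
                v = rng.choice(conflicted)
                taken = {slot[u] for u in two_hop_neighbors(adj, v)}
                free = [s for s in range(tau) if s not in taken]
                # With tau >= max(2*delta, chi_2), a free slot always exists here.
                slot[v] = rng.choice(free) if free else rng.randrange(tau)
            return slot

        # Illustrative topology: a path with maximal degree delta = 2, so tau = 4.
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(stabilize_tdma(adj, tau=4))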

    Separation of Circulating Tokens

    Self-stabilizing distributed control is often modeled by token abstractions. A system with a single token may implement mutual exclusion; a system with multiple tokens may ensure that immediate neighbors do not simultaneously enjoy a privilege. For a cyber-physical system, tokens may represent physical objects whose movement is controlled. The problem studied in this paper is to ensure that a synchronous system with m circulating tokens maintains a distance of at least d between tokens. This problem is first considered in a ring where d is given while m and the ring size n are unknown. The protocol solving this problem can be uniform, with all processes running the same program, or non-uniform, with some processes acting only as token relays. The protocol for this first problem is simple and can be expressed in the Petri net formalism. A second problem is to maximize d when m is given and n is unknown. For the second problem, the paper presents a non-uniform protocol with a single corrective process.
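
    The separation requirement can be pictured with a toy synchronous simulation (a sketch under assumed rules, not the paper's uniform or non-uniform protocol): a token advances only when the next d-1 positions ahead are free, so once tokens are at least d apart they remain so. The ring size, token placement, and look-ahead rule below are illustrative assumptions.

        def step(ring, d):
            # One synchronous round; ring[i] is True iff process i holds a token.
            n = len(ring)
            new = [False] * n
            for i in range(n):
                if not ring[i]:
                    continue
                # A token moves only if the next d-1 positions ahead are free.
                ahead_clear = all(not ring[(i + k) % n] for k in range(1, d))
                if ahead_clear:
                    new[(i + 1) % n] = True  # forward the token
                else:
                    new[i] = True            # hold the token this round
            return new

        # Three tokens on a ring of ten processes, required separation d = 3.
        ring = [True, True, False, False, False, False, False, True, False, False]
        for _ in range(12):
            ring = step(ring, d=3)
            print("".join("T" if t else "." for t in ring))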

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
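
    As a concrete instance of the class of algorithms surveyed here, the sketch below implements classical randomized pairwise gossip for distributed averaging: at each tick a random pair of neighbors replaces both values with their average, so every node converges to the network-wide mean without any routing. The topology and measurements are illustrative assumptions, not data from the article.

        import random

        def gossip_average(adj, x, iters=2000, seed=0):
            rng = random.Random(seed)
            x = dict(x)
            edges = [(u, v) for u in adj for v in adj[u] if u < v]
            for _ in range(iters):
                u, v = rng.choice(edges)
                x[u] = x[v] = 0.5 * (x[u] + x[v])  # pairwise averaging step
            return x

        # Illustrative topology and local measurements; the true average is 4.0.
        adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
        x0 = {0: 10.0, 1: 0.0, 2: 4.0, 3: 2.0}
        print(gossip_average(adj, x0))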

    Deep Reinforcement Learning for Event-Triggered Control

    Event-triggered control (ETC) methods can achieve high-performance control with a significantly lower number of samples than conventional time-triggered methods. These frameworks are often based on a mathematical model of the system and specific designs of the controller and event trigger. In this paper, we show how deep reinforcement learning (DRL) algorithms can be leveraged to simultaneously learn control and communication behavior from scratch, and we present a DRL approach that is particularly suitable for ETC. To our knowledge, this is the first work to apply DRL to ETC. We validate the approach on multiple control tasks and compare it to model-based event-triggering frameworks. In particular, we demonstrate that, unlike many model-based ETC designs, it can be straightforwardly applied to nonlinear systems.
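
    To make the joint control-and-communication formulation concrete, here is a hedged sketch (our own illustration, not the paper's environments or learning algorithm): a scalar linear plant whose action includes a communicate/hold decision, with a reward that penalizes both state cost and communication. Training a DRL agent on such an environment is what would let it learn the controller and the event trigger together; all names and parameters below are assumptions.

        import random

        class EventTriggeredEnv:
            # Scalar linear plant x' = a*x + b*u; the input is held between communications.
            def __init__(self, a=1.05, b=0.5, lam=0.1):
                self.a, self.b, self.lam = a, b, lam
                self.reset()

            def reset(self):
                self.x = random.uniform(-1.0, 1.0)
                self.u_held = 0.0
                return self.x

            def step(self, communicate, u_new):
                if communicate:              # transmit a fresh control input
                    self.u_held = u_new
                self.x = self.a * self.x + self.b * self.u_held
                # Reward trades off state cost against a communication penalty.
                reward = -self.x ** 2 - self.lam * float(communicate)
                return self.x, reward

        # Rollout with a naive hand-crafted trigger; a DRL agent would instead
        # learn both the trigger (communicate or not) and the control input.
        env = EventTriggeredEnv()
        x, total = env.reset(), 0.0
        for _ in range(50):
            communicate = abs(x) > 0.2
            x, r = env.step(communicate, u_new=-2.0 * x)
            total += r
        print("episode return:", round(total, 3))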

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems: any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization between phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
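
    The NTS requirements can be illustrated with a small simulation (a sketch under assumed rules, not the thesis's self-stabilizing algorithm): processes work through the steps of task instance k, a crude lower-id priority rule stands in for "neighbors never execute the same step simultaneously", and a process starts instance k+1 only after its whole neighborhood has finished instance k. The topology, step count, and daemon below are illustrative assumptions.

        import random

        STEPS = 3                                         # steps per task instance
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # illustrative topology
        state = {v: {"inst": 0, "step": 0} for v in adj}  # step == STEPS means "done"

        def enabled(v):
            me = state[v]
            if me["step"] < STEPS:
                # Crude stand-in for "neighbors never execute the same step at once":
                # defer to any lower-id neighbor that is about to run the same step.
                return not any(u < v and state[u]["inst"] == me["inst"]
                               and state[u]["step"] == me["step"] for u in adj[v])
            # Finished this instance: start the next one only once the whole
            # neighborhood has finished it as well.
            return all(state[u]["inst"] > me["inst"]
                       or (state[u]["inst"] == me["inst"] and state[u]["step"] == STEPS)
                       for u in adj[v])

        def execute(v):
            me = state[v]
            if me["step"] < STEPS:
                me["step"] += 1                             # perform the next step
            else:
                me["inst"], me["step"] = me["inst"] + 1, 0  # begin the next instance

        rng = random.Random(1)
        for _ in range(60):                                 # arbitrary daemon
            ready = [v for v in adj if enabled(v)]
            if ready:
                execute(rng.choice(ready))
        print({v: (s["inst"], s["step"]) for v, s in state.items()})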