
    Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels

    We study the problem of achieving average consensus between a group of agents over a network with erasure links. In the context of consensus problems, the unreliability of communication links between nodes has traditionally been modeled by allowing the underlying graph to vary with time. In other words, depending on the realization of the link erasures, the underlying graph at each time instant is assumed to be a subgraph of the original graph. Implicit in this model is the assumption that the erasures are symmetric: if at time t the packet from node i to node j is dropped, the same is true for the packet transmitted from node j to node i. However, in practical wireless communication systems this assumption is unreasonable and, due to the lack of symmetry, standard averaging protocols cannot guarantee that the network will reach consensus on the true average. In this paper we explore the use of channel coding to improve the performance of consensus algorithms. For symmetric erasures, we show that, for certain ranges of the system parameters, repetition codes can speed up the convergence rate. For asymmetric erasures we show that tree codes (which have recently been designed for erasure channels) can be used to simulate the performance of the original "unerased" graph. Thus, unlike conventional consensus methods, we can guarantee convergence to the average in the asymmetric case. The price is a slower convergence rate relative to the unerased network, though one that is often still faster than that of conventional consensus algorithms over noisy links.
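
    A minimal simulation sketch of the symmetric-erasure case (not the paper's construction): each link repeats its packet r times per round, so the effective drop probability falls from p to p^r, and the symmetric update preserves the network sum, hence convergence to the true average. The ring topology, step size, and parameters are illustrative assumptions.

        # Average consensus over symmetric erasure links with per-link
        # repetition coding; all parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def consensus_step(x, edges, p_drop, r, step=0.2):
            """One synchronous round; a link works if any of its r
            repeated transmissions survives the erasure channel."""
            x_new = x.copy()
            for i, j in edges:
                if rng.random() > p_drop ** r:   # at least one repetition arrived
                    diff = x[j] - x[i]
                    x_new[i] += step * diff      # symmetric update keeps the sum,
                    x_new[j] -= step * diff      # hence the average, invariant
            return x_new

        n = 8
        edges = [(i, (i + 1) % n) for i in range(n)]   # ring of 8 agents
        x = rng.normal(size=n)
        target = x.mean()
        for _ in range(300):
            x = consensus_step(x, edges, p_drop=0.5, r=3)
        print(abs(x - target).max())   # disagreement from the true average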

    Distributed averaging on digital erasure networks

    Iterative distributed algorithms are studied for computing arithmetic averages over networks of agents connected through memoryless broadcast erasure channels. These algorithms do not require the agents to have any knowledge of the global network structure or size. Almost sure convergence to state agreement is proved, and the communication and computational complexities of the algorithms are analyzed. Both the number of transmissions and the number of computations performed by each agent of the network are shown to grow no faster than poly-logarithmically in the desired precision. The impact of the graph topology on the algorithms' performance is analyzed as well. Moreover, it is shown how, in the presence of noiseless communication feedback, one can modify the algorithms to significantly improve their performance versus complexity trade-off.
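
    For contrast with the link-by-link model above, a minimal broadcast-gossip sketch (not the paper's algorithm, which additionally uses bookkeeping to preserve the exact average): one node broadcasts per round and each neighbor independently erases the packet; plain mixing reaches agreement almost surely but may drift from the exact average. Topology and erasure probability are illustrative.

        # Broadcast gossip over a memoryless broadcast erasure channel.
        import numpy as np

        rng = np.random.default_rng(1)

        def broadcast_round(x, neighbors, p_erase, mix=0.5):
            i = rng.integers(len(x))             # random broadcaster
            for j in neighbors[i]:
                if rng.random() > p_erase:       # receiver j got the packet
                    x[j] = (1 - mix) * x[j] + mix * x[i]
            return x

        n = 10
        neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring
        x = rng.normal(size=n)
        for _ in range(3000):
            x = broadcast_round(x, neighbors, p_erase=0.3)
        print(x.max() - x.min())                 # spread shrinks toward 0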

    FeedNetBack - D03.02 - Control Subject to Transmission Constraints, With Transmission Errors

    This is a Deliverable Report for the FeedNetBack project (www.feednetback.eu). It describes the research performed within Work Package 3, Task 3.2 (Control Subject to Transmission Constraints, with Transmission Errors), in the first 36 months of the project. It addresses the issue of control subject to transmission constraints with transmission errors. This research concerns problems arising from the presence of a noisy communication channel (specified and modeled at the physical layer) within the control loop. The resulting constraints include finite capacities in the transmission of the sensor and/or actuator signals, as well as transmission errors. Our focus is on designing new compression and coding techniques to support networked control in this scenario. This Deliverable extends the analysis provided in the companion Deliverable D03.01 to deal with the effects of noise in the communication channel. The quantization schemes described in D03.01, in particular the adaptive ones, can be very sensitive to the presence of even a few errors. Indeed, error-correction coding for estimation or control purposes cannot simply exploit classical coding theory and practice, where vanishing error probability is obtained only in the limit of infinite block length. A first contribution reported in this Deliverable is the construction of families of codes having the any-time property required in this setting, and the analysis of the trade-off between code complexity and performance. Our results consider the binary erasure channel and can be extended to more general binary-input output-symmetric memoryless channels. The second and third contributions reported in this Deliverable deal with the problem of remotely stabilizing linear time-invariant (LTI) systems over Gaussian channels. Specifically, in the second contribution we consider a single LTI system that has to be stabilized by a remote controller using a network of sensors with average transmit power constraints. We study basic sensor network topologies and provide necessary and sufficient conditions for mean square stabilization. In the third contribution, we extend our study to two LTI systems that are to be simultaneously stabilized. In this regard, we study the setups of joint and separate sensing and control. By joint sensing we mean that a common sensor node simultaneously transmits the sensed state processes of the two plants, and by joint control we mean that there is a common controller for both plants. We refer to these setups as: i) control over the multiple-access channel (separate sensors, joint controller), ii) control over the broadcast channel (common sensor, separate controllers), and iii) control over the interference channel (separate sensors, separate controllers). We propose delay-free linear schemes for these setups and thus obtain sufficient conditions for mean square stabilization. We then discuss the joint design of the encoder and the controller, proposing an iterative procedure for the joint design of the sensor measurement quantization, channel error protection, and controller actuation, with the objective of minimizing the expected linear quadratic cost over a finite horizon. Finally, as in the noiseless case, we address the issues that arise when not just one plant and one controller communicate through a channel, but a whole network of sensors and actuators does.
    We also consider the effects of noisy digital channels on the consensus algorithm, and we present an algorithm that exploits the any-time codes discussed above.
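
    For orientation, a standard benchmark result from the networked-control literature for the Gaussian-channel setting described above (a classical condition, not a statement of the deliverable's specific contributions): a scalar plant x_{k+1} = \lambda x_k + u_k + w_k with |\lambda| > 1 can be mean-square stabilized over an AWGN channel using linear, delay-free encoding if and only if the channel capacity exceeds the plant's instability rate:

        C = \frac{1}{2}\log_2(1 + \mathrm{SNR}) > \log_2 |\lambda|

    Equivalently, the required signal-to-noise ratio grows with the square of the unstable pole, SNR > |\lambda|^2 - 1.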

    Broadcast Coded Slotted ALOHA: A Finite Frame Length Analysis

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA), for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives rise to a double unequal error protection (DUEP) phenomenon: the more a user repeats its packet, the higher the probability that this packet is decoded by other users, but the lower the probability for this user to decode packets from others. We analyze the performance of B-CSA over the packet erasure channel for a finite frame length. In particular, we provide a general analysis of stopping sets for B-CSA and derive an analytical approximation of the performance in the error floor (EF) region, which captures the DUEP feature of B-CSA. Simulation results reveal that the proposed approximation predicts very well the performance of B-CSA in the EF region. Finally, we consider the application of B-CSA to vehicular communications and compare its performance with that of carrier sense multiple access (CSMA), the current MAC protocol in vehicular networks. The results show that B-CSA is able to support a much larger number of users than CSMA with the same reliability.
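
    A minimal peeling (successive interference cancellation) decoder sketch for coded slotted ALOHA: the frame length, user count, and fixed repetition degree are illustrative stand-ins for the paper's optimized degree distributions, and the physical layer is reduced to "singleton slots decode, collisions do not."

        # Peeling decoder for coded slotted ALOHA: decode singleton slots,
        # cancel the decoded user's replicas, repeat until stuck.
        import random

        random.seed(2)

        def csa_frame(n_users, n_slots, degree=3):
            slots = [set() for _ in range(n_slots)]
            placements = {}
            for u in range(n_users):
                chosen = random.sample(range(n_slots), degree)
                placements[u] = chosen           # replicas in distinct slots
                for s in chosen:
                    slots[s].add(u)
            decoded, progress = set(), True
            while progress:
                progress = False
                for s in range(n_slots):
                    if len(slots[s]) == 1:       # singleton slot: decodable
                        u = slots[s].pop()
                        decoded.add(u)
                        for s2 in placements[u]: # cancel u's other replicas
                            slots[s2].discard(u)
                        progress = True
            return len(decoded)

        print(csa_frame(n_users=40, n_slots=100), "of 40 users resolved")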

    A Tutorial on Clique Problems in Communications and Signal Processing

    Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, the paper aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on the recent advances of such applications, and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
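
    As a concrete instance of the exact methods such a tutorial covers, a minimal Bron-Kerbosch search with pivoting for the maximum clique problem; the small example graph is an illustrative assumption.

        # Bron-Kerbosch with pivoting: enumerate maximal cliques and keep
        # the largest one found.
        def bron_kerbosch(R, P, X, adj, best):
            if not P and not X:                  # R is a maximal clique
                if len(R) > len(best[0]):
                    best[0] = set(R)
                return
            pivot = max(P | X, key=lambda v: len(adj[v] & P))
            for v in list(P - adj[pivot]):       # branch outside the pivot's neighborhood
                bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, best)
                P.remove(v)
                X.add(v)

        edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
        adj = {v: set() for e in edges for v in e}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)

        best = [set()]
        bron_kerbosch(set(), set(adj), set(), adj, best)
        print(best[0])                           # a maximum clique, e.g. {0, 1, 2}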

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
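
    A minimal randomized pairwise gossip sketch of the kind surveyed here: one edge activates per tick and its endpoints average their values, which preserves the network sum and so converges to the true average. The topology and tick count are illustrative.

        # Randomized pairwise gossip on a ring; converges to the true mean.
        import numpy as np

        rng = np.random.default_rng(3)

        n = 16
        edges = [(i, (i + 1) % n) for i in range(n)]   # ring topology
        x = rng.normal(size=n)
        target = x.mean()

        for _ in range(20000):
            i, j = edges[rng.integers(len(edges))]     # one random edge wakes up
            x[i] = x[j] = (x[i] + x[j]) / 2            # pairwise average
        print(abs(x - target).max())                   # disagreement from true mean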

    Compressive sensing based imaging via belief propagation

    Multiple description coding (MDC) using compressive sensing (CS) mainly aims at restoring an image from a small subset of samples with reasonable accuracy, using an iterative message-passing decoding algorithm commonly known as belief propagation (BP). The CS technique can accurately recover any compressible or sparse signal from fewer non-adaptive, randomized linear projection samples than the Nyquist rate specifies. In this work, we demonstrate how CS-based encoding generates measurements from the sparse image signal and the measurement matrix, and how a BP decoding algorithm then reconstructs the image from the measurements generated. In our work, the CS-BP algorithm assumes that all the unknown variables have the same prior distribution, since no side information is available when the decoding process starts. Thus, we show that this algorithm is effective even in the absence of side information.
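
    A minimal sketch of the encode/decode pipeline: random Gaussian measurements of a sparse signal, reconstructed here with iterative soft thresholding (ISTA) as a simple stand-in for the belief-propagation decoder this work actually uses; the dimensions, sparsity, and threshold are illustrative assumptions.

        # CS encoding y = A x of a k-sparse signal, then ISTA recovery.
        import numpy as np

        rng = np.random.default_rng(4)

        n, m, k = 200, 80, 8                     # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        A = rng.normal(size=(m, n)) / np.sqrt(m) # random measurement matrix
        y = A @ x_true                           # CS encoding

        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        lam = 0.01                               # sparsity-promoting threshold
        x = np.zeros(n)
        for _ in range(500):
            z = x - A.T @ (A @ x - y) / L        # gradient step on the data fit
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error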

    Information theoretic bounds for distributed computation

    In this thesis, I explore via two formulations the impact of communication constraints on distributed computation. In both formulations, nodes make partial observations of an underlying source. They communicate in order to compute a given function of all the measurements in the network, to within a desired level of error. Such computation in networks arises in various contexts, like wireless and sensor networks, consensus and belief propagation with bit constraints, and estimation of a slowly evolving process. By utilizing Information Theoretic formulations and tools, I obtain code- or algorithm-independent lower bounds that capture fundamental limits imposed by the communication network.

    In the first formulation, each node samples a component of a source whose values belong to a field of order q. The nodes utilize their knowledge of the joint probability mass function of the components, together with the function to be computed, to efficiently compress their messages, which are then broadcast. The question is: how many bits per sample are necessary and sufficient for each node to broadcast in order for the probability of decoding error to approach zero as the number of samples grows? I find that when there are two nodes in the network seeking to compute the sample-wise modulo-q sum of their measurements, a node compressing so that the other can compute the modulo-q sum is no more efficient than its compressing so that the actual data sequence is decoded. However, when there are more than two nodes, I demonstrate that there exists a joint probability mass function for which nodes can compress more efficiently so that the modulo-q sum is decoded with probability of error asymptotically approaching zero: it is both necessary and sufficient for nodes to send fewer bits per sample than they would have to in order for all nodes to acquire all the data sequences in the network.

    In the second formulation, each node has an initial real-valued measurement. Nodes communicate their values via a network with fixed topology and noisy channels between nodes that are linked. The goal is for each node to estimate a given function of all the initial values in the network, so that the mean square error in the estimate is within a prescribed interval. Here, the nodes do not know the distribution of the source, but have unlimited computation power to run whatever algorithm is needed to ensure the mean square error criterion. The question is: how does the communication network impact the time until the performance criterion is guaranteed? Using Information Theoretic inequalities, I derive an algorithm-independent lower bound on the computation time. The bound is a function of the uncertainty in the function to be estimated, via its differential entropy, and of the desired accuracy level, as specified by the mean square error criterion. Next, I demonstrate the use of this bound in a scenario where nodes communicate through erasure channels to learn a linear function of all the nodes' initial values. For this scenario, I describe an algorithm whose running time, until with high probability all nodes' estimates lie within a prescribed interval of the true value, is reciprocally related to the "conductance." Conductance quantifies the information flow "bottleneck" in the network and hence captures the effect of the topology and capacities. Using the lower bound, I show that the running time of any algorithm that guarantees the aforementioned probability criterion must scale reciprocally with conductance. Thus, the lower bound is tight in capturing the effect of network topology via conductance; conversely, the running time of our algorithm is optimal with respect to its dependence on conductance. (Ph.D. thesis by Ola Ayaso, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2008.)
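
    Since conductance is the central quantity in the second formulation, a minimal brute-force computation of it for a small unweighted graph, using the usual definition Phi = min over nonempty S of cut(S, S^c) / min(vol(S), vol(S^c)); the example graph (two triangles joined by one edge) is an illustrative assumption.

        # Brute-force graph conductance; feasible only for small graphs.
        from itertools import combinations

        edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
        nodes = sorted({v for e in edges for v in e})
        deg = {v: sum(v in e for e in edges) for v in nodes}

        def conductance(S):
            S = set(S)
            cut = sum((a in S) != (b in S) for a, b in edges)   # edges across the cut
            vol = min(sum(deg[v] for v in S),
                      sum(deg[v] for v in nodes if v not in S))
            return cut / vol

        phi = min(conductance(S)
                  for r in range(1, len(nodes))
                  for S in combinations(nodes, r))
        print(phi)   # 1/7: the single edge (2, 3) is the bottleneck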