
    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piecewise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting. Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
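
    As a rough illustration of the building blocks mentioned above, the sketch below implements a scalar first-order dithered Delta-Sigma quantizer with subtractive dither and a single-tap noise-shaping feedback, and splits the oversampled output into two descriptions by taking even and odd samples. This is a toy stand-in for the paper's scheme, which uses high-dimensional lattice quantizers and high-order noise-shaping filters; the step size, filter tap, and the even/odd split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_delta_sigma(x, step=0.25, h=(1.0,)):
    """First-order dithered Delta-Sigma quantizer (toy scalar sketch).

    x    : oversampled input samples
    step : quantizer step size
    h    : noise-shaping feedback taps (one tap => first-order shaping)
    """
    y = np.empty_like(x)
    e_hist = np.zeros(len(h))                     # past quantization errors
    for n, xn in enumerate(x):
        v = xn - np.dot(h, e_hist)                # subtract shaped past errors
        d = rng.uniform(-step / 2, step / 2)      # subtractive dither
        q = step * np.round((v + d) / step) - d   # dithered uniform quantizer
        e_hist = np.roll(e_hist, 1)
        e_hist[0] = q - v                         # error fed back through h
        y[n] = q
    return y

# Oversample a memoryless Gaussian source by a factor of two and quantize;
# the two descriptions are taken as the even- and odd-indexed quantized samples.
source = rng.standard_normal(256)
x = np.repeat(source, 2)
y = dithered_delta_sigma(x)
description_0, description_1 = y[0::2], y[1::2]
```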

    A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication

    This paper has two contributions. First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints. This technique uses "garbage alignment," i.e., aligning computations in coded computing that are not a part of the desired output. Generalized PolyDot codes bridge between Polynomial codes and MatDot codes, trading off between recovery threshold and communication costs. Second, we demonstrate that Generalized PolyDot can be used for training large Deep Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires us to address three additional challenges: (i) prohibitively large overhead of coding the weight matrices in each layer of the DNN at each iteration; (ii) nonlinear operations during training, which are incompatible with linear coding; and (iii) not assuming presence of an error-free master node, requiring us to architect a fully decentralized implementation without any "single point of failure." We allow all primary DNN training steps, namely, matrix multiplication, nonlinear activation, Hadamard product, and update steps, as well as the encoding/decoding, to be error-prone. We consider the case of mini-batch size B=1, as well as B>1, leveraging coded matrix-vector products and matrix-matrix products, respectively. The problem of DNN training under soft-errors also motivates an interesting, probabilistic error model under which a real-number (P,Q) MDS code is shown to correct P-Q-1 errors with probability 1, as compared to ⌊(P-Q)/2⌋ for the more conventional, adversarial error model. We also demonstrate that our proposed strategy can provide unbounded gains in error tolerance over a competing replication strategy and a preliminary MDS-code-based strategy for both these error models. Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (Submission Date: Jan 12 2018); Currently under review at the IEEE Transactions on Information Theory.
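
    For orientation, the sketch below shows the plain Polynomial-codes construction for coded matrix multiplication, one of the two endpoints that Generalized PolyDot codes bridge (the other being MatDot codes): A is split row-wise, B column-wise, each worker multiplies polynomial evaluations of the blocks, and any m*n worker results suffice for interpolation. The block counts, evaluation points, and use of real arithmetic are illustrative assumptions; the garbage-alignment step and the decentralized DNN training pipeline of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Split A row-wise into m blocks and B column-wise into n blocks.
m, n, size = 2, 2, 4                      # toy dimensions
A = rng.standard_normal((m * size, size))
B = rng.standard_normal((size, n * size))
A_blocks = np.split(A, m, axis=0)
B_blocks = np.split(B, n, axis=1)

# Each worker evaluates the encoded polynomials at its own point x_k and
# multiplies the two small encoded matrices.
def worker(xk):
    A_enc = sum(Ai * xk**i for i, Ai in enumerate(A_blocks))
    B_enc = sum(Bj * xk**(j * m) for j, Bj in enumerate(B_blocks))
    return A_enc @ B_enc                  # contains sum_{i,j} A_i B_j x^(i + j*m)

# Recovery threshold: any m*n worker results determine the degree-(m*n-1)
# matrix polynomial; interpolate entrywise to recover every block A_i B_j.
points = np.arange(1, m * n + 1, dtype=float)
results = np.stack([worker(x) for x in points])          # shape (m*n, size, size)
V = np.vander(points, N=m * n, increasing=True)          # Vandermonde system
coeffs = np.linalg.solve(V, results.reshape(m * n, -1)).reshape(m * n, size, size)

# coeffs[i + j*m] equals A_blocks[i] @ B_blocks[j]; reassemble the full product.
C_rebuilt = np.block([[coeffs[i + j * m] for j in range(n)] for i in range(m)])
assert np.allclose(C_rebuilt, A @ B)
```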

    Broadcast Coded Slotted ALOHA: A Finite Frame Length Analysis

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA), for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives rise to a double unequal error protection (DUEP) phenomenon: the more a user repeats its packet, the higher the probability that this packet is decoded by other users, but the lower the probability for this user to decode packets from others. We analyze the performance of B-CSA over the packet erasure channel for a finite frame length. In particular, we provide a general analysis of stopping sets for B-CSA and derive an analytical approximation of the performance in the error floor (EF) region, which captures the DUEP feature of B-CSA. Simulation results reveal that the proposed approximation predicts the performance of B-CSA in the EF region very well. Finally, we consider the application of B-CSA to vehicular communications and compare its performance with that of carrier sense multiple access (CSMA), the current MAC protocol in vehicular networks. The results show that B-CSA is able to support a much larger number of users than CSMA with the same reliability. Comment: arXiv admin note: text overlap with arXiv:1501.0338
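
    The peeling (successive interference cancellation) decoder that underlies CSA-type schemes can be sketched in a few lines; the toy simulation below uses a fixed repetition degree and marks the observing user's own transmit slots as unusable to mimic the half-duplex constraint behind the DUEP effect. The frame size, degree, and user count are illustrative, and the paper's degree distributions and stopping-set analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def csa_peeling_round(n_users, n_slots, degree, own_tx_slots=None):
    """One frame of coded slotted ALOHA seen by a single observer.

    Every user repeats its packet in `degree` randomly chosen slots.  The
    observer peels singleton slots and cancels the decoded users' replicas;
    slots in `own_tx_slots` (its own transmissions, half-duplex) are skipped.
    Returns the set of users the observer decodes.
    """
    slots = [set() for _ in range(n_slots)]       # users transmitting per slot
    for u in range(n_users):
        for s in rng.choice(n_slots, size=degree, replace=False):
            slots[s].add(u)
    blocked = set() if own_tx_slots is None else {int(s) for s in own_tx_slots}
    decoded, progress = set(), True
    while progress:
        progress = False
        for s, users in enumerate(slots):
            if s in blocked or len(users) != 1:
                continue
            u = next(iter(users))                 # singleton slot: decode it
            decoded.add(u)
            for other in slots:                   # cancel all replicas of u
                other.discard(u)
            progress = True
    return decoded

# Toy run: 40 users, 100 slots, 3 repetitions each; the observer transmits
# in 3 slots it cannot listen to.
own = rng.choice(100, size=3, replace=False)
print(len(csa_peeling_round(40, 100, 3, own_tx_slots=own)), "of 40 users decoded")
```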

    Soft-Decoding-Based Strategies for Relay and Interference Channels: Analysis and Achievable Rates Using LDPC Codes

    We provide a rigorous mathematical analysis of two communication strategies: soft decode-and-forward (soft-DF) for relay channels, and soft partial interference cancellation (soft-IC) for interference channels. Both strategies involve soft estimation, which assists the decoding process. We consider LDPC codes, not because of their practical benefits, but because of their analytic tractability, which enables an asymptotic analysis similar to the random coding methods of information theory. Unlike some works on the closely related demodulate-and-forward, we assume non-memoryless, code-structure-aware estimation. With soft-DF, we develop simultaneous density evolution to bound the decoding error probability at the destination. This result applies to erasure relay channels. In one variant of soft-DF, the relay applies Wyner-Ziv coding to enhance its communication with the destination, borrowing from compress-and-forward. To analyze soft-IC, we adapt existing techniques for iterative multiuser detection, and focus on binary-input additive white Gaussian noise (BIAWGN) interference channels. We prove that optimal point-to-point codes are unsuitable for soft-IC, as well as for all strategies that apply partial decoding to improve upon single-user detection (SUD) and multiuser detection (MUD), including Han-Kobayashi (HK). Comment: Accepted to the IEEE Transactions on Information Theory. This is a major revision of a paper originally submitted in August 201
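
    The paper's simultaneous density evolution tracks coupled recursions for the relay and the destination; as a point of reference, the sketch below runs the standard single-channel density evolution for a regular (dv, dc) LDPC ensemble on a binary erasure channel, which is the basic recursion such analyses build on. The ensemble parameters and iteration count are illustrative assumptions.

```python
def ldpc_bec_density_evolution(eps, dv, dc, iters=200):
    """Standard density evolution for a regular (dv, dc) LDPC ensemble on a
    binary erasure channel with erasure probability eps.  Returns the
    asymptotic erasure probability of a variable-to-check message."""
    x = eps
    for _ in range(iters):
        # check-to-variable erasure probability: 1 - (1 - x)^(dc - 1)
        y = 1.0 - (1.0 - x) ** (dc - 1)
        # variable-to-check erasure probability: eps * y^(dv - 1)
        x = eps * y ** (dv - 1)
    return x

# Toy run: the (3,6)-regular ensemble has a BEC threshold near eps ~ 0.429,
# so the recursion collapses to ~0 below it and stalls at a positive value above.
for eps in (0.40, 0.42, 0.44):
    print(eps, ldpc_bec_density_evolution(eps, dv=3, dc=6))
```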

    On the Design of Future Communication Systems with Coded Transport, Storage, and Computing

    Communication systems are experiencing a fundamental change. Novel applications require increased performance from these systems, not only in throughput but also in latency, reliability, security, and heterogeneity support. To fulfil these requirements, future systems understand communication not only as the transport of bits but also as their storage, processing, and relation. In these systems, every network node has transport, storage, and computing resources that the network operator and its users can exploit through virtualisation and softwarisation of the resources. It is within this context that this work presents its results. We propose distributed coded approaches to improve communication systems. Our results improve the reliability and latency performance of the transport of information. They also increase the reliability, flexibility, and throughput of storage applications. Furthermore, based on the lesson that coded approaches improve the transport and storage performance of communication systems, we propose a distributed coded approach for the computing of novel in-network applications such as the steering and control of cyber-physical systems. Our proposed approach can increase the reliability and latency performance of distributed in-network computing in the presence of errors, erasures, and attackers.

    Capacity Approaching Coding Strategies for Machine-to-Machine Communication in IoT Networks

    Radio access technologies for mobile communications are characterized by multiple access (MA) strategies. Orthogonal MA techniques were a reasonable choice for achieving good performance with single-user detection. With the tremendous growth in the number of mobile users and the paradigm shift toward the internet of things (IoT), it is expected that monthly mobile data traffic worldwide will exceed 24.3 exabytes by 2019, that there will be over 100 billion IoT connections by 2025, and that the financial impact of IoT on the global economy will be in the range of 3.9 to 11.1 trillion dollars by 2025. In light of this envisaged exponential growth, one promising way to further enhance data rates without increasing the bandwidth is to increase the spectral efficiency of the channel. Non-orthogonal MA techniques are potential candidates for future wireless communications. The two corner points on the boundary of the MA channel capacity region are known to be achievable by single-user decoding followed by successive decoding (SD); other points can be achieved using time sharing or rate splitting. Machine-to-machine (M2M) communication, an enabling technology for the IoT, allows massive numbers of multipurpose networked devices to exchange information among themselves with little or no human intervention. This thesis consists of three main parts.
    In the first part, we propose new practical encoding and joint belief propagation (BP) decoding techniques for the 2-user MA erasure channel (MAEC) that achieve any rate pair close to the boundary of the capacity region without using time sharing or rate splitting. At the encoders, the corresponding parity-check matrices are randomly built from a half-rate LDPC matrix; the joint BP decoder employs the associated Tanner graphs of the parity-check matrices to iteratively recover the erasures in the received combined codewords. Specifically, the joint decoder performs two steps in each decoding iteration: 1) it simultaneously and independently runs the BP decoding process on each constituent sub-graph to recover some of the common erasures; 2) it updates each sub-graph with the erasures newly recovered on the other. When the number of erasures in the received combined codewords is less than or equal to the number of parity-check constraints, the decoder may successfully decode both codewords; otherwise, it declares a decoding failure. Furthermore, we calculate the probability of decoding failure and the outage capacity, and we show how the erasure probability evolves with the number of decoding iterations and the maximum tolerable loss. Simulations show that any rate pair close to the capacity boundary is achievable without using time sharing.
    In the second part, we propose a new cooperative joint network and rateless coding strategy for machine-type communication (MTC) devices in multicast settings, where three or more MTC devices dynamically form a cluster to disseminate messages among themselves. Specifically, in the basic cluster, three MTC devices transmit their respective messages simultaneously to the relay in the first phase, and the relay broadcasts back the combined messages to all MTC devices within the basic cluster in the second phase. Since each MTC device can remove its own message, the received signal in the second phase reduces to the combined messages of the other two MTC devices. Hence, the interference caused by one message on the other is exploited, improving the bandwidth efficiency. Each group of three MTC devices in proximity can form a basic cluster for exchanging messages, and the basic scheme extends to N MTC devices; we also propose an efficient algorithm to disseminate messages among a large number of MTC devices. Moreover, we implement the proposed scheme employing practical Raptor codes with two relaying schemes, namely amplify-and-forward (AF) and de-noise-and-forward (DNF). We show that, with very little processing at the relay, the DNF relaying scheme further enhances performance, and that the proposed scheme achieves near-optimal sum-rate performance.
    In the third part, we present a comparative study of joint channel estimation and decoding of factor-graph-based codes over flat fading channels and propose a simple channel approximation scheme that performs close to the optimal technique. Specifically, when channel state information (CSI) is not available at the receiver, a simpler approach is to estimate the channel state of a group of received symbols and then use the approximated channel value together with the received signal to compute the log-likelihood ratios. Simulation results show that the proposed scheme exhibits about 0.4 dB loss compared to the optimal solution with perfect CSI available at the receiver.
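
    As a rough sketch of the channel-approximation idea in the third part, the snippet below estimates a per-block gain for BPSK over a piecewise-constant fading channel and uses it to compute log-likelihood ratios. The simple magnitude-average estimator, the BPSK mapping, and the block length are illustrative assumptions, not the thesis' exact scheme or its factor-graph decoding stage.

```python
import numpy as np

rng = np.random.default_rng(3)

def blockwise_llrs(rx, block_len, noise_var):
    """Blockwise channel approximation for BPSK over a flat fading channel.

    The fading gain is approximated once per block of `block_len` symbols
    (here by a crude magnitude average, a stand-in for a real estimator),
    and per-symbol LLRs are computed as 2 * h_hat * r / noise_var.
    """
    llrs = np.empty_like(rx)
    for start in range(0, len(rx), block_len):
        block = rx[start:start + block_len]
        h_hat = np.mean(np.abs(block))            # crude per-block gain estimate
        llrs[start:start + block_len] = 2.0 * h_hat * block / noise_var
    return llrs

# Toy run: BPSK through a slowly varying (piecewise-constant) fading channel.
n, noise_var = 1024, 0.5
bits = rng.integers(0, 2, n)
symbols = 1.0 - 2.0 * bits                        # bit 0 -> +1, bit 1 -> -1
h = np.repeat(np.abs(rng.standard_normal(n // 64)), 64)
rx = h * symbols + np.sqrt(noise_var) * rng.standard_normal(n)
llr = blockwise_llrs(rx, block_len=64, noise_var=noise_var)
print("hard-decision BER:", np.mean((llr < 0) != bits))
```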

    Coding Solutions for the Secure Biometric Storage Problem

    The paper studies the problem of securely storing biometric passwords, such as fingerprints and irises. With the help of coding theory, Juels and Wattenberg derived in 1999 a scheme in which similar input strings are accepted as the same biometric, while at the same time nothing can be learned from the stored data. They called their scheme a "fuzzy commitment scheme". In this paper we revisit the solution of Juels and Wattenberg and provide answers to two important questions: what type of error-correcting codes should be used, and what happens if biometric templates are not uniformly distributed, i.e., if the biometric data come with redundancy? Answering the first question leads us to the search for low-rate, large-minimum-distance error-correcting codes that come with efficient decoding algorithms up to the designed distance. To answer the second question, we relate the required rate to a quantity connected to the "entropy" of the string, estimating a sort of "capacity" in the flavor of the converse of Shannon's noisy coding theorem. Finally, we deal with side problems arising in a practical implementation and propose a possible solution to the main one, which, as far as we know, seems to have so far prevented real-life applications of the fuzzy scheme. Comment: The final version appeared in the Proceedings of the Information Theory Workshop (ITW) 2010, IEEE copyright.
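
    A minimal sketch of the Juels-Wattenberg fuzzy commitment is given below, with a toy 5-fold repetition code standing in for the low-rate, large-minimum-distance codes discussed in the paper: the stored data are a hash of a random codeword and its XOR offset from the biometric template, and a noisy probe is accepted if it decodes back to the committed codeword. The code choice, hash function, template length, and error pattern are illustrative assumptions.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(4)

REP = 5                                           # toy code: 5-fold repetition

def encode(msg_bits):
    return np.repeat(msg_bits, REP)

def decode(word_bits):                            # majority vote per block
    return (word_bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

def commit(template):
    """Store hash(c) and the offset template XOR c for a random codeword c."""
    msg = rng.integers(0, 2, len(template) // REP, dtype=np.uint8)
    c = encode(msg)
    return hashlib.sha256(c.tobytes()).hexdigest(), template ^ c

def verify(commitment, probe):
    stored_hash, offset = commitment
    noisy_c = probe ^ offset                      # = c XOR (template XOR probe)
    c_hat = encode(decode(noisy_c))               # decode to the nearest codeword
    return hashlib.sha256(c_hat.tobytes()).hexdigest() == stored_hash

# Toy run: a 100-bit template, a probe with 5 scattered bit flips (accepted),
# and an unrelated random probe (rejected with overwhelming probability).
x = rng.integers(0, 2, 100, dtype=np.uint8)
com = commit(x)
y = x.copy()
y[np.arange(0, 100, 20)] ^= 1                     # one flip in 5 different blocks
print(verify(com, y), verify(com, rng.integers(0, 2, 100, dtype=np.uint8)))
```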