
    Cooperative Transmission for a Vector Gaussian Parallel Relay Network

    In this paper, we consider a parallel relay network where two relays cooperatively help a source transmit to a destination. We assume the source and destination nodes are equipped with multiple antennas. Three basic schemes and their achievable rates are studied: Decode-and-Forward (DF), Amplify-and-Forward (AF), and Compress-and-Forward (CF). For the DF scheme, the source transmits two private signals, one for each relay, with dirty paper coding (DPC) applied between the two private streams, together with a common signal for both relays. The relays make efficient use of the common information to introduce a proper amount of correlation in their transmission to the destination. We show that the DF scheme achieves the capacity under certain conditions. We also show that the CF scheme is asymptotically optimal in the high relay power limit, regardless of channel ranks. The AF scheme also achieves this asymptotic optimality, but only when the relays-to-destination channel is full rank. The relative advantages of the three schemes are discussed with numerical results. Comment: 35 pages, 10 figures, submitted to IEEE Transactions on Information Theory.
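The amplify-and-forward idea can be illustrated with a scalar, single-antenna sketch. Unit channel gains, equal relay power budgets, and coherent combining at the destination are illustrative assumptions here, not the paper's vector Gaussian model:

```python
import numpy as np

# Illustrative scalar amplify-and-forward sketch (not the paper's vector
# Gaussian scheme): two relays receive noisy copies of the source symbol,
# scale them to meet their power budgets, and the destination observes the
# sum of the relay transmissions plus its own noise.

P_s, P_r, N0 = 1.0, 10.0, 1.0   # source power, per-relay power, noise power

# Each relay sees y_i = x + n_i and applies a gain g so that E[(g*y_i)^2] = P_r.
g = np.sqrt(P_r / (P_s + N0))

# Destination observes z = g*(x + n1) + g*(x + n2) + n_d.
# Signal power: (2g)^2 * P_s; noise power: 2*g^2*N0 + N0 (relay noises are
# independent, so they add in power while the signal terms add coherently).
snr_af = (2 * g) ** 2 * P_s / (2 * g ** 2 * N0 + N0)
rate_af = 0.5 * np.log2(1 + snr_af)
print(f"AF end-to-end SNR = {snr_af:.3f}, rate = {rate_af:.3f} bits/use")
```

The coherent signal gain versus incoherent noise addition is exactly the correlation benefit that the DF scheme engineers deliberately via the common message.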

    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/

    Outer Bounds on the CEO Problem with Privacy Constraints

    We investigate the rate-distortion-leakage region of the Chief Executive Officer (CEO) problem with a passive eavesdropper and privacy constraints, considering a general distortion measure. While an inner bound follows directly from previous work, an outer bound is newly developed in this paper. To derive this bound, we introduce a new lemma tailored to the analysis of privacy constraints. As a specific instance, we demonstrate that the bound is tight for discrete and Gaussian sources under logarithmic-loss distortion when the eavesdropper can observe only the messages. We further investigate the rate-distortion-leakage region for a scenario where the eavesdropper possesses both the messages and side information under the same distortion measure, and provide an outer bound for this case. The derived outer bound differs from the inner bound only by a minor quantity appearing in the constraints on the privacy-leakage rates, and it becomes tight when the distortion is large. Comment: 14 pages, 4 figures.
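For intuition about logarithmic loss, the single-encoder analogue has a simple closed form: a discrete source's rate-distortion function under log loss is R(D) = max(H(X) - D, 0). A minimal sketch for a Bernoulli(p) source follows; this is the point-to-point function shown for intuition, not the CEO region studied in the paper:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def rate_log_loss(p, D):
    """Point-to-point rate-distortion function of a Bernoulli(p) source
    under logarithmic loss: R(D) = max(H(X) - D, 0).
    Under log loss, distortion is measured in bits of residual uncertainty,
    so rate and distortion trade off one-for-one until R hits zero."""
    return max(h2(p) - D, 0.0)

# At D = 0 the full entropy must be conveyed; at D >= H(X), rate 0 suffices.
print(rate_log_loss(0.5, 0.0))   # 1.0 bit
print(rate_log_loss(0.5, 0.4))   # 0.6 bits
```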

    Probing the sub-thalamic nucleus: development of bio-markers from very Local Field Potentials


    Learning Non-Parametric and High-Dimensional Distributions via Information-Theoretic Methods

    Learning the distributions that govern data generation, and estimating functionals of those distributions, are the foundations of many classical statistical problems. In this dissertation we investigate such topics when either the hypothesized model is non-parametric or the number of free parameters in the model grows with the sample size. In particular, we study the above scenarios for the following class of problems, with the goal of obtaining minimax rate-optimal methods for learning the target distributions from a finite sample. Our techniques are based on information-theoretic divergences and related mutual-information methods. (i) Estimation in compound decision and empirical Bayes settings: To estimate the data-generating distribution, one often takes a two-step approach. In the first step the statistician estimates the distribution of the parameters, either the empirical distribution or the postulated prior, and in the second step plugs in this estimate to approximate the target of interest. In the literature, estimating the empirical distribution is known as the compound decision problem, and estimating the prior is known as the problem of empirical Bayes. In our work we use minimum-distance estimation to approximate these distributions. For certain discrete data setups, we show that minimum-distance methods provide theoretically and practically sound choices for estimation, and we also analyze their computational and algorithmic aspects. (ii) Prediction with Markov chains: Given observations from an unknown Markov chain, we study the problem of predicting the next entry in the trajectory. Existing analyses for such dependent setups usually center on concentration inequalities that require various extraneous mixing conditions, which makes it difficult to obtain results free of such restrictions. We introduce information-theoretic techniques to bypass these issues and obtain fundamental limits for the related minimax problems. We also identify conditions on the mixing properties under which the prediction error attains a parametric rate.
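A minimal baseline for the next-entry prediction problem is the add-one-smoothed plug-in estimate of the transition matrix. The two-state chain below is a hypothetical example, and this sketch deliberately ignores the minimax and mixing-time subtleties the dissertation addresses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 2-state chain (hypothetical example transition matrix).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Sample a trajectory from the chain.
n = 5000
traj = np.empty(n, dtype=int)
traj[0] = 0
for t in range(1, n):
    traj[t] = rng.choice(2, p=P[traj[t - 1]])

# Add-one (Laplace) smoothed empirical transition counts: a simple
# plug-in predictor for the next entry given the current state.
counts = np.ones((2, 2))           # smoothing avoids zero-probability estimates
for a, b in zip(traj[:-1], traj[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Predict the most likely next state from the final observation.
next_state = int(np.argmax(P_hat[traj[-1]]))
print("estimated transitions:\n", P_hat.round(3))
print("predicted next state:", next_state)
```

With a well-mixing chain and 5000 samples the plug-in estimate is close to the truth; the hard regimes studied in the dissertation are precisely those where such mixing assumptions fail.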

    Conservation laws for coding

    This work deals with coding systems based on sparse graph codes. The key issue we address is the relationship between iterative (in particular belief propagation) and maximum a posteriori decoding. We show that between the two there is a fundamental connection, which is reminiscent of the Maxwell construction in thermodynamics. The main objects we consider are EXIT-like functions. EXIT functions were originally introduced as handy tools for the design of iterative coding systems. It gradually became clear that EXIT functions possess several fundamental properties. Many of these properties, however, apply only to the erasure case. This motivates us to introduce GEXIT functions that coincide with EXIT functions over the erasure channel. In many aspects, GEXIT functions over general memoryless output-symmetric channels play the same role as EXIT functions do over the erasure channel. In particular, GEXIT functions are characterized by the general area theorem. As a first consequence, we demonstrate that in order for the rate of an ensemble of codes to approach the capacity under belief propagation decoding, the GEXIT functions of the component codes have to be matched perfectly. This statement was previously known as the matching condition for the erasure case. We then use these GEXIT functions to show that in the limit of large blocklengths a fundamental connection appears between belief propagation and maximum a posteriori decoding. A decoding algorithm, which we call Maxwell decoder, provides an operational interpretation of this relationship for the erasure case. Both the algorithm and the analysis of the decoder are the translation of the Maxwell construction from statistical mechanics to the context of probabilistic decoding. We take the first steps to extend this construction to general memoryless output-symmetric channels. More exactly, a general upper bound on the maximum a posteriori threshold for sparse graph codes is given. 
It is conjectured that the fundamental connection between belief propagation and maximum a posteriori decoding carries over to the general case.
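Over the erasure channel, the belief-propagation analysis behind EXIT charts reduces to a scalar density-evolution recursion. A sketch for the standard (3,6)-regular LDPC ensemble (a textbook example, not a code taken from this thesis) recovers its BP threshold on the BEC:

```python
# Density evolution for a (3,6)-regular LDPC ensemble on the binary
# erasure channel: x_{t+1} = eps * (1 - (1 - x_t)^5)^2, where x_t is the
# erasure probability of variable-to-check messages, eps the channel
# erasure probability. The BP threshold is the largest eps for which the
# recursion is driven to 0.

def bp_converges(eps, iters=2000, tol=1e-10):
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

# Bisection for the threshold (known to be approximately 0.4294).
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if bp_converges(mid):
        lo = mid
    else:
        hi = mid
print(f"BP threshold of the (3,6) ensemble on the BEC = {lo:.4f}")
```

The MAP threshold of the same ensemble is strictly larger; the gap between the two is exactly what the Maxwell construction described above quantifies.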

    From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture

    The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. 
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
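On the erasure channel, channel polarization admits a two-line recursion on the Bhattacharyya (erasure) parameters: a channel with parameter z splits into a worse channel 2z - z^2 and a better channel z^2, and the fraction of near-noiseless synthetic channels approaches the capacity 1 - eps. A minimal sketch, where the depth and the 1e-3 "good channel" cutoff are arbitrary illustrative choices:

```python
import numpy as np

# Bhattacharyya-parameter recursion for polar codes on a BEC(eps):
# each synthetic channel with erasure probability z splits into a
# "bad" channel 2z - z^2 and a "good" channel z^2. After `levels`
# splits there are 2^levels synthetic channels.

eps, levels = 0.5, 14
z = np.array([eps])
for _ in range(levels):
    z = np.concatenate([2 * z - z ** 2, z ** 2])

# The mean erasure probability is exactly conserved at every level
# (the z_n process is a martingale), while the mass piles up near 0 and 1.
good = np.mean(z < 1e-3)   # fraction of almost-noiseless synthetic channels
print(f"capacity 1 - eps = {1 - eps:.3f}, polarized good fraction = {good:.3f}")
```

The scaling questions studied in the thesis ask, in effect, how fast this good fraction converges to capacity as the depth grows.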

    ON THE RELIABILITY AND EFFICIENCY OF INFORMATION TRANSMISSION SYSTEMS

    174 pages. We make notable advances on the reliability and efficiency of several different information transmission systems by providing theoretical results supported by numerical evaluations. A known stability issue in peer-to-peer networks and our solution in the form of a new peer-to-peer protocol take center stage in the first part of this dissertation. The second part of the dissertation focuses on finding the best possible efficiency for given reliability levels in a couple of multiple-encoder, one-decoder multiterminal source-coding systems. In the language of information theory, this reads as finding the rate region for a given set of distortions on the sources to be reconstructed at the central decoder. Recent studies have suggested that the stability of peer-to-peer networks may rely on persistent peers, who dwell on the network after they obtain the entire file. It has been proven that if peers depart the network immediately after they complete the file of interest, then one piece becomes extremely rare in the network, which leads to instability. Technological developments, however, are poised to reduce the incidence of persistent peers, giving rise to a need for a protocol that guarantees stability with non-persistent peers. We propose a novel peer-to-peer protocol, the group suppression protocol, to ensure the stability of peer-to-peer networks when all peers adopt non-persistent behavior. Using a suitable Lyapunov potential function, the group suppression protocol is proven to be stable when the file is broken into two pieces, and detailed experiments demonstrate the stability of the protocol for an arbitrary number of pieces. We define and simulate a decentralized version of this protocol for practical applications. Straightforward incorporation of the group suppression protocol into BitTorrent, while retaining most of BitTorrent's core mechanisms, is also presented.
Subsequent simulations show that under certain assumptions, BitTorrent with the official protocol cannot escape from the missing piece syndrome, but BitTorrent with group suppression does. We start the second part of the dissertation by revisiting the quadratic Gaussian two-encoder source-coding problem, for which a Gaussian quantize-and-bin scheme, also known as the Berger-Tung scheme, is known to achieve the entire rate region. We present a new proof of the impossibility half of the rate-region optimality result that is arguably more direct. Next, we consider the quadratic Gaussian one-help-two source-coding problem with Markovity, in which three encoders separately encode the components of a memoryless vector-Gaussian source that form a Markov chain and the central decoder aims to reproduce the first and the second components in the chain subject to individual distortion constraints. For this problem, we determine that the Gaussian quantize-and-bin scheme achieves the rate region if the distortion on the second source is small enough. The proof technique makes heavy use of the approach we first successfully applied to the quadratic Gaussian two-encoder source-coding problem. Finally, we present a method for outer bounding the rate-distortion region of Gaussian distributed compression problems in which the source variables can be embedded in a Gauss-Markov tree. The outer bound so obtained takes the form of a convex optimization problem. Numerical evaluations demonstrate that the outer bound is close to the Berger-Tung inner bound, coinciding with it in many cases.
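The building block underneath the quantize-and-bin (Berger-Tung) schemes discussed above is the single-source quadratic Gaussian rate-distortion function, R(D) = (1/2) log2(sigma^2 / D). A minimal sketch of this point-to-point function (shown for intuition only; it is not the multi-encoder region derived in the dissertation):

```python
import numpy as np

def gaussian_rd(var, D):
    """Quadratic rate-distortion function of a memoryless Gaussian source:
    R(D) = 0.5 * log2(var / D) bits per sample for 0 < D <= var, else 0.
    Each encoder in a quantize-and-bin scheme pays a rate of this shape
    for its quantizer, before binning reduces the rate using correlation."""
    if D >= var:
        return 0.0
    return 0.5 * np.log2(var / D)

# Halving the distortion costs exactly half a bit per sample.
r1 = gaussian_rd(1.0, 0.25)   # 1.0 bit
r2 = gaussian_rd(1.0, 0.125)  # 1.5 bits
print(r1, r2)
```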