    Error Rates of Capacity-Achieving Codes Are Convex

    Motivated by the widespread use of convex optimization techniques, convexity properties of the bit error rate of the maximum-likelihood detector operating in the AWGN channel are studied for arbitrary constellations and bit mappings, which also includes coding under maximum-likelihood decoding. Under this generic setting, the pairwise probability of error and the bit error rate are shown to be convex functions of the SNR and of the noise power in the high-SNR/low-noise regime, with an explicitly determined boundary. Any code, including a capacity-achieving one, whose decision regions include the hardened noise spheres (from the noise-sphere-hardening argument in the channel coding theorem) satisfies this high-SNR requirement and thus has convex error rates in both SNR and noise power. We conjecture that all capacity-achieving codes have convex error rates.
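
    A one-dimensional sanity check of the claimed convexity (BPSK is my choice of example; the abstract's result covers arbitrary constellations): differentiating the BPSK bit error rate twice in the SNR shows it is convex over the whole SNR axis, not just at high SNR.

```latex
% BPSK over AWGN: bit error rate as a function of the SNR \gamma,
% with Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty e^{-t^2/2}\,dt:
P_b(\gamma) = Q\bigl(\sqrt{2\gamma}\bigr),
\qquad
\frac{dP_b}{d\gamma} = -\frac{e^{-\gamma}}{2\sqrt{\pi\gamma}} < 0,
\qquad
\frac{d^2 P_b}{d\gamma^2}
  = \frac{e^{-\gamma}}{2\sqrt{\pi\gamma}}\Bigl(1 + \frac{1}{2\gamma}\Bigr) > 0,
% so for BPSK the BER is decreasing and convex in SNR for all \gamma > 0,
% i.e. the high-SNR convexity region covers the entire SNR axis here.
```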

    On Convexity of Error Rates in Digital Communications

    Convexity properties of error rates of a class of decoders, including the maximum-likelihood/minimum-distance decoder as a special case, are studied for arbitrary constellations, bit mappings, and coding. Earlier results obtained for the additive white Gaussian noise channel are extended to a wide class of noise densities, including unimodal and spherically invariant noise. Under these broad conditions, symbol and bit error rates are shown to be convex functions of the signal-to-noise ratio (SNR) in the high-SNR regime, with an explicitly determined threshold that depends only on the constellation dimensionality and minimum distance, thus enabling a rigorous application of the powerful tools of convex optimization to such digital communication systems. It is the decreasing nature of the noise power density around the decision region boundaries that ensures the convexity of symbol error rates in the general case. The known high/low-SNR bounds of the convexity/concavity regions are tightened, and no further improvement is shown to be possible in general. The high-SNR bound fits closely into the channel coding theorem: all codes, including capacity-achieving ones, whose decision regions include the hardened noise spheres (from the noise-sphere-hardening argument in the channel coding theorem) satisfy this high-SNR requirement and thus have convex error rates in both SNR and noise power. We conjecture that all capacity-achieving codes have convex error rates. Convexity properties in signal amplitude and noise power are also investigated, and some applications of the results are discussed. In particular, it is shown that fading is convexity-preserving and is never good in low dimensions under spherically invariant noise, which may also include any linear diversity combining.
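
    As a quick numerical companion to the convexity claim, the sketch below evaluates the standard closed-form SER of square 16-QAM (my choice of constellation and formula, not taken from the paper) and probes convexity in SNR with a second finite difference; for a two-dimensional constellation like this one, the difference should come out positive across the grid.

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mqam(snr, M=16):
    # Standard closed-form SER of square M-QAM over AWGN,
    # with snr = average SNR per symbol on a linear scale.
    p = 2.0 * (1.0 - 1.0 / math.sqrt(M)) * Q(math.sqrt(3.0 * snr / (M - 1)))
    return 1.0 - (1.0 - p) ** 2

# Second finite difference as a convexity probe: positive => locally convex.
h = 0.01
for snr_db in range(-10, 31, 5):
    g = 10.0 ** (snr_db / 10.0)
    d2 = ser_mqam(g + h) - 2.0 * ser_mqam(g) + ser_mqam(g - h)
    print(f"{snr_db:>4} dB: SER = {ser_mqam(g):.3e}, second difference = {d2:+.2e}")
```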

    The price of certainty: "waterslide curves" and the gap to capacity

    The classical problem of reliable point-to-point digital communication is to achieve a low probability of error while keeping the rate high and the total power consumption small. Traditional information-theoretic analysis uses 'waterfall' curves to convey the revolutionary idea that unboundedly low probabilities of bit error are attainable using only finite transmit power. However, practitioners have long observed that the decoder complexity, and hence the total power consumption, goes up when attempting to use sophisticated codes that operate close to the waterfall curve. This paper gives an explicit model for power consumption at an idealized decoder that allows for extreme parallelism in implementation. The decoder architecture is in the spirit of message passing and iterative decoding for sparse-graph codes. Generalized sphere-packing arguments are used to derive lower bounds on the decoding power needed for any possible code, given only the gap from the Shannon limit and the desired probability of error. As the gap goes to zero, the energy per bit spent in decoding is shown to go to infinity. This suggests that to optimize total power, the transmitter should operate at a power that is strictly above the minimum demanded by the Shannon capacity. The lower bound is plotted to show an unavoidable tradeoff between the average bit-error probability and the total power used in transmission and decoding. In the spirit of conventional waterfall curves, we call these 'waterslide' curves. Comment: 37 pages, 13 figures. Submitted to IEEE Transactions on Information Theory. This version corrects a subtle bug in the proofs of the original submission and improves the bounds significantly.
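
    The "operate strictly above the Shannon minimum" conclusion can be illustrated with a deliberately crude toy model; the decoder-power law below (a constant divided by the gap) is an assumption of mine chosen only for shape, not the paper's sphere-packing bound.

```python
# Toy model of the transmit/decode power tradeoff described above.
# Assumptions (mine, purely illustrative): transmit power is the Shannon
# minimum scaled by (1 + gap), and decoder energy per bit blows up like
# C_DEC / gap as the gap to capacity shrinks. The paper derives genuine
# lower bounds; this sketch only reproduces the qualitative shape:
# total power has an interior minimum at a strictly positive gap.
P_MIN = 1.0   # normalized minimum transmit power demanded by capacity
C_DEC = 0.05  # normalized decoder power constant (assumed)

def total_power(gap):
    return P_MIN * (1.0 + gap) + C_DEC / gap

best = min((total_power(g / 1000.0), g / 1000.0) for g in range(1, 2001))
print(f"optimal gap = {best[1]:.3f} (strictly > 0), total power = {best[0]:.3f}")
# As C_DEC -> 0 the optimal gap -> 0; any nonzero decoding cost pushes the
# optimal operating point strictly above the Shannon minimum.
```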

    Re-proving Channel Polarization Theorems: An Extremality and Robustness Analysis

    The general subject considered in this thesis is a recently discovered coding technique, polar coding, which is used to construct a class of error-correction codes with unique properties. In his ground-breaking work, Arıkan proved that this class of codes, called polar codes, achieves the symmetric capacity (the mutual information evaluated at the uniform input distribution) of any stationary binary discrete memoryless channel, with low-complexity encoders and decoders requiring on the order of $O(N \log N)$ operations in the block length $N$. This discovery settled the long-standing open problem, left by Shannon, of finding low-complexity codes achieving the channel capacity. Polar coding settled an open problem in information theory, yet opened plenty of challenging problems that need to be addressed. A significant part of this thesis is dedicated to advancing the knowledge about this technique in two directions. The first provides a better understanding of polar coding by generalizing some of the existing results and discussing their implications, and the second studies the robustness of the theory over communication models introducing various forms of uncertainty or variations into the probabilistic model of the channel. Comment: Preview of my PhD Thesis, EPFL, Lausanne, 2014. For the full version, see http://people.epfl.ch/mine.alsan/publication
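
    For the binary erasure channel, the polarization recursion is exact and easy to replay: one step turns a BEC(eps) into a worse BEC(2*eps - eps^2) and a better BEC(eps^2). A minimal sketch (the depth and the "near noiseless" threshold are arbitrary choices of mine):

```python
# Channel polarization on the BEC: one polarization step turns BEC(z)
# into a 'bad' channel BEC(2z - z^2) and a 'good' channel BEC(z^2).
# For the BEC this recursion is exact, so we can watch polarization directly.
eps, levels = 0.5, 16          # design erasure rate and number of steps
z = [eps]
for _ in range(levels):
    z = [w for v in z for w in (2 * v - v * v, v * v)]

delta = 1e-6                   # 'near noiseless' threshold (arbitrary)
good = sum(1 for v in z if v < delta) / len(z)
print(f"fraction of near-noiseless channels: {good:.4f}")
print(f"symmetric capacity 1 - eps:          {1 - eps:.4f}")
# As the number of levels grows, the good fraction approaches 1 - eps,
# which is how polar codes achieve the symmetric capacity with
# O(N log N) encoding and decoding.
```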

    Asynchronous multiple-access channel capacity

    The capacity region for the discrete memoryless multiple-access channel without time synchronization at the transmitters and receivers is shown to be the same as the known capacity region for the ordinary multiple-access channel. The proof utilizes time sharing of two optimal codes for the ordinary multiple-access channel and uses maximum-likelihood decoding over shifts of the hypothesized transmitter words.
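
    For reference, the "known capacity region for the ordinary multiple-access channel" is the classical Ahlswede-Liao region:

```latex
% Two-user discrete memoryless MAC: for a fixed product input
% distribution p(x_1)\,p(x_2), the achievable rates (R_1, R_2) satisfy
\begin{aligned}
R_1       &\le I(X_1; Y \mid X_2), \\
R_2       &\le I(X_2; Y \mid X_1), \\
R_1 + R_2 &\le I(X_1, X_2; Y),
\end{aligned}
% and the capacity region is the closure of the convex hull of the union
% of these pentagons over all product input distributions. The result
% above shows that dropping time synchronization between the transmitters
% and receivers leaves this region unchanged.
```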

    Energy-Delay Tradeoff and Dynamic Sleep Switching for Bluetooth-Like Body-Area Sensor Networks

    Wireless technology enables novel approaches to healthcare, in particular the remote monitoring of vital signs and other parameters indicative of people's health. This paper considers a system scenario relevant to such applications, where a smartphone acts as a data-collecting hub, gathering data from a number of wireless-capable body sensors and relaying them to a healthcare provider host through standard existing cellular networks. Delay of critical data and the sensors' energy efficiency are both relevant and conflicting issues. Therefore, it is important to operate the wireless body-area sensor network at some desired point close to the optimal energy-delay tradeoff curve. This tradeoff curve is a function of the employed physical-layer protocol: in particular, it depends on the multiple-access scheme and on the coding and modulation schemes available. In this work, we consider a protocol closely inspired by the widely used Bluetooth standard. First, we consider the calculation of the minimum energy function, i.e., the minimum sum energy per symbol that guarantees the stability of all transmission queues in the network. Then, we apply the general theory developed by Neely to develop a dynamic scheduling policy that approaches the optimal energy-delay tradeoff for the network at hand. Finally, we examine the queue dynamics and propose a novel policy that adaptively switches between connected and disconnected (sleeping) modes. We demonstrate that the proposed policy can achieve significant gains in the realistic case where the control "NULL" packets necessary to keep the connection alive have a non-zero energy cost and the data arrival statistics corresponding to the sensed physical process are bursty. Comment: Extended version (with proof details in the Appendix) of a paper accepted for publication in the IEEE Transactions on Communications.
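
    The "general theory developed by Neely" is Lyapunov drift-plus-penalty control; the single-queue sketch below (arrival rate, power levels, and rate function are all invented here for illustration) shows how the knob V trades average energy against average backlog:

```python
import random

# Minimal drift-plus-penalty scheduler in the spirit of Neely's framework:
# each slot, pick the power level minimizing V*power - Q*service_rate.
# Larger V trades a longer average queue (delay) for lower average energy.
random.seed(1)
POWER_LEVELS = [0.0, 1.0, 2.0, 4.0]           # assumed transmit options
rate = lambda p: 2.0 * p / (1.0 + p)          # assumed concave rate(power)
LAMBDA, SLOTS = 0.8, 200_000                  # arrival rate and horizon

for V in (1.0, 10.0, 100.0):
    Q, energy, backlog = 0.0, 0.0, 0.0
    for _ in range(SLOTS):
        a = 1.0 if random.random() < LAMBDA else 0.0   # Bernoulli arrivals
        p = min(POWER_LEVELS, key=lambda x: V * x - Q * rate(x))
        Q = max(Q + a - rate(p), 0.0)                  # queue update
        energy += p
        backlog += Q
    print(f"V={V:>6}: avg power {energy/SLOTS:.3f}, avg queue {backlog/SLOTS:.2f}")
```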

    Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity

    The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with bounded complexity, per information bit, of encoding and decoding. It also introduces symmetry properties which play a central role in the construction of capacity-achieving ensembles for the BEC with bounded complexity. The results here improve on the tradeoff between performance and complexity provided by previous constructions of capacity-achieving ensembles of codes defined on graphs. The superiority of ARA codes with moderate to large block lengths is exemplified by computer simulations which compare their performance with that of previously reported capacity-achieving ensembles of LDPC and IRA codes. The ARA codes also have the advantage of being systematic. Comment: Submitted to IEEE Transactions on Information Theory, December 1st, 2005. Includes 50 pages and 13 figures.
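
    Capacity-achieving behavior on the BEC is typically verified through density evolution; the sketch below runs the standard recursion for a regular (3,6) LDPC ensemble (a stand-in of mine; the paper's ARA ensembles have their own degree distributions) to show how a decoding threshold emerges:

```python
# Density evolution for a regular (3,6) LDPC ensemble on the BEC:
# x_{l+1} = eps * lambda(1 - rho(1 - x_l)), with lambda(x) = x^2 and
# rho(x) = x^5 for the (3,6) ensemble. Decoding succeeds iff x -> 0.
# (Illustrative stand-in: the ARA ensembles in the paper have their own
# degree distributions, but the analysis pattern is the same.)
def converges(eps, iters=2000, tol=1e-10):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

for eps in (0.40, 0.42, 0.43, 0.45):
    print(f"eps = {eps:.2f}: {'decodes' if converges(eps) else 'stuck'}")
# The (3,6) threshold is about 0.4294, short of the capacity limit 0.5;
# capacity-achieving ensembles (IRA, and the ARA codes above) close this
# gap, with ARA doing so at bounded complexity per information bit.
```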