
    Coding for Communications and Secrecy

    Shannon, in his landmark 1948 paper, developed a framework for characterizing the fundamental limits of information transmission. Among other results, he showed that reliable communication over a channel is possible at any rate below its capacity. In 2008, Arikan discovered polar codes, the only class of explicitly constructed low-complexity codes that achieve the capacity of any binary-input memoryless symmetric-output channel. Arikan's polar transform turns independent copies of a noisy channel into a collection of synthetic almost-noiseless and almost-useless channels. Polar codes are realized by sending data bits over the almost-noiseless channels and recovering them at the receiver with a low-complexity successive-cancellation (SC) decoder.

    In the first part of this thesis, we study polar codes for communications. When the underlying channel is an erasure channel, we show that almost all correlation coefficients between the erasure events of the synthetic channels decay rapidly. Hence, the sum of the erasure probabilities of the information-carrying channels is a tight estimate of the block-error probability of polar codes used for communication over the erasure channel. We study SC list (SCL) decoding, a method for boosting the performance of short polar codes, and prove that the method has a numerically stable formulation in log-likelihood ratios. In hardware, this formulation increases the decoding throughput by 53% and reduces the decoder's size by about 33%. We present empirical results on the trade-off between the length of the CRC and the performance gains in a CRC-aided version of the list decoder, and we numerically compare the performance of long polar codes under SC decoding with that of short polar codes under SCL decoding.

    Shannon's framework also quantifies the secrecy of communications. Wyner, in 1975, proposed a model for communications in the presence of an eavesdropper and showed that, at rates below the secrecy capacity, there exist reliable communication schemes in which the amount of information leaked to the eavesdropper decays exponentially in the block length of the code. In the second part of this thesis, we study the rate of this decay. We derive the exact exponential decay rate of the ensemble average of the information leaked to the eavesdropper in Wyner's model when a randomly constructed code is used for secure communication. For codes sampled from the ensemble of i.i.d. random codes, we show that the previously known lower bound on the exponent is exact. Our ensemble-optimal exponent for random constant-composition codes improves the lower bound extant in the literature. Finally, we show that random linear codes have the same secrecy power as i.i.d. random codes.

    The key to securing messages against an eavesdropper is to exploit the randomness of her communication channel so that, for any sent message, the statistics of her observations resemble those of a pure noise process. We study the effect of feedback on this approximation and show that feedback does not reduce the minimum entropy rate required to approximate a given process. However, we give examples where variable-length schemes with feedback achieve much larger exponents in this approximation than systems without feedback. Upper-bounding the best exponent that block codes attain, we conclude that variable-length coding is necessary for achieving the improved exponents.
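
    For the erasure channel mentioned above, the polar transform admits a simple closed-form recursion, so the union-bound estimate of the SC block-error probability discussed in the abstract can be computed directly: one polarization step maps an erasure probability z to 2z - z^2 and z^2 for the two synthetic channels. The following Python sketch is illustrative only; it is not taken from the thesis, and the function names are made up.

        # Minimal illustrative sketch: polarize BEC(eps) and estimate the
        # SC block-error probability by summing the erasure probabilities
        # of the K most reliable synthetic channels (union bound).
        def bec_polar_erasure_probs(eps: float, n: int) -> list[float]:
            """Erasure probabilities of the 2**n synthetic channels after
            n applications of the polar transform to BEC(eps) copies."""
            probs = [eps]
            for _ in range(n):
                nxt = []
                for z in probs:
                    nxt.append(2 * z - z * z)  # "minus" channel: erased if either copy is erased
                    nxt.append(z * z)          # "plus" channel: erased only if both copies are erased
                probs = nxt
            return probs

        def sc_block_error_estimate(eps: float, n: int, rate: float) -> float:
            """Sum of erasure probabilities over the information set of a
            rate-`rate` polar code of length 2**n (hypothetical helper)."""
            probs = sorted(bec_polar_erasure_probs(eps, n))
            k = int(rate * (1 << n))
            return sum(probs[:k])

        # Example: length-1024 polar code at rate 1/2 over BEC(0.3).
        print(sc_block_error_estimate(eps=0.3, n=10, rate=0.5))

    The abstract's result on the rapidly decaying correlations between erasure events is what makes this union-bound sum a tight estimate of the block-error probability rather than only an upper bound.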

    Error Exponents for Variable-length Block Codes with Feedback and Cost Constraints

    Variable-length block-coding schemes are investigated for discrete memoryless channels with ideal feedback under cost constraints. Upper and lower bounds are found for the minimum achievable probability of decoding error $P_{e,\min}$ as a function of constraints $R$, $\AV$, and $\bar\tau$ on the transmission rate, average cost, and average block length, respectively. For given $R$ and $\AV$, the lower and upper bounds to the exponent $-(\ln P_{e,\min})/\bar\tau$ are asymptotically equal as $\bar\tau \to \infty$. The resulting reliability function, $\lim_{\bar\tau \to \infty} (-\ln P_{e,\min})/\bar\tau$, as a function of $R$ and $\AV$, is concave in the pair $(R, \AV)$ and generalizes the linear reliability function of Burnashev to include cost constraints. The results are generalized to a class of discrete-time memoryless channels with arbitrary alphabets, including additive Gaussian noise channels with amplitude and power constraints.
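
    For context, Burnashev's linear reliability function for a discrete memoryless channel with ideal feedback and no cost constraint, which the result above generalizes, is the standard expression
    $$E(R) = C_1\Bigl(1 - \frac{R}{C}\Bigr), \qquad 0 \le R \le C,$$
    where $C$ is the channel capacity and $C_1 = \max_{x,x'} D\bigl(P_{Y|X}(\cdot \mid x) \,\big\|\, P_{Y|X}(\cdot \mid x')\bigr)$ is the largest Kullback-Leibler divergence between the output distributions of two input letters. This special case is not stated in the abstract and is included here only as a reference point.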

    Error Correcting Codes for Distributed Control

    The problem of stabilizing an unstable plant over a noisy communication link is an increasingly important one that arises in applications of networked control systems. Although the work of Schulman and Sahai over the past two decades, and their development of the notions of "tree codes" and "anytime capacity", provides the theoretical framework for studying such problems, there has been scant practical progress in this area because explicit constructions of tree codes with efficient encoding and decoding did not exist. To stabilize an unstable plant driven by bounded noise over a noisy channel, one needs real-time encoding, real-time decoding, and a reliability that increases exponentially with decoding delay, which is what tree codes guarantee. We prove that linear tree codes occur with high probability and, for erasure channels, give an explicit construction with an expected decoding complexity that is constant per time instant. We give novel sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants and argue that they are asymptotically tight. This work takes an important step towards controlling plants over noisy channels, and we demonstrate the efficacy of the method through several examples.
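
    To make the structure of a linear tree code concrete, the sketch below is illustrative only: it is a generic random causal linear encoder over GF(2), not the paper's explicit erasure-channel construction, and the class name and rate are assumptions. At each time step it emits channel bits that are random linear combinations of all source bits seen so far; this causal, lower-triangular dependence is what permits real-time encoding while the reliability of earlier bits keeps improving with decoding delay.

        # Illustrative sketch of a random causal (lower-triangular) linear encoder
        # over GF(2); rate 1/2 with one source bit in and two channel bits out per step.
        import random

        class RandomCausalLinearEncoder:
            """Each output bit at time t is a random GF(2) combination of the
            source bits b_0, ..., b_t -- the causal structure of a linear tree code."""

            def __init__(self, bits_per_step: int = 2, seed: int = 0):
                self.bits_per_step = bits_per_step
                self.rng = random.Random(seed)  # a shared seed lets a decoder rebuild the same code
                self.source_bits: list[int] = []

            def encode_step(self, b: int) -> list[int]:
                self.source_bits.append(b & 1)
                out = []
                for _ in range(self.bits_per_step):
                    # Draw the new generator row lazily; each row is used exactly once.
                    coeffs = [self.rng.randrange(2) for _ in self.source_bits]
                    out.append(sum(c * s for c, s in zip(coeffs, self.source_bits)) % 2)
                return out

        enc = RandomCausalLinearEncoder()
        for t, b in enumerate([1, 0, 1, 1, 0]):
            print(t, enc.encode_step(b))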