
    Performance Bounds for Erasure, List, and Decision Feedback Schemes With Linear Block Codes


    Error-and-Erasure Decoding for Block Codes with Feedback

    Inner and outer bounds are derived on the optimal performance of fixed-length block codes on discrete memoryless channels with feedback and errors-and-erasures decoding. First, an inner bound is derived using a two-phase encoding scheme, with communication and control phases, together with the decoding rule that is optimal for this encoding scheme among decoding rules that can be represented in terms of pairwise comparisons between the messages. Then, an outer bound is derived using a generalization of the straight-line bound to errors-and-erasures decoders and the optimal error-exponent tradeoff of a feedback encoder with two messages. In addition, upper and lower bounds are derived for the optimal erasure exponent of error-free block codes in terms of the rate. Finally, we present a proof of the fact that the optimal tradeoff between the error exponents of a two-message code does not improve with feedback on DMCs.
    Comment: 33 pages, 1 figure
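
    To make the two-phase structure concrete, here is a toy simulation in the spirit of the communication/control construction described above: a tentative decision is formed in the first phase and is either confirmed or rejected in the second, with an unclear confirmation producing an erasure. The channel, block lengths, repetition code, and thresholds are all illustrative assumptions, not the paper's scheme.

```python
import random

# Toy errors-and-erasures scheme with communication and control phases
# (illustrative assumptions throughout; not the paper's construction).
P = 0.05        # BSC crossover probability (assumed)
N_COMM = 40     # communication-phase length (assumed)
N_CTRL = 20     # control-phase length (assumed)

def bsc(bits, p=P):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def run_round(msg_bit):
    # Phase 1 (communication): repetition code, tentative majority decision.
    rx = bsc([msg_bit] * N_COMM)
    tentative = int(sum(rx) > N_COMM // 2)
    # Feedback: the encoder sees the tentative decision and confirms it
    # with all-zeros or rejects it with all-ones in the control phase.
    ctrl_bit = 0 if tentative == msg_bit else 1
    rx_ctrl = bsc([ctrl_bit] * N_CTRL)
    # Decoder: accept only a clear confirmation, otherwise declare an erasure.
    return tentative if sum(rx_ctrl) <= N_CTRL // 4 else None

errors = erasures = 0
TRIALS = 100_000
for _ in range(TRIALS):
    m = random.randint(0, 1)
    out = run_round(m)
    if out is None:
        erasures += 1
    elif out != m:
        errors += 1
print(f"error rate ~ {errors / TRIALS:.2e}, erasure rate ~ {erasures / TRIALS:.3f}")
```

    Undetected errors require both a wrong tentative decision and a missed rejection burst, which is why such schemes trade a modest erasure probability for a much steeper error exponent.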

    Exponential bounds on error probability with feedback

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 95-97).
    Feedback is useful in memoryless channels for decreasing complexity and increasing reliability; the capacity of memoryless channels, however, cannot be increased by feedback. For fixed-length block codes, even the decay rate of the error probability with block length does not increase with feedback for most channel models. Consequently, to make the physical layer more reliable for the higher layers, one needs to go beyond the framework of fixed-length block codes and consider relaxations like variable-length coding and error-erasure decoding. We strengthen and quantify this observation by investigating three problems.
    1. Error-Erasure Decoding for Fixed-Length Block Codes with Feedback: Error-erasure codes with communication and control phases, introduced by Yamamoto and Itoh, are building blocks for optimal variable-length block codes. We improve their performance by changing the decoding scheme and tuning the durations of the phases, and establish inner bounds on the tradeoff between error exponent, erasure exponent, and rate. We bound the loss of performance due to the encoding scheme of Yamamoto and Itoh from above by deriving outer bounds on this tradeoff both with and without feedback. We also consider zero-error codes with erasures and establish inner and outer bounds on the optimal erasure exponent of zero-error codes. In addition, we present a proof of the long-known fact that the error-exponent tradeoff between two messages is not improved by feedback.
    2. Unequal Error Protection for Variable-Length Block Codes with Feedback: We use Kudryashov's idea of implicit confirmations and explicit rejections in the framework of unequal error protection to establish inner bounds on the achievable pairs of rate vectors and error-exponent vectors. We then derive an outer bound that matches the inner bound using a new bounding technique. As a result, we characterize the region of achievable rate-vector and error-exponent-vector pairs for the bit-wise unequal error protection problem for variable-length block codes with feedback. Furthermore, we consider the single-message, message-wise unequal error protection problem and determine an analytical expression for the missed-detection exponent in terms of the rate and error exponent for variable-length block codes with feedback.
    3. Feedback Encoding Schemes for Fixed-Length Block Codes: We modify the analysis technique of Gallager to bound the error probability of feedback encoding schemes. Using the encoding schemes suggested by Zigangirov, D'yachkov, and Burnashev, we recover or improve all previously known lower bounds on the error exponents of fixed-length block codes.
    by Barış Nakiboğlu. Ph.D.
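
    As a concrete illustration of the implicit-confirmation/explicit-rejection idea mentioned in problem 2, here is a toy variable-length sketch: the encoder spends no channel uses confirming a correct tentative decision (it simply moves on) and transmits a dedicated burst only to reject a wrong one. This is our own caricature of the idea, not the thesis's construction; the channel model, lengths, thresholds, and the free slot synchronization are all assumptions.

```python
import random

# Toy implicit-confirmation / explicit-rejection scheme (assumptions
# throughout; a caricature of Kudryashov's idea, not the thesis's
# construction). The toy grants the receiver free knowledge of slot
# boundaries and of whether a slot carries data or a rejection; real
# schemes build this distinction into the code design.
P = 0.1          # BSC crossover probability (assumed)
REP = 3          # tentative transmission length (assumed)
REJECT_LEN = 15  # explicit rejection burst length (assumed)

def bsc(bits, p=P):
    return [b ^ (random.random() < p) for b in bits]

def send_bit(bit):
    """Transmit one protected bit; returns (decoded_bit, channel_uses)."""
    uses = 0
    while True:
        rx = bsc([bit] * REP)                  # short tentative transmission
        uses += REP
        tentative = int(sum(rx) * 2 > REP)
        if tentative == bit:                   # encoder learns this via feedback
            return tentative, uses             # implicit confirmation: move on
        rej = bsc([1] * REJECT_LEN)            # explicit rejection burst
        uses += REJECT_LEN
        if sum(rej) * 2 > REJECT_LEN:          # receiver hears the rejection
            continue                           # the bit is retransmitted
        return tentative, uses                 # rejection missed: rare error

errs = uses = 0
TRIALS = 20_000
for _ in range(TRIALS):
    b = random.randint(0, 1)
    dec, n = send_bit(b)
    errs += dec != b
    uses += n
print(f"error rate ~ {errs / TRIALS:.2e}, avg channel uses ~ {uses / TRIALS:.2f}")
```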

    Dynamic Rate Adaptation for Improved Throughput and Delay in Wireless Network Coded Broadcast

    In this paper we provide a theoretical and simulation-based study of the delivery-delay performance of a number of existing throughput-optimal coding schemes, and use the results to design a new dynamic rate adaptation scheme that achieves improved overall throughput-delay performance. Under a baseline rate-control scheme, the receivers' delay performance is examined. Based on their Markov states, i.e., the knowledge difference between the sender and each receiver, three distinct methods of packet delivery are identified: zero-state, leader-state, and coefficient-based delivery. We provide analyses of each of these and show that, in many cases, zero-state delivery alone presents a tractable approximation of the expected packet-delivery behaviour. Interestingly, while coefficient-based delivery has so far been treated as a secondary effect in the literature, we find that the choice of coefficients is extremely important in determining the delay, and a well-chosen encoding scheme can, in fact, contribute a significant improvement to the delivery delay. Based on our delivery-delay model, we develop a dynamic rate adaptation scheme which uses performance-prediction models to determine the sender's transmission rate. Surprisingly, this approach leads us to the simple conclusion that the sender should regulate its addition rate based on the total number of undelivered packets stored at the receivers. We show that, despite its simplicity, our proposed dynamic rate adaptation scheme results in noticeably improved throughput-delay performance over existing schemes in the literature.
    Comment: 14 pages, 15 figures
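
    The headline control rule, as we read the abstract, fits in a few lines. The backlog threshold and the surrounding bookkeeping below are illustrative assumptions, not the paper's tuned policy.

```python
# Sketch of the abstract's conclusion: the sender admits a new source
# packet into the coding window only while the total backlog of
# undelivered packets across receivers stays below a threshold.
# The threshold value and the data layout are assumptions.

def should_add_packet(undelivered_per_receiver, threshold=10):
    """undelivered_per_receiver: iterable of per-receiver backlog counts."""
    return sum(undelivered_per_receiver) < threshold

# Example: three receivers with backlogs 2, 3, and 4 -> total 9 < 10,
# so the sender admits one more packet into the coding window.
assert should_add_packet([2, 3, 4]) is True
assert should_add_packet([5, 3, 4]) is False
```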

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of the side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
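
    The encoder side of the pipeline described above is easy to sketch. Below, a majority-vote window stands in for the sliding-window processing and zlib stands in for the LZ stage; both substitutions, the window size, and the byte packing are illustrative assumptions, and the DUDE-based decompressor that exploits the side information is omitted.

```python
import zlib

def sliding_window_process(bits, k=2):
    """Map each bit to the majority value of its length-(2k+1) window
    (an assumed stand-in for the scheme's sliding-window processing)."""
    out = []
    for i in range(len(bits)):
        window = bits[max(0, i - k): i + k + 1]
        out.append(int(2 * sum(window) > len(window)))
    return out

def wz_encode(bits):
    """Sliding-window processing followed by LZ compression (zlib as a
    practical LZ proxy; one byte per bit is a toy packing)."""
    return zlib.compress(bytes(sliding_window_process(bits)))

source = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0] * 64
print(f"{len(source)} bits -> {len(wz_encode(source))} bytes")
```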

    Lists that are smaller than their parts: A coding approach to tunable secrecy

    We present a new information-theoretic definition and associated results, based on list decoding in a source-coding setting. We begin by presenting list-source codes, which naturally map a key length (entropy) to a list size. We then show that such codes can be analyzed in the context of a novel information-theoretic metric, ε-symbol secrecy, that encompasses both the one-time pad and traditional rate-based asymptotic metrics but, like most cryptographic constructs, can be applied in non-asymptotic settings. We derive fundamental bounds for ε-symbol secrecy and demonstrate how these bounds can be achieved with MDS codes when the source is uniformly distributed. We discuss applications and implementation issues of our codes.
    Comment: Allerton 2012, 8 pages
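
    One way to see how a key length maps to a list size: compress the source, then one-time-pad only a short span of the compressed string, so an eavesdropper without the key faces a list of 2^(8·KEY_LEN) candidate sources. This toy construction is our own illustration of the idea, not the paper's MDS-code scheme.

```python
import os, zlib

KEY_LEN = 4  # key bytes; the eavesdropper's list has 2**(8*KEY_LEN) entries
OFF = 2      # skip zlib's fixed, predictable 2-byte header (assumed detail)

def otp_span(blob: bytes, key: bytes) -> bytes:
    """XOR a one-time pad onto KEY_LEN bytes of blob; self-inverse."""
    pad = bytes(c ^ k for c, k in zip(blob[OFF:OFF + KEY_LEN], key))
    return blob[:OFF] + pad + blob[OFF + KEY_LEN:]

def lsc_encode(data: bytes, key: bytes) -> bytes:
    return otp_span(zlib.compress(data), key)

def lsc_decode(blob: bytes, key: bytes) -> bytes:
    return zlib.decompress(otp_span(blob, key))

key = os.urandom(KEY_LEN)
msg = b"tunable secrecy: a short key narrows a large candidate list"
assert lsc_decode(lsc_encode(msg, key), key) == msg
```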

    The price of certainty: "waterslide curves" and the gap to capacity

    The classical problem of reliable point-to-point digital communication is to achieve a low probability of error while keeping the rate high and the total power consumption small. Traditional information-theoretic analysis uses 'waterfall' curves to convey the revolutionary idea that unboundedly low probabilities of bit error are attainable using only finite transmit power. However, practitioners have long observed that the decoder complexity, and hence the total power consumption, goes up when attempting to use sophisticated codes that operate close to the waterfall curve. This paper gives an explicit model for power consumption at an idealized decoder that allows for extreme parallelism in implementation. The decoder architecture is in the spirit of message passing and iterative decoding for sparse-graph codes. Generalized sphere-packing arguments are used to derive lower bounds on the decoding power needed for any possible code, given only the gap from the Shannon limit and the desired probability of error. As the gap goes to zero, the energy per bit spent in decoding is shown to go to infinity. This suggests that, to optimize total power, the transmitter should operate at a power strictly above the minimum demanded by the Shannon capacity. The lower bound is plotted to show an unavoidable tradeoff between the average bit-error probability and the total power used in transmission and decoding. In the spirit of conventional waterfall curves, we call these 'waterslide' curves.
    Comment: 37 pages, 13 figures. Submitted to IEEE Transactions on Information Theory. This version corrects a subtle bug in the proofs of the original submission and improves the bounds significantly.
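
    The qualitative takeaway, that total power is minimized at a strictly positive gap to capacity, can be illustrated numerically. The functional forms below are assumed caricatures chosen only to exhibit the effect; the paper's actual lower bounds come from the generalized sphere-packing arguments.

```python
import math

# Toy numerical illustration of the waterslide tradeoff. Both energy
# terms are assumed caricatures (not the paper's bounds): an SNR-like
# transmit term that grows with the gap to capacity, and a decoding
# term that blows up as the gap closes.

def total_energy_per_bit(gap, pe):
    e_tx = 2.0 ** gap                        # assumed transmit model
    e_dec = 0.01 * math.log(1.0 / pe) / gap  # assumed decoder model
    return e_tx + e_dec

pe = 1e-6
gaps = [g / 1000 for g in range(1, 2000)]
best_e, best_g = min((total_energy_per_bit(g, pe), g) for g in gaps)
# The optimum lands at a strictly positive gap: operating exactly at
# the Shannon limit makes the decoding term diverge.
print(f"optimal gap ~ {best_g:.3f}, total energy per bit ~ {best_e:.2f}")
```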