
    Source Polarization

    The notion of source polarization is introduced and investigated, complementing the earlier work on channel polarization. An application to Slepian-Wolf coding is also considered. The treatment is restricted to the case of binary alphabets; the extension of the results to non-binary alphabets is discussed briefly. Comment: To be presented at the 2010 IEEE International Symposium on Information Theory.

    Channel combining and splitting for cutoff rate improvement

    The cutoff rate $R_0(W)$ of a discrete memoryless channel (DMC) $W$ is often used as a figure of merit, alongside the channel capacity $C(W)$. Given a channel $W$ consisting of two possibly correlated subchannels $W_1$, $W_2$, the capacity function always satisfies $C(W_1) + C(W_2) \le C(W)$, while there are examples for which $R_0(W_1) + R_0(W_2) > R_0(W)$. This fact that cutoff rate can be "created" by channel splitting was noticed by Massey in his study of an optical modulation system modeled as an $M$-ary erasure channel. This paper demonstrates that similar gains in cutoff rate can be achieved for general DMCs by methods of channel combining and splitting. The relation of the proposed method to Pinsker's early work on cutoff rate improvement and to Imai-Hirakawa multi-level coding is also discussed. Comment: 5 pages, 7 figures, 2005 IEEE International Symposium on Information Theory, Adelaide, Sept. 4-9, 2005.
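    For context, the cutoff rate that the abstract takes as given is the standard quantity below (this definition is supplied here for reference and is not restated in the abstract):

```latex
% Standard definition of the cutoff rate of a DMC W : X -> Y (in bits),
% maximized over input distributions Q; the abstract takes this quantity as given.
\[
  R_0(W) \;=\; \max_{Q}\; \biggl[\, -\log_2 \sum_{y \in \mathcal{Y}}
      \Bigl( \sum_{x \in \mathcal{X}} Q(x)\, \sqrt{W(y \mid x)} \Bigr)^{2} \,\biggr].
\]
```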

    Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels

    A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)} : 1 \le i \le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1 - I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining channels. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W) > 0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{\mathscr{C}_n ; n \ge 1\}$ such that $\mathscr{C}_n$ has block length $N = 2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_e(N, R) \le O(N^{-1/4})$, independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N \log N)$ each. Comment: The version which appears in the IEEE Transactions on Information Theory, July 2009.
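    The polarization effect is easiest to see for the binary erasure channel, where the single-step transform has a closed form. The sketch below is an illustrative aid, not code from the paper; the thresholds and the choice $W = \mathrm{BEC}(0.5)$ are arbitrary.

```python
# Minimal sketch (not from the paper): channel polarization for a binary
# erasure channel (BEC). For W = BEC(eps), one polarization step turns two
# copies of W into BEC(2*eps - eps**2) and BEC(eps**2); iterating n times
# yields N = 2**n synthetic channels.

def polarize_bec(eps: float, n: int) -> list:
    """Return the erasure probabilities of the N = 2**n synthetic channels."""
    channels = [eps]
    for _ in range(n):
        channels = [z for e in channels for z in (2 * e - e * e, e * e)]
    return channels

if __name__ == "__main__":
    eps, n = 0.5, 10                          # W = BEC(0.5), so I(W) = 0.5; N = 1024
    zs = polarize_bec(eps, n)
    near_perfect = sum(z < 1e-3 for z in zs) / len(zs)       # erasure prob. near 0
    near_useless = sum(z > 1 - 1e-3 for z in zs) / len(zs)   # erasure prob. near 1
    print(f"fraction with capacity near 1: {near_perfect:.3f}  (-> I(W) = {1 - eps})")
    print(f"fraction with capacity near 0: {near_useless:.3f}  (-> 1 - I(W) = {eps})")
```

    Increasing n drives the two printed fractions toward $I(W)$ and $1 - I(W)$, which is the polarization statement in the abstract.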

    On the Rate of Channel Polarization

    It is shown that for any binary-input discrete memoryless channel $W$ with symmetric capacity $I(W)$ and any rate $R < I(W)$, the probability of block decoding error for polar coding under successive cancellation decoding satisfies $P_e \le 2^{-N^\beta}$ for any $\beta < \frac{1}{2}$ when the block length $N$ is large enough. Comment: Some minor corrections.
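    To get a feel for how much stronger this is than the earlier $O(N^{-1/4})$ bound, a quick numeric comparison follows (illustrative only; the choice $\beta = 0.49$ is an arbitrary value within the allowed range):

```python
# Quick numeric comparison (illustrative only): the earlier polynomial bound
# N**-0.25 versus 2**(-N**beta) for beta = 0.49 (any beta < 1/2 is allowed),
# at a few block lengths N = 2**n. These are bounds, not simulated error rates.

for n in (10, 14, 18):
    N = 2 ** n
    beta = 0.49
    print(f"N = 2^{n}:  N^(-1/4) = {N ** -0.25:.2e},  2^(-N^beta) = {2.0 ** -(N ** beta):.2e}")
```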

    Trellis coding for high signal-to-noise ratio Gaussian noise channels

    It is known that under energy constraints it is best to have each code word of a code satisfy the constraint with equality, rather than have the constraint satisfied only in an average sense over all code words. This suggests the use of fixed-composition codes on additive Gaussian noise channels, for which the coding gains achievable by this method are significant, especially in the high signal-to-noise-ratio case. The author examines the possibility of achieving these gains by using fixed-composition trellis codes. Shell-constrained trellis codes are promising in this regard, since they can be decoded by sequential decoding, at least at rates below the computational cutoff rate.

    Inequality on guessing and its application to sequential decoding

    Let $(X, Y)$ be a pair of discrete random variables with $X$ taking values from a finite set. Suppose the value of $X$ is to be determined, given the value of $Y$, by asking questions of the form 'Is $X$ equal to $x$?' until the answer is 'Yes.' Let $G(x|y)$ denote the number of guesses in any such guessing scheme when $X = x$, $Y = y$. The main result is a tight lower bound on nonnegative moments of $G(X|Y)$. As an application, lower bounds are given on the moments of computation in sequential decoding. In particular, a simple derivation of the cutoff rate bound for single-user channels is obtained, and the previously unknown cutoff rate region of multi-access channels is determined.
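    As a concrete illustration (not taken from the paper), the moment being bounded can be computed directly for small alphabets: the optimal guesser asks about candidate values of $X$ in order of decreasing posterior probability given $Y$, and $G(x|y)$ is the resulting rank.

```python
# Illustrative computation (not from the paper): under the optimal guessing
# order, G(x|y) is the rank of x when candidates are sorted by decreasing
# P(x | y). E[G(X|Y)**rho] is the moment that the paper's inequality bounds.

import numpy as np

def guessing_moment(p_xy: np.ndarray, rho: float) -> float:
    """E[G(X|Y)^rho] under the optimal guessing order; p_xy[x, y] = P(X=x, Y=y)."""
    total = 0.0
    for y in range(p_xy.shape[1]):
        column = np.sort(p_xy[:, y])[::-1]        # likeliest x guessed first
        ranks = np.arange(1, column.size + 1)     # G(x|y) = 1, 2, ..., |X|
        total += np.sum(column * ranks ** rho)
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.random((8, 4))
    p /= p.sum()                                  # a random joint distribution
    print(guessing_moment(p, rho=1.0))            # expected number of guesses
```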

    Markov modulated periodic arrival process offered to an ATM multiplexer

    When a superposition of on/off sources is offered to a deterministic server, a particular queueing system arises whose analysis plays a significant role in ATM-based networks. Periodic cell generation during active times is a major feature of these sources. In this paper a new analytical method is provided to solve this queueing system via an approximation to the transient behavior of the $nD/D/1$ queue. The solution to the queue length distribution is given in terms of a solution to a linear differential equation with variable coefficients. The technique proposed here has close similarities with fluid flow approximations and is amenable to extension to more complicated queueing systems with such correlated arrival processes. A numerical example for a packetized voice multiplexer is finally given to demonstrate our results.
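    A minimal simulation sketch of the underlying $nD/D/1$ queue can make the setup concrete. The assumptions below (slotted time, one cell served per slot, a single fixed realization of the random source phases, and the specific values of $n$ and $D$) are illustrative choices, not taken from the paper.

```python
# Minimal sketch: n periodic sources each emit one cell per frame of D slots
# at a random phase, offered to a deterministic server that serves one cell
# per slot. The offered load is n / D.

import numpy as np

def simulate_ndd1(n: int, D: int, frames: int, seed: int = 0) -> np.ndarray:
    """Queue length at the end of every slot for an nD/D/1 queue."""
    rng = np.random.default_rng(seed)
    phases = rng.integers(0, D, size=n)        # each source's arrival slot in a frame
    arrivals = np.zeros(D, dtype=int)
    for ph in phases:
        arrivals[ph] += 1                      # cells arriving in each slot of a frame
    q, history = 0, []
    for _ in range(frames):
        for a in arrivals:
            q = max(q + a - 1, 0)              # arrivals, then one departure per slot
            history.append(q)
    return np.array(history)

if __name__ == "__main__":
    qs = simulate_ndd1(n=40, D=50, frames=200)     # 80 percent load
    print("max queue length:", qs.max(), "  P(Q > 5):", float((qs > 5).mean()))
```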

    Joint source-channel coding and guessing

    We consider the joint source-channel guessing problem, define measures of optimum performance, and give single-letter characterizations. As an application, sequential decoding is considered.