
    An entropy inequality for q-ary random variables and its application to channel polarization

    It is shown that given two copies of a q-ary input channel W, where q is prime, it is possible to create two channels W^- and W^+ whose symmetric capacities satisfy I(W^-) \le I(W) \le I(W^+), where the inequalities are strict except in trivial cases. This leads to a simple proof of channel polarization in the q-ary case. Comment: To be presented at the IEEE 2010 International Symposium on Information Theory.
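
    As a concrete illustration (a minimal Python sketch of my own, not code from the paper; polar_step and symmetric_capacity are ad hoc names), the single polarization step combines two uses of W through u_1 + u_2 mod q, and the resulting capacities can be checked numerically to bracket I(W):

        import numpy as np
        import itertools

        def polar_step(W, q):
            """One q-ary Arikan step: inputs combined as x1 = u1 + u2 (mod q), x2 = u2.
            Returns transition matrices of W^- (output (y1, y2)) and W^+ (output
            (y1, y2, u1)), assuming the other input is uniform."""
            ny = W.shape[1]
            W_minus = np.zeros((q, ny * ny))
            W_plus = np.zeros((q, ny * ny * q))
            for u1, u2, y1, y2 in itertools.product(range(q), range(q), range(ny), range(ny)):
                p = W[(u1 + u2) % q, y1] * W[u2, y2]
                W_minus[u1, y1 * ny + y2] += p / q            # average over u2
                W_plus[u2, (y1 * ny + y2) * q + u1] += p / q  # u1 is part of the output
            return W_minus, W_plus

        def symmetric_capacity(W, q):
            """I(W) in q-ary units for a uniform input."""
            py = W.mean(axis=0)
            ratio = np.where(W > 0, W / py, 1.0)
            return float((W * np.log(ratio)).sum() / (q * np.log(q)))

        # Example: a noisy 3-ary channel; both inequalities should hold strictly.
        q = 3
        W = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
        Wm, Wp = polar_step(W, q)
        print("I(W^-), I(W), I(W^+):",
              symmetric_capacity(Wm, q), symmetric_capacity(W, q), symmetric_capacity(Wp, q))

    By the chain rule the two synthetic capacities average to I(W), so the strict separation asserted above is exactly what drives polarization over repeated steps.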

    Source and Channel Polarization over Finite Fields and Reed-Solomon Matrices

    The polarization phenomenon over any finite field \mathbb{F}_q with size q being a power of a prime is considered. This problem is a generalization of the original proposal of channel polarization by Arikan for the binary field, as well as its extension to a prime field by Sasoglu, Telatar, and Arikan. In this paper, a necessary and sufficient condition on a matrix over a finite field \mathbb{F}_q is shown under which any source and channel are polarized. Furthermore, the result on the speed of polarization for the binary alphabet obtained by Arikan and Telatar is generalized to an arbitrary finite field. It is also shown that the asymptotic error probability of polar codes is improved by using the Reed-Solomon matrix, which can be regarded as a natural generalization of the 2\times 2 binary matrix used in the original proposal by Arikan. Comment: 17 pages, 3 figures, accepted for publication in the IEEE Transactions on Information Theory.
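
    To make the kernel-based setting concrete, the following is a hedged sketch of my own (the function name synthetic_capacities is ad hoc; it is restricted to prime q so that plain modular arithmetic can stand in for \mathbb{F}_q, whereas the prime-power case treated in the paper needs genuine finite-field arithmetic). It computes the capacities of the l synthetic channels produced by one step with an arbitrary invertible l x l kernel G:

        import numpy as np
        import itertools

        def synthetic_capacities(W, G, q):
            """Capacities (in q-ary units) of the l synthetic channels of one step
            with kernel G over the prime field F_q.  W is a q x |Y| transition
            matrix; channel i sees (u_1..u_{i-1}, y_1..y_l) given input u_i."""
            l, ny = G.shape[0], W.shape[1]
            caps = []
            for i in range(l):
                joint = {}                               # P(u_i, (u_<i, y))
                for u in itertools.product(range(q), repeat=l):
                    x = np.mod(np.array(u) @ G, q)       # codeword symbols
                    for y in itertools.product(range(ny), repeat=l):
                        p = q ** (-l) * np.prod(W[x, list(y)])
                        key = (u[i], u[:i], y)
                        joint[key] = joint.get(key, 0.0) + p
                pout = {}                                # marginal of the output
                for (ui, pre, y), p in joint.items():
                    pout[(pre, y)] = pout.get((pre, y), 0.0) + p
                # mutual information I(U_i ; U_<i, Y), with P(u_i) = 1/q
                I = sum(p * np.log(p * q / pout[(pre, y)])
                        for (ui, pre, y), p in joint.items() if p > 0) / np.log(q)
                caps.append(I)
            return caps

        # Example with the 2x2 Arikan kernel over F_3; the two capacities bracket
        # I(W) and sum to 2*I(W) by the chain rule.
        W = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
        print(synthetic_capacities(W, np.array([[1, 0], [1, 1]]), q=3))

    For an invertible kernel the capacities always sum to l times I(W); which kernels actually drive them apart is exactly what the paper's necessary and sufficient condition characterizes.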

    General Strong Polarization

    Arikan's exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the {\em polarization} of an associated [0,1]-bounded martingale, namely its convergence in the limit to either 0 or 1. Arikan showed polarization of the martingale associated with the matrix G_2 = \left(\begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix}\right) to get capacity-achieving codes. His analysis was later extended to all matrices M that satisfy an obvious necessary condition for polarization. While Arikan's theorem does not guarantee that the codes achieve capacity at small blocklengths, it turns out that a "strong" analysis of the polarization of the underlying martingale would lead to such constructions. Indeed, for the martingale associated with G_2 such strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT '15] and [Hassani et al., IEEE IT '14]), resolving a major theoretical challenge concerning the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are also simpler and modular. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving codes for arbitrary symmetric memoryless channels. We show how to use our analyses to achieve exponentially small error probabilities at lengths inverse polynomial in the gap to capacity. Indeed, we show that we can essentially match any error probability with lengths that are only inverse polynomial in the gap to capacity. Comment: 73 pages, 2 figures. The final version appeared in JACM. This paper combines results presented in preliminary form at STOC 2018 and RANDOM 201
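
    For intuition (an illustrative snippet of my own, not the paper's analysis): for the binary erasure channel the capacity martingale associated with G_2 has a closed form, with one step mapping I to I^2 or 2I - I^2, each with probability 1/2, and a short simulation shows the mass drifting to the endpoints 0 and 1:

        import numpy as np

        # Simulate the [0,1]-bounded capacity martingale for the BEC and G_2:
        # each step sends I to I**2 (minus branch) or 2*I - I**2 (plus branch),
        # each with probability 1/2, so the conditional mean is I (a martingale).
        rng = np.random.default_rng(0)
        n_steps, n_paths, I0 = 20, 100_000, 0.5
        I = np.full(n_paths, I0)
        for _ in range(n_steps):
            minus = rng.random(n_paths) < 0.5
            I = np.where(minus, I ** 2, 2 * I - I ** 2)

        delta = 1e-3
        print("mean (stays near I0):", I.mean())
        print("fraction still in (delta, 1 - delta):", np.mean((I > delta) & (I < 1 - delta)))

    Roughly speaking, strong polarization asks that the fraction of paths still away from {0, 1} decay exponentially in the number of steps, which is what translates into block lengths polynomial in the gap to capacity.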

    Fast Polarization for Processes with Memory

    Fast polarization is crucial for the performance guarantees of polar codes. In the memoryless setting, the rate of polarization is known to be exponential in the square root of the block length. A complete characterization of the rate of polarization for models with memory has been missing; in particular, previous works have not addressed fast polarization of the high-entropy set under memory. We consider polar codes for processes with memory that are characterized by an underlying ergodic finite-state Markov chain. We show that the rate of polarization for these processes is the same as in the memoryless setting, both for the high- and for the low-entropy sets. Comment: 17 pages, 3 figures. Submitted to the IEEE Transactions on Information Theory.
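
    For reference, the memoryless benchmark referred to above, the rate-of-polarization theorem of Arikan and Telatar, can be stated (roughly, and restated here from the literature rather than from this paper) with Z_n the Bhattacharyya parameter after n polarization steps and N = 2^n the block length:

        \lim_{n\to\infty} \Pr\!\left[ Z_n \le 2^{-N^{\beta}} \right] = I(W) \quad \text{for every } \beta < \tfrac{1}{2},
        \qquad
        \lim_{n\to\infty} \Pr\!\left[ Z_n \ge 2^{-N^{\beta}} \right] = 1 \quad \text{for every } \beta > \tfrac{1}{2}.

    The paper shows that this same square-root-of-the-block-length rate carries over to the finite-state Markov setting.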

    An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets

    We prove a lower estimate on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q = {0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show H(X_1 + X_2 | Y_1, Y_2) - H(X|Y) >= alpha(q) * H(X|Y) * (1 - H(X|Y)) for some alpha(q) > 0, where H(.) denotes entropy normalized by the factor log_2(q). In particular, if X | Y is not close to being fully random or fully deterministic, that is, H(X|Y) in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption of q being prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X) = H(X). For X supported on infinite groups without a finite subgroup (the torsion-free case) and no conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao in [Tao, CP&R 2010]. We use our sumset inequality to analyze Arikan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We can get capacity-achieving source codes with similar guarantees for composite alphabets by factoring q into primes and combining different polar codes for each prime in the factorization. A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity for a block length and decoding complexity bounded by a polynomial in 1/epsilon. The result was previously shown for the special case of binary-input channels [Guruswami/Xia, FOCS'13; Hassani/Alishahi/Urbanke, CoRR 2013], and this work extends the result to channels over any alphabet.
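
    A quick numerical sanity check may help (a sketch of my own with ad hoc names cond_entropy and sum_entropy_increase; it simply estimates the left-hand side for a random joint pmf and prints it next to H(X|Y)(1 - H(X|Y))):

        import numpy as np

        def cond_entropy(pxy, q):
            """Normalized (base-q) conditional entropy H(X|Y) for a q x m joint pmf."""
            py = pxy.sum(axis=0)
            h = 0.0
            for y in range(pxy.shape[1]):
                if py[y] > 0:
                    p = pxy[:, y] / py[y]
                    nz = p[p > 0]
                    h += py[y] * (-(nz * np.log(nz)).sum() / np.log(q))
            return h

        def sum_entropy_increase(pxy, q):
            """H(X1+X2 | Y1,Y2) - H(X|Y) for two i.i.d. copies of (X, Y)."""
            m = pxy.shape[1]
            psum = np.zeros((q, m * m))          # joint pmf of (X1+X2 mod q, (Y1, Y2))
            for x1 in range(q):
                for x2 in range(q):
                    psum[(x1 + x2) % q] += np.outer(pxy[x1], pxy[x2]).ravel()
            return cond_entropy(psum, q) - cond_entropy(pxy, q)

        rng = np.random.default_rng(1)
        q, m = 5, 3
        pxy = rng.random((q, m)); pxy /= pxy.sum()
        h = cond_entropy(pxy, q)
        print("entropy increase:", sum_entropy_increase(pxy, q))
        print("H(X|Y)(1 - H(X|Y)):", h * (1 - h))

    For prime q the printed increase should be strictly positive whenever 0 < H(X|Y) < 1, consistent with the inequality for some alpha(q) > 0; repeating the check with X uniform on a proper subgroup of a composite Z_q (and no side information) gives zero increase, illustrating why primality is needed.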

    Polar codes for the two-user multiple-access channel

    Arikan's polar coding method is extended to two-user multiple-access channels. It is shown that if the two users of the channel use the Arikan construction, the resulting channels will polarize to one of five possible extremals, on each of which uncoded transmission is optimal. The sum rate achieved by this coding technique is the one that corresponds to uniform input distributions. The encoding and decoding complexities and the error performance of these codes are as in the single-user case: O(n\log n) for encoding and decoding, and o(\exp(-n^{1/2-\epsilon})) for block error probability, where n is the block length. Comment: 12 pages. Submitted to the IEEE Transactions on Information Theory.
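
    The O(n\log n) encoding complexity quoted above comes from the recursive structure of the Arikan transform that each user applies to its own input block; a minimal sketch of my own (binary case, XOR as addition over F_2, outputs in one of the standard bit-reversed orderings):

        def polar_encode(u):
            """Apply the Arikan G_2 transform recursively to a bit list u of
            length 2^k (outputs in one of the standard bit-reversed orderings)."""
            n = len(u)
            if n == 1:
                return list(u)
            half = n // 2
            # pairwise combine, then encode the two halves recursively
            top = polar_encode([u[2 * i] ^ u[2 * i + 1] for i in range(half)])
            bottom = polar_encode([u[2 * i + 1] for i in range(half)])
            return top + bottom

        print(polar_encode([1, 0, 1, 1]))   # encodes a length-4 input

    The recurrence T(n) = 2T(n/2) + O(n) gives the O(n\log n) bound; successive-cancellation decoding reuses the same butterfly structure with the same complexity.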

    Polar Codes with exponentially small error at finite block length

    We show that the entire class of polar codes (up to a natural necessary condition) converges to capacity at block lengths polynomial in the gap to capacity, while simultaneously achieving failure probabilities that are exponentially small in the block length (i.e., decoding fails with probability \exp(-N^{\Omega(1)}) for codes of length N). Previously this combination was known only for one specific family within the class of polar codes, whereas we establish it whenever the polar code exhibits a condition necessary for any polarization. Our results adapt and strengthen a local analysis of polar codes due to the authors with Nakkiran and Rudra [Proc. STOC 2018]. Their analysis related the time-local behavior of a martingale to its global convergence, and this allowed them to prove that the broad class of polar codes converges to capacity at polynomial block lengths. Their analysis easily adapts to show exponentially small failure probabilities, provided the associated martingale, the "Arikan martingale", exhibits a correspondingly strong local effect. The main contribution of this work is a much stronger local analysis of the Arikan martingale, which leads to the general result claimed above. In addition to our general result, we also show, for the first time, polar codes that achieve failure probability \exp(-N^{\beta}) for any \beta < 1 while converging to capacity at block length polynomial in the gap to capacity. Finally, we show that the "local" approach can be combined with any analysis of the failure probability of an arbitrary polar code to get essentially the same failure probability while achieving block length polynomial in the gap to capacity. Comment: 17 pages. Appeared in RANDOM'1
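
    As a rough illustration of where \exp(-N^{\Omega(1)}) failure probabilities come from (my own sketch for the simplest special case, the binary erasure channel with kernel G_2, where the Bhattacharyya parameter evolves exactly as Z -> Z^2 on the plus branch and Z -> 2Z - Z^2 on the minus branch; the exponents \beta close to 1 in the paper require more than this toy setting):

        import numpy as np

        # Track l = ln Z for the BEC Bhattacharyya parameter under the G_2
        # recursion: the plus branch squares Z (l -> 2l), the minus branch maps
        # Z -> Z(2 - Z) (l -> l + ln(2 - exp(l))).  Working in log space avoids
        # underflow and makes the doubly-exponential decay on good paths visible.
        rng = np.random.default_rng(1)
        n, n_paths = 20, 100_000
        l = np.full(n_paths, np.log(0.5))          # start from Z = 1/2
        for _ in range(n):
            plus = rng.random(n_paths) < 0.5
            l = np.where(plus, 2 * l, l + np.log(2 - np.exp(l)))

        N = 2 ** n
        good = l < np.log(1e-9)                    # crude proxy for the "good" indices
        beta_hat = np.log2(-l[good] / np.log(2)) / np.log2(N)
        print("fraction of good indices:", good.mean())
        print("typical beta with Z ~ 2^(-N^beta):", np.median(beta_hat))

    On the good indices Z is squared on roughly half of the steps, so it falls to about 2^{-N^{\beta}} with \beta near 1/2 for this kernel; the block error probability of successive-cancellation decoding is at most the sum of the good-index Z values.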

    On the Finite Length Scaling of Ternary Polar Codes

    The polarization process of polar codes over a ternary alphabet is studied. Recently it has been shown that the blocklength of polar codes with prime alphabet size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a much lower degree polynomial can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime alphabet size. Comment: Submitted to ISIT 201