    Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices

    In this paper we establish the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP-fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors in which the zeros are replaced by $-1$. Since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Due to the cyclic property of the BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices (with elements in $\{0,1,-1\}$) that satisfy the RIP condition.
    Comment: The paper is accepted for publication in IEEE Transactions on Information Theory.
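Because the recovery guarantee here is coherence-based, a generic greedy pursuit suffices in the noiseless setting. The following minimal Python sketch illustrates the idea with orthogonal matching pursuit; a random $\pm 1$ matrix stands in for the BCH-derived bipolar matrix (an assumption for brevity), and the FFT speedup the paper obtains from the cyclic BCH structure is omitted.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        corr = Phi.T @ residual              # columns have equal norm: no rescaling
        j = int(np.argmax(np.abs(corr)))
        if j not in support:
            support.append(j)
        # Re-fit all active coefficients by least squares on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 64, 256, 4
Phi = rng.choice([-1.0, 1.0], size=(m, n))   # stand-in for the BCH-derived +/-1 matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = Phi @ x_true
print("recovered:", np.allclose(x_true, omp(Phi, y, k), atol=1e-6))
```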

    On the Complexity of Exact Maximum-Likelihood Decoding for Asymptotically Good Low Density Parity Check Codes: A New Perspective

    The problem of exact maximum-likelihood (ML) decoding of general linear codes is well known to be NP-hard. In this paper, we show that exact ML decoding of a class of asymptotically good low-density parity-check (LDPC) codes, namely expander codes, over binary symmetric channels (BSCs) is possible with average-case polynomial complexity. This offers a new way of looking at the complexity of exact ML decoding for communication systems in which the randomness of the channel plays a fundamental role. More precisely, for any bit-flipping probability $p$ in a nontrivial range, there exists a rate region of non-zero support and a family of asymptotically good codes that achieve an error probability decaying exponentially in the coding length $n$ while admitting exact ML decoding in average-case polynomial time. As $p$ approaches zero, this rate region approaches the Shannon capacity region. Similar results extend to AWGN channels, suggesting that it may be feasible to eliminate the error-floor phenomenon associated with belief-propagation decoding of LDPC codes in the high-SNR regime. The derivations are based on a hierarchy of ML certificate decoding algorithms that adapt to the channel realization. In this process, we propose a new, efficient $O(n^2)$ ML certificate algorithm based on max-flow. Moreover, exact ML decoding of the considered class of codes, constructed from LDPC codes with regular left degree, of which the considered expander codes are a special case, remains NP-hard, giving an interesting contrast between the worst-case and average-case complexities.
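For context, the classical decoding primitive for expander codes is Sipser-Spielman style bit flipping: repeatedly flip any bit for which a strict majority of its parity checks are unsatisfied. The sketch below illustrates only this primitive on a toy regular-left-degree parity-check matrix (a 5x5 array code, not a true expander); it is not the paper's max-flow certificate algorithm or its hierarchy.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Sipser-Spielman style bit flipping over GF(2).

    H: (m, n) 0/1 parity-check matrix with regular left (column) degree.
    y: length-n received word, possibly corrupted by a BSC.
    Flips every bit for which a strict majority of its checks fail.
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2                 # 1 where a check is violated
        if not syndrome.any():
            return x                         # x is a codeword
        unsat = H.T @ syndrome               # per-bit count of failed checks
        degree = H.sum(axis=0)               # per-bit number of checks
        flip = 2 * unsat > degree            # strict majority rule
        if not flip.any():
            break                            # stuck; a full decoder would escalate
        x = (x + flip.astype(int)) % 2
    return x

# Toy parity-check matrix: bits on a 5x5 grid, one check per row and column
# (left degree 2). This is an array code, NOT an expander; it only serves to
# show the flipping rule correcting a single BSC error.
n = 25
H = np.zeros((10, n), dtype=int)
for r in range(5):
    for c in range(5):
        H[r, 5 * r + c] = 1                  # row check
        H[5 + c, 5 * r + c] = 1              # column check
received = np.zeros(n, dtype=int)            # all-zero codeword sent
received[7] ^= 1                             # one channel flip
print("corrected:", not bit_flip_decode(H, received).any())
```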

    Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping

    In this paper, we provide for the first time a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. As the blocklength decreases, however, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths and over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM), enumerative sphere shaping (ESS), and shell mapping (SM) are reviewed as energy-efficient shaping techniques. Numerical results show that MPDM and SpSh have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize energy efficiency, is shown to have the minimum rate loss among all. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we discuss the complexity of these algorithms from the perspective of latency, storage, and computation.
    Comment: 18 pages, 10 figures.
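Among these techniques, ESS has a particularly compact core: count, via a cached dynamic program, the amplitude sequences whose total energy stays under a cap, then convert an integer index to a sequence by lexicographic enumerative indexing (and back, for deshaping). The Python sketch below is a bare-bones illustration under assumed parameters (8-ASK amplitudes {1, 3, 5, 7} and a hand-picked energy cap); it is not the paper's optimized implementation.

```python
from functools import lru_cache

AMPLITUDES = (1, 3, 5, 7)                  # 8-ASK magnitudes (assumed alphabet)

@lru_cache(maxsize=None)
def count(n, e):
    """Number of length-n amplitude sequences with total energy <= e."""
    if e < 0:
        return 0
    if n == 0:
        return 1
    return sum(count(n - 1, e - a * a) for a in AMPLITUDES)

def index_to_sequence(index, length, energy_budget):
    """Enumerative (lexicographic) mapping from an integer to a sequence."""
    seq = []
    for pos in range(length):
        for a in AMPLITUDES:               # try amplitudes in lexicographic order
            block = count(length - pos - 1, energy_budget - a * a)
            if index < block:
                seq.append(a)
                energy_budget -= a * a
                break
            index -= block                 # skip past all sequences starting with a
    return seq

def sequence_to_index(seq, energy_budget):
    """Inverse mapping (the deshaping direction)."""
    index = 0
    for pos, a in enumerate(seq):
        for b in AMPLITUDES:
            if b == a:
                break
            index += count(len(seq) - pos - 1, energy_budget - b * b)
        energy_budget -= a * a
    return index

N, E_MAX = 8, 100                          # blocklength and energy cap (assumed)
total = count(N, E_MAX)                    # size of the shaping codebook
idx = 1234 % total
seq = index_to_sequence(idx, N, E_MAX)
print(seq, "round-trips:", sequence_to_index(seq, E_MAX) == idx)
```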

    Construction of Polar Codes with Sublinear Complexity

    Consider the problem of constructing a polar code of block length $N$ for transmission over a given channel $W$. Typically, this requires computing the reliability of all $N$ synthetic channels and then including those that are sufficiently reliable. However, we know from [1], [2] that there is a partial order among the synthetic channels. Hence, it is natural to ask whether we can exploit it to reduce the computational burden of the construction problem. We show that, by taking advantage of the partial order [1], [2], we can construct a polar code by computing the reliability of roughly a fraction $1/\log^{3/2} N$ of the synthetic channels. In particular, we prove that $N/\log^{3/2} N$ is a lower bound on the number of synthetic channels to be considered and that this bound is tight up to a multiplicative factor $\log\log N$. This set of roughly $N/\log^{3/2} N$ synthetic channels is universal, in the sense that it allows one to construct polar codes for any $W$, and it can be identified by solving a maximum matching problem on a bipartite graph. Our proof technique consists of reducing the construction problem to that of computing the maximum cardinality of an antichain for a suitable partially ordered set. As such, the method is general, and it can be used to further improve the complexity of the construction problem in case a new partial order on the synthetic channels of polar codes is discovered.
    Comment: 9 pages, 3 figures, presented at ISIT'17 and submitted to IEEE Trans. Inform. Theory.
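The matching step mentioned above is an instance of Dilworth's theorem: the maximum antichain of a finite poset equals its minimum chain cover, which in turn is $n$ minus a maximum matching in the bipartite graph joining each element to every element strictly above it. The sketch below computes this for a toy poset (divisibility on {1, ..., 12}); the actual poset in the paper is the partial order on synthetic channels from [1], [2].

```python
def max_antichain_size(n, less_than):
    """Maximum antichain size of a poset on {0..n-1} via Dilworth's theorem.

    less_than(a, b) must implement the strict order a < b (transitively closed).
    Min chain cover = n - max matching in the bipartite comparability graph;
    by Dilworth's theorem, that also equals the maximum antichain size.
    """
    above = [[b for b in range(n) if less_than(a, b)] for a in range(n)]
    match_right = [-1] * n        # match_right[b] = left vertex chained into b

    def augment(a, seen):
        # Kuhn's augmenting-path search for an unmatched right vertex.
        for b in above[a]:
            if b in seen:
                continue
            seen.add(b)
            if match_right[b] == -1 or augment(match_right[b], seen):
                match_right[b] = a
                return True
        return False

    matching = sum(augment(a, set()) for a in range(n))
    return n - matching

# Toy poset: divisibility on {1..12} (element i represents the integer i + 1).
def divides(a, b):
    return a != b and (b + 1) % (a + 1) == 0

print(max_antichain_size(12, divides))    # 6, e.g. the antichain {7,...,12}
```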