
    LPN Decoded

    We propose new algorithms with small memory consumption for the Learning Parity with Noise (LPN) problem, both classically and quantumly. Our goal is to predict the hardness of LPN depending on both parameters, its dimension k and its noise rate τ, as accurately as possible both in theory and practice. Therefore, we analyze our algorithms asymptotically, run experiments on medium-size parameters and provide bit complexity predictions for large parameters. Our new algorithms are modifications and extensions of the simple Gaussian elimination algorithm with recent advanced techniques for decoding random linear codes. Moreover, we enhance our algorithms with the dimension reduction technique of Blum, Kalai and Wasserman. This results in a hybrid algorithm that is capable of achieving the best currently known run time for any fixed amount of memory. On the asymptotic side, we achieve significant improvements for the run time exponents, both classically and quantumly. To the best of our knowledge, we provide the first quantum algorithms for LPN. Due to the small memory consumption of our algorithms, we are able to solve for the first time LPN instances of medium size, e.g. with k = 243, τ = 1/8, in only 15 days on 64 threads. Our algorithms result in bit complexity predictions that require relatively large k for small τ. For instance, for small-noise LPN with τ = 1/√k, we predict 80-bit classical and only 64-bit quantum security for k ≥ 2048. For the common cryptographic choice k = 512, τ = 1/8, with limited memory we achieve 97-bit classical and 70-bit quantum security.
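    The core of the Gaussian elimination approach can be sketched as follows: draw k samples, assume they happen to be noise-free, solve the resulting linear system over GF(2) for the secret, and test the candidate against fresh samples; repeat until the test passes. A minimal Python/NumPy sketch under these assumptions (toy parameters and helper names are illustrative, not taken from the paper):

    ```python
    import numpy as np

    # Toy LPN oracle: a <- {0,1}^k uniform, label = <a, s> + e over GF(2),
    # where e = 1 with probability tau.  Parameters are illustrative only.
    rng = np.random.default_rng(0)
    k, tau = 16, 0.125
    secret = rng.integers(0, 2, k)

    def lpn_samples(n):
        A = rng.integers(0, 2, (n, k))
        e = (rng.random(n) < tau).astype(int)
        return A, (A @ secret + e) % 2

    def solve_gf2(A, b):
        """Gaussian elimination over GF(2); returns a solution of A x = b, or None if singular."""
        M = np.concatenate([A % 2, (b % 2)[:, None]], axis=1)
        rows, cols = A.shape
        row = 0
        for col in range(cols):
            piv = next((r for r in range(row, rows) if M[r, col]), None)
            if piv is None:
                return None
            M[[row, piv]] = M[[piv, row]]
            for r in range(rows):
                if r != row and M[r, col]:
                    M[r] ^= M[row]
            row += 1
        return M[:cols, -1]

    # Gauss-style attack: guess that k fresh samples are error-free, solve for the
    # secret, and accept the candidate if its error rate on a test set looks like tau.
    A_test, b_test = lpn_samples(200)
    while True:
        A, b = lpn_samples(k)
        cand = solve_gf2(A, b)
        if cand is not None and ((A_test @ cand + b_test) % 2).mean() < (tau + 0.5) / 2:
            break
    print("recovered secret:", np.array_equal(cand, secret))
    ```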

    A CCA2 Secure Variant of the McEliece Cryptosystem

    The McEliece public-key encryption scheme has become an interesting alternative to cryptosystems based on number-theoretical problems. Differently from RSA and ElGamal, the McEliece PKC is not known to be broken by a quantum computer. Moreover, even though the McEliece PKC has a relatively big key size, encryption and decryption operations are rather efficient. In spite of all the recent results in coding theory based cryptosystems, to date there are no constructions secure against chosen ciphertext attacks in the standard model - the de facto security notion for public-key cryptosystems. In this work, we show the first construction of a McEliece-based public-key cryptosystem secure against chosen ciphertext attacks in the standard model. Our construction is inspired by a recently proposed technique by Rosen and Segev.
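    For context, basic McEliece encryption hides the message's codeword behind a random low-weight error vector that only the holder of the private decoding algorithm can remove. A toy sketch of the encryption operation only (the random matrix stands in for the scrambled Goppa-code generator of the real scheme, and all parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    k, n, t = 8, 16, 2   # toy sizes; practical McEliece uses far larger n and t

    # Stand-in public key: in the real scheme this is the scrambled generator
    # matrix of a binary Goppa code, not a random matrix.
    G_pub = rng.integers(0, 2, (k, n))

    def encrypt(m):
        """c = m * G_pub + e over GF(2), with a random error vector of weight t."""
        e = np.zeros(n, dtype=int)
        e[rng.choice(n, size=t, replace=False)] = 1
        return (m @ G_pub + e) % 2

    m = rng.integers(0, 2, k)
    print(encrypt(m))
    ```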

    MIMO-UFMC Transceiver Schemes for Millimeter Wave Wireless Communications

    The UFMC modulation is among the most considered solutions for the realization of beyond-OFDM air interfaces for future wireless networks. This paper focuses on the design and analysis of a UFMC transceiver equipped with multiple antennas and operating at millimeter wave carrier frequencies. The paper provides the full mathematical model of a MIMO-UFMC transceiver, taking into account the presence of hybrid analog/digital beamformers at both ends of the communication links. Then, several detection structures are proposed, both for the case of single-packet isolated transmission and for the case of multiple-packet continuous transmission. In the latter situation, the paper also considers the case in which no guard time among adjacent packets is inserted, trading an increased level of interference for higher values of spectral efficiency. At the analysis stage, the considered detection structures and transmission schemes are compared in terms of bit error rate, root-mean-square error, and system throughput. The numerical results show that the proposed transceiver algorithms are effective and that the linear MMSE data detector is capable of managing well the increased interference brought by the removal of guard times among consecutive packets, thus yielding throughput gains of about 10-13%. The effect of phase noise at the receiver is also numerically assessed, and it is shown that the recursive implementation of the linear MMSE detector exhibits some degree of robustness against this disturbance.
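    As a rough illustration of the linear MMSE data detection mentioned above: for a received block y = Hx + n with noise variance sigma^2, the estimate is x_hat = (H^H H + sigma^2 I)^{-1} H^H y. A minimal NumPy sketch under a plain flat-fading model (dimensions and channel model are illustrative and omit the paper's UFMC filtering and hybrid beamforming):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_tx, n_rx, sigma2 = 4, 8, 0.1   # illustrative dimensions and noise variance

    # Narrowband Rayleigh stand-in channel and unit-energy QPSK symbols.
    H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_tx) / np.sqrt(2)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
    y = H @ x + noise

    # Linear MMSE detector: x_hat = (H^H H + sigma2 I)^{-1} H^H y
    x_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T @ y)
    print(np.round(x_hat, 2))
    ```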

    Reinforcement-based data transmission in temporally-correlated fading channels: Partial CSIT scenario

    Reinforcement algorithms refer to schemes in which the results of previous trials and a reward-punishment rule are used to set the parameters of the next steps. In this paper, we use the concept of reinforcement algorithms to develop different data transmission models in wireless networks. Considering temporally-correlated fading channels, the results are presented for the cases with partial channel state information at the transmitter (CSIT). As demonstrated, the implementation of reinforcement algorithms improves the performance of communication setups remarkably, with the same feedback load/complexity as in state-of-the-art schemes.
    Comment: Accepted for publication in ISWCS 201
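    A toy sketch of one possible reward-punishment rule for rate adaptation over a temporally-correlated fading channel (an illustrative scheme, not the one developed in the paper): raise the transmission rate after a successful slot, back off after an outage.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    rho, snr_db, T = 0.95, 10.0, 10_000   # fading correlation, average SNR, number of slots
    up, down = 0.1, 0.5                   # reward / punishment step sizes (illustrative)

    h = rng.standard_normal() + 1j * rng.standard_normal()
    rate, throughput = 1.0, 0.0
    for _ in range(T):
        # First-order Gauss-Markov model for temporally-correlated Rayleigh fading.
        w = (rng.standard_normal() + 1j * rng.standard_normal()) * np.sqrt(1 - rho**2)
        h = rho * h + w
        capacity = np.log2(1 + 10 ** (snr_db / 10) * abs(h) ** 2 / 2)
        if rate <= capacity:          # success: reward by probing a slightly higher rate
            throughput += rate
            rate += up
        else:                         # outage: punish by backing off
            rate = max(rate - down, 0.1)
    print("average throughput (bits/s/Hz):", throughput / T)
    ```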

    On the Capacity of the Finite Field Counterparts of Wireless Interference Networks

    This work explores how degrees of freedom (DoF) results from wireless networks can be translated into capacity results for their finite field counterparts that arise in network coding applications. The main insight is that scalar (SISO) finite field channels over F_{p^n} are analogous to n x n vector (MIMO) channels in the wireless setting, but with an important distinction -- there is additional structure due to finite field arithmetic which enforces commutativity of matrix multiplication and limits the channel diversity to n, making these channels similar to diagonal channels in the wireless setting. Within the limits imposed by the channel structure, the DoF-optimal precoding solutions for wireless networks can be translated into capacity-optimal solutions for their finite field counterparts. This is shown through the study of the 2-user X channel and the 3-user interference channel. Besides bringing insights from wireless networks into network coding applications, the study of finite field networks over F_{p^n} also touches upon important open problems in wireless networks (finite SNR, finite diversity scenarios) through interesting parallels between p and SNR, and between n and diversity.
    Comment: Full version of paper accepted for presentation at ISIT 201
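    The scalar-versus-MIMO analogy can be made concrete: multiplying by a fixed element of F_{p^n} is an F_p-linear map, i.e. an n x n matrix over F_p, but only matrices from this commutative family can occur, unlike in a generic MIMO channel. A small sketch for F_4 = F_2[x]/(x^2 + x + 1) (the coefficient representation chosen here is just one standard option, used for illustration):

    ```python
    import numpy as np

    # F_4 = F_2[x] / (x^2 + x + 1); elements a0 + a1*x stored as bit pairs (a0, a1).
    def mul_f4(a, b):
        a0, a1 = a
        b0, b1 = b
        # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2,  with x^2 = x + 1
        c0 = (a0 * b0 + a1 * b1) % 2
        c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2
        return (c0, c1)

    def as_matrix(a):
        """The 2x2 matrix over F_2 representing 'multiply by a' on coefficient vectors."""
        cols = [mul_f4(a, (1, 0)), mul_f4(a, (0, 1))]
        return np.array(cols).T

    x = (0, 1)                      # the element 'x' of F_4
    print(as_matrix(x))             # a 2x2 "MIMO channel" over F_2
    print(as_matrix(mul_f4(x, x)))  # matrices of field elements commute, unlike generic MIMO
    ```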

    Efficient decoder design for error correcting codes

    Error correcting codes (ECC) are widely used in applications to correct errors in data transmission over unreliable or noisy communication channels. Recently, two kinds of promising codes have attracted a lot of research interest because they provide excellent error correction performance. One is non-binary LDPC codes, and the other is polar codes. This dissertation focuses on efficient decoding algorithms and decoder design for these two types of codes.

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. We propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.

    Polar codes are of great interest because they provably achieve the symmetric capacity of discrete memoryless channels with arbitrary input alphabet sizes via an explicit construction. Most existing decoding algorithms for polar codes are based on bit-wise hard or soft decisions. We propose symbol-decision successive cancellation (SC) and successive cancellation list (SCL) decoders for polar codes, which use symbol-wise hard or soft decisions for higher throughput or better error performance. We then propose a recursive channel combination to calculate symbol-wise channel transition probabilities, which lead to symbol decisions. Our proposed recursive channel combination has lower complexity than simply combining bit-wise channel transition probabilities. The similarity between our proposed method and Arıkan's channel transformations also helps to share hardware resources between calculating bit- and symbol-wise channel transition probabilities. To reduce the complexity of the list pruning, a two-stage list pruning network is proposed to provide a trade-off between the error performance and the complexity of the symbol-decision SCL decoder. Since memory is a significant part of SCL decoders, we also propose a pre-computation memory-saving technique to reduce the memory requirement of an SCL decoder.

    To further reduce the complexity of the recursive channel combination, we propose an approximate ML (AML) decoding unit for SCL decoders. In particular, we investigate the distribution of frozen bits of polar codes designed for both the binary erasure and additive white Gaussian noise channels, and take advantage of the distribution to reduce the complexity of the AML decoding unit, improving the throughput-area efficiency of SCL decoders.

    Furthermore, to adapt to the variable throughput or latency requirements that exist widely in current communication applications, a multi-mode SCL decoder with variable list sizes and parallelism is proposed. If high throughput or small latency is required, the decoder decodes multiple received words in parallel with a small list size. However, if error performance is of higher priority, the multi-mode decoder switches to a serial mode with a bigger list size. Therefore, the multi-mode SCL decoder provides a flexible trade-off between latency, throughput and error performance at the expense of a small overhead.
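    To make the polar coding background concrete, the encoder recursively applies Arıkan's 2x2 kernel (u1, u2) -> (u1 + u2, u2) over GF(2), and the decoder estimates the inputs successively with the information bits placed on the most reliable synthesized channels. A short sketch of the standard encoder transform (a textbook construction with an illustrative frozen-bit pattern, not the dissertation's symbol-decision decoder):

    ```python
    import numpy as np

    def polar_transform(u):
        """Arikan's polar transform via the recursive butterfly (output in bit-reversed order)."""
        u = np.asarray(u) % 2
        if len(u) == 1:
            return u
        top = polar_transform((u[0::2] + u[1::2]) % 2)   # combine pairs: u1 + u2
        bot = polar_transform(u[1::2])                   # pass through:  u2
        return np.concatenate([top, bot])

    # Example with N = 8: place information bits on the reliable positions, freeze the rest to 0.
    N = 8
    frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0])          # illustrative frozen-bit pattern
    u = np.zeros(N, dtype=int)
    u[frozen == 0] = [1, 0, 1, 1]                        # 4 information bits
    print(polar_transform(u))
    ```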