Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes
Reed-Muller (RM) codes are one of the oldest families of codes. Recently, a
recursive projection aggregation (RPA) decoder has been proposed, which
achieves a performance that is close to the maximum likelihood decoder for
short-length RM codes. One of its main drawbacks, however, is the large amount
of computations needed. In this paper, we devise a new algorithm to lower the
computational budget while keeping a performance close to that of the RPA
decoder. The proposed approach consists of multiple sparse RPAs that are
generated by performing only a selection of projections in each sparsified
decoder. In the end, a cyclic redundancy check (CRC) is used to decide between
output codewords. Simulation results show that our proposed approach reduces
the RPA decoder's computations, with negligible performance loss.
Comment: 6 pages, 12 figures
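The final CRC selection step among the sparse decoders' outputs can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the CRC-8 polynomial (0x07) and all function names are our own choices.

```python
def crc8(bits, poly=0x07):
    """Bitwise CRC-8 (init 0). Appending the 8 CRC bits, MSB first,
    to a message makes the CRC of the whole sequence zero."""
    reg = 0
    for b in bits:
        fb = ((reg >> 7) & 1) ^ b                      # feedback bit
        reg = ((reg << 1) & 0xFF) ^ (poly if fb else 0)
    return reg

def append_crc(msg_bits):
    """Attach the CRC-8 of the message, MSB first."""
    r = crc8(msg_bits)
    return msg_bits + [(r >> i) & 1 for i in range(7, -1, -1)]

def select_candidate(candidates):
    """Return the first decoder output whose CRC check passes,
    mimicking the selection among multiple sparse-RPA outputs."""
    for c in candidates:
        if crc8(c) == 0:
            return c
    return None                                        # no candidate survives
```

Since a CRC whose polynomial has more than one term detects every single-bit error, a corrupted candidate is rejected and an intact one is chosen.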
Confident decoding with GRAND
We establish that during the execution of any Guessing Random Additive Noise
Decoding (GRAND) algorithm, an interpretable, useful measure of decoding
confidence can be evaluated. This measure takes the form of a log-likelihood
ratio (LLR) of the hypotheses that, should a decoding be found by a given
query, the decoding is correct versus its being incorrect. That LLR can be used
as soft output for a range of applications and we demonstrate its utility by
showing that it can be used to confidently discard likely erroneous decodings
in favor of returning more readily managed erasures. As an application, we show
that this feature can be used to compromise the physical-layer security of
short-length wiretap codes by accurately and confidently revealing a proportion
of a communication when the code rate is above capacity.
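A minimal hard-input GRAND sketch illustrates the query process and the abandonment-to-erasure option the abstract builds on (the LLR confidence measure itself is not reproduced here). Assuming a BSC with crossover probability below 1/2, noise patterns are queried in increasing Hamming-weight order; the (7,4) Hamming code and all names are illustrative choices.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative; any
# linear code's H works). Membership test: H @ c mod 2 == 0.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def is_codeword(word):
    return not np.any(H @ word % 2)

def grand_decode(y, max_queries=2**7):
    """Guess noise patterns in decreasing-likelihood order (increasing
    Hamming weight on a BSC) and return the first guess whose removal
    yields a codeword, plus the number of queries made."""
    n = len(y)
    queries = 0
    for w in range(n + 1):                   # weight-0 pattern first
        for idx in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(idx)] = 1
            queries += 1
            cand = (y + e) % 2
            if is_codeword(cand):
                return cand, queries         # decoding found
            if queries >= max_queries:
                return None, queries         # abandon -> report an erasure
    return None, queries

# Flip one bit of the all-zero codeword; the fifth query succeeds.
y = np.zeros(7, dtype=int)
y[3] = 1
c_hat, n_queries = grand_decode(y)
```

Capping `max_queries` and returning `None` is the simplest form of the "discard likely erroneous decodings in favor of erasures" behavior described above.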
Bit flipping decoding for binary product codes
Error control coding has been used to mitigate the impact of noise on the wireless channel.
Today, wireless communication systems have in their design Forward Error Correction (FEC)
techniques to help reduce the amount of retransmitted data. When designing a coding scheme,
three challenges need to be addressed: the error-correcting capability of the code, the decoding
complexity of the code, and the delay introduced by the coding scheme. While it is easy to design
coding schemes with a large error-correcting capability, it is a challenge to find efficient decoding
algorithms for these coding schemes. Generally, increasing the length of a block code increases
its error correcting capability and its decoding complexity.
Product codes have been identified as a means to increase the block length of simpler codes,
yet keep their decoding complexity low. Bit flipping decoding has been identified as a
simple-to-implement decoding algorithm. Research has generally focused on improving bit flipping
decoding for Low-Density Parity-Check (LDPC) codes. In this study, we develop a new decoding
algorithm based on syndrome checking and bit flipping for binary product codes, to
address the major challenge of coding systems, i.e., developing codes with a large error
correcting capability yet a low decoding complexity. Simulated results show that the
proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P.
Elias in BER and, more significantly, in WER performance. The algorithm offers complexity
comparable to that of the conventional algorithm over the Rayleigh fading channel.
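The syndrome-checking-plus-bit-flipping idea can be sketched generically. The loop below is the classic Gallager-style bit-flipping decoder on a small illustrative parity-check matrix; it is not the authors' product-code algorithm (which exploits the row and column component codes), and the matrix H is purely for demonstration.

```python
import numpy as np

# Small illustrative parity-check matrix (4 checks, 6 bits).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(y, max_iters=20):
    """Gallager-style bit flipping: while the syndrome is nonzero,
    flip the bit(s) involved in the most unsatisfied checks."""
    c = y.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                              # all checks satisfied
        # number of unsatisfied checks each bit participates in
        votes = H.T @ syndrome
        c = c ^ (votes == votes.max()).astype(int)  # flip the worst bit(s)
    return c                                      # give up after max_iters

# Correct a single flipped bit in the codeword (1, 1, 0, 0, 1, 0):
decoded = bit_flip_decode(np.array([0, 1, 0, 0, 1, 0]))
```

Syndrome checking gives the stopping rule for free: the loop exits as soon as every parity check is satisfied, which is the low-complexity property the abstract emphasizes.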
Soft-Decoding-Based Strategies for Relay and Interference Channels: Analysis and Achievable Rates Using LDPC Codes
We provide a rigorous mathematical analysis of two communication strategies:
soft decode-and-forward (soft-DF) for relay channels, and soft partial
interference-cancelation (soft-IC) for interference channels. Both strategies
involve soft estimation, which assists the decoding process. We consider LDPC
codes, not because of their practical benefits, but because of their analytic
tractability, which enables an asymptotic analysis similar to random coding
methods of information theory. Unlike some works on the closely-related
demodulate-and-forward, we assume non-memoryless, code-structure-aware
estimation. With soft-DF, we develop {\it simultaneous density evolution} to
bound the decoding error probability at the destination. This result applies to
erasure relay channels. In one variant of soft-DF, the relay applies Wyner-Ziv
coding to enhance its communication with the destination, borrowing from
compress-and-forward. To analyze soft-IC, we adapt existing techniques for
iterative multiuser detection, and focus on binary-input additive white
Gaussian noise (BIAWGN) interference channels. We prove that optimal
point-to-point codes are unsuitable for soft-IC, as well as for all strategies
that apply partial decoding to improve upon single-user detection (SUD) and
multiuser detection (MUD), including Han-Kobayashi (HK).
Comment: Accepted to the IEEE Transactions on Information Theory. This is a
major revision of a paper originally submitted in August 201
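For the erasure channels considered, density evolution reduces to a one-dimensional recursion. As a stand-alone illustration (the paper's simultaneous density evolution for the relay setting is more involved), the sketch below iterates the standard BEC recursion x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1) for a (dv, dc)-regular LDPC ensemble and bisects for the BP threshold; all names here are our own.

```python
def de_erasure(eps, dv=3, dc=6, iters=2000):
    """Iterate the BEC density-evolution recursion for a (dv, dc)-regular
    LDPC ensemble; returns the limiting erasure fraction of an edge message."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bp_threshold(dv=3, dc=6, tol=1e-4):
    """Bisect for the largest channel erasure rate eps at which the
    recursion is driven to (numerically) zero."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if de_erasure(mid, dv, dc) < 1e-10:
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6)-regular ensemble this numerical procedure lands close to the known BP threshold of about 0.4294, slightly below it because of the finite iteration count near the threshold.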
Advanced channel coding techniques using bit-level soft information
In this dissertation, advanced channel decoding techniques based on bit-level soft information are studied. Two main approaches are proposed: bit-level probabilistic iterative decoding and bit-level algebraic soft-decision (list) decoding (ASD).
In the first part of the dissertation, we first study iterative decoding for high density parity check (HDPC) codes. We propose an iterative decoding algorithm that uses the sum-product algorithm (SPA) in conjunction with a binary parity check matrix adapted in each decoding iteration according to the bit-level reliabilities. In contrast to the common belief that iterative decoding is not suitable for HDPC codes, this bit-level reliability-based adaptation procedure is critical to the convergence behavior of iterative decoding for HDPC codes, and it significantly improves the iterative decoding performance of Reed-Solomon (RS) codes, whose parity check matrices are in general not sparse. We also present another iterative decoding scheme for cyclic codes that randomly shifts the bit-level reliability values in each iteration. The random-shift-based adaptation can also prevent iterative decoding from getting stuck, with a significant complexity reduction compared with the reliability-based parity check matrix adaptation, and it still provides reasonably good performance for short-length cyclic codes.
In the second part of the dissertation, we investigate ASD for RS codes using bit-level soft information. In particular, we show that by carefully incorporating bit-level soft information in the multiplicity assignment and the interpolation step, ASD can significantly outperform conventional hard decision decoding (HDD) for RS codes at a very small complexity cost, even though the kernel of ASD operates at the symbol level. More importantly, the performance of the proposed bit-level ASD can be tightly upper bounded for practical high-rate RS codes, which is in general not possible for other popular ASD schemes.
Bit-level soft-decision decoding (SDD) serves as an efficient way to exploit the potential gain of many classical codes, and it also facilitates the corresponding performance analysis. The proposed bit-level SDD schemes are promising, feasible alternatives to conventional symbol-level HDD schemes in many communication systems.
Viterbi algorithm in continuous-phase frequency shift keying
The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing. It is used to detect signals in communication channels with memory, and to decode sequential error-control codes that enhance the performance of digital communication systems. The Viterbi algorithm is also used in speech and character recognition tasks, where the speech signals or characters are modeled by hidden Markov models. This project explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition. It also examines the operations and the practical memory requirements needed to implement the Viterbi algorithm in real time. A forward error correction technique known as convolutional coding with Viterbi decoding was explored. In this project, a behavioral model of the basic Viterbi decoder was built and simulated; the convolutional encoder, BPSK modulator, and AWGN channel were implemented in MATLAB, and the BER was measured to evaluate decoding performance. The theory of the Viterbi algorithm is introduced in the context of convolutional coding, and its application to Continuous-Phase Frequency Shift Keying (CPFSK) is presented; the performance is analyzed and compared with that of the conventional coherent estimator. The main objective of this thesis is to implement an RTL-level model of the Viterbi decoder, comprising the branch metric block, the add-compare-select block, the trace-back block, the decoding block, and the next-state block. This work provides a deeper understanding of the Viterbi decoding algorithm.
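A minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal) can be sketched as follows. This is a generic software illustration, not the thesis's RTL model; the specific code and all names are our choices.

```python
G = (0b111, 0b101)   # generator polynomials, constraint length 3

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; state = two most recent past bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # register [b, s1, s0]
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1                             # shift in the new bit
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: keep the minimum-Hamming-metric survivor
    path into each of the four trellis states."""
    INF = float("inf")
    pm = [0, INF, INF, INF]                          # path metrics per state
    paths = [[], [], [], []]                         # survivor input bits
    for t in range(n_bits):
        r = received[2 * t : 2 * t + 2]
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue                             # state not yet reachable
            for b in (0, 1):                         # branch for input bit b
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1                        # next state
                metric = pm[s] + sum(o != x for o, x in zip(out, r))
                if metric < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = metric, paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(4), key=lambda s: pm[s])        # trace back the winner
    return paths[best]
```

Encoding [1, 0, 1, 1, 0] and flipping one early coded bit, the decoder still returns the original message, consistent with this code's free distance of 5.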
Cryptography based on the Hardness of Decoding
This thesis provides progress in the fields of lattice-based and code-based cryptography. The first contribution consists of constructions of IND-CCA2 secure public key cryptosystems from both the McEliece assumption and the low-noise learning parity with noise (LPN) assumption. The second contribution is a novel instantiation of the lattice-based learning with errors (LWE) problem that uses uniform errors.
On the Guruswami-Sudan list decoding algorithm over finite rings
This thesis studies the algorithmic techniques of list decoding, first proposed by Guruswami and Sudan in 1998, in the context of Reed-Solomon codes over finite rings. Two approaches are considered. First, we adapt the Guruswami-Sudan (GS) list decoding algorithm to generalized Reed-Solomon (GRS) codes over finite rings with identity. We study in detail the complexity of the algorithms for GRS codes over Galois rings and truncated power series rings. Then we explore more deeply a lifting technique for list decoding. We show that the latter technique is able to correct more error patterns than the original GS list decoding algorithm. We apply the technique to GRS codes over Galois rings and truncated power series rings and show that the algorithms derived from this technique have a lower complexity than the original GS algorithm. We also show that the technique can easily be adapted to interleaved Reed-Solomon codes. Finally, we present a complete implementation in C and C++ of the list decoding algorithms studied in this thesis. All the needed subroutines, such as univariate polynomial root finding and finite field and ring arithmetic, are also presented. Independently, this manuscript contains other work produced during the thesis. We study quasi-cyclic codes in detail and show that they are in one-to-one correspondence with left principal ideals of a certain matrix ring. We then adapt the GS framework for ideal-based codes to number field codes and provide a list decoding algorithm for the latter.
Applications of Coding Theory to Massive Multiple Access and Big Data Problems
The broad theme of this dissertation is the design of schemes that admit
low-complexity iterative algorithms for some new problems arising in massive
multiple access and big data. Although bipartite Tanner graphs and low-complexity
iterative algorithms such as peeling and message passing decoders are very popular
in the channel coding literature, they are not as widely used in these areas
of study, and this dissertation serves as an important step toward bridging
that gap. The contributions of this dissertation can be categorized into the following
three parts.
In the first part of this dissertation, a timely and interesting multiple access
problem for a massive number of uncoordinated devices is considered wherein the
base station is interested only in recovering the list of messages without regard to the
identity of the respective sources. A coding scheme with polynomial encoding and
decoding complexities is proposed for this problem, the two main features of which
are (i) design of a close-to-optimal coding scheme for the T-user Gaussian multiple
access channel and (ii) a successive interference cancellation decoder. The proposed
coding scheme not only improves on the performance of the previously best known
coding scheme by ≈ 13 dB but is only ≈ 6 dB away from the random Gaussian
coding information rate.
In the second part, Construction-D lattices are constructed in which the underlying
linear codes are nested binary spatially coupled low-density parity-check
(SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed
lattices achieve the Poltyrev limit under multistage belief propagation decoding.
Leveraging this result, lattice codes constructed from these lattices are applied to the
three-user symmetric interference channel. For channel gains within 0.39 dB of
the very strong interference regime, the proposed lattice coding scheme with the
iterative belief propagation decoder, at target error rates of ≈ 10^-5, is only 2.6 dB
away from the Shannon limit.
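Construction D builds a lattice from a chain of nested binary codes; its single-level special case, Construction A (Λ = C + 2Z^n), already conveys the idea and admits a simple coset-enumeration decoder. The sketch below uses the length-3 repetition code purely for illustration; it is not the dissertation's SC-LDPC construction or its multistage BP decoder.

```python
import numpy as np

# Construction A over the length-3 binary repetition code C = {000, 111}:
# lattice points are x = c + 2z with c in C and z in Z^3.
CODEWORDS = [np.array([0, 0, 0]), np.array([1, 1, 1])]

def decode_lattice(y):
    """Nearest lattice point by enumerating the two cosets: for each
    codeword c, the closest point in c + 2Z^3 is c + 2*round((y - c)/2)."""
    best, best_d = None, np.inf
    for c in CODEWORDS:
        cand = c + 2 * np.round((y - c) / 2)
        d = np.sum((y - cand) ** 2)      # squared Euclidean distance
        if d < best_d:
            best, best_d = cand, d
    return best

# Example: the point nearest to (1.9, 2.1, 3.2) lies in the coset of 000.
x_hat = decode_lattice(np.array([1.9, 2.1, 3.2]))   # -> [2., 2., 4.]
```

Replacing the brute-force coset enumeration with a soft decoder for each nested code level is exactly where a multistage BP decoder would enter for Construction D.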
The third part focuses on support recovery in compressed sensing and the nonadaptive
group testing (GT) problems. Prior to this work, sensing schemes based on
left-regular sparse bipartite graphs, with iterative recovery algorithms based on the
peeling decoder, were proposed for these problems. These schemes require O(K log N)
and Ω(K log K log N) measurements, respectively, to recover the sparse signal with
high probability (w.h.p.), where N and K denote the dimension and sparsity of the
signal, respectively (K ≪ N). Also, the number of measurements required to recover
at least a (1 − ε) fraction of the defective items w.h.p. (approximate GT) is shown to be
c_ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing
schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown
that this design strategy achieves superior and sharper results. For the
support recovery problem, the number of measurements is reduced to the optimal
lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme
requires only c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme
requires O(K log K log(N/K)) measurements, which is only a log K factor away from the
best known lower bound of Ω(K log(N/K)). Apart from the asymptotic regime, the proposed
schemes also demonstrate a significant improvement in the required number of
measurements for finite values of K and N.
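The peeling-style recovery used for group testing can be sketched as follows: negative tests clear every item they contain, and a positive test left with a single remaining candidate identifies a defective. This is a generic definite-defectives-style illustration on a tiny hand-built bipartite graph, not the dissertation's left-and-right-regular construction or its exact analysis; all names are our own.

```python
def run_tests(tests, defectives):
    """Nonadaptive OR-channel group tests: a test (a set of item indices)
    is positive iff it contains at least one defective item."""
    return [bool(t & defectives) for t in tests]

def peel(tests, outcomes, n_items):
    """Clear items in negative tests, then repeatedly resolve positive
    tests that have a single remaining candidate ("peel" it off)."""
    candidates = set(range(n_items))
    for t, positive in zip(tests, outcomes):
        if not positive:
            candidates -= t        # every item in a negative test is clean
    defective = set()
    changed = True
    while changed:
        changed = False
        for t, positive in zip(tests, outcomes):
            if positive:
                remaining = t & candidates
                if len(remaining) == 1 and not remaining <= defective:
                    defective |= remaining
                    changed = True
    return defective

# Toy instance: 5 items, item 2 defective, 4 pooled tests.
tests = [{0, 1}, {1, 2, 3}, {3, 4}, {2, 4}]
found = peel(tests, run_tests(tests, {2}), 5)   # -> {2}
```

Because a defective item can never appear in a negative test, this decoder produces no false positives; the measurement-rate question the dissertation studies is how many such tests are needed for full (or approximate) recovery w.h.p.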