27 research outputs found
Noisy Gradient Descent Bit-Flip Decoding for LDPC Codes
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for
decoding Low Density Parity Check (LDPC) codes on the binary-input additive
white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF),
introduces a random perturbation into each symbol metric at each iteration. The
noise perturbation allows the algorithm to escape from undesirable local
maxima, resulting in improved performance. A combination of heuristic
improvements to the algorithm are proposed and evaluated. When the proposed
heuristics are applied, NGDBF performs better than any previously reported GDBF
variant, and comes within 0.5 dB of the belief propagation algorithm for
several tested codes. Unlike other previous GDBF algorithms that provide an
escape from local maxima, the proposed algorithm uses only local, fully
parallelizable operations and does not require computing a global objective
function or a sort over symbol metrics, making it highly efficient in
comparison. The proposed NGDBF algorithm requires channel state information
which must be obtained from a signal to noise ratio (SNR) estimator.
Architectural details are presented for implementing the NGDBF algorithm.
Complexity analysis and optimizations are also discussed. Comment: 16 pages, 22 figures, 2 tables
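The update rule described above — a per-symbol inversion metric perturbed by independent noise at each iteration — can be sketched in a few lines of NumPy. The threshold theta and noise scale below are illustrative placeholders rather than the paper's tuned values, and the single-threshold multi-bit flip rule is only one of the NGDBF variants discussed:

```python
import numpy as np

def ngdbf_decode(H, y, sigma, theta=-0.6, max_iters=100, rng=None):
    """Sketch of Noisy Gradient Descent Bit-Flipping (NGDBF).

    H: (m, n) binary parity-check matrix; y: received values for BPSK
    over AWGN; sigma: std. dev. of the decoder's internal perturbation;
    theta: flip threshold (code-dependent; the default is illustrative).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.where(y >= 0, 1.0, -1.0)              # hard decisions in {-1,+1}
    for _ in range(max_iters):
        # bipolar syndrome: product of the bits participating in each check
        syndrome = np.prod(np.where(H == 1, x[None, :], 1.0), axis=1)
        if np.all(syndrome > 0):                 # all parity checks satisfied
            break
        # inversion metric: channel agreement plus sum of adjacent syndromes
        inv = x * y + H.T @ syndrome
        # NGDBF step: perturb each metric with i.i.d. Gaussian noise,
        # then flip every bit whose perturbed metric falls below theta
        flip = inv + rng.normal(0.0, sigma, size=x.shape) < theta
        x = np.where(flip, -x, x)
    return (x < 0).astype(int)                   # back to {0, 1} bits
```

With sigma = 0 this degenerates to a plain multi-bit GDBF decoder; the perturbation is what lets the search escape the undesirable local maxima mentioned above.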
An improvement and a fast DSP implementation of the bit flipping algorithms for low density parity check decoder
For low density parity check (LDPC) decoding, hard-decision algorithms are sometimes more suitable than soft-decision ones, particularly in high-throughput and high-speed applications. However, there is a considerable performance gap between these two classes of algorithms in favor of soft-decision algorithms. To reduce this gap, in this work we introduce two new improved versions of the hard-decision algorithms: the adaptive gradient descent bit-flipping (AGDBF) and adaptive reliability ratio weighted GDBF (ARRWGDBF) algorithms. An adaptive weighting and correction factor is introduced in each case to improve the performance of the two algorithms, yielding an important bit error rate gain. As a second contribution of this work, a real-time implementation of the proposed solutions on a digital signal processor (DSP) is performed in order to optimize and improve the performance of these new approaches. The results of numerical simulations and the DSP implementation reveal faster convergence, low processing time, and reduced memory consumption compared to soft-decision algorithms. For an irregular LDPC code, our approaches achieve gains of 0.25 and 0.15 dB for the AGDBF and ARRWGDBF algorithms, respectively.
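The abstract does not spell out the exact adaptation rule, but the idea of weighting the channel term of the GDBF inversion function and adapting that weight across iterations can be illustrated as follows. The decay schedule and the single-bit flip rule are assumptions made for the sketch, not the paper's algorithm:

```python
import numpy as np

def agdbf_decode(H, y, w0=1.0, decay=0.9, max_iters=50):
    """Illustrative adaptive weighted GDBF (single-bit flip version).

    The channel weight w is decayed each iteration (an assumed schedule),
    shifting trust from the channel values toward the parity checks as
    decoding progresses.
    """
    x = np.where(y >= 0, 1.0, -1.0)              # hard decisions in {-1,+1}
    w = w0
    for _ in range(max_iters):
        syndrome = np.prod(np.where(H == 1, x[None, :], 1.0), axis=1)
        if np.all(syndrome > 0):                 # all checks satisfied
            break
        inv = w * (x * y) + H.T @ syndrome       # weighted inversion function
        x[np.argmin(inv)] *= -1                  # flip the least reliable bit
        w *= decay                               # adapt the channel weight
    return (x < 0).astype(int)
```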
Gradient Flow Decoding for LDPC Codes
The power consumption of the integrated circuit is becoming a significant
burden, particularly for large-scale signal processing tasks requiring high
throughput. The decoding process of LDPC codes is such a heavy signal
processing task that demands power efficiency and higher decoding throughput. A
promising approach to reducing both power and latency of a decoding process is
to use an analog circuit instead of a digital circuit. This paper investigates
a continuous-time gradient flow-based approach for decoding LDPC codes, which
employs a potential energy function similar to the objective function used in
the gradient descent bit flipping (GDBF) algorithm. We experimentally
demonstrate that the decoding performance of the gradient flow decoding is
comparable to that of the multi-bit mode GDBF algorithm. Since an analog
circuit of the gradient flow decoding requires only analog arithmetic
operations and an integrator, future advancements in programmable analog
integrated circuits may make practical implementation feasible. Comment: 6 pages
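A discrete-time (Euler) simulation of the continuous-time dynamics conveys the idea: the state flows downhill on a potential patterned after the GDBF objective. The step size, step count, and clipping bounds are illustrative choices for the sketch; an analog circuit would realise the integration directly:

```python
import numpy as np

def gradient_flow_decode(H, y, dt=0.05, steps=400):
    """Euler-discretised sketch of gradient flow decoding.

    Potential energy: f(x) = -sum_k x_k y_k - sum_i prod_{j in N(i)} x_j,
    patterned after the GDBF objective; the flow is dx/dt = -grad f(x).
    """
    m, n = H.shape
    x = y.astype(float).copy()                   # start from channel values
    for _ in range(steps):
        grad = -y.astype(float)                  # channel term of grad f
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for k in idx:
                # derivative of the i-th check product with respect to x_k
                grad[k] -= np.prod(x[idx[idx != k]])
        x -= dt * grad                           # follow the flow downhill
        x = np.clip(x, -1.5, 1.5)                # keep the state bounded
    return (x < 0).astype(int)
```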
Low-Density Parity-Check Code Decoder Design and Error Characterization on an FPGA Based Framework
Low-Density Parity-Check (LDPC) codes have gained popularity in communication systems and standards due to their capacity-approaching error correction performance. Among hard-decision LDPC decoders, Gallager B (GaB), owing to the simplicity of its operations, is the most hardware-friendly algorithm and an attractive solution for meeting the high-throughput demand of communication systems. However, GaB suffers from poor error correction performance. In this work, we first propose a resource-efficient GaB hardware architecture that delivers the best throughput while using the fewest Field Programmable Gate Array (FPGA) resources with respect to comparable state-of-the-art LDPC decoding algorithms. We then introduce a Probabilistic GaB (PGaB) algorithm that randomly disturbs the decisions made during the decoding iterations with a probability value determined through experimental studies. We achieve up to four orders of magnitude better error correction performance than GaB with a 3.4% improvement in normalized throughput. PGaB requires around 40% less energy than GaB, as the probabilistic execution reduces the average iteration count by up to 62% compared to GaB. We also show that PGaB consistently improves the maximum operational clock rate compared to state-of-the-art implementations.
In this dissertation, we also present a high-throughput FPGA-based framework to accelerate error characterization of LDPC codes. Our flexible framework allows the end user to adjust the simulation parameters and rapidly study various LDPC codes and decoders. We first show that the connection-intensive bipartite-graph-based LDPC decoder hardware architecture creates routing stress for the longer codewords utilized in today's communications systems and standards. We address this problem by partitioning each processing element (PE) in the bipartite graph in such a way that the inputs of a PE are evenly distributed over its partitions. This depopulates the Look-Up Table (LUT) resources utilized by the decoder architecture by spreading the logic across the FPGA. We show that even though LUT usage increases, the critical path delay is reduced by the depopulation. More importantly, with the depopulation technique an unroutable design becomes routable, which allows longer codewords to be mapped onto the FPGA. We then conduct two experiments on error correction performance analysis for the GaB and PGaB algorithms, demonstrating our framework's ability to reach a resolution level that is not attainable with general purpose processor (GPP) based simulations and reducing the time scale of the simulations from an estimated 199 years to 24 hours. We finally conduct the first study identifying all codewords with four errors that are not correctable by GaB. Our framework reduces the time scale of this simulation, which requires processing 117 billion codewords, from an estimated 7800 days on a single GPP to 4 hours and 38 minutes.
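A simplified, flooding-style sketch of the probabilistic disturbance idea follows. The real Gallager B algorithm passes hard messages per edge; here a whole-codeword flip rule keeps the example short, and the flip probability is an illustrative value rather than the experimentally determined one from the dissertation:

```python
import numpy as np

def pgab_decode(H, r, p_flip=0.7, max_iters=100, rng=None):
    """Probabilistic bit-flipping in the spirit of PGaB (simplified).

    Each iteration flips the bits with the most unsatisfied checks, but
    only with probability p_flip; p_flip = 1 recovers the deterministic
    rule, so the randomness is the PGaB-style disturbance.
    """
    if rng is None:
        rng = np.random.default_rng(1)
    x = r.astype(int).copy()                     # hard decisions in {0, 1}
    for _ in range(max_iters):
        syndrome = (H @ x) % 2                   # 1 marks a failed check
        if not syndrome.any():
            break
        unsat = H.T @ syndrome                   # failed checks per bit
        worst = unsat == unsat.max()             # least reliable bits
        flips = worst & (rng.random(x.shape) < p_flip)
        x[flips] ^= 1                            # probabilistic flip
    return x
```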
Linear-time encoding and decoding of low-density parity-check codes
Low-density parity-check (LDPC) codes had a renaissance when they were rediscovered in the 1990s. Since then LDPC codes have been an important part of the field of error-correcting codes, and have been shown to be able to approach the Shannon capacity, the limit at which we can reliably transmit information over noisy channels. Following this, many modern communications standards have adopted LDPC codes. Error correction is as important in protecting data from corruption on a hard drive as it is in deep-space communications; it is commonly used, for example, for reliable wireless transmission of data to mobile devices. For practical purposes, both encoding and decoding need to be of low complexity to achieve high throughput and low power consumption.
This thesis provides a literature review of the current state of the art in encoding and decoding of LDPC codes. Message-passing decoders are still capable of achieving the best error-correcting performance, while more recently considered bit-flipping decoders provide a low-complexity alternative, albeit with some loss in error-correcting performance. An implementation of a low-complexity stochastic bit-flipping decoder is also presented. It is implemented for Graphics Processing Units (GPUs) in a parallel fashion, providing a peak throughput of 1.2 Gb/s, which is significantly higher than previous decoder implementations on GPUs. The error-correcting performance of a range of decoders has also been tested, showing that the stochastic bit-flipping decoder provides relatively good error-correcting performance with low complexity. Finally, a brief comparison of encoding complexities for two code ensembles is also presented.
Advanced channel coding techniques using bit-level soft information
In this dissertation, advanced channel decoding techniques based on bit-level soft information are studied. Two main approaches are proposed: bit-level probabilistic iterative decoding and bit-level algebraic soft-decision (list) decoding (ASD).
In the first part of the dissertation, we first study iterative decoding for high density parity check (HDPC) codes. An iterative decoding algorithm is proposed which uses the sum product algorithm (SPA) in conjunction with a binary parity check matrix adapted in each decoding iteration according to the bit-level reliabilities. In contrast to the common belief that iterative decoding is not suitable for HDPC codes, this bit-level reliability based adaptation procedure is critical to the convergence behavior of iterative decoding for HDPC codes, and it significantly improves the iterative decoding performance of Reed-Solomon (RS) codes, whose parity check matrices are in general not sparse. We also present another iterative decoding scheme for cyclic codes that randomly shifts the bit-level reliability values in each iteration. The random shift based adaptation can also prevent iterative decoding from getting stuck, with a significant complexity reduction compared with the reliability based parity check matrix adaptation, and still provides reasonably good performance for short-length cyclic codes.
In the second part of the dissertation, we investigate ASD for RS codes using bit-level soft information. In particular, we show that by carefully incorporating bit-level soft information in the multiplicity assignment and the interpolation step, ASD can significantly outperform conventional hard decision decoding (HDD) for RS codes with a very small amount of complexity, even though the kernel of ASD is operating at the symbol level. More importantly, the performance of the proposed bit-level ASD can be tightly upper bounded for practical high rate RS codes, which is in general not possible for other popular ASD schemes.
Bit-level soft-decision decoding (SDD) serves as an efficient way to exploit the potential gain of many classical codes, and also facilitates the corresponding performance analysis. The proposed bit-level SDD schemes are promising and feasible alternatives to conventional symbol-level HDD schemes in many communication systems.
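The reliability-based matrix adaptation at the heart of the first scheme can be sketched as a GF(2) Gaussian elimination that reduces the columns of the least reliable bits to unit weight before each SPA iteration. The SPA pass itself is omitted, and the function and variable names are ours:

```python
import numpy as np

def adapt_parity_matrix(H, reliabilities):
    """Reduce the columns of H belonging to the least reliable bits to
    weight one by row operations over GF(2), so that each unreliable bit
    is checked by a single parity equation. Sketch of the adaptation
    step only; the SPA iteration that would follow is not shown.
    """
    H = H.copy() % 2
    m, _ = H.shape
    row_used = np.zeros(m, dtype=bool)
    for col in np.argsort(reliabilities):        # least reliable bits first
        pivots = np.flatnonzero((H[:, col] == 1) & ~row_used)
        if len(pivots) == 0:
            continue                             # column is linearly dependent
        p = pivots[0]
        for r in range(m):
            if r != p and H[r, col]:
                H[r] ^= H[p]                     # clear this column elsewhere
        row_used[p] = True
        if row_used.all():                       # m independent columns done
            break
    return H
```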
Learning Maximum Margin Channel Decoders
The problem of learning a channel decoder is considered for two channel
models. The first model is an additive noise channel whose noise distribution
is unknown and nonparametric. The learner is provided with a fixed codebook and
a dataset comprised of independent samples of the noise, and is required to
select a precision matrix for a nearest neighbor decoder in terms of the
Mahalanobis distance. The second model is a non-linear channel with additive
white Gaussian noise and unknown channel transformation. The learner is
provided with a fixed codebook and a dataset comprised of independent
input-output samples of the channel, and is required to select a matrix for a
nearest neighbor decoder with a linear kernel. For both models, the objective
of maximizing the margin of the decoder is addressed. Accordingly, for each
channel model, a regularized loss minimization problem with a codebook-related
regularization term and hinge-like loss function is developed, which is
inspired by the support vector machine paradigm for classification problems.
Expected generalization error bounds for the error probability loss function
are provided for both models, under optimal choice of the regularization
parameter. For the additive noise channel, a theoretical guidance for choosing
the training signal-to-noise ratio is proposed based on this bound. In
addition, for the non-linear channel, a high probability uniform generalization
error bound is provided for the hypothesis class. For each channel, a
stochastic sub-gradient descent algorithm for solving the regularized loss
minimization problem is proposed, and an optimization error bound is stated.
The performance of the proposed algorithms is demonstrated through several
examples.
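For the first (additive noise) model, the ingredients above — a fixed codebook, noise samples, a Mahalanobis nearest-neighbour decoder, and regularised sub-gradient descent on a hinge-like loss over the precision matrix — can be sketched as follows. The specific loss, regulariser, and hyperparameters are illustrative stand-ins, not the paper's exact formulation:

```python
import numpy as np

def learn_precision(codebook, noise, lr=0.05, reg=0.1, epochs=50):
    """Learn a precision matrix P for a nearest-neighbour (Mahalanobis)
    decoder by sub-gradient descent on an SVM-style hinge loss.
    Illustrative variant: loss, regulariser, and learning rate are
    assumptions, not the paper's exact objective.
    """
    dim = noise.shape[1]
    P = np.eye(dim)
    M = len(codebook)
    for _ in range(epochs):
        for z in noise:
            for a in range(M):
                for b in range(M):
                    if a == b:
                        continue
                    d = z + codebook[a] - codebook[b]    # competing error vector
                    # hinge is active when the margin between the correct
                    # codeword and the competitor is violated
                    if 1.0 + z @ P @ z - d @ P @ d > 0.0:
                        P -= lr * (np.outer(z, z) - np.outer(d, d))
            P -= lr * reg * P                            # L2 shrinkage
        # project back to the PSD cone so P stays a valid precision matrix
        w, V = np.linalg.eigh(P)
        P = V @ np.diag(np.clip(w, 1e-6, None)) @ V.T
    return P

def nn_decode(codebook, y, P):
    """Nearest-neighbour decoding under the learned Mahalanobis metric."""
    dists = [(y - c) @ P @ (y - c) for c in codebook]
    return int(np.argmin(dists))
```

On anisotropic noise the learned P downweights the noisy directions, which is exactly what makes the Mahalanobis decoder outperform plain Euclidean nearest-neighbour decoding.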
Federated Learning in Wireless Networks
Artificial intelligence (AI) is transitioning from a long development period into reality. Notable instances like AlphaGo, Tesla's self-driving cars, and the recent innovation of ChatGPT stand as widely recognized exemplars of AI applications. These examples collectively enhance the quality of human life. An increasing number of AI applications are expected to integrate seamlessly into our daily lives, further enriching our experiences.
Although AI has demonstrated remarkable performance, it is accompanied by numerous challenges. At the forefront of AI's advancement lies machine learning (ML), a cutting-edge technique that acquires knowledge by emulating the human brain's cognitive processes. Like humans, ML requires a substantial amount of data to build its knowledge repository. Computational capabilities have surged in alignment with Moore's law, leading to the realization of cloud computing services like Amazon AWS. Presently, we find ourselves in the era of the IoT, characterized by the ubiquitous presence of smartphones, smart speakers, and intelligent vehicles. This landscape facilitates decentralizing data processing tasks, shifting them from the cloud to local devices. At the same time, a growing emphasis on privacy protection has emerged, as individuals are increasingly concerned about sharing personal data with corporate giants such as Google and Meta. Federated learning (FL) is a new distributed machine learning paradigm. It fosters a scenario where clients collaborate by sharing learned models rather than raw data, thus safeguarding client data privacy while providing a collaborative and resilient model.
FL has promised to address privacy concerns. However, it still faces many challenges, particularly within wireless networks. Within the FL landscape, four main challenges stand out: high communication costs, system heterogeneity, statistical heterogeneity, and privacy and security. When many clients participate in the learning process, and the wireless communication resources remain constrained, accommodating all participating clients becomes very complex. The contemporary realm of deep learning relies on models encompassing millions and, in some cases, billions of parameters, exacerbating communication overhead when transmitting these parameters. The heterogeneity of the system manifests itself across device disparities, deployment scenarios, and connectivity capabilities. Simultaneously, statistical heterogeneity encompasses variations in data distribution and model composition. Furthermore, the distributed architecture makes FL susceptible to attacks inside and outside the system.
This dissertation presents a suite of algorithms designed to address these challenges effectively. New communication schemes are introduced, including Non-Orthogonal Multiple Access (NOMA), over-the-air computation, and approximate communication. These techniques are coupled with gradient compression, client scheduling, and power allocation, each significantly mitigating communication overhead. Implementing asynchronous FL is a suitable remedy for the intricate issue of system heterogeneity. Both independent and identically distributed (IID) and non-IID data are considered in all statistical heterogeneity scenarios. Finally, the aggregation of model updates and individual client model initialization collaboratively address security and privacy issues.
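The core mechanism this work builds on — clients sharing learned models rather than raw data — is the federated averaging pattern, which can be sketched on a toy linear regression task. The task, local step count, and learning rate are illustrative assumptions:

```python
import numpy as np

def fedavg(client_data, rounds=20, lr=0.1, local_steps=5):
    """Minimal FedAvg-style loop on a linear least-squares task.

    Each client runs a few local gradient steps on its own (X, y) and
    only the resulting weights travel to the server, which averages them
    weighted by local dataset size; raw data never leaves a client.
    """
    n_features = client_data[0][0].shape[1]
    w = np.zeros(n_features)                          # global model
    for _ in range(rounds):
        local = []
        for X, y in client_data:
            wk = w.copy()
            for _ in range(local_steps):              # local training
                grad = X.T @ (X @ wk - y) / len(y)
                wk -= lr * grad
            local.append(wk)                          # share model, not data
        sizes = [len(y) for _, y in client_data]
        w = np.average(local, axis=0, weights=sizes)  # server aggregation
    return w
```

The size-weighted server average is the standard FedAvg choice; the communication-efficiency, scheduling, and heterogeneity techniques of the dissertation all modify some stage of this basic loop.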
Spectrally and Energy Efficient Wireless Communications: Signal and System Design, Mathematical Modelling and Optimisation
This thesis explores engineering studies and designs aiming to meet the requirements of enhanced capacity and energy efficiency for next generation communication networks. The challenges of spectrum scarcity and energy constraints are addressed, and new technologies are proposed, analytically investigated, and examined.
The thesis commences by reviewing studies on spectrally and energy-efficient techniques, with a special focus on non-orthogonal multicarrier modulation, particularly spectrally efficient frequency division multiplexing (SEFDM). Rigorous theoretical and mathematical modelling studies of SEFDM are presented. Moreover, to address the potential application of SEFDM under the 5th generation new radio (5G NR) heterogeneous numerologies, simulation-based studies of SEFDM coexisting with orthogonal frequency division multiplexing (OFDM) are conducted. New signal formats and a corresponding transceiver structure are designed, using a Hilbert transform filter pair for shaping pulses. Detailed modelling and numerical investigations show that the proposed signal doubles spectral efficiency without performance degradation, with studies of two signal formats: uncoded narrow-band internet of things (NB-IoT) signals and unframed turbo coded multi-carrier signals. The thesis also considers using constellation shaping techniques and SEFDM for capacity enhancement in 5G systems. Probabilistic shaping for SEFDM is proposed and modelled to show both transmission energy reduction and bandwidth saving with advantageous flexibility for data rate adaptation. Expanding on constellation shaping to improve performance further, a comparative study of multidimensional modulation techniques is carried out. A four-dimensional signal with better noise immunity is investigated, for which metaheuristic optimisation algorithms are studied, developed, and applied to optimise bit-to-symbol mapping. Finally, a specially designed machine learning technique for signal and system design in physical layer communications is proposed, utilising autoencoder-based end-to-end learning.
Multidimensional signal modulation with multidimensional constellation shaping is proposed and optimised by using machine learning techniques, demonstrating significant improvement in spectral and energy efficiencies.
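The defining operation of SEFDM — packing N subcarriers at a fraction alpha of the orthogonal spacing — reduces to a simple non-orthogonal matrix modulator; with alpha = 1 it collapses to an (unnormalised) IDFT, i.e. plain OFDM. Oversampling, pulse shaping, and the Hilbert-pair designs from the thesis are omitted in this sketch:

```python
import numpy as np

def sefdm_modulate(symbols, alpha):
    """Sketch of SEFDM modulation: N subcarriers at alpha times the
    orthogonal (OFDM) spacing, so alpha < 1 trades orthogonality for
    spectral savings. With alpha = 1 this reduces to a scaled IDFT.
    """
    N = len(symbols)
    n = np.arange(N)[:, None]        # time index (column)
    k = np.arange(N)[None, :]        # subcarrier index (row)
    # non-orthogonal modulation matrix with compressed carrier spacing
    F = np.exp(2j * np.pi * alpha * n * k / N) / np.sqrt(N)
    return F @ symbols
```

A value such as alpha = 0.8 carries the same N symbols in 80% of the OFDM bandwidth, at the cost of self-interference between subcarriers that the receiver must resolve.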