402 research outputs found
Decoding Across the Quantum LDPC Code Landscape
We show that belief propagation combined with ordered statistics
post-processing is a general decoder for quantum low-density parity-check codes
constructed from the hypergraph product. To this end, we run numerical
simulations of the decoder applied to three families of hypergraph product
code: topological codes, fixed-rate random codes and a new class of codes that
we call semi-topological codes. Our new code families share properties of both
topological and random hypergraph product codes, with a construction that
allows for a finely-controlled trade-off between code threshold and stabilizer
locality. Our results indicate thresholds across all three families of
hypergraph product code, and provide evidence of exponential suppression in the
low error regime. For the toric code, we observe a threshold that improves
upon previous quantum decoders based on belief propagation and approaches the
performance of the minimum-weight perfect-matching algorithm. We expect
semi-topological codes to have the same
threshold as toric codes, as they are identical in the bulk, and we present
numerical evidence supporting this observation.
Comment: The code for the BP+OSD decoder used in this work can be found on
GitHub: https://github.com/quantumgizmos/bp_os
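As an illustration of the ordered-statistics post-processing step, the sketch below implements the order-zero variant (OSD-0) for syndrome decoding: bits are ranked by the BP marginals, an information set is chosen by Gaussian elimination over GF(2) in that order, and the syndrome is solved with all remaining bits set to zero. This is a minimal sketch with invented names; the authors' actual decoder is in the repository linked above.

```python
import numpy as np

def osd0(H, syndrome, error_probs):
    """OSD-0 post-processing for syndrome BP decoding (illustrative sketch).

    H           : (m, n) binary parity-check matrix
    syndrome    : (m,) binary syndrome
    error_probs : (n,) BP marginals P(bit i is flipped)
    Returns a binary error estimate e with (H @ e) % 2 == syndrome.
    """
    m, n = H.shape
    order = np.argsort(-error_probs)            # most likely errors first
    Hs = (H[:, order] % 2).astype(np.uint8)
    s = (np.asarray(syndrome) % 2).astype(np.uint8)

    # Gauss-Jordan elimination over GF(2); the first linearly independent
    # columns met in reliability order form the information set.
    pivot_cols, row = [], 0
    for col in range(n):
        if row == m:
            break
        nz = np.nonzero(Hs[row:, col])[0]
        if nz.size == 0:
            continue                            # dependent column, skip
        p = nz[0] + row
        Hs[[row, p]] = Hs[[p, row]]             # bring pivot into place
        s[[row, p]] = s[[p, row]]
        for r in range(m):
            if r != row and Hs[r, col]:
                Hs[r] ^= Hs[row]                # eliminate above and below
                s[r] ^= s[row]
        pivot_cols.append(col)
        row += 1

    # OSD-0: free (less reliable) bits are set to zero, so each pivot bit
    # is read directly off the reduced syndrome.
    e_sorted = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivot_cols):
        e_sorted[col] = s[r]
    e = np.zeros(n, dtype=np.uint8)
    e[order] = e_sorted
    return e
```

Higher-order OSD additionally searches over flips of the least reliable information-set bits; the decoder studied in the abstract combines this post-processing with standard belief propagation.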
A Massively Parallel Implementation of QC-LDPC Decoder on GPU
The graphics processing unit (GPU) provides a low-cost and flexible software-based multi-core architecture for high-performance computing. However, it is still very challenging to efficiently map real-world applications onto the GPU and fully utilize its computational power. As a case study, we
present a GPU-based implementation of a real-world digital signal processing (DSP) application: a low-density parity-check (LDPC) decoder. The paper describes the effort required to map the algorithm onto the massively parallel architecture of the GPU and to fully utilize the GPU's computational resources to significantly boost performance. Moreover, several efficient data structures are proposed to reduce memory access latency and the memory bandwidth requirement. Experimental results show that the proposed GPU-based LDPC decoding accelerator takes advantage of the multi-core computational power provided by the GPU and achieves a throughput of up to 100.3 Mbps.
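The paper's CUDA kernels are not reproduced here, but the flooding-schedule parallelism this kind of decoder exploits can be illustrated in a few lines: with messages stored in a dense per-check array, the whole check-node update becomes one batched operation, which is exactly the regular, coalesced work a GPU executes well. A minimal numpy sketch under that (assumed) layout:

```python
import numpy as np

def check_node_update(v2c):
    """Batched min-sum check-node update, flooding schedule (illustrative).

    v2c : (num_checks, dc) variable-to-check messages of a check-regular
          code; on a GPU each row would map to a thread or warp.
    Returns the (num_checks, dc) check-to-variable messages.
    """
    sign = np.sign(v2c) + (v2c == 0)             # treat 0 as +1
    mag = np.abs(v2c)

    # Sign product excluding each edge: total product times own sign.
    ext_sign = np.prod(sign, axis=1, keepdims=True) * sign

    # Min magnitude excluding each edge, from the two row minima.
    two_smallest = np.partition(mag, 1, axis=1)
    min1, min2 = two_smallest[:, :1], two_smallest[:, 1:2]
    ext_min = np.where(mag == min1, min2, min1)

    return ext_sign * ext_min
```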
Modeling and Energy Optimization of LDPC Decoder Circuits with Timing Violations
This paper proposes a "quasi-synchronous" design approach for signal
processing circuits, in which timing violations are permitted, but without the
need for a hardware compensation mechanism. The case of a low-density
parity-check (LDPC) decoder is studied, and a method for accurately modeling
the effect of timing violations at a high level of abstraction is presented.
The error-correction performance of code ensembles is then evaluated using
density evolution while taking into account the effect of timing faults.
Following this, several quasi-synchronous LDPC decoder circuits based on the
offset min-sum algorithm are optimized, providing a 23%-40% reduction in energy
consumption or energy-delay product, while achieving the same performance and
occupying the same area as conventional synchronous circuits.Comment: To appear in IEEE Transactions on Communication
Towards a reconfigurable hardware architecture for implementing a LDPC module suitable for software radio systems
Forward error correction (FEC) is a key piece of modern digital communications. When a signal is transmitted over a noisy channel, multiple errors are generated, and FEC techniques are aimed at recovering from such errors. In recent years, LDPC (low-density parity-check) codes have attracted the attention of researchers because of their excellent error-correction capabilities. For real radios, however, high performance alone is not enough, since they must also communicate with multiple other radios, which in general requires supporting different standards. In this sense, the software-defined radio (SDR) approach allows building multi-standard radios around reconfigurability, which means that the base components, including the error-recovery block, must provide reconfigurable options.
In this paper, some open problems in designing and implementing reconfigurable LDPC components are presented and discussed. Features of related work in the state of the art are reviewed, and possible research lines are proposed.
Architectures for Code-based Post-Quantum Cryptography
The abstract is in the attachment.
Low-Power 400-Gbps Soft-Decision LDPC FEC for Optical Transport Networks
We present forward error correction systems based on soft-decision low-density parity-check (LDPC) codes for applications in 100-400-Gbps optical transport networks. These systems are based on the low-complexity "adaptive degeneration" decoding algorithm, which we introduce in this paper, along with randomly-structured LDPC codes with block lengths from 30 000 to 60 000 bits and overhead (OH) from 6.7% to 33%. We also construct a 3600-bit prototype LDPC code with 20% overhead, and experimentally show that it has no error floor above a bit error rate (BER) of 10⁻¹⁵ using a field-programmable gate array (FPGA)-based hardware emulator. The projected net coding gain at a BER of 10⁻¹⁵ ranges from 9.6 dB at 6.7% OH to 11.2 dB at 33% OH. We also present application-specific integrated circuit synthesis results for these decoders in 28-nm fully depleted silicon-on-insulator technology, which show that they are capable of 400-Gbps operation with energy consumption under 3 pJ per information bit.
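For readers converting between the conventions used above: a code with overhead OH carries R = 1/(1 + OH) information per transmitted bit, since OH = (n - k)/k. A quick check against the overheads quoted in the abstract:

```python
# Code rate implied by FEC overhead: OH = (n - k) / k, so R = k / n = 1 / (1 + OH).
for oh in (0.067, 0.20, 0.33):                 # overheads quoted above
    print(f"OH = {oh:6.1%}  ->  rate R = {1 / (1 + oh):.3f}")
```

So, at the same line rate, the 33% OH code trades roughly 20% of information throughput relative to the 6.7% OH code for its extra 1.6 dB of net coding gain.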
New low-density-parity-check decoding approach based on the hard and soft decisions algorithms
Hard-decision algorithms are more appropriate than soft-decision ones for low-density parity-check (LDPC) decoding insofar as they are less complex at the decoding level; on the other hand, the soft-decision algorithm outperforms the hard-decision one in terms of bit error rate (BER). In order to minimize the BER and close the gap between these two families of LDPC decoders, a new LDPC decoding algorithm is suggested in this paper, based on both the normalized min-sum (NMS) and modified weighted bit-flipping (MWBF) algorithms. The proposed algorithm is named normalized min-sum modified weighted bit-flipping (NMSMWBF). The MWBF is executed after the NMS algorithm. Simulations show that our algorithm outperforms NMS in terms of BER by 0.25 dB at 10⁻⁸ over the additive white Gaussian noise (AWGN) channel, while NMSMWBF and NMS remain at the same level of decoding complexity.
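A control-flow sketch of the two-stage scheme described above: normalized min-sum runs first, and the modified weighted bit-flipping stage refines its result. The two stage decoders are placeholders here; only the staging is taken from the abstract.

```python
import numpy as np

def nmsmwbf_decode(H, llr, nms_decode, mwbf_decode, max_iter=50):
    """Two-stage NMSMWBF decoding sketch (stage internals are placeholders).

    H           : binary parity-check matrix
    llr         : channel LLRs
    nms_decode  : callable for normalized min-sum, returning
                  (hard_decisions, soft_values)
    mwbf_decode : callable for modified weighted bit-flipping, refining
                  a hard-decision word using the soft values
    """
    # Stage 1: normalized min-sum.
    hard, soft = nms_decode(H, llr, max_iter=max_iter)
    if not np.any((H @ hard) % 2):      # all parity checks satisfied
        return hard
    # Stage 2: MWBF is executed after NMS, starting from its output.
    return mwbf_decode(H, hard, soft, max_iter=max_iter)
```

Skipping the second stage when the NMS syndrome already checks out is one natural reading of "executed after", and keeps average complexity close to NMS alone; the abstract does not spell out this detail.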
On performance analysis and implementation issues of iterative decoding for graph based codes
There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained useless in practical applications for lack of decoders delivering good performance at an acceptable complexity. The invention of the turbo code marks a milestone in channel coding theory, in that it achieves near-Shannon-limit performance by using an elegant iterative decoding algorithm. This success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these long codes are the low-density parity-check (LDPC) code and the product code, both of which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation.
A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution, and two simulation-based methods for approximating decoding capacity are applied to LDPC codes and evaluated for effectiveness. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and it predicts well the performance of LDPC codes with infinite block length.
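The Max-Log-MAP simplification mentioned above replaces each log-sum-exp with a plain maximum; applied to the pairwise LLR combination (the boxplus operation), it collapses to the familiar sign-min rule. A small comparison sketch, illustrative only:

```python
import math

def boxplus_exact(a, b):
    """Exact pairwise LLR combination:
    log((1 + e^(a+b)) / (e^a + e^b)), valid for moderate LLRs."""
    return math.log1p(math.exp(a + b)) - math.log(math.exp(a) + math.exp(b))

def boxplus_maxlog(a, b):
    """Max-Log approximation: each log-sum-exp becomes a max, giving
    max(0, a + b) - max(a, b) = sign(a) * sign(b) * min(|a|, |b|)."""
    return max(0.0, a + b) - max(a, b)

for a, b in [(2.0, 3.0), (0.5, -1.0)]:
    print(f"exact = {boxplus_exact(a, b):+.3f}   max-log = {boxplus_maxlog(a, b):+.3f}")
```

The max-log output never underestimates the exact magnitude, which is why its density evolution must be derived separately, as the dissertation does.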
Two implementation issues in iterative decoding of LDPC codes are studied. One is the design of a quantized decoder; the other is the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. The analysis indicates that the key point in designing a quantized decoder is to pick a proper dynamic range: provided the dynamic range is chosen wisely, the quantization loss in terms of bit error rate (BER) performance can be kept remarkably low. The decoding capacity under a fixed SNR offset is also obtained, and the robustness of LDPC codes of practical length is evaluated through simulations; the amount of SNR offset that can be tolerated is found to depend on the code length.
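The dynamic-range point can be made concrete with a uniform quantizer: with b bits over [-L, L] the step is 2L/(2^b - 1), so enlarging L saturates less but wastes resolution around zero, where decoding decisions are made. A minimal sketch with illustrative parameters:

```python
import numpy as np

def quantize_llr(llr, bits=4, clip=8.0):
    """Uniform mid-tread LLR quantizer (illustrative).

    bits : quantizer resolution in bits
    clip : dynamic range [-clip, +clip]; too small saturates confident
           messages, too large wastes levels near zero.
    """
    step = 2.0 * clip / (2 ** bits - 1)
    return np.round(np.clip(llr, -clip, clip) / step) * step
```

Sweeping `clip` for a fixed `bits` and reading the BER off simulations (or off the discretized density evolution used in the dissertation) exposes the trade-off directly.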
The remaining part of this dissertation deals with iterative decoding of product codes. Two issues are investigated: improving BER performance by mitigating cycle effects, and a parallel decoding structure, which is conceptually preferable to serial decoding and yields lower decoding latency.
Spatially-Coupled LDPC Codes for Decode-and-Forward Relaying of Two Correlated Sources over the BEC
We present a decode-and-forward transmission scheme based on
spatially-coupled low-density parity-check (SC-LDPC) codes for a network
consisting of two (possibly correlated) sources, one relay, and one
destination. The links between the nodes are modeled as binary erasure
channels. Joint source-channel coding with joint channel decoding is used to
exploit the correlation. The relay performs network coding. We derive
analytical bounds on the achievable rates for the binary erasure time-division
multiple-access relay channel with correlated sources. We then design bilayer
SC-LDPC codes and analyze their asymptotic performance for this scenario. We
prove analytically that the proposed coding scheme achieves the theoretical
limit for symmetric channel conditions and uncorrelated sources. Using density
evolution, we furthermore demonstrate that our scheme approaches the
theoretical limit also for non-symmetric channel conditions and when the
sources are correlated, and we observe the threshold saturation effect that is
typical for spatially-coupled systems. Finally, we give simulation results for
large block lengths, which validate the DE analysis.
Comment: IEEE Transactions on Communications, to appear
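For reference, the DE analysis is especially simple on the BEC: for an uncoupled (dv, dc)-regular ensemble the erasure probability follows the scalar recursion x_{l+1} = ε(1 - (1 - x_l)^(dc-1))^(dv-1), and the threshold is the largest channel erasure rate ε for which the recursion converges to zero. A minimal sketch of that uncoupled baseline (the paper's bilayer SC-LDPC analysis is more involved):

```python
def bec_threshold(dv, dc, tol=1e-10, iters=100000):
    """BP threshold of an uncoupled (dv, dc)-regular LDPC ensemble on the
    BEC, found by bisection over the scalar DE recursion (illustrative)."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x_new < tol:
                return True                 # erasures die out
            if abs(x_new - x) < 1e-14:
                return False                # stuck at a nonzero fixed point
            x = x_new
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-6:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

# bec_threshold(3, 6) is about 0.4294, short of the capacity limit
# 1 - R = 0.5; spatial coupling closes this gap, which is the
# threshold saturation effect observed in the paper.
```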
- …