Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations
This paper is focused on the performance analysis of binary linear block
codes (or ensembles) whose transmission takes place over independent and
memoryless parallel channels. New upper bounds on the maximum-likelihood (ML)
decoding error probability are derived. These bounds are applied to various
ensembles of turbo-like codes, focusing especially on repeat-accumulate codes
and their recent variations which possess low encoding and decoding complexity
and exhibit remarkable performance under iterative decoding. The framework of
the second version of the Duman and Salehi (DS2) bounds is generalized to the
case of parallel channels, along with the derivation of their optimized tilting
measures. The connection between the generalized DS2 and the 1961 Gallager
bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is
explored in the case of an arbitrary number of independent parallel channels.
The generalization of the DS2 bound to parallel channels makes it possible to
re-derive specific bounds originally obtained by Liu et al. as special cases of
the Gallager bound. In the asymptotic case where we let the block length tend
to infinity, the new bounds are used to obtain improved inner bounds on the
attainable channel regions under ML decoding. The tightness of the new bounds
for independent parallel channels is exemplified for structured ensembles of
turbo-like codes. The improved bounds with their optimized tilting measures
show, irrespective of the block length of the codes, an improvement over the
union bound and other previously reported bounds for independent parallel
channels; this improvement is especially pronounced for moderate to large block
lengths.
Comment: Submitted to IEEE Trans. on Information Theory, June 2006 (57 pages, 9 figures).
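The union bound that these DS2/Gallager-type bounds tighten can be sketched numerically: for a binary linear code with weight enumerator {A_d} and rate R, transmitted with BPSK over an AWGN channel, the ML block-error probability is at most the sum over weights d of A_d·Q(sqrt(2·d·R·Eb/N0)). A minimal sketch follows; the (7,4) Hamming weight enumerator is used purely as an illustration and is not a code from the paper.

```python
# Union bound on ML block-error probability for a binary linear code
# over a BPSK/AWGN channel -- the baseline that the tighter DS2 and
# 1961 Gallager bounds improve upon.  Any {d: A_d} weight enumerator
# works; the (7,4) Hamming code is used here only as an example.
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(weight_enum, rate, ebno_db):
    """Sum over codeword weights d of A_d * Q(sqrt(2*d*R*Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in weight_enum.items())

# (7,4) Hamming code: 7 codewords of weight 3, 7 of weight 4, 1 of weight 7
hamming_74 = {3: 7, 4: 7, 7: 1}
bound = union_bound(hamming_74, rate=4/7, ebno_db=6.0)
```

The bound is useful at high SNR but loosens quickly at low SNR, which is exactly the regime where the optimized tilting measures of the DS2 bound pay off.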
LDPC code-based bandwidth efficient coding schemes for wireless communications
This dissertation deals with the design of bandwidth-efficient coding schemes
with Low-Density Parity-Check (LDPC) for reliable wireless communications. Code
design for wireless channels roughly falls into three categories: (1) when channel state
information (CSI) is known only to the receiver; (2) the more practical case of partial CSI
at the receiver, when the channel has to be estimated; and (3) when CSI is known to the
receiver as well as the transmitter. We consider coding schemes for all of the above
categories.
For the first scenario, we describe a bandwidth-efficient scheme which uses high-order
constellations such as QAM over both AWGN and fading channels. We
propose a simple design with LDPC codes which combines the good properties of
Multi-level Coding (MLC) and bit-interleaved coded-modulation (BICM) schemes.
Through simulations, we show that the proposed scheme performs better than MLC
for short-to-medium block lengths on AWGN and block-fading channels. For the first case,
we also characterize the rate-diversity tradeoff of MIMO-OFDM and SISO-OFDM
systems. We design optimal coding schemes which achieve this tradeoff when transmission
is from a constrained constellation. Through simulations, we show that with
a sub-optimal iterative decoder, the performance of this coding scheme is very close
to the optimal limit for MIMO (flat quasi-static fading), MIMO-OFDM and SISO-OFDM systems.
For the second case, we design non-systematic Irregular Repeat Accumulate
(IRA) codes, which are a special class of LDPC codes, for Inter-Symbol Interference
(ISI) fading channels when CSI is estimated at the receiver. We use Orthogonal Frequency
Division Multiplexing (OFDM) to convert the ISI fading channel into parallel
flat fading subchannels. We use a simple receiver structure that performs iterative
channel estimation and decoding and use non-systematic IRA codes that are optimized
for this receiver. This combination is shown to perform very close to a receiver
with perfect CSI and is also shown to be robust to changes in the number of channel
taps and in the Doppler spread.
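The OFDM conversion used above (an ISI channel becoming parallel flat-fading subchannels) can be sketched numerically: with a cyclic prefix at least as long as the channel memory, the linear channel acts as a circular convolution, the DFT diagonalizes it, and each received subcarrier is simply H[k]·X[k]. A self-contained sketch, with illustrative channel taps and a QPSK constellation (not parameters from the dissertation):

```python
# Sketch: OFDM with a cyclic prefix turns an ISI channel into parallel
# flat subchannels.  After removing the prefix and taking the FFT, each
# subcarrier sees only a scalar gain H[k] (noise omitted for clarity).
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_taps = 64, 4                      # subcarriers, channel taps
h = rng.standard_normal(n_taps)            # example ISI channel

# QPSK symbols, one per subcarrier
bits = rng.integers(0, 2, size=(2, n_sub))
X = (2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)

x = np.fft.ifft(X)                         # OFDM modulation
x_cp = np.concatenate([x[-(n_taps - 1):], x])   # prepend cyclic prefix
y = np.convolve(x_cp, h)[n_taps - 1 : n_taps - 1 + n_sub]  # channel + strip CP
Y = np.fft.fft(y)                          # OFDM demodulation

H = np.fft.fft(h, n_sub)                   # per-subcarrier channel gains
# Flat subchannels: Y[k] equals H[k] * X[k] for every k
```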
For the third case, we look at bandwidth efficient schemes for fading channels
that perform close to capacity when the channel state information is known at the
transmitter as well as the receiver. Schemes that achieve capacity with a Gaussian
codebook for the above system are already known but not for constrained constellations.
We derive the near-optimum scheme to achieve capacity with constrained constellations
and then propose coding schemes which perform close to capacity. Through
linear transformations, a MIMO system can be converted into non-interfering parallel
subchannels, and we further extend the proposed coding schemes to the MIMO case
as well.
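The linear transformations mentioned above are, in the standard formulation, the SVD of the MIMO channel matrix: precoding with V at the transmitter and combining with U^H at the receiver diagonalizes H into its singular values, i.e. non-interfering parallel subchannels. A minimal sketch under that standard assumption (the 4x4 Gaussian channel is illustrative):

```python
# Sketch: SVD-based conversion of a MIMO channel into parallel
# subchannels.  With H = U diag(s) V^H, precoding by V and combining by
# U^H leave each data stream with its own scalar gain s[k].
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(H)

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # data symbols
tx = Vh.conj().T @ x            # precode with V
rx = U.conj().T @ (H @ tx)      # channel, then combine with U^H
# rx[k] == s[k] * x[k]: no interference between streams
```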
An improvement and a fast DSP implementation of the bit flipping algorithms for low density parity check decoder
For low-density parity-check (LDPC) decoding, hard-decision algorithms are sometimes more suitable than soft-decision ones, particularly in high-throughput, high-speed applications. However, there is a considerable performance gap between these two classes of algorithms, in favor of soft-decision algorithms. To reduce this gap, in this work we introduce two new improved versions of the hard-decision algorithms: adaptive gradient descent bit-flipping (AGDBF) and adaptive reliability ratio weighted GDBF (ARRWGDBF). An adaptive weighting and correction factor is introduced in each case to improve the performance of the two algorithms, allowing a significant gain in bit error rate. As a second contribution of this work, a real-time implementation of the proposed solutions on a digital signal processor (DSP) is performed in order to optimize and improve the performance of these new approaches. The results of numerical simulations and of the DSP implementation reveal faster convergence with a low processing time and a reduction in consumed memory resources when compared to soft-decision algorithms. For an irregular LDPC code, our approach achieves gains of 0.25 and 0.15 dB for the AGDBF and ARRWGDBF algorithms, respectively.
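For context, the classic hard-decision bit-flipping decoder that the GDBF family refines can be sketched in a few lines: each iteration flips the bits participating in the largest number of unsatisfied parity checks. This is plain Gallager-style bit flipping, not the AGDBF/ARRWGDBF algorithms themselves, and the tiny parity-check matrix is illustrative only:

```python
# Minimal hard-decision bit-flipping LDPC decoder (Gallager style).
# AGDBF/ARRWGDBF replace the flip rule below with an adaptively
# weighted inversion function; the surrounding loop is the same.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(H, y, max_iters=20):
    """Return a hard-decision codeword estimate from received bits y."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2            # which checks are unsatisfied
        if not syndrome.any():
            break                          # valid codeword found
        fails = H.T.dot(syndrome)          # failed checks per bit
        x = np.where(fails == fails.max(), x ^ 1, x)  # flip worst bits
    return x

codeword = np.zeros(6, dtype=int)          # all-zero codeword is valid
received = codeword.copy()
received[2] ^= 1                           # inject a single bit error
decoded = bit_flip_decode(H, received)
```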
Multiple Parallel Concatenated Gallager Codes and Their Applications
Due to the increasing demand for high data rates in modern wireless communications, there is significant interest in error control coding, which now plays a major role in digital communication systems in overcoming the weaknesses of communication channels. This thesis presents a comprehensive investigation of a class of error control codes known as Multiple Parallel Concatenated Gallager Codes (MPCGCs), obtained by the parallel concatenation of well-designed LDPC codes. MPCGCs are constructed by breaking a long, high-complexity conventional single LDPC code into three or four smaller, lower-complexity LDPC codes. The design of MPCGCs is simplified to the option of selecting the component codes completely at random based on a single parameter, the mean column weight (MCW).
MPCGCs offer flexibility and scope for improving coding performance in both theory and practical implementation. The performance of MPCGCs is explored by evaluating these codes over both AWGN and flat Rayleigh fading channels and by investigating the puncturing of these codes with proposed novel and efficient puncturing methods for improving coding performance.
Another investigation concerns the deployment of MPCGCs to enhance the performance of WiMAX systems. The bit error performances are compared, and the results confirm that the proposed MPCGC-based physical layer for the IEEE 802.16 WiMAX standard provides better gain than the conventional single-LDPC WiMAX system.
The incorporation of quasi-cyclic (QC-LDPC) codes in the MPCGC structure (called QC-MPCGC) is shown to improve the overall BER performance of MPCGCs, with reduced overall decoding complexity and improved flexibility obtained by using layered belief propagation decoding instead of the sum-product algorithm (SPA).
A proposed MIMO-MPCGC structure with both 2×2 and 2×4 MIMO configurations is developed in this thesis and shown to improve the BER performance over fading channels relative to the conventional LDPC structure.
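The single design parameter mentioned above, the mean column weight, is simply the average number of parity checks in which each variable node participates. A minimal sketch of its computation, with an illustrative parity-check matrix (not one of the thesis's component codes):

```python
# Mean column weight (MCW) of a parity-check matrix: the average number
# of ones per column, i.e. the average variable-node degree in the
# Tanner graph.  MPCGC component codes are selected via this parameter.
import numpy as np

def mean_column_weight(H):
    """Average number of ones per column of a parity-check matrix."""
    return H.sum(axis=0).mean()

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
mcw = mean_column_weight(H)   # 12 ones over 6 columns
```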
Optical Time-Frequency Packing: Principles, Design, Implementation, and Experimental Demonstration
Time-frequency packing (TFP) transmission provides the highest achievable
spectral efficiency with a constrained symbol alphabet and detector complexity.
In this work, the application of the TFP technique to fiber-optic systems is
investigated and experimentally demonstrated. The main theoretical aspects,
design guidelines, and implementation issues are discussed, focusing on those
aspects which are peculiar to TFP systems. In particular, adaptive compensation
of propagation impairments, matched filtering, and maximum a posteriori
probability detection are obtained by a combination of a butterfly equalizer
and four 8-state parallel Bahl-Cocke-Jelinek-Raviv (BCJR) detectors. A novel
algorithm that ensures adaptive equalization, channel estimation, and a proper
distribution of tasks between the equalizer and BCJR detectors is proposed. A
set of irregular low-density parity-check codes with different rates is
designed to operate at low error rates and approach the spectral efficiency
limit achievable by TFP at different signal-to-noise ratios. An experimental
demonstration of the designed system is finally provided with five
dual-polarization QPSK-modulated optical carriers, densely packed in a 100 GHz
bandwidth, employing a recirculating loop to test the performance of the system
at different transmission distances.
Comment: This paper has been accepted for publication in the IEEE/OSA Journal of Lightwave Technology.
Decoder-in-the-Loop: Genetic Optimization-based LDPC Code Design
LDPC code design tools typically rely on asymptotic code behavior and are
affected by an unavoidable performance degradation due to model imperfections
in the short length regime. We propose an LDPC code design scheme based on an
evolutionary algorithm, the Genetic Algorithm (GenAlg), implementing a
"decoder-in-the-loop" concept. It inherently takes into consideration the
channel, code length and the number of iterations while optimizing the
error-rate of the actual decoder hardware architecture. We construct short
length LDPC codes (i.e., the parity-check matrix) with error-rate performance
comparable to, or even outperforming, that of well-designed standardized short
length LDPC codes over both AWGN and Rayleigh fading channels. Our proposed
algorithm can be used to design LDPC codes with special graph structures (e.g.,
accumulator-based codes) to facilitate the encoding step, or to satisfy any
other practical requirement. Moreover, GenAlg can be used to design LDPC codes
with the aim of reducing decoding latency and complexity, leading to coding
gains of up to … dB and … dB at a BLER of … for AWGN and
Rayleigh fading channels, respectively, when compared to state-of-the-art short
LDPC codes. Also, we analyze what can be learned from the resulting codes and,
as such, the GenAlg particularly highlights design paradigms of short length
LDPC codes (e.g., codes with degree-1 variable nodes obtain very good results).
Comment: in IEEE Access, 201
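The evolutionary loop above can be sketched with a cheap stand-in fitness. In the paper the fitness is the error rate of the actual decoder hardware ("decoder-in-the-loop"); here, to keep the sketch self-contained and fast, the fitness is replaced by a classical LDPC design surrogate, the number of length-4 cycles in the Tanner graph, and the GA is reduced to a (1+1)-style mutation-and-selection loop without crossover. All sizes are illustrative:

```python
# Toy evolutionary search over parity-check matrices.  Fitness here is
# a surrogate (count of length-4 Tanner-graph cycles, which LDPC design
# tries to minimize); the paper's GenAlg instead evaluates the error
# rate of the real decoder for the target channel and iteration count.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def four_cycles(H):
    """Count pairs of check rows overlapping in >= 2 variable nodes."""
    return sum(1 for r1, r2 in combinations(H, 2)
               if int(np.dot(r1, r2)) >= 2)

def mutate(H):
    """Flip one random entry of the parity-check matrix."""
    H = H.copy()
    i, j = rng.integers(H.shape[0]), rng.integers(H.shape[1])
    H[i, j] ^= 1
    return H

def evolve(H, generations=200):
    """(1+1)-style loop: keep the offspring if it is no worse."""
    best, best_fit = H, four_cycles(H)
    for _ in range(generations):
        child = mutate(best)
        fit = four_cycles(child)
        if fit <= best_fit:
            best, best_fit = child, fit
    return best, best_fit

H0 = rng.integers(0, 2, size=(6, 12))
H_opt, cycles = evolve(H0)
```

Swapping the surrogate fitness for a Monte Carlo BER estimate of the deployed decoder recovers the decoder-in-the-loop idea, at the cost of a far more expensive fitness evaluation.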
System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing
This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications.
Hardware implementation of Low-Density Parity-Check decoders is approached at both the algorithmic and the architecture level. Low-Density Parity-Check codes are a promising coding scheme for future communication standards due to their outstanding error correction performance.
This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity. The methodology is employed throughout for co-designing the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and a saving in complexity of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory needed by traditional two-phase flooding decoding. Additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.
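The finite-precision analysis above can be illustrated on a check-node update. The sketch below uses a min-sum check node as a widely known stand-in (the dissertation's check node follows the P-output principle, which is not reproduced here) and compares its output on quantized LLR messages against the floating-point result; the word length and quantization step are illustrative:

```python
# Sketch: effect of fixed-point LLR quantization on a check-node
# update.  Min-sum is used as a generic stand-in; comparing `fixed`
# against `exact` gives a feel for the implementation loss that the
# dissertation's methodology measures systematically.
import numpy as np

def quantize(llr, n_bits=4, step=0.5):
    """Uniform fixed-point quantizer: n_bits two's-complement, given step."""
    max_q = (2 ** (n_bits - 1) - 1) * step
    return np.clip(np.round(llr / step) * step, -max_q, max_q)

def minsum_check_update(llrs):
    """Min-sum extrinsic message from a check node to each neighbor."""
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        out[i] = np.prod(np.sign(others)) * np.abs(others).min()
    return out

llrs = np.array([2.3, -0.7, 4.1, -1.6])    # example input messages
exact = minsum_check_update(llrs)
fixed = minsum_check_update(quantize(llrs))
```

Running this comparison over many random message vectors, channel conditions and word lengths is one way to trace implementation loss (in dB of SNR at a target BER) against hardware cost.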
Most modern communication standards employ Orthogonal Frequency Division Multiplexing as part of their physical layer. The core of OFDM is the Fast Fourier Transform and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point) using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
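The accuracy-driven bit-width search described above can be sketched in miniature: quantize the FFT input to b bits, measure the signal-to-quantization-noise ratio (SQNR) against a double-precision reference, and keep the smallest b that meets the user's accuracy budget. A real engine models the internal arithmetic of every butterfly stage and all three arithmetic models; quantizing only the input keeps this sketch short, and all parameters are illustrative:

```python
# Sketch: closed-loop search for the minimum bit-width meeting a
# numerical accuracy budget, with input quantization standing in for a
# full model of the FFT's internal fixed-point arithmetic.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Round real and imaginary parts to a 2^-(bits-1) grid in [-1, 1)."""
    scale = 2 ** (bits - 1)
    q = lambda v: np.clip(np.round(v * scale), -scale, scale - 1) / scale
    return q(x.real) + 1j * q(x.imag)

def sqnr_db(bits, x):
    """SQNR of the FFT of the quantized input vs. the exact FFT."""
    ref = np.fft.fft(x)
    err = np.fft.fft(quantize(x, bits)) - ref
    return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

def min_bits_for_budget(x, budget_db, max_bits=16):
    """Smallest word length whose SQNR meets the accuracy budget."""
    for b in range(2, max_bits + 1):
        if sqnr_db(b, x) >= budget_db:
            return b
    return max_bits

x = (rng.uniform(-1, 1, 256) + 1j * rng.uniform(-1, 1, 256)) / 2
b = min_bits_for_budget(x, budget_db=40.0)
```

Each extra bit buys roughly 6 dB of SQNR, so the loop converges quickly; the thesis's engine additionally trades this budget across fixed-point, block floating-point and convergent block floating-point representations.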
The final part of this dissertation focuses on the Network-on-Chip design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption.
Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an object-oriented framework integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.