Achievable Information Rates for Coded Modulation with Hard Decision Decoding for Coherent Fiber-Optic Systems
We analyze the achievable information rates (AIRs) for coded modulation
schemes with QAM constellations with both bit-wise and symbol-wise decoders,
corresponding to the case where a binary code is used in combination with a
higher-order modulation using the bit-interleaved coded modulation (BICM)
paradigm and to the case where a nonbinary code over a field matched to the
constellation size is used, respectively. In particular, we consider hard
decision decoding, which is the preferable option for fiber-optic communication
systems where decoding complexity is a concern. Recently, Liga \emph{et al.}
analyzed the AIRs for bit-wise and symbol-wise decoders considering what the
authors called \emph{hard decision decoder} which, however, exploits \emph{soft
information} of the transition probabilities of discrete-input discrete-output
channel resulting from the hard detection. As such, the complexity of the
decoder is essentially the same as the complexity of a soft decision decoder.
In this paper, we analyze instead the AIRs for the standard hard decision
decoder, commonly used in practice, where the decoding is based on the Hamming
distance metric. We show that if standard hard decision decoding is used,
bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As
a result, contrary to the conclusion by Liga \emph{et al.}, binary decoders
together with the BICM paradigm are preferable for spectrally-efficient
fiber-optic systems. We also design binary and nonbinary staircase codes and
show that, in agreement with the AIRs, binary codes yield better performance.
Comment: Published in IEEE/OSA Journal of Lightwave Technology, 201
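To make the decoder comparison concrete, here is a minimal Monte Carlo sketch (not the paper's computation) that estimates the two hard-decision AIRs for Gray-mapped 16-QAM over AWGN: the bit-wise decoder sees m parallel binary symmetric channels, while the Hamming-metric symbol-wise decoder sees a q-ary symmetric channel. The SNR point and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def h2(p):
    """Binary entropy function (bits)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Gray-mapped 16-QAM: 2 bits per I/Q rail, levels {-3,-1,1,3}/sqrt(10) (unit energy)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
gray = np.array([0, 1, 3, 2])              # Gray labels of the four levels

n = 100_000
snr_db = 12.0                              # illustrative Es/N0
sigma = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))   # per-dimension noise std

idx_i = rng.integers(0, 4, n)              # transmitted level indices, I rail
idx_q = rng.integers(0, 4, n)              # Q rail
y_i = levels[idx_i] + sigma * rng.standard_normal(n)
y_q = levels[idx_q] + sigma * rng.standard_normal(n)

# Minimum-distance hard detection per rail
det_i = np.abs(y_i[:, None] - levels[None, :]).argmin(axis=1)
det_q = np.abs(y_q[:, None] - levels[None, :]).argmin(axis=1)

# Estimated symbol and bit error probabilities after hard detection
p_sym = np.mean((det_i != idx_i) | (det_q != idx_q))
popcount = np.array([0, 1, 1, 2])          # set bits of 0..3
p_bit = (popcount[gray[det_i] ^ gray[idx_i]].sum()
         + popcount[gray[det_q] ^ gray[idx_q]].sum()) / (4 * n)

m, q = 4, 16
air_bit = m * (1 - h2(p_bit))              # m parallel BSCs (BICM, Hamming metric)
air_sym = np.log2(q) - h2(p_sym) - p_sym * np.log2(q - 1)  # q-ary symmetric channel

print(f"p_bit={p_bit:.4f}  p_sym={p_sym:.4f}")
print(f"bit-wise AIR={air_bit:.3f}  symbol-wise AIR={air_sym:.3f} bits/symbol")
```

At this operating point the bit-wise AIR comes out above the symbol-wise AIR, consistent with the abstract's conclusion for standard hard decision decoding.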
On performance analysis and implementation issues of iterative decoding for graph based codes
There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained impractical for lack of decoders offering good performance at an acceptable complexity. The invention of turbo codes marks a milestone in channel coding theory in that they achieve near-Shannon-limit performance by using an elegant iterative decoding algorithm. This success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these are low-density parity-check (LDPC) codes and product codes, which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation.
A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution. Two simulation-based methods for approximating the decoding capacity are applied to LDPC codes and their effectiveness is evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but largely neglected for LDPC codes. The density evolution procedure specific to Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted by it.
Two implementation issues in iterative decoding of LDPC codes are studied. One is the design of a quantized decoder; the other is the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. The results indicate that the key point in designing a quantized decoder is to pick a proper dynamic range: quantization loss in terms of bit error rate (BER) performance can be kept remarkably low provided that the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is also obtained, and the robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length.
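The check-node side of a Max-Log-MAP LDPC decoder reduces to the familiar min-sum rule, and a quantized decoder clips LLRs to a dynamic range before rounding to a fixed step. The sketch below illustrates both operations; it is not the dissertation's implementation, and the quantizer parameters (`step`, `llr_max`) are assumed values.

```python
import numpy as np

def quantize(llr, step=0.5, llr_max=7.5):
    """Uniform quantizer: round to multiples of `step`, clip to [-llr_max, llr_max]."""
    return np.clip(np.round(llr / step) * step, -llr_max, llr_max)

def checknode_minsum(msgs):
    """Min-sum (Max-Log-MAP) check-node update.

    For each edge, the outgoing LLR's sign is the product of the signs of
    the *other* incoming messages, and its magnitude is their minimum --
    computed from the two smallest magnitudes overall.
    """
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    mags = np.abs(msgs)
    order = np.argsort(mags)               # order[0] = index of smallest magnitude
    min1, min2 = mags[order[0]], mags[order[1]]
    # the smallest-magnitude edge gets the second minimum; all others get the minimum
    out_mag = np.where(np.arange(len(msgs)) == order[0], min2, min1)
    # prod(signs) * sign_i = product of the other signs, since sign_i^2 = 1
    return np.prod(signs) * signs * out_mag

llrs = quantize(np.array([2.3, -0.7, 5.1, 1.2]))
out = checknode_minsum(llrs)
print(llrs, "->", out)
```

The dynamic-range point from the abstract shows up directly here: if `llr_max` is too small, reliable messages saturate; if it is too large for a fixed word length, the step becomes coarse and weak messages lose resolution.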
The remaining part of this dissertation deals with iterative decoding of product codes. Two issues are investigated. One is improving BER performance by mitigating cycle effects. The other is a parallel decoding structure, which is conceptually better than serial decoding and yields lower decoding latency.
Stochastic Digital Backpropagation with Residual Memory Compensation
Stochastic digital backpropagation (SDBP) is an extension of digital
backpropagation (DBP) and is based on the maximum a posteriori principle. SDBP
takes into account noise from the optical amplifiers in addition to handling
deterministic linear and nonlinear impairments. The decisions in SDBP are taken
on a symbol-by-symbol (SBS) basis, ignoring any residual memory, which may be
present due to non-optimal processing in SDBP. In this paper, we extend SDBP to
account for memory between symbols. In particular, two different methods are
proposed: a Viterbi algorithm (VA) and a decision directed approach. Symbol
error rate (SER) for memory-based SDBP is significantly lower than that of the
previously proposed SBS-SDBP. For inline dispersion-managed links, the VA-SDBP
has up to 10 and 14 times lower SER than DBP for QPSK and 16-QAM, respectively.
Comment: 7 pages, accepted for publication in Journal of Lightwave Technology (JLT)
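The Viterbi extension above exploits residual memory between symbols. As a standalone illustration of the idea (not the paper's SDBP pipeline), the sketch below runs a Viterbi detector over a simple two-tap ISI channel; the taps, BPSK alphabet, and noise level are assumptions made for the example.

```python
import numpy as np

def viterbi_isi(y, symbols, h):
    """Maximum-likelihood sequence detection for y[k] = h[0]*s[k] + h[1]*s[k-1] + noise.

    The trellis state is the index of the previous symbol; branch metrics are
    squared Euclidean distances (the noise variance cancels in the argmin).
    """
    q, n = len(symbols), len(y)
    cost = np.zeros(q)                     # best path metric ending in each state
    back = np.zeros((n, q), dtype=int)     # backpointers for traceback
    for k in range(n):
        new_cost = np.full(q, np.inf)
        for cur in range(q):               # candidate current symbol
            for prev in range(q):          # candidate previous symbol (state)
                pre = h[1] * symbols[prev] if k > 0 else 0.0  # no precursor at k=0
                c = cost[prev] + (y[k] - h[0] * symbols[cur] - pre) ** 2
                if c < new_cost[cur]:
                    new_cost[cur] = c
                    back[k, cur] = prev
        cost = new_cost
    path = [int(cost.argmin())]            # best final state, then trace back
    for k in range(n - 1, 0, -1):
        path.append(int(back[k, path[-1]]))
    return np.array(path[::-1])

rng = np.random.default_rng(1)
symbols = np.array([-1.0, 1.0])            # BPSK alphabet (assumed)
h = [1.0, 0.4]                             # assumed residual-memory taps
s = rng.integers(0, 2, 200)                # transmitted symbol indices
y = (h[0] * symbols[s] + h[1] * np.r_[0.0, symbols[s[:-1]]]
     + 0.2 * rng.standard_normal(200))
det = viterbi_isi(y, symbols, h)
ser = float(np.mean(det != s))
print("SER:", ser)
```

A symbol-by-symbol detector would ignore the `h[1]` term entirely; the Viterbi detector absorbs it into the trellis, which is the same principle the VA-SDBP applies to the residual memory left by SDBP processing.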
Short Codes with Mismatched Channel State Information: A Case Study
The rising interest in applications requiring the transmission of small
amounts of data has recently led to the development of accurate performance
bounds and of powerful channel codes for the transmission of short-data packets
over the AWGN channel. Much less is known about the interaction between error
control coding and channel estimation at short blocks when transmitting over
channels with states (e.g., fading channels, phase-noise channels, etc.) for
the setup where no a priori channel state information (CSI) is available at the
transmitter and the receiver. In this paper, we use the mismatched-decoding
framework to characterize the fundamental tradeoff occurring in the
transmission of short data packets over an AWGN channel with unknown gain that
stays constant over the packet. Our analysis for this simplified setup aims at
showing the potential of mismatched decoding as a tool to design and analyze
transmission strategies for short blocks. We focus on a pragmatic approach
where the transmission frame contains a codeword as well as a preamble that is
used to estimate the channel (the codeword symbols are not used for channel
estimation). Achievability and converse bounds on the block error probability
achievable by this approach are provided and compared with simulation results
for schemes employing short low-density parity-check codes. Our bounds turn out
to predict accurately the optimal trade-off between the preamble length and the
redundancy introduced by the channel code.
Comment: 5 pages, 5 figures, to appear in Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2017)
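The pragmatic scheme described in the abstract can be sketched in a few lines: estimate the unknown constant gain from the preamble by least squares, then plug the estimate into a Euclidean (mismatched) decoding metric over the codebook. The codebook, block lengths, and SNR below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

n_p, n_c, M = 16, 64, 128                 # preamble length, codeword length, codebook size
preamble = np.ones(n_p)                   # known pilot symbols
codebook = rng.choice([-1.0, 1.0], size=(M, n_c))  # random binary codebook (assumed)

g_true = 0.8                              # unknown gain, constant over the packet
sigma = 0.3
tx_idx = 5                                # transmitted codeword index
y_p = g_true * preamble + sigma * rng.standard_normal(n_p)
y_c = g_true * codebook[tx_idx] + sigma * rng.standard_normal(n_c)

# Least-squares gain estimate from the preamble only
# (codeword symbols deliberately not used, matching the scheme in the abstract)
g_hat = (preamble @ y_p) / (preamble @ preamble)

# Mismatched decoding: treat g_hat as if it were the true gain
metrics = np.sum((y_c[None, :] - g_hat * codebook) ** 2, axis=1)
dec_idx = int(metrics.argmin())
print(f"g_hat = {g_hat:.3f}, decoded index = {dec_idx}")
```

The preamble-length trade-off analyzed in the paper is visible in `g_hat`: its variance shrinks as `n_p` grows, but for a fixed frame length every pilot symbol spent on estimation is one fewer symbol available for coded redundancy.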