Constellation Shaping for WDM systems using 256QAM/1024QAM with Probabilistic Optimization
In this paper, probabilistic shaping is numerically and experimentally
investigated for increasing the transmission reach of a wavelength division
multiplexed (WDM) optical communication system employing quadrature amplitude
modulation (QAM). An optimized probability mass function (PMF) of the QAM
symbols is first found from a modified Blahut-Arimoto algorithm for the optical
channel. A turbo coded bit interleaved coded modulation system is then applied,
which relies on many-to-one labeling to achieve the desired PMF, thereby
achieving shaping gain. Pilot symbols at a rate of at most 2% are used for
synchronization and equalization, making it possible to receive input
constellations as large as 1024QAM. The system is evaluated experimentally on a
10 GBaud, 5-channel WDM setup. The maximum system reach is increased w.r.t.
standard 1024QAM by 20% at an input data rate of 4.65 bits/symbol and by up to 75% at
5.46 bits/symbol. It is shown that rate adaptation does not require changing
the modulation format. The performance of the proposed 1024QAM shaped system is
validated on all 5 channels of the WDM signal for selected distances and rates.
Finally, it was shown via EXIT charts and BER analysis that iterative
demapping, while generally beneficial to the system, is not a requirement for
achieving the shaping gain.
Comment: 10 pages, 12 figures, Journal of Lightwave Technology, 201
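The optimized PMF above comes from a modified Blahut-Arimoto algorithm tailored to the optical channel. As a rough illustration only, the textbook Blahut-Arimoto iteration for a small discrete memoryless channel can be sketched in a few lines; the `blahut_arimoto` helper below is hypothetical and none of the paper's optical-channel modifications are modeled:

```python
import math

def blahut_arimoto(W, iters=200):
    """Capacity (bits/use) and capacity-achieving input PMF of a
    discrete memoryless channel with transition matrix W[x][y] = P(y|x).
    Plain textbook iteration, not the paper's modified variant."""
    nx, ny = len(W), len(W[0])
    r = [1.0 / nx] * nx                       # start from a uniform input PMF
    for _ in range(iters):
        # Output distribution induced by the current input PMF.
        qy = [sum(r[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # Update: r(x) <- r(x) * exp(D(W(.|x) || qy)), then normalize.
        s = []
        for x in range(nx):
            e = sum(W[x][y] * math.log(r[x] * W[x][y] / qy[y])
                    for y in range(ny) if W[x][y] > 0)
            s.append(math.exp(e))
        z = sum(s)
        r = [v / z for v in s]
    # Mutual information achieved by the final PMF, in bits.
    qy = [sum(r[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    cap = sum(r[x] * W[x][y] * math.log2(W[x][y] / qy[y])
              for x in range(nx) for y in range(ny) if W[x][y] > 0)
    return cap, r

# Sanity check on a binary symmetric channel with crossover 0.1,
# whose capacity is 1 - H2(0.1), roughly 0.531 bits.
cap, pmf = blahut_arimoto([[0.9, 0.1], [0.1, 0.9]])
```

For shaping, the same fixed-point structure is run with the QAM symbol alphabet and a channel model in place of the toy matrix.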
On the BICM Capacity
Optimal binary labelings, input distributions, and input alphabets are
analyzed for the so-called bit-interleaved coded modulation (BICM) capacity,
paying special attention to the low signal-to-noise ratio (SNR) regime. For
8-ary pulse amplitude modulation (PAM) and for 0.75 bit/symbol, the folded
binary code results in a higher capacity than the binary reflected Gray code
(BRGC) and the natural binary code (NBC). The 1 dB gap between the additive
white Gaussian noise (AWGN) capacity and the BICM capacity with the BRGC can be
almost completely removed if the input symbol distribution is properly
selected. First-order asymptotics of the BICM capacity for arbitrary input
alphabets and distributions, dimensions, mean, variance, and binary labeling
are developed. These asymptotics are used to define first-order optimal (FOO)
constellations for BICM, i.e. constellations that make BICM achieve the Shannon
limit of -1.59 dB. It is shown that the E_b/N_0 required for reliable
transmission at asymptotically low rates in BICM can be as high as infinity,
that for uniform input distributions and 8-PAM there are only 72 classes of
binary labelings with a different first-order asymptotic behavior, and that
this number is reduced to only 26 for 8-ary phase shift keying (PSK). A general
answer to the question of FOO constellations for BICM is also given: using the
Hadamard transform, it is found that for uniform input distributions, a
constellation for BICM is FOO if and only if it is a linear projection of a
hypercube. A constellation based on PAM or quadrature amplitude modulation
input alphabets is FOO if and only if they are labeled by the NBC; if the
constellation is based on PSK input alphabets instead, it can never be FOO if
the input alphabet has more than four points, regardless of the labeling.
Comment: Submitted to the IEEE Transactions on Information Theory
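Two of the labelings compared above are easy to generate programmatically. As a small self-contained sketch (the helper names are ours, not the paper's), the BRGC is the classic `i XOR (i >> 1)` construction and the NBC is the index itself:

```python
def brgc(i):
    """Binary reflected Gray code: adjacent indices differ in one bit."""
    return i ^ (i >> 1)

def nbc(i):
    """Natural binary code: the index itself."""
    return i

# The two labelings of the 8 levels of 8-PAM, MSB first.
gray_labels = [format(brgc(i), "03b") for i in range(8)]
nbc_labels = [format(nbc(i), "03b") for i in range(8)]
```

The Gray property (one bit flip between neighboring amplitude levels) is what makes the BRGC the usual default for BICM at moderate-to-high SNR, while the paper shows other labelings can win at low SNR.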
Replacing the Soft FEC Limit Paradigm in the Design of Optical Communication Systems
The FEC limit paradigm is the prevalent practice for designing optical
communication systems to attain a certain bit-error rate (BER) without forward
error correction (FEC). This practice assumes that there is an FEC code that
will reduce the BER after decoding to the desired level. In this paper, we
challenge this practice and show that the concept of a channel-independent FEC
limit is invalid for soft-decision bit-wise decoding. It is shown that for low
code rates and high order modulation formats, the use of the soft FEC limit
paradigm can underestimate the spectral efficiencies by up to 20%. A better
predictor for the BER after decoding is the generalized mutual information,
which is shown to give consistent post-FEC BER predictions across different
channel conditions and modulation formats. Extensive optical full-field
simulations and experiments are carried out in both the linear and nonlinear
transmission regimes to confirm the theoretical analysis.
Ultra-Sparse Non-Binary LDPC Codes for Probabilistic Amplitude Shaping
This work shows how non-binary low-density parity-check codes over GF()
can be combined with probabilistic amplitude shaping (PAS) (Böcherer et al.,
2015), which combines forward-error correction with non-uniform signaling for
power-efficient communication. Ultra-sparse low-density parity-check codes over
GF(64) and GF(256) gain 0.6 dB in power efficiency over state-of-the-art binary
LDPC codes at a spectral efficiency of 1.5 bits per channel use and a
blocklength of 576 bits. The simulation results are compared to finite length
coding bounds and complemented by density evolution analysis.
Comment: Accepted for Globecom 201
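In PAS, the amplitudes of an ASK/QAM constellation are drawn from a shaped (typically Maxwell-Boltzmann) distribution while uniform sign bits absorb the FEC parity. A minimal numerical sketch of the shaped amplitude PMF and the resulting spectral efficiency, under our assumption that the parity overhead (1 - Rc)*m does not exceed the one sign bit per amplitude (helper names are ours):

```python
import math

def mb_amplitudes(amps, nu):
    """Maxwell-Boltzmann PMF P(a) proportional to exp(-nu * a^2)
    over the positive ASK amplitude levels."""
    w = [math.exp(-nu * a * a) for a in amps]
    z = sum(w)
    return [v / z for v in w]

def pas_rate(p_amp, code_rate, m):
    """PAS spectral efficiency in bits per 1-D channel use:
    H(A) from the shaped amplitudes plus the uniform sign bit,
    minus the (1 - Rc)*m parity bits carried by the signs.
    Valid only when (1 - Rc) * m <= 1."""
    h = -sum(p * math.log2(p) for p in p_amp if p > 0)
    return h + 1 - (1 - code_rate) * m

amps = [1, 3, 5, 7]                 # 8-ASK amplitude levels
p = mb_amplitudes(amps, 0.05)       # low amplitudes are favored
rate = pas_rate(p, code_rate=5/6, m=3)
```

Adjusting the shaping parameter `nu` and the code rate trades spectral efficiency against power efficiency without changing the constellation, which is the rate-adaptation property PAS is valued for.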
Throughput-based Design for Polar Coded-Modulation
Typically, forward error correction (FEC) codes are designed based on the
minimization of the error rate for a given code rate. However, for applications
that incorporate hybrid automatic repeat request (HARQ) protocol and adaptive
modulation and coding, the throughput is a more important performance metric
than the error rate. Polar codes, a new class of FEC codes with simple rate
matching, can be optimized efficiently for maximization of the throughput. In
this paper, we aim to design HARQ schemes using multilevel polar
coded-modulation (MLPCM). Thus, we first develop a method to determine a
set-partitioning based bit-to-symbol mapping for high order QAM constellations.
We simplify the LLR estimation of set-partitioned QAM constellations for a
multistage decoder, and we introduce a set of algorithms to design
throughput-maximizing MLPCM for successive cancellation decoding (SCD).
These codes are specifically useful for non-combining (NC) and Chase-combining
(CC) HARQ protocols. Furthermore, since optimized codes for SCD are not optimal
for SC list decoders (SCLD), we propose a rate matching algorithm to find the
best rate for SCLD while using the polar codes optimized for SCD. The resulting
codes provide throughput close to the capacity with low decoding complexity
when used with NC or CC HARQ.
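The set-partitioning idea behind the bit-to-symbol mapping above is that fixing label bits level by level should double the minimum intra-subset Euclidean distance at each step. A 1-D illustration on 8-ASK with a natural-binary labeling (the paper designs such mappings for high-order 2-D QAM; this toy check is ours):

```python
def min_dist(points):
    """Smallest pairwise distance within a subset of amplitudes."""
    return min(abs(a - b) for i, a in enumerate(points)
               for b in points[i + 1:])

# 8-ASK: label index i maps to amplitude 2*i - 7 (natural binary labeling).
amp = {i: 2 * i - 7 for i in range(8)}

# Ungerboeck-style set partitioning: fix the least significant label
# bits first and track the minimum intra-subset distance per level.
d = []
for level in range(3):
    subsets = {}
    for i, a in amp.items():
        subsets.setdefault(i & ((1 << level) - 1), []).append(a)
    d.append(min(min_dist(s) for s in subsets.values() if len(s) > 1))
# d doubles at every partition level: [2, 4, 8]
```

This distance doubling is what makes the per-level channels progressively more reliable, so each level of the multilevel polar code can be designed for a cleaner effective channel.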
Turbo EP-based Equalization: a Filter-Type Implementation
This manuscript has been submitted to Transactions on Communications on
September 7, 2017; revised on January 10, 2018 and March 27, 2018; and accepted
on April 25, 2018.
We propose a novel filter-type equalizer to improve the solution of the
linear minimum-mean squared-error (LMMSE) turbo equalizer, with computational
complexity constrained to be quadratic in the filter length. When high-order
modulations and/or channels with large memory are used, the optimal BCJR equalizer is
unavailable due to its computational complexity. In this scenario, the
filter-type LMMSE turbo equalization exhibits a good performance compared to
other approximations. In this paper, we show that this solution can be
significantly improved by using expectation propagation (EP) in the estimation
of the a posteriori probabilities. First, it yields a more accurate estimation
of the extrinsic distribution to be sent to the channel decoder. Second,
compared to other EP-based solutions, the computational complexity of the
proposed solution is constrained to be quadratic in the length of the finite
impulse response (FIR). In addition, we review previous EP-based turbo
equalization implementations. Instead of considering default uniform priors, we
exploit the outputs of the decoder. Some simulation results are included to
show that this new EP-based filter remarkably outperforms the turbo approach of
previous versions of the EP algorithm and also improves the LMMSE solution,
with and without turbo equalization.
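The core EP operation used in such equalizers is a moment-matching step at each symbol: tilt the Gaussian cavity message with the discrete symbol prior, match mean and variance, and divide the cavity back out to obtain an extrinsic Gaussian. A minimal scalar sketch under our own naming (real implementations embed this step inside the FIR LMMSE filter recursion):

```python
import math

def ep_extrinsic(sym, prior, mu_c, v_c):
    """One EP moment-matching step on a symbol node.

    sym, prior  : discrete constellation points and their prior PMF
    mu_c, v_c   : mean/variance of the Gaussian cavity message
    Returns the extrinsic Gaussian (mu_e, v_e) after dividing the
    moment-matched tilted distribution by the cavity."""
    # Tilted distribution p(s) proportional to prior(s) * N(s; mu_c, v_c).
    w = [p * math.exp(-(s - mu_c) ** 2 / (2 * v_c))
         for s, p in zip(sym, prior)]
    z = sum(w)
    w = [x / z for x in w]
    # Moment matching: mean and variance of the tilted distribution.
    mu_t = sum(p * s for p, s in zip(w, sym))
    v_t = sum(p * (s - mu_t) ** 2 for p, s in zip(w, sym))
    v_t = max(v_t, 1e-9)                  # guard against zero variance
    # Gaussian division: extrinsic = tilted / cavity (precision domain).
    v_e = 1.0 / (1.0 / v_t - 1.0 / v_c)
    mu_e = v_e * (mu_t / v_t - mu_c / v_c)
    return mu_e, v_e

# BPSK symbols, uniform prior, cavity message leaning towards +1.
mu_e, v_e = ep_extrinsic([-1.0, 1.0], [0.5, 0.5], mu_c=0.8, v_c=0.5)
```

Because the discrete prior sharpens the tilted distribution, the extrinsic message is more informative than the plain LMMSE Gaussian approximation, which is the source of the gains the paper reports.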