Near-capacity dirty-paper code design: a source-channel coding approach
This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
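The modulo loss discussed above can be illustrated with a toy scalar version of modulo-lattice dirty-paper coding (Tomlinson-Harashima-style precoding with Costa's MMSE scaling factor). This is far simpler than the paper's TCQ/IRA construction; function names and parameter values are illustrative only:

```python
import numpy as np

def mod_delta(x, delta):
    """Symmetric modulo operation onto the interval [-delta/2, delta/2)."""
    return x - delta * np.floor(x / delta + 0.5)

def dpc_scalar_demo(v, s, snr_db, delta=8.0, rng=None):
    """Scalar modulo-lattice dirty-paper transmission sketch.

    v: intended symbols; s: interference known at the transmitter;
    alpha: Costa's MMSE inflation factor snr/(1+snr).
    """
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    alpha = snr / (1 + snr)                  # MMSE scaling
    x = mod_delta(v - alpha * s, delta)      # pre-subtract scaled interference
    n = rng.normal(0, np.sqrt(np.mean(x**2) / snr), size=x.shape)
    y = x + s + n                            # channel adds interference + noise
    return mod_delta(alpha * y, delta)       # receiver scales and folds back
```

At high SNR the receiver output is close to v despite the arbitrarily strong interference s, since the effective noise is only alpha*n - (1-alpha)*x; the residual (1-alpha)*x term is the scalar analogue of the modulo loss.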
DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models
The work identifies the first general, explicit, and non-random MIMO
encoder-decoder structures that guarantee optimality with respect to the
diversity-multiplexing tradeoff (DMT), without employing a computationally
expensive maximum-likelihood (ML) receiver. Specifically, the work establishes
the DMT optimality of a class of regularized lattice decoders, and more
importantly the DMT optimality of their lattice-reduction (LR)-aided linear
counterparts. The results hold for all channel statistics, for all channel
dimensions, and most interestingly, irrespective of the particular lattice-code
applied. As a special case, it is established that the LLL-based LR-aided
linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal
decoding of any lattice code at a worst-case complexity that grows at most
linearly in the data rate. This represents a fundamental reduction in the
decoding complexity when compared to ML decoding whose complexity is generally
exponential in rate.
The results' generality makes them applicable to a plethora of pertinent
communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI,
cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality
of the LR-aided linear decoder is guaranteed. The adopted approach yields insight into, and motivates further study of, joint transceiver designs with an improved SNR gap to ML decoding.

Comment: 16 pages, 1 figure (3 subfigures), submitted to the IEEE Transactions on Information Theory
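To make the lattice-reduction step concrete, here is a textbook LLL reduction sketch (delta = 0.75). This is generic LLL, not the paper's MMSE-GDFE LR-aided decoder, and it recomputes the Gram-Schmidt data naively for clarity rather than efficiency:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B; returns B* and mu."""
    n = B.shape[0]
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i].astype(float)
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of an integer basis given as rows of B."""
    B = B.copy().astype(np.int64)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size reduction
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1                             # Lovasz condition holds
        else:
            B[[k - 1, k]] = B[[k, k - 1]]      # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

In an LR-aided linear receiver, the reduced basis (and the unimodular change of basis it implies) is what lets a simple linear or DFE front end approximate lattice decoding.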
Constellation Shaping for WDM systems using 256QAM/1024QAM with Probabilistic Optimization
In this paper, probabilistic shaping is numerically and experimentally
investigated for increasing the transmission reach of wavelength division
multiplexed (WDM) optical communication system employing quadrature amplitude
modulation (QAM). An optimized probability mass function (PMF) of the QAM
symbols is first found from a modified Blahut-Arimoto algorithm for the optical
channel. A turbo coded bit interleaved coded modulation system is then applied,
which relies on many-to-one labeling to achieve the desired PMF, thereby
achieving shaping gain. Pilot symbols at a rate of at most 2% are used for synchronization and equalization, making it possible to receive input constellations as large as 1024QAM. The system is evaluated experimentally on a 10 GBaud, 5-channel WDM setup. The maximum system reach is increased w.r.t. standard 1024QAM by 20% at an input data rate of 4.65 bits/symbol and up to 75% at
5.46 bits/symbol. It is shown that rate adaptation does not require changing of
the modulation format. The performance of the proposed 1024QAM shaped system is
validated on all 5 channels of the WDM signal for selected distances and rates.
Finally, it is shown via EXIT charts and BER analysis that iterative demapping, while generally beneficial to the system, is not a requirement for achieving the shaping gain.

Comment: 10 pages, 12 figures, Journal of Lightwave Technology, 201
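The paper's optimized PMF comes from a modified Blahut-Arimoto algorithm for the optical channel. As a simpler stand-in, the Maxwell-Boltzmann family P(a) proportional to exp(-nu*a^2), widely used for shaping on the AWGN channel, can be tuned by bisection to hit a target entropy; the sketch below uses illustrative function names:

```python
import numpy as np

def mb_pmf(amplitudes, nu):
    """Maxwell-Boltzmann PMF P(a) proportional to exp(-nu * a^2)."""
    w = np.exp(-nu * amplitudes ** 2)
    return w / w.sum()

def pmf_for_entropy(amplitudes, target_bits, tol=1e-9):
    """Bisect over nu so the Maxwell-Boltzmann PMF has the requested entropy."""
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # larger nu -> more peaked PMF -> lower entropy
        if entropy(mb_pmf(amplitudes, mid)) > target_bits:
            lo = mid
        else:
            hi = mid
    return mb_pmf(amplitudes, 0.5 * (lo + hi))
```

The shaped PMF carries the same entropy (rate) as requested while spending less average symbol energy than the uniform distribution, which is the source of the shaping gain.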
Ultra-Sparse Non-Binary LDPC Codes for Probabilistic Amplitude Shaping
This work shows how non-binary low-density parity-check codes over GF(q)
can be combined with probabilistic amplitude shaping (PAS) (Böcherer et al.,
2015), which combines forward-error correction with non-uniform signaling for
power-efficient communication. Ultra-sparse low-density parity-check codes over
GF(64) and GF(256) gain 0.6 dB in power efficiency over state-of-the-art binary
LDPC codes at a spectral efficiency of 1.5 bits per channel use and a
blocklength of 576 bits. The simulation results are compared to finite length
coding bounds and complemented by density evolution analysis.

Comment: Accepted for Globecom 201
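Decoding LDPC codes over GF(64) or GF(256) requires fast finite-field arithmetic; a common implementation uses log/antilog tables over a primitive element. A minimal sketch for GF(2^m), assuming the standard primitive polynomial x^6 + x + 1 for GF(64):

```python
def gf_tables(m, prim_poly):
    """Build exp/log tables for GF(2^m) with the given primitive polynomial.

    exp[i] is alpha^i (alpha = 2, the polynomial x); log is its inverse.
    The exp table is doubled so products of logs need no modular reduction.
    """
    q = 1 << m
    exp = [0] * (2 * q)
    log = [0] * q
    x = 1
    for i in range(q - 1):
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & q:               # degree-m overflow: reduce mod prim_poly
            x ^= prim_poly
    for i in range(q - 1, 2 * (q - 1)):
        exp[i] = exp[i - (q - 1)]
    return exp, log

def gf_mul(a, b, exp, log, q):
    """Multiply in GF(q) via log/antilog tables; zero absorbs."""
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]
```

Check-node processing in a non-binary LDPC decoder multiplies messages by such field coefficients, so table lookups like these dominate the inner loop.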
Cyclic division algebras: a tool for space-time coding
Multiple antennas at both the transmitter and receiver ends of a wireless digital transmission channel may increase both data rate and reliability. Reliable high-rate transmission over such channels can only be achieved through Space-Time coding. Rank and determinant code design criteria have been proposed to enhance diversity and coding gain. The special case of the full-diversity criterion requires that the difference of any two distinct codewords has full rank.
Extensive work has been done on Space-Time coding, aiming at finding fully diverse codes with high rate. Division algebras have been proposed as a new tool for constructing Space-Time codes, since they are non-commutative algebras that naturally yield linear fully diverse codes. Their algebraic properties can thus be further exploited to improve the design of good codes.
The aim of this work is to provide a tutorial introduction to the algebraic tools involved in the design of codes based on cyclic division algebras. The different design criteria involved will be illustrated, including the constellation shaping, the information lossless property, the non-vanishing determinant property, and the diversity-multiplexing trade-off. The final target is to give the complete mathematical background underlying the construction of the Golden code and the other Perfect Space-Time block codes.
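As a concrete taste of the construction, the 2x2 Golden code can be written down directly from the golden ratio theta = (1+sqrt(5))/2 and its conjugate. The sketch below builds codewords and numerically spot-checks full diversity (the non-vanishing determinant of codeword differences) on random QPSK symbols; it is a numerical check, not a proof:

```python
import numpy as np

SQRT5 = np.sqrt(5.0)
THETA = (1 + SQRT5) / 2        # golden ratio
THETA_C = (1 - SQRT5) / 2      # its algebraic conjugate
ALPHA = 1 + 1j * (1 - THETA)
ALPHA_C = 1 + 1j * (1 - THETA_C)

def golden_codeword(a, b, c, d):
    """2x2 Golden code codeword for QAM information symbols a, b, c, d."""
    return (1 / SQRT5) * np.array(
        [[ALPHA * (a + b * THETA),            ALPHA * (c + d * THETA)],
         [1j * ALPHA_C * (c + d * THETA_C),   ALPHA_C * (a + b * THETA_C)]])
```

Because the code comes from a cyclic division algebra, the determinant of the difference of any two distinct codewords is bounded away from zero regardless of the constellation size, which is exactly the non-vanishing determinant property the abstract refers to.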
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
channel (AWGN), the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS) and shell mapping (SM), are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize energy efficiency, is shown to have the minimum rate loss among all. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspective of latency, storage and computations.

Comment: 18 pages, 10 figures
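The rate loss driving the comparison above is easy to compute for a constant-composition code: it is the gap between the entropy of the empirical distribution and the per-symbol logarithm of the number of permutations of the fixed composition. A minimal sketch (the function name is illustrative):

```python
from math import lgamma, log

LN2 = log(2)

def ccdm_rate_loss(counts):
    """Per-symbol rate loss of a constant-composition (CCDM-style) code.

    counts: number of occurrences of each symbol in the blocklength-n output.
    Returns H(P) - (1/n) * log2(n! / prod(counts_i!)), in bits per symbol.
    """
    n = sum(counts)
    h = -sum((c / n) * log(c / n) for c in counts if c) / LN2
    # log2 of the multinomial coefficient via log-gamma, to avoid overflow
    log2_mult = (lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)) / LN2
    return h - log2_mult / n
```

Scaling the same composition to a longer blocklength shrinks the loss toward zero, which is why CCDM is asymptotically optimal yet penalized at the short blocklengths this paper targets.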