Nearest Neighbour Decoding and Pilot-Aided Channel Estimation in Stationary Gaussian Flat-Fading Channels
We study the information rates of non-coherent, stationary, Gaussian,
multiple-input multiple-output (MIMO) flat-fading channels that are achievable
with nearest neighbour decoding and pilot-aided channel estimation. In
particular, we analyse the behaviour of these achievable rates in the limit as
the signal-to-noise ratio (SNR) tends to infinity. We demonstrate that nearest
neighbour decoding with pilot-aided channel estimation achieves the capacity
pre-log - defined as the limiting ratio of the capacity to the logarithm of the
SNR as the SNR tends to infinity - of non-coherent multiple-input single-output
(MISO) flat-fading channels, and that it achieves the best known lower bound on
the capacity pre-log of non-coherent MIMO flat-fading
channels.

Comment: 5 pages, 1 figure. To be presented at the IEEE International
Symposium on Information Theory (ISIT), St. Petersburg, Russia, 2011.
Replaced with the version that will appear in the proceedings.
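The capacity pre-log defined in the abstract can be checked numerically for a simple reference channel. A minimal sketch, assuming a coherent AWGN-style capacity expression as the sanity check (the function name and the 100 dB evaluation point are illustrative choices, not from the paper):

```python
import math

def prelog_estimate(capacity, snr_db=100.0):
    """Numerically estimate the capacity pre-log, lim C(SNR) / log2(SNR),
    by evaluating the ratio at a very high SNR."""
    snr = 10 ** (snr_db / 10)
    return capacity(snr) / math.log2(snr)

# Coherent capacity log2(1 + SNR) has pre-log 1.
c_coherent = lambda snr: math.log2(1 + snr)
print(round(prelog_estimate(c_coherent), 3))  # → 1.0
```

For non-coherent channels the pre-log is strictly smaller than in the coherent case; the paper's contribution concerns how close pilot-aided schemes get to it.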
Bit-interleaved coded modulation in the wideband regime
The wideband regime of bit-interleaved coded modulation (BICM) in Gaussian
channels is studied. The Taylor expansion of the coded modulation capacity for
generic signal constellations at low signal-to-noise ratio (SNR) is derived and
used to determine the corresponding expansion for the BICM capacity. Simple
formulas for the minimum energy per bit and the wideband slope are given. BICM
is found to be suboptimal in the sense that its minimum energy per bit can be
larger than the corresponding value for coded modulation schemes. The minimum
energy per bit using standard Gray mapping on M-PAM or M^2-QAM is given by a
simple formula and shown to approach -0.34 dB as M increases. Using the low SNR
expansion, a general trade-off between power and bandwidth in the wideband
regime is used to show how a power loss can be traded off against a bandwidth
gain.

Comment: Submitted to the IEEE Transactions on Information Theory.
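The minimum energy per bit discussed above follows from the low-SNR slope of the capacity: in Verdú's wideband framework, Eb/N0_min = ln 2 / Ċ(0), where Ċ(0) is the derivative of the capacity (in nats) at zero SNR. A small sketch using a finite-difference approximation, with the Gaussian-input AWGN capacity as a sanity check (the step size is an illustrative choice):

```python
import math

def min_energy_per_bit_db(capacity_nats, eps=1e-6):
    """Minimum Eb/N0 in dB from Eb/N0_min = ln(2) / C'(0), where C'(0) is
    approximated by the finite difference C(eps) / eps as SNR -> 0."""
    c_dot0 = capacity_nats(eps) / eps
    return 10 * math.log10(math.log(2) / c_dot0)

# AWGN capacity ln(1 + SNR) nats: recovers the classical -1.59 dB limit.
awgn = lambda snr: math.log(1 + snr)
print(round(min_energy_per_bit_db(awgn), 2))  # → -1.59
```

Plugging in a BICM capacity expression for a given constellation and labeling would, per the abstract, yield values above this limit, such as the -0.34 dB figure for Gray-mapped PAM/QAM.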
Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective
We revisit the information-theoretic analysis of bit-interleaved coded
modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The
mismatched decoding model is well-defined for finite, yet arbitrary, block
lengths, and naturally captures the channel memory among the bits belonging to
the same symbol. We give two independent proofs of the achievability of the
BICM capacity calculated by Caire et al. where BICM was modeled as a set of
independent parallel binary-input channels whose output is the bitwise
log-likelihood ratio. Our first achievability proof uses typical sequences, and
shows that due to the random coding construction, the interleaver is not
required. The second proof is based on the random coding error exponents with
mismatched decoding, where the largest achievable rate is the generalized
mutual information. We show that the generalized mutual information of the
mismatched decoder coincides with the infinite-interleaver BICM capacity. We
also show that the error exponent - and hence the cutoff rate - of the BICM
mismatched decoder is upper bounded by that of coded modulation and may thus be
lower than that of the infinite-interleaver model. We also consider the mutual
information appearing in the analysis of iterative decoding of BICM with EXIT
charts. We show that the corresponding symbol metric has knowledge of the
transmitted symbol and the EXIT mutual information admits a representation as a
pseudo-generalized mutual information, which is in general not achievable. A
different symbol decoding metric, for which the extrinsic side information
refers to the hypothesized symbol, induces a generalized mutual information
lower than the coded modulation capacity.

Comment: submitted to the IEEE Transactions on Information Theory. Conference
version in 2008 IEEE International Symposium on Information Theory, Toronto,
Canada, July 2008.
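The generalized mutual information (GMI) of a mismatched decoder can be estimated by Monte Carlo. The sketch below uses BPSK over AWGN with a matched metric, evaluating the GMI expression at a fixed s = 1 (a lower bound on the supremum over s > 0); the noise level and sample count are illustrative choices, not from the paper:

```python
import math
import random

def gmi_bits(metric, xs, sigma, s=1.0, n=100_000, seed=0):
    """Monte Carlo estimate (bits/symbol) of the GMI at a fixed s for a
    uniform input X over xs, AWGN output Y = X + N, and decoding metric
    q(x, y): E[ log2( q(X,Y)^s / E_X'[ q(X',Y)^s ] ) ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.choice(xs)
        y = x + rng.gauss(0.0, sigma)
        num = metric(x, y) ** s
        den = sum(metric(xp, y) ** s for xp in xs) / len(xs)
        total += math.log2(num / den)
    return total / n

# Matched (ML) metric for AWGN: here the GMI equals the mutual information.
sigma = 0.8
q = lambda x, y: math.exp(-(y - x) ** 2 / (2 * sigma ** 2))
print(round(gmi_bits(q, [-1.0, 1.0], sigma), 2))
```

Replacing `q` with a bitwise-LLR product metric would model the BICM decoder of the abstract; with the matched metric the estimate approaches the BPSK channel capacity.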
Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels
We solve the problem of designing powerful low-density parity-check (LDPC)
codes with iterative decoding for the block-fading channel. We first study the
case of maximum-likelihood decoding, and show that the design criterion is
rather straightforward. Unfortunately, optimal constructions for
maximum-likelihood decoding do not perform well under iterative decoding. To
overcome this limitation, we then introduce a new family of full-diversity LDPC
codes that exhibit near-outage-limit performance under iterative decoding for
all block-lengths. This family competes with multiplexed parallel turbo codes
suitable for nonergodic channels and recently reported in the literature.

Comment: Submitted to the IEEE Transactions on Information Theory.
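The outage limit that these codes approach can be estimated by simple Monte Carlo over the fading blocks. A sketch for Rayleigh block fading (the SNR, rate, and sample count are illustrative choices):

```python
import math
import random

def outage_prob(snr_db, rate, blocks, n=100_000, seed=1):
    """Monte Carlo outage probability for a Rayleigh block-fading channel
    with `blocks` i.i.d. fading blocks per codeword: outage occurs when the
    average mutual information across blocks falls below the target rate."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(n):
        # |h|^2 is exponential(1) under Rayleigh fading.
        mi = sum(math.log2(1 + snr * rng.expovariate(1.0))
                 for _ in range(blocks))
        if mi / blocks < rate:
            outages += 1
    return outages / n

# More independent blocks -> more diversity -> lower outage probability.
p1 = outage_prob(10.0, 1.0, blocks=1)
p4 = outage_prob(10.0, 1.0, blocks=4)
print(p1 > p4)  # → True
```

A full-diversity code is one whose word-error rate decays with SNR at the same slope as this outage probability.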
A Single-Letter Upper Bound to the Mismatch Capacity
We derive a single-letter upper bound to the mismatched-decoding capacity for
discrete memoryless channels. The bound is expressed as the mutual information
of a transformation of the channel, such that a maximum-likelihood decoding
error on the translated channel implies a mismatched-decoding error in the
original channel. In particular, a strong converse is shown to hold for this
upper bound: if the rate exceeds the upper bound, the probability of error
tends to one exponentially as the block length tends to infinity. We also show
that the underlying optimization problem is a convex-concave problem and that
an efficient iterative algorithm converges to the optimal solution. In
addition, we show that, unlike achievable rates in the literature, the
multiletter version of the bound does not improve. A number of examples are
discussed throughout the paper.

This work was supported by the European Research Council under Grant 725411 and by the Spanish Ministry of Economy and Competitiveness under Grant TEC2016-78434-C3-1-R.
Generalized Random Gilbert-Varshamov Codes
We introduce a random coding technique for transmission over discrete memoryless channels, reminiscent of the basic construction attaining the Gilbert-Varshamov bound for codes in Hamming spaces. The code construction is based on drawing codewords recursively from a fixed type class, in such a way that a newly generated codeword must be at a certain minimum distance from all previously chosen codewords, according to some generic distance function. We derive an achievable error exponent for this construction and prove its tightness with respect to the ensemble average. We show that the exponent recovers the Csiszár and Körner exponent as a special case, which is known to be at least as high as both the random-coding and expurgated exponents, and we establish the optimality of certain choices of the distance function. In addition, for additive distances and decoding metrics, we present an equivalent dual expression, along with a generalization to infinite alphabets via cost-constrained random coding.
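The recursive construction described above can be sketched in a few lines. This toy version draws i.i.d. uniform binary candidates rather than sampling from a fixed type class, and uses the Hamming distance as the generic distance function; both are simplifications of the paper's setting, and all parameter values are illustrative:

```python
import random

def grgv_codebook(n, alphabet, min_dist, num_words, dist,
                  max_tries=10_000, seed=0):
    """Sketch of a generalized random Gilbert-Varshamov construction:
    draw candidate codewords at random and accept a candidate only if it is
    at distance >= min_dist from every previously chosen codeword."""
    rng = random.Random(seed)
    code = []
    tries = 0
    while len(code) < num_words and tries < max_tries:
        tries += 1
        cand = tuple(rng.choice(alphabet) for _ in range(n))
        if all(dist(cand, c) >= min_dist for c in code):
            code.append(cand)
    return code

hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
code = grgv_codebook(n=8, alphabet=(0, 1), min_dist=3, num_words=4,
                     dist=hamming)
print(len(code), min(hamming(a, b) for a in code for b in code if a != b))
```

By construction every pair of accepted codewords is at Hamming distance at least `min_dist`, mirroring the minimum-distance guarantee of the classical Gilbert-Varshamov argument.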