
    Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity

    The work identifies the first lattice decoding solution that achieves, in the general outage-limited MIMO setting and in the high-rate and high-SNR limit, both a vanishing gap to the error performance of the (DMT-optimal) exact solution of preprocessed lattice decoding and a computational complexity that is subexponential in the number of codeword bits. The proposed solution employs lattice-reduction (LR)-aided regularized (lattice) sphere decoding together with properly chosen timeout policies. These performance and complexity guarantees hold for most MIMO scenarios, all reasonable fading statistics, all channel dimensions, and all full-rate lattice codes. In sharp contrast to this manageable complexity, the complexity of other standard preprocessed lattice decoding solutions is shown here to be extremely high. Specifically, the work is the first to quantify the complexity of these lattice (sphere) decoding solutions and to prove the surprising result that the complexity required to achieve a given rate-reliability performance is exponential in the lattice dimensionality and in the number of codeword bits, and in fact matches, in common scenarios, the complexity of ML-based solutions. Through this sharp contrast, the work rigorously quantifies, for the first time, the pivotal role of lattice reduction as a complexity-reducing ingredient. Finally, the work analytically refines transceiver DMT analysis, which generally fails to address potentially massive gaps between theory and practice. The adopted vanishing-gap condition instead guarantees that the decoder's error curve is arbitrarily close, given a sufficiently high SNR, to the optimal error curve of the exact solution; this is a much stronger condition than DMT optimality, which only guarantees an error gap that is subpolynomial in SNR and can thus be unbounded and generally unacceptable in practical settings. Comment: 16 pages; submitted to IEEE Trans. Inform. Theory
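
    To make the contrast drawn above concrete (in our notation, which may differ from the paper's exact definitions), the vanishing-gap requirement can be phrased as the existence of a gap function g(\rho) such that
\[
  P_e^{\mathrm{dec}}(\rho) \;\le\; P_e^{\mathrm{exact}}\!\left(\rho / g(\rho)\right),
  \qquad g(\rho) \to 1 \ \text{as}\ \rho \to \infty ,
\]
    i.e., at high SNR the decoder's error curve is an arbitrarily small horizontal (SNR) shift of the exact decoder's curve. DMT optimality alone only forces the two curves to share the same SNR exponent, so the shift may still grow subpolynomially in \rho and hence be unbounded.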

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Despite its reduced complexity, lattice-reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, a randomized version of Babai's nearest-plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample a number of lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during preprocessing. Furthermore, the sampling can operate very efficiently in parallel. The technical contribution of this paper is twofold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius over Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions, where sphere decoding becomes computationally intensive while lattice-reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved with a moderate number of samples, even when the dimension is as high as 32.
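
    The following is a minimal sketch of the sampling idea described above: Klein-style randomized rounding around the Babai/SIC estimate, repeated over a batch of candidates, keeping the closest one. The per-layer parameter choice and function names are illustrative assumptions, not the paper's optimized decoding-radius design.

import numpy as np

def sample_int_gaussian(c, s, rng, tail=6):
    # Draw an integer near the real center c with weights proportional to exp(-pi (k - c)^2 / s^2).
    ks = np.arange(int(np.floor(c)) - tail, int(np.ceil(c)) + tail + 1)
    logw = -np.pi * (ks - c) ** 2 / s ** 2
    w = np.exp(logw - logw.max())            # subtract max to avoid underflow
    return rng.choice(ks, p=w / w.sum())

def sampling_decode(B, y, n_samples=64, s=1.0, rng=None):
    # Randomized SIC: sample lattice points around y and keep the closest one.
    # B: (m x n) lattice basis, ideally LLL-reduced once during preprocessing; y: received vector.
    rng = rng or np.random.default_rng()
    n = B.shape[1]
    Q, R = np.linalg.qr(B)
    yq = Q.T @ y
    best_z, best_d = None, np.inf
    for _ in range(n_samples):
        z = np.zeros(n)
        for i in range(n - 1, -1, -1):       # layer-by-layer back-substitution, as in SIC
            c = (yq[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
            z[i] = sample_int_gaussian(c, max(s / abs(R[i, i]), 1e-3), rng)
        d = np.linalg.norm(y - B @ z)
        if d < best_d:
            best_z, best_d = z, d
    return best_z                            # integer coordinates of the chosen lattice point

    Setting n_samples = 1 and replacing the sampler with plain rounding recovers Babai's nearest-plane (SIC) decision, which is the baseline the paper improves upon.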

    Full Diversity Unitary Precoded Integer-Forcing

    We consider a point-to-point flat-fading MIMO channel with channel state information known at both the transmitter and the receiver. At the transmitter side, a lattice coding scheme is employed at each antenna to map information symbols to independent lattice codewords drawn from the same codebook. Each lattice codeword is then multiplied by a unitary precoding matrix P and sent through the channel. At the receiver side, an integer-forcing (IF) linear receiver is employed. We refer to this scheme as unitary precoded integer-forcing (UPIF). We show that UPIF can achieve full diversity under a constraint based on the shortest vector of a lattice generated by the precoding matrix P. This constraint, and a simpler version of it, provide design criteria for two types of full-diversity UPIF. Type I uses a unitary precoder that adapts to each channel realization. Type II uses a unitary precoder that remains fixed for all channel realizations. We then verify our results by computer simulations in 2×2 and 4×4 MIMO using different QAM constellations. We finally show that the proposed Type II UPIF outperforms the MIMO precoding X-codes at high data rates. Comment: 12 pages, 8 figures, to appear in IEEE-TW
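
    For context, in the standard integer-forcing formulation (our notation, loosely following the IF literature rather than this paper's exact setup), the receiver selects a full-rank integer matrix A and, with MMSE-style equalization, the effective noise variance on the m-th integer combination a_m of the precoded channel HP is, up to normalization conventions,
\[
  \sigma_{\mathrm{eff},m}^{2} \;=\; \mathbf{a}_m^{\mathsf H}\left(\mathbf{I} + \mathrm{SNR}\,(\mathbf{HP})^{\mathsf H}\mathbf{HP}\right)^{-1}\mathbf{a}_m .
\]
    The integer matrix is chosen to keep the worst-row effective noise small, and the full-diversity conditions of the paper constrain the precoder P through the shortest vector of an associated lattice.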

    On Universal Properties of Capacity-Approaching LDPC Ensembles

    This paper focuses on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, the graphical complexity, and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds hold for any decoding algorithm, and some others are proved under belief-propagation (BP) decoding. Proving these bounds under a certain decoding algorithm automatically validates them under any sub-optimal decoding algorithm as well. A proper modification of these bounds makes them universal over the set of all MBIOS channels that exhibit a given capacity. The bounds on the degree distributions and graphical complexity apply both to finite-length LDPC codes and to the asymptotic case of an infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding and are shown to be informative and easy to calculate. Finally, some interesting open problems are discussed. Comment: Published in the IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 2956-2990, July 2009
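
    As a loose illustration of the flavor of these bounds (drawn from the broader literature on capacity-approaching LDPC ensembles, not quoted from this paper), the graphical complexity per information bit of an ensemble whose gap (in rate) to capacity is \varepsilon is known to grow at least logarithmically in 1/\varepsilon,
\[
  \Delta \;\geq\; c_1 + c_2 \ln\frac{1}{\varepsilon},
\]
    with constants c_1, c_2 depending on the channel and on the target error probability. The bounds derived in the paper are of this type, made universal over all MBIOS channels exhibiting a given capacity.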

    DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models

    The work identifies the first general, explicit, and non-random MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT) without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders and, more importantly, the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and, most interestingly, irrespective of the particular lattice code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder enables DMT-optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in decoding complexity compared to ML decoding, whose complexity is generally exponential in the rate. The generality of the results makes them applicable to a plethora of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight, and motivates further study, into joint transceiver designs with an improved SNR gap to ML decoding. Comment: 16 pages, 1 figure (3 subfigures), submitted to the IEEE Transactions on Information Theory
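
    The following is a compact sketch of the LR-aided linear decoding pipeline discussed above: MMSE-style regularization of the channel, a one-time LLL reduction, linear equalization plus component-wise rounding, and a map back to the original coordinates. It is an illustrative rendering of the general recipe under simplifying assumptions (real-valued model, integer-grid symbols, constellation shifting/scaling omitted), not the paper's exact MMSE-GDFE construction.

import numpy as np

def lll_reduce(B, delta=0.75):
    # Textbook LLL reduction of the columns of B; returns (B_red, T) with B_red = B @ T, T unimodular.
    # The Gram-Schmidt data is recomputed at each step: inefficient but transparent.
    B = np.array(B, dtype=float)
    n = B.shape[1]
    T = np.eye(n, dtype=int)

    def gso(M):
        # Gram-Schmidt vectors and mu coefficients of the columns of M.
        Ms = M.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = M[:, i] @ Ms[:, j] / (Ms[:, j] @ Ms[:, j])
                Ms[:, i] -= mu[i, j] * Ms[:, j]
        return Ms, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce column k against columns j < k
            _, mu = gso(B)
            q = int(round(mu[k, j]))
            if q:
                B[:, k] -= q * B[:, j]
                T[:, k] -= q * T[:, j]
        Bs, mu = gso(B)
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1
        else:                                # Lovász condition fails: swap columns and step back
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T

def lr_aided_linear_decode(H, y, snr):
    # LR-aided linear detection with MMSE regularization, assuming integer-grid (PAM-like) symbols.
    nt = H.shape[1]
    H_ext = np.vstack([H, np.sqrt(1.0 / snr) * np.eye(nt)])   # regularized ("extended") channel
    y_ext = np.concatenate([y, np.zeros(nt)])
    H_red, T = lll_reduce(H_ext)             # lattice reduction runs once per channel realization
    z_hat = np.rint(np.linalg.pinv(H_red) @ y_ext)            # linear equalization + rounding
    return T @ z_hat                         # back to the original symbol coordinates (clipping omitted)

    The point matching the abstract is that the lattice reduction runs once per channel realization, after which detection is a fixed linear filter plus rounding, so the decoding cost does not grow with the data rate.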

    Fast-Decodable Asymmetric Space-Time Codes from Division Algebras

    Multiple-input double-output (MIDO) codes are important for near-future wireless communications, where the portable end-user device is physically small and will typically contain at most two receive antennas. Especially tempting is the 4 x 2 channel, due to its immediate applicability in digital video broadcasting (DVB). Such channels optimally employ rate-two space-time (ST) codes consisting of 4 x 4 matrices. Unfortunately, such codes are in general very complex to decode, setting forth a call for constructions with reduced complexity. Recently, some reduced-complexity constructions have been proposed, but they have mainly been based on different ad hoc methods and have resulted in isolated examples rather than in a more general class of codes. In this paper, it is shown that a family of division-algebra-based MIDO codes always achieves at least a 37.5% reduction in worst-case decoding complexity, while maintaining full diversity and, for the first time, the non-vanishing determinant (NVD) property. The reduction follows from the fact that, similarly to the Alamouti code, the codes are subsets of matrix rings of the Hamiltonian quaternions, hence allowing simplified decoding. At the moment, such reductions are among the best known for rate-two MIDO codes. Several explicit constructions are presented and shown through computer simulations to have excellent performance. Comment: 26 pages, 1 figure, submitted to IEEE Trans. Inf. Theory, October 201
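
    To illustrate the algebraic mechanism invoked above: the Alamouti code is (up to conjugation conventions) the left-regular matrix representation of a Hamiltonian quaternion,
\[
  \mathbf{X}(x_1,x_2) \;=\; \begin{pmatrix} x_1 & -x_2^{*} \\ x_2 & x_1^{*} \end{pmatrix},
  \qquad \mathbf{X}^{\mathsf H}\mathbf{X} \;=\; \left(|x_1|^2 + |x_2|^2\right)\mathbf{I}_2 ,
\]
    whose orthogonal columns allow the two symbols to be detected separately. Embedding the MIDO codewords in matrix rings of the quaternions reproduces this kind of structure inside the 4 x 4 codewords, which is the source of the worst-case complexity reduction quoted above.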

    Space-time coding techniques with bit-interleaved coded modulations for MIMO block-fading channels

    Space-time bit-interleaved coded modulation (ST-BICM) is an efficient technique for obtaining high diversity and coding gain on a block-fading MIMO channel. Its maximum-likelihood (ML) performance is computed under ideal interleaving conditions, which enables a global optimization that takes channel coding into account. Thanks to a diversity upper bound derived from the Singleton bound, an appropriate choice of the time dimension of the space-time coding is possible, maximizing diversity while minimizing complexity. Based on this analysis, an optimized interleaver and a set of linear precoders, called dispersive nucleo algebraic (DNA) precoders, are proposed. The proposed precoders perform well with respect to the state of the art and exist for any number of transmit antennas and any time dimension. With turbo codes, they exhibit a frame error rate that does not increase with frame length. Comment: Submitted to IEEE Trans. on Information Theory, Submission: January 2006 - First review: June 200
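
    For reference, the diversity upper bound underlying this choice is a Singleton-type bound (stated here loosely, in its standard single-antenna block-fading form rather than the paper's exact MIMO notation): a code of rate R_c transmitted over n_c independently fading blocks has diversity order at most
\[
  d \;\le\; 1 + \big\lfloor n_c\,(1 - R_c) \big\rfloor ,
\]
    and in the MIMO setting this ceiling scales with the antenna dimensions. Choosing the time dimension of the space-time coding so that the achievable diversity meets this ceiling, and no larger, is what allows diversity to be maximized while keeping decoding complexity low.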