Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity
The work identifies the first lattice decoding solution that achieves, in the
general outage-limited MIMO setting and in the high-rate, high-SNR limit,
both a vanishing gap to the error performance of the (DMT-optimal) exact
solution of preprocessed lattice decoding and a computational complexity that
is subexponential in the number of codeword bits. The proposed solution
employs lattice reduction (LR)-aided regularized (lattice) sphere decoding
together with proper timeout policies. These performance and complexity
guarantees hold for most MIMO scenarios, all reasonable fading statistics,
all channel dimensions, and all full-rate lattice codes.
In sharp contrast to this manageable complexity, the complexity of other
standard preprocessed lattice decoding solutions is shown here to be extremely
high. Specifically, the work is the first to quantify the complexity of these
lattice (sphere) decoding solutions and to prove the surprising result that
the complexity required to achieve a given rate-reliability performance is
exponential in the lattice dimensionality and in the number of codeword bits,
and in fact matches, in common scenarios, the complexity of ML-based
solutions. Through this sharp contrast, the work rigorously quantifies, for
the first time, the pivotal role of lattice reduction as a special
complexity-reducing ingredient.
Finally, the work analytically refines transceiver DMT analysis, which
generally fails to address potentially massive gaps between theory and
practice. The adopted vanishing-gap condition instead guarantees that the
decoder's error curve is arbitrarily close, given a sufficiently high SNR, to
the optimal error curve of exact solutions. This is a much stronger condition
than DMT optimality, which only guarantees an error gap that is subpolynomial
in SNR and can thus be unbounded and generally unacceptable in practical
settings.
Comment: 16 pages - submission for IEEE Trans. Inform. Theory
Statistical Pruning for Near-Maximum Likelihood Decoding
In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point x ∈ C^N. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, where it is shown that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs, and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than N^3 computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed, which shows that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
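The statistical-pruning idea can be sketched as follows: a depth-first closest-point search keeps a branch alive only while its partial metric after k decoded layers stays below a per-depth threshold beta[k], chosen (e.g. from the noise statistics) so that the ML point survives with high probability. The names and the externally supplied thresholds here are illustrative assumptions, not the paper's exact pruning rule:

```python
def pruned_search(R, y, symbols, beta):
    """Statistically pruned depth-first search over an upper-triangular
    system R s ~ y. A branch is discarded once its partial metric after
    k decoded layers exceeds beta[k-1]; returns the best surviving point,
    or None if pruning eliminated every branch."""
    n = len(y)
    best, best_cost = None, float("inf")

    def residual_sq(i, tail):
        # tail = [s_i, s_{i+1}, ..., s_{n-1}]; squared residual of layer i
        r = y[i] - sum(R[i][i + k] * tail[k] for k in range(len(tail)))
        return r * r

    def search(i, tail, acc):
        nonlocal best, best_cost
        for q in symbols:
            cost = acc + residual_sq(i, [q] + tail)
            depth = n - i  # number of layers decoded so far
            if cost > beta[depth - 1] or cost >= best_cost:
                continue  # statistical prune or radius prune
            if i == 0:
                best, best_cost = [q] + tail, cost
            else:
                search(i - 1, [q] + tail, cost)

    search(n - 1, [], 0.0)
    return best
```

Loose thresholds recover the ML point; overly tight thresholds can prune away every branch, which is the mechanism behind the small, controllable loss relative to exact ML decoding.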