
    High-Rate Space-Time Coded Large MIMO Systems: Low-Complexity Detection and Channel Estimation

    In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large-MIMO systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs like 16x16 and 32x32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot-based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.
    Comment: v3: Performance/complexity comparison of the proposed scheme with other large-MIMO architectures/detectors has been added (Sec. IV-D). The paper has been accepted for publication in IEEE Journal of Selected Topics in Signal Processing (JSTSP): Spl. Iss. on Managing Complexity in Multiuser MIMO Systems. v2: Section V on Channel Estimation is updated.
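
    The abstract does not spell out the M-LAS update rule, so the following is only a minimal sketch of the basic single-update likelihood ascent search idea it builds on, assuming BPSK symbols, a real-valued channel model y = Hx + n, and a zero-forcing initialization; the function name las_detect and its parameters are illustrative, not the paper's.

        import numpy as np

        def las_detect(H, y, max_iter=1000):
            """Greedy likelihood ascent search for BPSK MIMO detection.

            Repeatedly flips the single symbol that most reduces the ML
            cost ||y - H x||^2, stopping at a local optimum.
            """
            G = H.T @ H                          # Gram matrix
            z = H.T @ y                          # matched-filter output
            x = np.sign(np.linalg.pinv(H) @ y)   # zero-forcing start
            x[x == 0] = 1.0
            for _ in range(max_iter):
                g = G @ x - z                    # half-gradient of the cost at x
                # cost change if symbol k is flipped: 4*(G_kk - x_k * g_k)
                delta = 4.0 * (np.diag(G) - x * g)
                k = np.argmin(delta)
                if delta[k] >= 0:                # no single flip lowers the cost
                    break
                x[k] = -x[k]
            return x

    Each iteration costs one matrix-vector product, which is what makes LAS-type detectors attractive at large antenna counts; the multistage variant in the paper presumably adds further search stages on top of this basic step to escape such local optima.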

    Holographic non-computers

    We introduce the notion of a holographic non-computer as a system which exhibits parametrically large delays in the growth of complexity, as calculated within the Complexity-Action proposal. Some known examples of this behavior include extremal black holes and near-extremal hyperbolic black holes. Generic black holes in higher-dimensional gravity also show non-computing features. Within the 1/d expansion of General Relativity, we show that large-d scalings which capture the qualitative features of complexity, such as a linear growth regime and a plateau at exponentially long times, also exhibit an initial computational delay proportional to d. While consistent for large AdS black holes, the required `non-computing' scalings are incompatible with thermodynamic stability for Schwarzschild black holes, unless they are tightly caged.
    Comment: 23 pages, 7 figures. V3: References added. Figures updated. New discussion of small black holes in the canonical ensemble.
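
    As a schematic summary (not a derivation), the complexity profile described above can be written piecewise, with the late-time linear rate taken to be the standard Complexity-Action result for large AdS black holes; the notation t_delay and t_sat is ours:

        \[
          \frac{dC}{dt} \;\approx\;
          \begin{cases}
            0, & t \lesssim t_{\rm delay} \propto d \quad \text{(computational delay)}\\[4pt]
            \dfrac{2M}{\pi\hbar}, & t_{\rm delay} \lesssim t \lesssim t_{\rm sat} \quad \text{(linear growth)}\\[4pt]
            0, & t \gtrsim t_{\rm sat} \sim e^{S} \quad \text{(plateau)}
          \end{cases}
        \]

    The `non-computing' behavior is then the statement that t_delay, usually of order the thermal scale, becomes parametrically large in d.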

    Coarse-graining of cellular automata, emergence, and the predictability of complex systems

    We study the predictability of emergent phenomena in complex systems. Using nearest-neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing device and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally, we argue that the large-scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large-scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large-scale simplicity as a pattern formation mechanism in which large-scale patterns are forced upon the system by the simplicity of the rules that govern the large-scale dynamics.
    Comment: 18 pages, 9 figures.
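
    The paper's construction is not reproduced here; as a minimal sketch of the underlying consistency condition, the brute-force search below looks for a supercell projection P and a coarse rule g such that projecting N fine steps agrees with one coarse step, P(F^N(x)) = g(P(x)), checked on random configurations. The supercell size N=2, the helper names, and the sampling test are our assumptions.

        import itertools
        import numpy as np

        def ca_step(state, rule):
            """One synchronous update of an elementary CA on a ring."""
            left, right = np.roll(state, 1), np.roll(state, -1)
            idx = 4 * left + 2 * state + right     # neighborhood as 0..7
            table = (rule >> np.arange(8)) & 1     # Wolfram rule table
            return table[idx]

        def find_coarse_graining(fine_rule, width=32, trials=200, N=2):
            """Search for a supercell projection P and coarse rule g with
            P(F^N(x)) == g(P(x)) on sampled configurations (necessary,
            not sufficient, evidence of a valid coarse-graining)."""
            rng = np.random.default_rng(0)
            samples = rng.integers(0, 2, size=(trials, width))
            # every map from the 2**N block patterns to {0,1}, minus constants
            for proj in itertools.product((0, 1), repeat=2**N):
                if len(set(proj)) < 2:
                    continue
                P = np.array(proj)
                def project(x):
                    blocks = x.reshape(-1, N)
                    codes = blocks @ (2 ** np.arange(N - 1, -1, -1))
                    return P[codes]
                for g in range(256):
                    ok = True
                    for x in samples:
                        y = x
                        for _ in range(N):         # N fine steps per coarse step
                            y = ca_step(y, fine_rule)
                        if not np.array_equal(project(y), ca_step(project(x), g)):
                            ok = False
                            break
                    if ok:
                        return proj, g
            return None

    A hit here is only evidence on the sampled configurations; checking all relevant blocks instead of random samples would make the commutation test exact for a given width.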

    Statistical Pruning for Near-Maximum Likelihood Decoding

    In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point x ∈ C^N. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, where it is shown that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than N^3 computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed, which shows that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
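
    As a rough illustration of the statistical-pruning idea (not the paper's exact algorithm or bounds), the sketch below runs a depth-first sphere decoder over BPSK symbols in a real-valued model y = Hx + n and prunes any branch whose partial metric exceeds a chi-square quantile of the noise contribution at that depth. The function name, the eps parameter, and the per-level radii schedule are our assumptions.

        import numpy as np
        from scipy.stats import chi2

        def stat_pruned_sphere_decoder(H, y, sigma2, eps=1e-3, symbols=(-1.0, 1.0)):
            """Depth-first sphere decoder with statistical pruning.

            At a node with k fixed symbols, the branch is pruned if its
            accumulated metric exceeds r_k, a chi-square quantile of the
            noise contribution to that partial metric, so the ML point
            survives each level with probability at least 1 - eps.
            """
            N = H.shape[1]
            Q, R = np.linalg.qr(H)
            yt = Q.T @ y
            # pruning radii: depth k has k squared noise terms in its metric
            radii = sigma2 * chi2.ppf(1.0 - eps, df=np.arange(1, N + 1))
            best = {"metric": np.inf, "x": None}

            def search(level, x, metric):
                depth = N - level                  # number of fixed symbols
                for s in symbols:
                    x[level] = s
                    r = yt[level] - R[level, level:] @ x[level:]
                    m = metric + r * r
                    if m > radii[depth - 1] or m >= best["metric"]:
                        continue                   # prune this branch
                    if level == 0:
                        best["metric"], best["x"] = m, x.copy()
                    else:
                        search(level - 1, x, m)

            search(N - 1, np.zeros(N), 0.0)
            return best["x"]

    If the radii are so tight that every branch is pruned, the call returns None and can be retried with a smaller eps (larger radii), mirroring the performance/complexity trade-off the abstract describes.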