
    High Throughput VLSI Architecture for Soft-Output MIMO Detection Based on a Greedy Graph Algorithm

    Maximum-likelihood (ML) decoding is a computationally intensive task in multiple-input multiple-output (MIMO) wireless channel detection. This paper presents a new graph-based algorithm that achieves near-ML performance for soft MIMO detection. Instead of the traditional tree-search structure, we represent the search space of the MIMO signals as a directed graph, and a greedy algorithm is applied to compute the a posteriori probability (APP) for each transmitted bit. The proposed detector has two advantages: 1) it maintains a fixed throughput and has a regular, parallel datapath structure, making it amenable to high-speed VLSI implementation, and 2) it attempts to maximize the a posteriori probability by making the locally optimum choice at each stage, with the aim of finding the globally minimum Euclidean distance for every transmitted bit $x_k \in \{-1, +1\}$. Compared to the soft K-best detector, the proposed solution significantly reduces complexity because no sorting is required, while still maintaining good bit error rate (BER) performance. The proposed greedy detection algorithm has been designed and synthesized for a 4 x 4 16-QAM MIMO system in a TSMC 65 nm CMOS technology. The detector achieves a maximum throughput of 600 Mbps with a 0.79 mm$^2$ core area. (Funding: Nokia Corporation; National Science Foundation.)
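    The abstract describes the greedy search in prose only. As a concrete illustration, the Python sketch below implements greedy max-log soft detection under simplifying assumptions: a real-valued channel model with per-dimension symbols $x_k \in \{-1, +1\}$ and QR preprocessing standing in for the paper's directed-graph hardware datapath. The function name, LLR convention, and overall structure are illustrative guesses, not the authors' implementation.

        import numpy as np

        def greedy_soft_mimo_detect(H, y, noise_var):
            """Hedged sketch of greedy max-log soft-output MIMO detection.

            Assumed model: y = H x + n with x_k in {-1, +1} and Gaussian
            noise of variance noise_var per dimension (a simplification of
            the paper's 16-QAM setting).
            """
            n = H.shape[1]
            Q, R = np.linalg.qr(H)   # reduced QR: ||y - Hx||^2 = ||z - Rx||^2 + const
            z = Q.T @ y              # the constant cancels in LLR differences

            def greedy_distance(clamp_idx=None, clamp_val=None):
                # Visit levels in back-substitution order, taking the locally
                # optimal symbol at each level (the greedy choice); the bit
                # under test, if any, is clamped instead.
                x = np.zeros(n)
                d2 = 0.0
                for k in range(n - 1, -1, -1):
                    resid = z[k] - R[k, k + 1:] @ x[k + 1:]
                    if k == clamp_idx:
                        x[k] = clamp_val
                    else:
                        x[k] = min((-1.0, 1.0),
                                   key=lambda c: (resid - R[k, k] * c) ** 2)
                    d2 += (resid - R[k, k] * x[k]) ** 2
                return d2

            # Max-log LLR per bit: clamp the bit to -1 and +1 and compare the
            # greedily found squared Euclidean distances.
            return np.array([
                (greedy_distance(k, -1.0) - greedy_distance(k, +1.0)) / (2.0 * noise_var)
                for k in range(n)
            ])

    Because each level makes a single local comparison rather than sorting a candidate list, the per-bit work is constant, which mirrors the abstract's point that dropping the K-best sort enables a fixed-throughput, parallel datapath.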

    On Finding a Subset of Healthy Individuals from a Large Population

    In this paper, we derive mutual-information-based upper and lower bounds on the number of nonadaptive group tests required to identify a given number of "non-defective" items from a large population containing a small number of "defective" items. We show that a reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. In the asymptotic regime with population size $N \rightarrow \infty$, to identify $L$ non-defective items out of a population containing $K$ defective items, when the tests are reliable, our results show that $\frac{C_s K}{1-o(1)} (\Phi(\alpha_0, \beta_0) + o(1))$ measurements are sufficient, where $C_s$ is a constant independent of $N$, $K$ and $L$, and $\Phi(\alpha_0, \beta_0)$ is a bounded function of $\alpha_0 \triangleq \lim_{N\rightarrow\infty} \frac{L}{N-K}$ and $\beta_0 \triangleq \lim_{N\rightarrow\infty} \frac{K}{N-K}$. Further, in the nonadaptive group testing setup, we obtain rigorous upper and lower bounds on the number of tests under both dilution and additive noise models. Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing.

    Comment: 32 pages, 2 figures, 3 tables; revised version of a paper submitted to IEEE Trans. Inf. Theory.
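    To make the nonadaptive measurement model concrete, the Python sketch below simulates Bernoulli-pooled group tests with the two noise mechanisms the abstract names. The participation probability p and the exact dilution/additive parameterization are assumptions chosen for illustration and may differ from the paper's model.

        import numpy as np

        def group_test_outcomes(N, T, defective, p, q_add=0.0, u_dil=0.0, seed=0):
            """Simulate T nonadaptive group tests over N items.

            Assumed model: each item joins each pool independently with
            probability p; dilution drops a defective's contribution to a
            pool with probability u_dil; additive noise turns a negative
            pool positive with probability q_add.
            """
            rng = np.random.default_rng(seed)
            A = rng.random((T, N)) < p                    # Bernoulli test design
            hits = A[:, defective] & (rng.random((T, len(defective))) > u_dil)
            y = hits.any(axis=1)                          # noiseless pool outcomes
            y = y | (rng.random(T) < q_add)               # spurious positives
            return A, y

    Sweeping T in such a simulation gives an empirical estimate of how many tests are needed before $L$ non-defective items can be extracted reliably, which can be compared against the trend of the $\frac{C_s K}{1-o(1)}(\Phi(\alpha_0, \beta_0) + o(1))$ sufficiency bound.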

    Computationally Tractable Algorithms for Finding a Subset of Non-defective Items from a Large Population

    In the classical non-adaptive group testing setup, pools of items are tested together, and the main goal of a recovery algorithm is to identify the "complete defective set" given the outcomes of the different group tests. In contrast, the main goal of a "non-defective subset recovery" algorithm is to identify a "subset" of the non-defective items given the test outcomes. In this paper, we present a suite of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a $\log^2 K$ factor, where $K$ is the number of defective items. We also provide simulation results that compare the relative performance of the different algorithms and give further insight into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one by one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate.

    Comment: In this revision: unified some proofs and reorganized the paper, corrected a small mistake in one of the proofs, added more references.
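    As a minimal stand-in for the kind of algorithm the abstract describes (not the authors' exact procedure), the sketch below scores each item by how many negative pools it appears in and returns the $L$ highest-scoring items; in the noiseless case, membership in any negative pool certifies that an item is non-defective.

        import numpy as np

        def recover_non_defective(A, y, L):
            """Pick L items judged most likely to be non-defective.

            A is the T x N boolean test matrix, y the length-T boolean
            outcome vector (True = positive pool). Items seen in many
            negative pools are unlikely to be defective, so rank by that
            count and keep the top L. This scoring rule is an illustrative
            choice, not the paper's algorithm.
            """
            neg_counts = A[~y].sum(axis=0)       # appearances in negative pools
            return np.argsort(neg_counts)[::-1][:L]

        # Example usage with the simulator sketched earlier (hypothetical
        # parameter values):
        # A, y = group_test_outcomes(N=1000, T=250, defective=np.arange(10),
        #                            p=0.05, q_add=0.02, u_dil=0.05)
        # picked = recover_non_defective(A, y, L=100)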