3 research outputs found

    On Finding a Subset of Healthy Individuals from a Large Population

    In this paper, we derive mutual information based upper and lower bounds on the number of nonadaptive group tests required to identify a given number of "non-defective" items from a large population containing a small number of "defective" items. We show that a reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. In the asymptotic regime with the population size $N \rightarrow \infty$, to identify $L$ non-defective items out of a population containing $K$ defective items, when the tests are reliable, our results show that $\frac{C_s K}{1-o(1)} (\Phi(\alpha_0, \beta_0) + o(1))$ measurements are sufficient, where $C_s$ is a constant independent of $N$, $K$ and $L$, and $\Phi(\alpha_0, \beta_0)$ is a bounded function of $\alpha_0 \triangleq \lim_{N\rightarrow \infty} \frac{L}{N-K}$ and $\beta_0 \triangleq \lim_{N\rightarrow \infty} \frac{K}{N-K}$. Further, in the nonadaptive group testing setup, we obtain rigorous upper and lower bounds on the number of tests under both dilution and additive noise models. Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse signal based applications such as compressive sensing. Comment: 32 pages, 2 figures, 3 tables, revised version of a paper submitted to IEEE Trans. Inf. Theory
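
    The sketch below is a hedged, noiseless illustration of the abstract's central claim: fewer tests suffice to certify L non-defective items than to recover the entire defective set. It simulates non-adaptive group testing with an i.i.d. Bernoulli pooling design; the population size, number of defectives, L, and the pooling probability are illustrative choices rather than values from the paper, and the certification rule (any item appearing in a negative pool is non-defective) is the standard noiseless argument, not the paper's mutual-information bounds.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes and pooling probability; not taken from the paper.
    N, K, L = 1000, 10, 100          # population, defectives, wanted non-defectives
    p = 1.0 / K                      # a common Bernoulli pooling design choice

    defective = np.zeros(N, dtype=bool)
    defective[rng.choice(N, size=K, replace=False)] = True

    certified = np.zeros(N, dtype=bool)   # items certified non-defective so far
    tests_for_L, tests_for_all = None, None

    for t in range(1, 20000):
        pool = rng.random(N) < p                  # items included in this test
        outcome_positive = np.any(pool & defective)
        if not outcome_positive:                  # negative pool: everyone in it is healthy
            certified |= pool
        if tests_for_L is None and certified.sum() >= L:
            tests_for_L = t
        if certified.sum() >= N - K:              # all non-defectives certified
            tests_for_all = t
            break

    print(f"tests needed to certify {L} non-defective items: {tests_for_L}")
    print(f"tests needed to certify all {N - K} (full defective-set recovery): {tests_for_all}")
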

    Techniques for Decentralized and Dynamic Resource Allocation

    This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work is the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This motivated the development of i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN-based scheme for resource sharing across different technologies and operators, to respond harmoniously and holistically to demand fluctuations at the eNodeB layer. The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and is inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation of the protocol's performance guarantees, and the derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol. The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is used. The sensing-phase design is carried out via a utility-maximization approach; hence the derived scheme is called Cognitive Utility Maximization Multiple Access (CUMMA). The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics. While convergence of the proposed approach remains an open problem, the numerical results presented here suggest that the algorithm can handle traffic fluctuations across operators while respecting different timing and economic constraints. The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA). Doctoral Dissertation, Electrical Engineering, 201
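
    To make the noise-folding issue mentioned above concrete, the short sketch below is an assumption-laden toy model, not the CUMMA receiver: it folds M noisy sub-bands onto a single occupied sub-band, as an ideal aliasing front end would, and reports the resulting SNR loss of roughly 10*log10(M) dB. The tone frequency, noise level, and M are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(1)

    M, n = 16, 4096                     # number of folded sub-bands, samples (illustrative)
    signal = np.exp(2j * np.pi * 0.12 * np.arange(n))   # tone in the occupied sub-band
    noise = (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2)
    noise *= 0.1                        # per-sub-band noise standard deviation

    occupied = signal + noise[0]        # what a per-band receiver would see
    folded = signal + noise.sum(axis=0) # what the aliasing mixer delivers

    def snr_db(x, s):
        # SNR of observation x against the known clean signal s, in dB
        return 10 * np.log10(np.mean(np.abs(s) ** 2) / np.mean(np.abs(x - s) ** 2))

    print(f"SNR without folding: {snr_db(occupied, signal):5.1f} dB")
    print(f"SNR after folding:   {snr_db(folded, signal):5.1f} dB  (about {10 * np.log10(M):.1f} dB worse)")
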

    ON FINDING A SUBSET OF NON-DEFECTIVE ITEMS FROM A LARGE POPULATION USING GROUP TESTS: RECOVERY ALGORITHMS AND BOUNDS

    We present computationally efficient and analytically tractable algorithms for identifying a given number of "non-defective" items from a large population containing a small number of "defective" items under a noisy Non-adaptive Group Testing (NGT) framework. In contrast to classical NGT, where the main goal is to identify the complete set of defective items, the goal of a non-defective subset recovery algorithm is to identify a subset of non-defective items given the test outcomes. In this paper, we present three algorithms and corresponding bounds on the number of tests required for successful non-defective subset recovery. We consider a random, non-adaptive pooling strategy with noisy test outcomes, where we account for the impact of both additive noise (false positives) and dilution noise (false negatives). We provide simulation results to highlight the relative performance of the algorithms and to demonstrate the significant improvement they offer over existing approaches in terms of the number of tests required for a given success rate.
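
    The following sketch is a hedged illustration of the noisy test model described in the abstract, not one of the paper's three algorithms: it draws a random Bernoulli pooling matrix, applies dilution noise (each defective item escapes a pool with some probability) and additive noise (a truly negative pool can still read positive), and then uses a naive decoder that declares the L items appearing in the most negative tests to be non-defective. All sizes and noise probabilities are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    N, K, L, T = 500, 10, 50, 200      # population, defectives, wanted non-defectives, tests
    p, u, q = 1.0 / K, 0.1, 0.05       # pooling prob., dilution prob., additive-noise prob.

    defective = np.zeros(N, dtype=bool)
    defective[rng.choice(N, size=K, replace=False)] = True

    A = rng.random((T, N)) < p                            # pooling matrix (tests x items)
    participates = A & defective                          # defectives actually pooled
    survives = participates & (rng.random((T, N)) >= u)   # dilution: a defective may be missed
    y = survives.any(axis=1)                              # outcome with dilution noise only
    y |= rng.random(T) < q                                # additive noise: false positives

    neg_counts = (A & ~y[:, None]).sum(axis=0)            # negative tests each item appears in
    declared = np.argsort(-neg_counts)[:L]                # top-L most "exonerated" items

    errors = defective[declared].sum()
    print(f"defectives wrongly declared non-defective: {errors} out of {L}")
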