Optimal Nested Test Plan for Combinatorial Quantitative Group Testing
We consider the quantitative group testing problem where the objective is to
identify defective items in a given population based on results of tests
performed on subsets of the population. Under the quantitative group testing
model, the result of each test reveals the number of defective items in the
tested group. The minimum number of tests achievable by nested test plans was
established by Aigner and Schughart in 1985 within a minimax framework. The
optimal nested test plan offering this performance, however, was not obtained.
In this work, we establish the optimal nested test plan in closed form. This
optimal nested test plan is also order optimal among all test plans as the
population size approaches infinity. Using heavy-hitter detection as a case
study, we show via simulation examples orders of magnitude improvement of the
group testing approach over two prevailing sampling-based approaches in
detection accuracy and counter consumption. Other applications include anomaly
detection and wideband spectrum sensing in cognitive radio systems.
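The quantitative model described above can be illustrated with a simple adaptive splitting strategy, where each test on a group returns the exact number of defectives in it. This is a minimal sketch for intuition, not the closed-form optimal nested plan established in the paper:

```python
# Quantitative group testing sketch: a counting test on a group reveals
# how many defectives it contains; groups with 0 or all defectives need
# no further tests, otherwise the group is split and recursed upon.

def find_defectives(items, count_query):
    """Return all defective items using counting tests on subsets."""
    def search(group, k):
        # k = number of defectives known to lie inside `group`
        if k == 0:
            return []
        if k == len(group):
            return list(group)
        mid = len(group) // 2
        left = group[:mid]
        k_left = count_query(left)          # one quantitative test
        return search(left, k_left) + search(group[mid:], k - k_left)

    full = list(items)
    return search(full, count_query(full))

# Usage: a population of 16 items with (hypothetical) defectives {3, 7, 12}.
defective = {3, 7, 12}
tests = []  # record the size of each group that gets tested
def query(group):
    tests.append(len(group))
    return sum(1 for i in group if i in defective)

found = find_defectives(range(16), query)
```

With a heavy-hitter counter in place of `query`, the same recursion explains why counting tests can be far cheaper than per-item sampling.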
Computationally Tractable Algorithms for Finding a Subset of Non-defective Items from a Large Population
In the classical non-adaptive group testing setup, pools of items are tested
together, and the main goal of a recovery algorithm is to identify the
"complete defective set" given the outcomes of different group tests. In
contrast, the main goal of a "non-defective subset recovery" algorithm is to
identify a "subset" of non-defective items given the test outcomes. In this
paper, we present a suite of computationally efficient and analytically
tractable non-defective subset recovery algorithms. By analyzing the
probability of error of the algorithms, we obtain bounds on the number of tests
required for non-defective subset recovery with arbitrarily small probability
of error. Our analysis accounts for the impact of both the additive noise
(false positives) and dilution noise (false negatives). By comparing with the
information theoretic lower bounds, we show that the upper bounds on the number
of tests are order-wise tight up to a logarithmic factor in the number of
defective items. We also provide simulation results that compare the
of defective items. We also provide simulation results that compare the
relative performance of the different algorithms and provide further insights
into their practical utility. The proposed algorithms significantly outperform
the straightforward approaches of testing items one-by-one, and of first
identifying the defective set and then choosing the non-defective items from
the complement set, in terms of the number of measurements required to ensure a
given success rate.
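The core observation behind non-defective subset recovery is simple in the noiseless case: every item appearing in at least one negative pool is certainly non-defective. The sketch below shows this clean version only; the paper's algorithms additionally handle additive and dilution noise, and the population and pool sizes here are arbitrary illustrative choices:

```python
# Noiseless non-defective subset recovery: any item that is a member of
# a pool whose test outcome is negative cannot be defective.
import random

random.seed(1)
n, k, num_tests, pool_size = 50, 3, 20, 10
defective = set(random.sample(range(n), k))

# Non-adaptive design: random pools, tested in parallel.
pools = [random.sample(range(n), pool_size) for _ in range(num_tests)]
outcomes = [any(i in defective for i in pool) for pool in pools]

non_defective = set()
for pool, positive in zip(pools, outcomes):
    if not positive:                # negative pool: all its members are clean
        non_defective.update(pool)
```

Because a negative pool clears `pool_size` items at once, this can beat both one-by-one testing and full defective-set identification when only a subset of clean items is needed.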
Group Testing-Based Spectrum Hole Search for Cognitive Radios
This paper investigates the use of adaptive group testing to find a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent subbands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios (CRs). The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including noncontiguous spectrum hole search. Furthermore, we provide the analytical means to optimize the group tests with respect to the detection thresholds, number of samples, group size, and number of stages to minimize the detection delay under a given error probability constraint. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared with a conventional bin-by-bin energy detection scheme; the latter is, in fact, a special case of the group test when the group size is set to 1 bin. We validate our analytical results via Monte Carlo simulations.
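The search for a contiguous hole can be sketched with an idealized, noiseless stand-in for the energy detector: a single test on a group of adjacent subbands reports whether any of them is occupied. This is only an illustration of the group-test idea, not the paper's optimized multistage algorithm, and the occupancy pattern below is invented:

```python
# Find the start of a B-subband-wide spectrum hole using group tests on
# adjacent subbands. A negative test clears B subbands at once; a
# positive test triggers a bisection to skip past the occupied subband.

def find_hole(n, B, group_test):
    """Return the start index of a B-wide hole in n subbands, or None."""
    start = 0
    while start + B <= n:
        if not group_test(range(start, start + B)):
            return start                 # one test cleared B subbands
        # Bisect to the last occupied subband inside the block, then
        # resume the search just past it.
        lo, hi = start, start + B - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if group_test(range(mid + 1, hi + 1)):
                lo = mid + 1
            else:
                hi = mid
        start = lo + 1
    return None

# Usage: 16 subbands with sparse occupancy at indices 4 and 11.
occupied = [0] * 4 + [1] + [0] * 6 + [1] + [0] * 4
hole = find_hole(len(occupied), 5, lambda g: any(occupied[i] for i in g))
```

Setting the group size to 1 recovers bin-by-bin energy detection, matching the special case noted in the abstract.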
Recommended from our members
Belief Refinement Approaches to Communication and Inference Problems
This dissertation considers a problem where a single agent or a group of agents aim to estimate/learn unknown (possibly time-varying) parameters of interest despite making noisy observations. The agents take a Bayesian-like approach by maintaining a posterior probability distribution or "belief" over a parameter space conditioned on past observations. The agents aim to iteratively refine their belief over the parameter space as new information is acquired from their private observations or through collaboration with other agents. In particular, the agents aim to ensure that sufficient belief is assigned in neighborhoods centered around the true parameter with high probability or "reliability". In the context of communication problems considered in this dissertation, the agents may be active, i.e., agents may additionally take actions which provide new observations. Furthermore, agents may employ an adaptive strategy, i.e., using their past actions and the resulting observations, agents can adaptively choose actions to control the concentration of the belief. When the agents are active, we propose and analyze adaptive belief refinement approaches to obtain belief concentration on the unknown parameter with high reliability. In a different context, namely that of decentralized inference, we consider passive agents. Here, agents face an additional challenge due to the statistical insufficiency of their private observations to learn the unknown parameter. While individual agents' observations are not informative enough, we assume that the agents' observations are collectively informative to learn the unknown parameter. Here, we propose and analyze decentralized belief refining strategies to collaboratively obtain belief concentration on the unknown parameter.
In the first part of this dissertation, we consider active strategies that are extensions of the posterior matching strategy (PM) introduced by Horstein, which is a generalization of the well-known binary search algorithm. We propose and analyze PM-based strategies in the context of modern communication systems, namely the problem of establishing initial access in mm-Wave communication and spectrum sensing for Cognitive Radio. We propose and analyze channel coding strategies for real-time streaming and control applications. The second part of the dissertation investigates belief refinement approaches for decentralized learning. In particular, it focuses on developing and analyzing a decentralized learning rule for statistical hypothesis testing and its application to decentralized machine learning.
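The connection between posterior matching and binary search can be made concrete with a discretized sketch of Horstein's scheme over a binary symmetric channel: the encoder bisects the decoder's belief at its median each round, and with zero noise this reduces to ordinary bisection. The grid size, crossover probability, and number of rounds below are illustrative choices, not values from the dissertation:

```python
# Horstein-style posterior matching over a BSC with crossover prob. p.
# The decoder maintains a belief over a grid of candidate message
# points; the encoder repeatedly signals on which side of the posterior
# median the true message point lies.
import random

random.seed(0)
p = 0.1                        # BSC crossover probability
N = 1000                       # grid resolution over [0, 1)
grid = [i / N for i in range(N)]
belief = [1.0 / N] * N         # uniform prior over the message point
theta = 0.637                  # message point to convey (hypothetical)

for _ in range(200):
    # Posterior median, computable by both encoder and decoder.
    acc, median = 0.0, 0
    for i, b in enumerate(belief):
        acc += b
        if acc >= 0.5:
            median = i
            break
    x = 1 if theta >= grid[median] else 0       # encoder's channel input
    y = x if random.random() > p else 1 - x     # noisy channel output
    # Bayes update: weight each grid point by the likelihood of y under
    # the bit that point would have induced the encoder to send.
    belief = [b * ((1 - p) if ((grid[i] >= grid[median]) == (y == 1)) else p)
              for i, b in enumerate(belief)]
    total = sum(belief)
    belief = [b / total for b in belief]

estimate = grid[max(range(N), key=lambda i: belief[i])]   # MAP estimate
```

Each round, the belief concentrates around the true message point, which is the "belief refinement with high reliability" viewpoint taken in the dissertation.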