How little does non-exact recovery help in group testing?
We consider the group testing problem, in which one seeks to identify a subset of defective items within a larger set of items based on a number of tests. We characterize the information-theoretic performance limits in the presence of list decoding, in which the decoder may output a list containing more elements than the number of defectives, and the only requirement is that the true defective set is a subset of the list, or more generally, that their overlap exceeds a given threshold. We show that even under this highly relaxed criterion, in several scaling regimes the asymptotic number of tests is no smaller than in the exact recovery setting. However, we also provide examples where a reduction is provably attained. We support our theoretical findings with numerical experiments.
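The two list-decoding success criteria described above (containment of the true defective set, or overlap exceeding a threshold) can be made concrete with a short sketch; the function name and interface here are invented for illustration and are not from the paper:

```python
def list_recovery_ok(defective, output_list, overlap_threshold=None):
    """Relaxed list-decoding success criterion (illustrative sketch).

    With overlap_threshold=None, success means the true defective set is
    contained in the (possibly larger) output list; otherwise it means
    the overlap meets the given threshold.
    """
    defective, output_list = set(defective), set(output_list)
    if overlap_threshold is None:
        return defective <= output_list
    return len(defective & output_list) >= overlap_threshold

# Containment criterion: the list may include false positives.
assert list_recovery_ok({1, 4}, {1, 2, 4, 7})
# Thresholded criterion: recovering one of two defectives suffices here.
assert list_recovery_ok({1, 4}, {4, 9}, overlap_threshold=1)
```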
Noisy Non-Adaptive Group Testing: A (Near-)Definite Defectives Approach
The group testing problem consists of determining a small set of defective
items from a larger set of items based on a number of possibly-noisy tests, and
is relevant in applications such as medical testing, communication protocols,
pattern matching, and many more. We study the noisy version of the problem,
where the output of each standard noiseless group test is subject to
independent noise, corresponding to passing the noiseless result through a
binary channel. We introduce a class of algorithms that we refer to as
Near-Definite Defectives (NDD), and study bounds on the required number of
tests for vanishing error probability under Bernoulli random test designs. In
addition, we study algorithm-independent converse results, giving lower bounds
on the required number of tests under Bernoulli test designs. Under reverse
Z-channel noise, the achievable rates and converse results match in a broad
range of sparsity regimes, and under Z-channel noise, the two match in a
narrower range of dense/low-noise regimes. We observe that although these two
channels have the same Shannon capacity when viewed as a communication channel,
they can behave quite differently when it comes to group testing. Finally, we
extend our analysis of these noise models to the symmetric noise model, and
show improvements over the best known existing bounds in broad scaling regimes.
Comment: Submitted to IEEE Transactions on Information Theory
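The Z-channel and reverse Z-channel noise models mentioned above can be sketched in a few lines. The flip direction assigned to each channel below follows one common convention and is an assumption of this sketch, not stated in the abstract; note that each channel corrupts only one of the two noiseless outcomes, which is why they behave so differently for group testing despite having the same Shannon capacity:

```python
import random

def noiseless_test(pool, defective):
    # Standard group test: positive iff the pool contains a defective item.
    return int(any(item in defective for item in pool))

def z_channel(bit, p, rng):
    # Z-channel: a 1 may flip to 0 with probability p; a 0 never flips.
    return 0 if bit == 1 and rng.random() < p else bit

def reverse_z_channel(bit, p, rng):
    # Reverse Z-channel: a 0 may flip to 1 with probability p; a 1 never flips.
    return 1 if bit == 0 and rng.random() < p else bit

rng = random.Random(0)
defective = {3, 8}
pools = [{1, 2, 3}, {4, 5}, {6, 7, 8}, {0, 9}]
noiseless = [noiseless_test(pool, defective) for pool in pools]
assert noiseless == [1, 0, 1, 0]
# Only positive noiseless outcomes can be corrupted under Z-channel noise...
z_out = [z_channel(b, 0.1, rng) for b in noiseless]
assert all(z <= b for z, b in zip(z_out, noiseless))
# ...and only negative outcomes under reverse Z-channel noise.
rz_out = [reverse_z_channel(b, 0.1, rng) for b in noiseless]
assert all(r >= b for r, b in zip(rz_out, noiseless))
```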
Improved bounds for noisy group testing with constant tests per item
The group testing problem is concerned with identifying a small set of
infected individuals in a large population. At our disposal is a testing
procedure that allows us to test several individuals together. In an idealized
setting, a test is positive if and only if at least one infected individual is
included and negative otherwise. Significant progress was made in recent years
towards understanding the information-theoretic and algorithmic properties in
this noiseless setting. In this paper, we consider a noisy variant of group
testing where test results are flipped with certain probability, including the
realistic scenario where sensitivity and specificity can take arbitrary values.
Using a test design where each individual is assigned to a fixed number of
tests, we derive explicit algorithmic bounds for two commonly considered
inference algorithms and thereby naturally extend the results of Scarlett \&
Cevher (2016) and Scarlett \& Johnson (2020). We provide improved performance
guarantees for the efficient algorithms in these noisy group testing models --
indeed, for a large set of parameter choices the bounds provided in the paper
are the strongest currently proved.
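To make the inference step concrete, here is a minimal sketch of a thresholded COMP-style decoder for noisy group testing. This is an illustrative rule only, not the exact algorithms analyzed in the works cited above: an item is declared infected if at most a given fraction of its tests came back negative (threshold 0 recovers the noiseless rule that any negative test clears an item):

```python
def noisy_comp(n_items, tests, results, threshold):
    """Thresholded COMP-style decoder (illustrative sketch).

    tests:   list of item sets, one per test
    results: observed (possibly flipped) 0/1 outcomes
    """
    declared = set()
    for item in range(n_items):
        outcomes = [r for t, r in zip(tests, results) if item in t]
        negatives = outcomes.count(0)
        if outcomes and negatives / len(outcomes) <= threshold:
            declared.add(item)
    return declared

tests = [{0, 1}, {2, 3}, {0, 2}, {1, 3}]
results = [1, 0, 1, 0]  # outcomes if item 0 is infected and no test flips
assert noisy_comp(4, tests, results, threshold=0.0) == {0}
```

With a noiseless result vector, raising the threshold only adds false positives; in the noisy setting, a small positive threshold is what tolerates flipped negative tests.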
Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework
The support recovery problem consists of determining a sparse subset of a set
of variables that is relevant in generating a set of observations, and arises
in a diverse range of settings such as compressive sensing, subset selection
in regression, and group testing. In this paper, we take a unified
approach to support recovery problems, considering general probabilistic models
relating a sparse data vector to an observation vector. We study the
information-theoretic limits of both exact and partial support recovery, taking
a novel approach motivated by thresholding techniques in channel coding. We
provide general achievability and converse bounds characterizing the trade-off
between the error probability and number of measurements, and we specialize
these to the linear, 1-bit, and group testing models. In several cases, our
bounds not only provide matching scaling laws in the necessary and sufficient
number of measurements, but also sharp thresholds with matching constant
factors. Our approach has several advantages over previous approaches: For the
achievability part, we obtain sharp thresholds under broader scalings of the
sparsity level and other parameters (e.g., signal-to-noise ratio) compared to
several previous works, and for the converse part, we not only provide
conditions under which the error probability fails to vanish, but also
conditions under which it tends to one.
Comment: Accepted to IEEE Transactions on Information Theory; presented in
part at ISIT 2015 and SODA 201
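As a concrete instance of a probabilistic model relating a sparse vector to observations, here is a sketch of the 1-bit model specialized above. The Gaussian sensing vectors, noise level, and signed outputs are an assumed parameterization for illustration, not the paper's exact setup:

```python
import random

def one_bit_measurements(beta, n_measurements, noise_std, rng):
    """Generate 1-bit observations y = sign(<a, beta> + z) with Gaussian
    sensing vectors a and Gaussian noise z (illustrative sketch)."""
    p = len(beta)
    data = []
    for _ in range(n_measurements):
        a = [rng.gauss(0, 1) for _ in range(p)]
        z = rng.gauss(0, noise_std)
        y = 1 if sum(ai * bi for ai, bi in zip(a, beta)) + z >= 0 else -1
        data.append((a, y))
    return data

rng = random.Random(1)
beta = [0.0, 2.0, 0.0, -1.5]  # sparse data vector with support {1, 3}
data = one_bit_measurements(beta, 5, 0.1, rng)
assert len(data) == 5 and all(y in (-1, 1) for _, y in data)
```

Support recovery here means identifying the index set {1, 3} from the pairs (a, y) alone; the paper's bounds characterize how many such measurements are necessary and sufficient.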
Techniques for Decentralized and Dynamic Resource Allocation
This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work relies on the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demand at the eNodeB layer.
The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and is inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantee. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.
The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized.
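Noise folding can be illustrated numerically: an aliasing mixer superimposes the noise contributions of the folded sub-bands, so the noise variance at the output grows roughly linearly with the number of sub-bands. A toy sketch, assuming unit-variance Gaussian noise per sub-band (an assumption made for illustration):

```python
import random
import statistics

def folded_noise_power(n_subbands, n_samples, rng):
    # Each output sample is the sum of one independent unit-variance
    # Gaussian noise term per folded sub-band; estimate the variance.
    samples = [sum(rng.gauss(0, 1) for _ in range(n_subbands))
               for _ in range(n_samples)]
    return statistics.pvariance(samples)

rng = random.Random(0)
# Folding 8 sub-bands yields roughly 8x the noise power of a single band.
assert folded_noise_power(8, 20000, rng) > 3 * folded_noise_power(1, 20000, rng)
```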
The sensing-phase design has been conducted via a utility maximization approach; the derived scheme has thus been called Cognitive Utility Maximization Multiple Access (CUMMA).
The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics.
While convergence of the proposed approach remains an open problem, the numerical results here presented suggest the capability of the algorithm to handle traffic fluctuations across operators, while respecting different time and economic constraints.
The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).