Engineering Competitive and Query-Optimal Minimal-Adaptive Randomized Group Testing Strategies
Suppose we are given a collection of $n$ elements, $d$ of which are \emph{defective}. We can query an arbitrarily chosen subset of elements; the query returns Yes if the subset contains at least one defective and No if the subset is free of defectives. The problem of group testing is to identify the defectives with a minimum number of such queries. By the information-theoretic lower bound, at least $\log_2 \binom{n}{d}$ queries are needed. Using adaptive group testing, i.e., asking one query at a time, the lower bound can easily be achieved. However, strategies are preferred that work in a fixed small number of stages, where the queries within a stage are asked in parallel. A group testing strategy is called \emph{competitive} if it works for a completely unknown number of defectives $d$ and requires only a constant factor more queries than the information-theoretic minimum. Usually competitive group testing is based on sequential queries. We have shown that competitive group testing with an optimal expected query count, up to a constant factor, is in fact possible in a small constant number of stages. We then focused on minimizing the hidden constant factor in the query number and proposed a systematic approach for this purpose. Another main result concerns the design of query-optimal, minimal-adaptive strategies. We have shown that a randomized strategy with a constant number of stages and a prescribed success probability can asymptotically achieve the information-theoretic lower bound when $d$ grows much more slowly than $n$. Similarly, we can approach the entropy lower bound in a constant number of stages under an analogous growth condition.
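The fully adaptive baseline mentioned above can be made concrete. The following is a minimal sketch, not the engineered strategies of this work: it repeatedly tests the remaining pool and, while the answer is Yes, isolates one defective by binary splitting, using roughly d*log2(n) queries overall. The oracle `group_test` and all names are illustrative.

```python
import random

def group_test(defectives, subset):
    """Pooled query: True iff the subset contains at least one defective."""
    return any(i in defectives for i in subset)

def find_one_defective(defectives, subset, counter):
    """Binary splitting: isolate one defective in a subset known to test positive."""
    while len(subset) > 1:
        half = subset[: len(subset) // 2]
        counter[0] += 1
        if group_test(defectives, half):
            subset = half          # a defective lies in the first half
        else:
            subset = subset[len(half):]   # otherwise it lies in the second half
    return subset[0]

def identify_defectives(n, defectives):
    """Adaptive strategy: test the remaining pool, extract defectives one by one."""
    remaining = list(range(n))
    found = []
    queries = [0]
    while remaining:
        queries[0] += 1
        if not group_test(defectives, remaining):
            break                  # pool tests clean: all defectives found
        d = find_one_defective(defectives, remaining, queries)
        found.append(d)
        remaining.remove(d)
    return sorted(found), queries[0]

random.seed(1)
n, d = 64, 3
defectives = set(random.sample(range(n), d))
found, q = identify_defectives(n, defectives)
print(found, q)
```

With n = 64 and d = 3, each extraction costs one pool test plus at most 6 splitting queries, so the total stays within d*(log2(n)+1)+1 = 22 queries, matching the adaptive bound the abstract refers to.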
Graph Connectivity and Single Element Recovery via Linear and OR Queries
We study the problem of finding a spanning forest in an undirected,
$n$-vertex multi-graph under two basic query models. One is the Linear query
model, in which a query is a linear measurement on the incidence vector
induced by the edges; the other is the weaker OR query model, which only
reveals whether a given subset of plausible edges is empty or not. At the
heart of our study lies a fundamental problem which we call the {\em single
element recovery} problem: given a non-negative real vector $x$ in $N$
dimensions, return a single element from its support. Queries can be made in
rounds, and our goal is to understand the trade-offs between the query
complexity and the rounds of adaptivity needed to solve these problems, for
both deterministic and randomized algorithms. These questions have connections
and ramifications to multiple areas such as sketching, streaming, graph
reconstruction, and compressed sensing. Our main results are:
* For the single element recovery problem, it is easy to obtain a
deterministic, $r$-round algorithm which makes roughly $N^{1/r}$ queries per
round. We prove that this is tight: any $r$-round deterministic algorithm must
make $N^{1/r}$ linear queries in some round. In contrast, a single-round
randomized algorithm making only polylogarithmically many queries, succeeding
99% of the time, is known to exist.
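The deterministic round/query trade-off in the bullet above can be sketched as follows, assuming an OR-type oracle that reports whether a given index set intersects the support (a simplification of a linear query on a non-negative vector; all names are illustrative). Each round partitions the candidate range into about $N^{1/r}$ blocks, queries them all non-adaptively, and recurses into one positive block.

```python
import math

def or_query(x, indices):
    """Support query: does this index set hit a nonzero entry of x?"""
    return any(x[i] > 0 for i in indices)

def recover_support_element(x, r):
    """Recover one support index of a non-negative vector x in r rounds,
    using about N^(1/r) queries per round; queries within a round are
    non-adaptive, i.e. they could all be issued in parallel."""
    N = len(x)
    b = math.ceil(N ** (1.0 / r))        # branching factor per round
    candidates = list(range(N))
    total_queries = 0
    for _ in range(r):
        if len(candidates) == 1:
            break
        size = math.ceil(len(candidates) / b)
        blocks = [candidates[i:i + size] for i in range(0, len(candidates), size)]
        answers = [or_query(x, blk) for blk in blocks]   # one parallel round
        total_queries += len(blocks)
        candidates = blocks[answers.index(True)]         # recurse into a hit
    return candidates[0], total_queries

x = [0.0] * 1000
x[437] = 2.5
idx, q = recover_support_element(x, r=3)
print(idx, q)   # 437 30
```

With N = 1000 and r = 3, each round issues 10 queries and shrinks the candidate set by a factor of 10, for 30 queries total, i.e. r * N^(1/r).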
* We design a deterministic algorithm for graph connectivity in the OR query
model that uses logarithmically many rounds and a near-linear number of OR
queries in total. We complement this with a lower bound showing that any
deterministic OR-query algorithm restricted to substantially fewer rounds must
make far more queries.
* We design a randomized algorithm for the graph connectivity problem that
uses only a constant number of rounds and a near-linear number of OR queries.
In contrast, we prove that any algorithm with fewer rounds (possibly
randomized) requires substantially more OR queries.
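To illustrate how OR queries can drive a connectivity algorithm, here is a hedged sketch assuming an oracle that answers whether a set of candidate vertex pairs contains at least one real edge. It runs a Boruvka-style loop in which each component binary-searches for one outgoing edge; this is only a baseline, not the paper's round-efficient algorithms.

```python
def make_or_oracle(edges):
    """OR oracle over a hidden edge set: is some candidate pair a real edge?"""
    edge_set = {frozenset(e) for e in edges}
    def or_query(pairs):
        return any(frozenset(p) in edge_set for p in pairs)
    return or_query

def find_edge(or_query, pairs):
    """Binary-search one real edge out of a candidate list (log #pairs queries)."""
    if not or_query(pairs):
        return None
    while len(pairs) > 1:
        half = pairs[: len(pairs) // 2]
        pairs = half if or_query(half) else pairs[len(half):]
    return pairs[0]

def spanning_forest(n, or_query):
    """Boruvka-style: in each phase, every component looks for one outgoing edge."""
    comp = list(range(n))                 # component label per vertex
    forest = []
    while True:
        grown = False
        for c in set(comp):
            inside = [v for v in range(n) if comp[v] == c]
            outside = [v for v in range(n) if comp[v] != c]
            pairs = [(u, v) for u in inside for v in outside]
            e = find_edge(or_query, pairs)
            if e is not None:
                grown = True
                a, b = comp[e[0]], comp[e[1]]
                comp = [a if lbl == b else lbl for lbl in comp]   # merge components
                forest.append(e)
        if not grown:                     # no component could grow: done
            return forest

or_query = make_or_oracle([(0, 1), (1, 2), (2, 3), (4, 5)])
forest = spanning_forest(6, or_query)
print(len(forest))   # 4 = n minus the number of connected components
```

Every appended pair is a verified real edge crossing two current components, so the output is a spanning forest: here 4 edges for a 6-vertex graph with two components.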
Estimation of Sparsity via Simple Measurements
We consider several related problems of estimating the 'sparsity' or number
of nonzero elements $d$ in a length-$n$ vector $x$ by observing only
$b = M \odot x$, where $M$ is a predesigned test matrix independent of $x$,
and the operation $\odot$ varies between problems. We aim to provide a
constant-factor approximation of the sparsity with a minimal number of
measurements (rows of $M$). This framework generalizes multiple problems, such
as estimation of sparsity in group testing and compressed sensing. We use
techniques from coding theory as well as probabilistic methods to prove upper
and lower bounds, matching up to lower-order terms, on the number of rows
needed when the operation is logical OR (i.e., group testing); the bounds are
stated in terms of a known upper bound $D$ on $d$. When instead the operation
is multiplication over the reals or over a finite field, we likewise determine
the number of measurements that is necessary and sufficient in each case.

Comment: 13 pages; shortened version presented at ISIT 201
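For the OR (group testing) setting, a hedged sketch of the probabilistic idea, not the paper's construction: use random test rows that include each coordinate independently with probability p, so a row tests positive with probability 1 - (1-p)^d, and estimate the sparsity by inverting the empirical positive rate. All names and parameter choices are illustrative.

```python
import math
import random

def estimate_sparsity(x, p, m, rng):
    """Estimate |support(x)| from m random OR measurements.
    Each row includes every coordinate independently with probability p;
    a test is positive iff it hits the support, so P(positive) = 1-(1-p)^d.
    Inverting the empirical positive rate yields the estimate."""
    support = {i for i, v in enumerate(x) if v != 0}
    positives = 0
    for _ in range(m):
        row = [i for i in range(len(x)) if rng.random() < p]
        if support.intersection(row):
            positives += 1
    f = positives / m
    if f >= 1.0:                 # every test positive: d too large for this p
        return float('inf')
    return math.log(1 - f) / math.log(1 - p)

rng = random.Random(0)
x = [0] * 500
for i in rng.sample(range(500), 20):
    x[i] = 1
d_hat = estimate_sparsity(x, p=0.03, m=2000, rng=rng)
print(round(d_hat))   # close to the true sparsity d = 20
```

With d = 20 and p = 0.03 the positive rate is about 0.46, so 2000 tests pin the estimate within a small constant factor; choosing p near 1/D (for a known upper bound D) keeps the rate bounded away from 0 and 1.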
Optimal Randomized Group Testing Algorithm to Determine the Number of Defectives
We study the problem of determining the exact number of defective items in adaptive group testing using a minimum number of tests. We improve the previously best algorithm and prove a lower bound showing that the number of tests in our algorithm is optimal up to small additive terms.
Consistency-Checking Problems: A Gateway to Parameterized Sample Complexity
Recently, Brand, Ganian and Simonov introduced a parameterized refinement of
the classical PAC-learning sample complexity framework. A crucial outcome of
their investigation is that for a very wide range of learning problems, there
is a direct and provable correspondence between fixed-parameter
PAC-learnability (in the sample complexity setting) and the fixed-parameter
tractability of a corresponding "consistency checking" search problem (in the
setting of computational complexity). The latter can be seen as a
generalization of classical search problems where, instead of receiving a
single instance, one receives multiple yes- and no-examples and is tasked
with finding a solution that is consistent with all of the provided examples.
Apart from a few initial results, consistency checking problems are almost
entirely unexplored from a parameterized complexity perspective. In this
article, we provide an overview of these problems and their connection to
parameterized sample complexity, with the primary aim of facilitating further
research in this direction. Afterwards, we establish the fixed-parameter
(in)-tractability for some of the arguably most natural consistency checking
problems on graphs, and show that their complexity-theoretic behavior is
surprisingly different from that of classical decision problems. Our new
results cover consistency checking variants of problems as diverse as (k-)Path,
Matching, 2-Coloring, Independent Set and Dominating Set, among others.
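To make the notion concrete, here is a hedged toy instance of consistency checking, using a hitting-set-style concept rather than the graph problems treated in the article: given yes-examples that a hidden size-k set must intersect and no-examples it must avoid, brute-force search for any consistent hypothesis. All names are illustrative.

```python
from itertools import combinations

def consistent_solution(universe, k, yes_examples, no_examples):
    """Toy consistency checking: find a size-k set S that intersects every
    yes-example and is disjoint from every no-example, or report None.
    (Illustrative only -- the article studies graph problems such as
    (k-)Path and Dominating Set in this role.)"""
    # elements occurring in any no-example can never be used
    forbidden = set().union(*no_examples) if no_examples else set()
    allowed = [u for u in universe if u not in forbidden]
    for S in combinations(allowed, k):
        s = set(S)
        if all(s & set(e) for e in yes_examples):
            return s
    return None          # no size-k hypothesis is consistent with the examples

universe = range(8)
yes = [{0, 1}, {2, 3}, {1, 4}]
no = [{5, 6}]
S = consistent_solution(universe, 2, yes, no)
print(S)   # {1, 2}
```

Pruning elements of no-examples up front is a small example of how parameterized structure can be exploited; the article asks when such problems admit full fixed-parameter tractability.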