    Engineering Competitive and Query-Optimal Minimal-Adaptive Randomized Group Testing Strategies

    Suppose we are given a collection of $n$ elements, $d$ of which are \emph{defective}. We may query an arbitrarily chosen subset of elements; the query returns Yes if the subset contains at least one defective and No if the subset is free of defectives. The group testing problem is to identify the defectives with a minimum number of such queries. By the information-theoretic lower bound, at least $\log_2 \binom{n}{d} \approx d\log_2(n/d) \approx d\log_2 n$ queries are needed. Using adaptive group testing, i.e., asking one query at a time, this lower bound is easily achieved. However, strategies are preferred that work in a fixed small number of stages, where the queries within a stage are asked in parallel. A group testing strategy is called \emph{competitive} if it works for completely unknown $d$ and requires only $O(d\log_2 n)$ queries. Usually competitive group testing is based on sequential queries. We show that competitive group testing with expected $O(d\log_2 n)$ queries is in fact possible in only 2 or 3 stages. We then focus on minimizing the hidden constant factor in the query number and propose a systematic approach for this purpose. Another main result concerns the design of query-optimal, minimal-adaptive strategies. We show that a 2-stage randomized strategy with prescribed success probability can asymptotically achieve the information-theoretic lower bound for $d \ll n$, i.e., $d$ growing much more slowly than $n$. Similarly, we can approach the entropy lower bound in 4 stages when $d = o(n)$.
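The fully adaptive baseline that meets the $O(d\log_2 n)$ bound can be sketched by binary-searching for one defective at a time. This is a minimal illustration, not the paper's staged strategy; the `query` oracle and all names are assumptions for the sketch:

```python
def query(subset, defectives):
    # Oracle: Yes (True) iff the subset contains at least one defective.
    return bool(subset & defectives)

def adaptive_group_test(n, defectives):
    """Identify all defectives adaptively: test the remaining pool once,
    then binary-search it down to a single defective, remove it, repeat.
    Each defective costs about log2(n) queries plus one pool check,
    so roughly d*log2(n) queries in total."""
    pool = set(range(n))
    found = set()
    queries = 0
    while pool:
        queries += 1
        if not query(pool, defectives):
            break  # remaining pool is free of defectives
        candidates = sorted(pool)
        while len(candidates) > 1:
            half = set(candidates[: len(candidates) // 2])
            queries += 1
            if query(half, defectives):
                candidates = sorted(half)
            else:
                candidates = candidates[len(candidates) // 2 :]
        defective = candidates[0]
        found.add(defective)
        pool.discard(defective)
    return found, queries
```

The staged strategies in the paper instead batch queries so that only 2 or 3 rounds of parallel tests are needed; the sketch above is the sequential extreme.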

    Graph Connectivity and Single Element Recovery via Linear and OR Queries

    We study the problem of finding a spanning forest in an undirected $n$-vertex multigraph under two basic query models. One is the Linear query model, which allows linear measurements on the incidence vector induced by the edges; the other is the weaker OR query model, which only reveals whether a given subset of plausible edges is empty or not. At the heart of our study lies a fundamental problem which we call the {\em single element recovery} problem: given a non-negative real vector $x$ in $N$ dimensions, return a single element $x_j > 0$ from the support. Queries can be made in rounds, and our goal is to understand the trade-offs between the query complexity and the rounds of adaptivity needed to solve these problems, for both deterministic and randomized algorithms. These questions have connections and ramifications in multiple areas such as sketching, streaming, graph reconstruction, and compressed sensing. Our main results are:
    * For the single element recovery problem, it is easy to obtain a deterministic $r$-round algorithm which makes $(N^{1/r}-1)$ queries per round. We prove that this is tight: any $r$-round deterministic algorithm must make $\geq (N^{1/r} - 1)$ linear queries in some round. In contrast, a 1-round $O(\log^2 N)$-query randomized algorithm which succeeds 99% of the time is known to exist.
    * We design a deterministic $O(r)$-round, $\tilde{O}(n^{1+1/r})$-OR-query algorithm for graph connectivity. We complement this with an $\tilde{\Omega}(n^{1+1/r})$ lower bound for any $r$-round deterministic algorithm in the OR model.
    * We design a randomized 2-round algorithm for the graph connectivity problem which makes $\tilde{O}(n)$ OR queries. In contrast, we prove that any 1-round algorithm (possibly randomized) requires $\tilde{\Omega}(n^2)$ OR queries.
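The easy deterministic upper bound from the first bullet has a natural sketch: split the current candidate block into about $N^{1/r}$ pieces, query the sums of all but the last piece, and recurse into a positive piece, or into the last piece by elimination. Illustrative code under the abstract's assumption that $x$ is non-negative with non-empty support; all names are the sketch's own:

```python
import math

def linear_query(x, indices):
    # Linear measurement: the sum of x over the given index set.
    return sum(x[i] for i in indices)

def single_element_recovery(x, r):
    """Deterministic r-round sketch: each round splits the surviving block
    into ceil(N^{1/r}) pieces and queries all pieces but the last
    (at most ceil(N^{1/r}) - 1 queries per round). If every queried sum
    is zero, the support element must sit in the unqueried last piece."""
    N = len(x)
    B = math.ceil(N ** (1 / r))
    block = list(range(N))
    queries = 0
    for _ in range(r):
        if len(block) == 1:
            break
        size = math.ceil(len(block) / B)
        pieces = [block[i:i + size] for i in range(0, len(block), size)]
        chosen = pieces[-1]  # fallback, inferred by elimination
        for piece in pieces[:-1]:
            queries += 1
            if linear_query(x, piece) > 0:
                chosen = piece
                break
        block = chosen
    return block[0], queries
```

After $r$ rounds the block has size $\lceil N/B^r \rceil = 1$, matching the per-round query count the abstract calls easy; the paper's contribution is the matching lower bound.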

    Estimation of Sparsity via Simple Measurements

    We consider several related problems of estimating the 'sparsity', i.e., the number $d$ of nonzero elements, of a length-$n$ vector $\mathbf{x}$ by observing only $\mathbf{b} = M \odot \mathbf{x}$, where $M$ is a predesigned test matrix independent of $\mathbf{x}$, and the operation $\odot$ varies between problems. We aim to provide a $\Delta$-approximation of sparsity for some constant $\Delta$ with a minimal number of measurements (rows of $M$). This framework generalizes multiple problems, such as estimation of sparsity in group testing and compressed sensing. We use techniques from coding theory as well as probabilistic methods to show that $O(D \log D \log n)$ rows are sufficient when the operation $\odot$ is logical OR (i.e., group testing), and nearly this many are necessary, where $D$ is a known upper bound on $d$. When instead the operation $\odot$ is multiplication over $\mathbb{R}$ or a finite field $\mathbb{F}_q$, we show that $\Theta(D)$ and $\Theta(D \log_q \frac{n}{D})$ measurements, respectively, are necessary and sufficient.
    Comment: 13 pages; shortened version presented at ISIT 201
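The intuition behind OR (group testing) measurements for sparsity estimation can be seen in a Monte Carlo toy: a random test that includes each coordinate with probability $p$ is negative with probability $(1-p)^d$, so the scale $p$ at which tests flip from mostly positive to mostly negative reveals $d$ up to a constant factor. This is an illustrative sketch only, not the paper's non-adaptive $O(D \log D \log n)$-row construction:

```python
import math
import random

def or_test(support, test_set):
    # OR measurement: 1 iff the test set intersects the support of x.
    return int(bool(support & test_set))

def estimate_sparsity(support, n, trials=200, seed=0):
    """Sample batches of random OR tests at geometrically decreasing
    inclusion rates p. The positive fraction 1 - (1 - p)^d drops below
    1/2 near p = ln(2)/d, so inverting the first crossing scale gives a
    constant-factor estimate of d. Adaptive and randomized: a toy, not
    the paper's predesigned test matrix."""
    rng = random.Random(seed)
    if not or_test(support, set(range(n))):
        return 0  # one full test certifies an empty support
    p = 1.0
    while p > 1.0 / (2 * n):
        positives = sum(
            or_test(support, {i for i in range(n) if rng.random() < p})
            for _ in range(trials)
        )
        if positives < trials / 2:
            return max(1, round(math.log(2) / p))
        p /= 2.0
    return n  # never crossed 1/2: the vector is essentially fully dense
```

The paper's point is that the same constant-factor guarantee is achievable non-adaptively with a fixed matrix, and that the OR, real, and finite-field settings need very different numbers of rows.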

    Optimal Randomized Group Testing Algorithm to Determine the Number of Defectives

    We study the problem of determining the exact number of defective items in adaptive group testing using a minimum number of tests. We improve on the existing algorithm and prove a lower bound showing that the number of tests in our algorithm is optimal up to small additive terms.
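A simple baseline for exact counting is divide and conquer: a negative test prunes an entire block at once, so only branches containing defectives are ever split. A minimal sketch (the oracle and names are illustrative; the paper's optimal algorithm is more refined than this):

```python
def query(subset, defectives):
    # Oracle: Yes (True) iff the subset contains at least one defective.
    return bool(subset & defectives)

def count_defectives(items, defectives):
    """Count defectives by recursive halving. A negative test discards a
    whole block, so the number of tests grows roughly like d*log(n/d)
    rather than n. Returns (count, number of tests used)."""
    tests = [0]

    def rec(block):
        tests[0] += 1
        if not query(set(block), defectives):
            return 0
        if len(block) == 1:
            return 1
        mid = len(block) // 2
        return rec(block[:mid]) + rec(block[mid:])

    return rec(list(items)), tests[0]
```

Shaving such a baseline down to the information-theoretic cost up to small additive terms is exactly what the paper's result is about.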

    Consistency-Checking Problems: A Gateway to Parameterized Sample Complexity

    Recently, Brand, Ganian and Simonov introduced a parameterized refinement of the classical PAC-learning sample complexity framework. A crucial outcome of their investigation is that for a very wide range of learning problems, there is a direct and provable correspondence between fixed-parameter PAC-learnability (in the sample complexity setting) and the fixed-parameter tractability of a corresponding "consistency checking" search problem (in the setting of computational complexity). Consistency checking problems can be seen as generalizations of classical search problems where, instead of receiving a single instance, one receives multiple yes- and no-examples and is tasked with finding a solution which is consistent with the provided examples. Apart from a few initial results, consistency checking problems are almost entirely unexplored from a parameterized complexity perspective. In this article, we provide an overview of these problems and their connection to parameterized sample complexity, with the primary aim of facilitating further research in this direction. Afterwards, we establish the fixed-parameter (in)tractability of some of the arguably most natural consistency checking problems on graphs, and show that their complexity-theoretic behavior is surprisingly different from that of the classical decision problems. Our new results cover consistency checking variants of problems as diverse as (k-)Path, Matching, 2-Coloring, Independent Set and Dominating Set, among others.
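To make the "find a solution consistent with examples" shape concrete, here is a brute-force toy for an Independent Set variant. The framework's actual example format is richer; in this simplification the "yes-examples" are vertices forced into the solution and the "no-examples" are vertices forced out, and all names are the sketch's own:

```python
from itertools import combinations

def consistent_independent_set(n, edges, k, must_include, must_exclude):
    """Toy consistency-checking search: find a size-k independent set of
    the graph ([n], edges) that contains every forced-in vertex and
    avoids every forced-out vertex, or return None. Brute force over all
    C(n, k) candidates, so exponential in k; the article's question is
    when such problems admit fixed-parameter algorithms."""
    edge_set = {frozenset(e) for e in edges}
    forced = set(must_include)
    banned = set(must_exclude)
    for cand in combinations(range(n), k):
        s = set(cand)
        if not forced <= s or s & banned:
            continue  # violates a yes- or no-example
        if all(frozenset(pair) not in edge_set for pair in combinations(cand, 2)):
            return s
    return None
```

Even this toy shows why the behavior can diverge from the classical decision problem: the examples act as extra constraints that the solution must satisfy simultaneously.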