6 research outputs found

    Derandomization and Group Testing

    The rapid development of derandomization theory, a fundamental area in theoretical computer science, has recently led to many surprising applications outside its initial scope. We review some recent such developments related to combinatorial group testing. In its most basic setting, the aim of group testing is to identify a set of "positive" individuals in a population of items by taking groups of items and asking whether each group contains a positive. In particular, we discuss explicit constructions of optimal or nearly optimal group testing schemes using "randomness-conducting" functions. Among such developments are constructions of error-correcting group testing schemes using randomness extractors and condensers, as well as threshold group testing schemes from lossless condensers.
    Comment: Invited paper in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, 2010
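    The basic pooling-and-decoding idea can be illustrated with a small sketch (this is not the paper's extractor-based construction): with bit-test pools and their complements, a single positive item is identified by the simple rule that any item appearing in a negative pool must be non-positive, sometimes called the COMP decoder. The pool design and item indices below are purely illustrative.

    ```python
    def comp_decode(pools, outcomes, n):
        """COMP rule: every item that appears in a negative pool is non-positive;
        whatever survives elimination is reported as positive."""
        candidates = set(range(n))
        for pool, positive in zip(pools, outcomes):
            if not positive:
                candidates -= set(pool)
        return candidates

    # Bit-test design for n = 8 items: for each of the 3 bit positions, one pool
    # of items with that bit set and one with it clear (6 pools in total).
    n = 8
    pools = [[i for i in range(n) if (i >> b) & 1 == v]
             for b in range(3) for v in (1, 0)]
    positives = {5}                                  # hidden ground truth
    outcomes = [bool(set(p) & positives) for p in pools]
    decoded = comp_decode(pools, outcomes, n)        # recovers {5}
    ```

    This toy design handles only one positive; supporting up to d positives with few pools is exactly what the disjunct-matrix constructions discussed in the abstract provide.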

    The Complexity of Explicit Constructions


    Computing the Volume of a Restricted Independent Set Polytope Deterministically

    We construct a quasi-polynomial time deterministic approximation algorithm for computing the volume of an independent set polytope with restrictions. Randomized polynomial-time approximation algorithms for computing the volume of a convex body have been known for several decades, but the corresponding deterministic counterparts are not available, and our algorithm is the first of this kind. The class of polytopes to which our algorithm applies arises as the linear programming relaxation of the independent set problem with the additional restriction that each variable takes values in the interval [0, 1−α] for some α < 1/2. (We note that the case α ≥ 1/2 is trivial.) We use the correlation decay method for this problem, applied to its appropriate and natural discretization. The method works provided α > 1/2 − O(1/Δ²), where Δ is the maximum degree of the graph. When Δ = 3 (the sparsest non-trivial case), our method works provided 0.488 < α < 0.5. Interestingly, the interpolation method, which is based on analyzing complex roots of the associated partition functions, fails even in the trivial case when the underlying graph is a singleton.
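    For contrast with the deterministic algorithm discussed above, the randomized baseline can be sketched as a naive hit-or-miss Monte Carlo estimator for the polytope in question: sample points uniformly from the box [0, 1−α]^n and keep those satisfying the edge constraints x_u + x_v ≤ 1 of the independent set relaxation. (This naive estimator is only workable in low dimension; the practical randomized volume algorithms alluded to in the abstract are MCMC-based. The function name and parameters here are illustrative.)

    ```python
    import random

    def mc_volume(n, edges, alpha, samples=200_000, seed=0):
        """Estimate vol{x in [0, 1-alpha]^n : x_u + x_v <= 1 for each edge (u, v)}
        by uniform sampling from the enclosing box (hit-or-miss Monte Carlo)."""
        rng = random.Random(seed)
        hi = 1.0 - alpha
        hits = 0
        for _ in range(samples):
            x = [rng.uniform(0.0, hi) for _ in range(n)]
            if all(x[u] + x[v] <= 1.0 for u, v in edges):
                hits += 1
        return (hits / samples) * hi ** n

    # Single edge on 2 vertices with alpha = 0.3: the exact volume is the box
    # area 0.7^2 minus the corner triangle of area 0.4^2 / 2, i.e. 0.41.
    vol = mc_volume(2, [(0, 1)], 0.3)
    ```

    The estimator's relative error degrades exponentially with n, which is one reason nontrivial machinery (randomized or, as here, deterministic correlation decay) is needed in general.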

    Pseudodeterministic constructions in subexponential time

    We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n}_{n∈ℕ} of increasing primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^|p_n|, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a much more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ γ·2^n. We show that for each c > 0 at least one of the following holds: (1) there is a pseudodeterministic polynomial-time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) there is a deterministic sub-exponential time construction of a family {H'_n} of sets, H'_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H'_n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
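    A toy sketch of the property being studied, not of the paper's hardness-versus-randomness argument: an algorithm whose only randomness sits inside a Miller-Rabin primality test still outputs a canonical prime (the first probable prime at or above 2^(bits−1)) on all but an exponentially small fraction of its computation paths. The function names are illustrative, and this brute-force search takes exponential time in the input length, which is precisely what the paper's sub-exponential construction improves on.

    ```python
    import random

    def is_probable_prime(n, rounds=20, rng=None):
        """Miller-Rabin test; the rng is the algorithm's only source of randomness."""
        rng = rng or random.Random()
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = rng.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False
        return True

    def canonical_prime(bits, rng=None):
        """Pseudodeterministic toy: the output is the same canonical value on
        almost every computation path, despite the randomized test inside."""
        n = 1 << (bits - 1)
        while not is_probable_prime(n, rng=rng):
            n += 1
        return n
    ```

    Running `canonical_prime(8)` with independently seeded generators returns the same value (131, the smallest 8-bit prime) with overwhelming probability, illustrating "same solution on most computation paths".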