Derandomization and Group Testing
The rapid development of derandomization theory, a fundamental area of
theoretical computer science, has recently led to many surprising
applications beyond its original scope. We will review some recent such
developments related to combinatorial group testing. In its most basic setting,
the aim of group testing is to identify a set of "positive" individuals in a
population of items by taking groups of items and asking whether there is a
positive in each group.
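The basic setting described above can be simulated in a few lines. The sketch below is an illustrative toy, not a construction from the paper: it uses random pools and the naive "cover" decoder (an item is cleared whenever it appears in a pool that tests negative). All names and parameters are my own choices.

```python
import random

def run_tests(pools, positives):
    """Outcome of each pooled test: positive iff the pool contains a positive item."""
    return [bool(pool & positives) for pool in pools]

def decode(n_items, pools, outcomes):
    """Naive decoder: any item appearing in a negative pool must be negative."""
    candidates = set(range(n_items))
    for pool, outcome in zip(pools, outcomes):
        if not outcome:
            candidates -= pool
    return candidates

rng = random.Random(0)
n, positives = 20, {3, 11}
# 120 random pools; each item joins a pool independently with probability ~1/(d+1)
pools = [{i for i in range(n) if rng.random() < 0.3} for _ in range(120)]
decoded = decode(n, pools, run_tests(pools, positives))
```

The decoded set always contains every positive item; with enough random pools it coincides with the positives with high probability. The explicit constructions discussed in the paper replace the random pooling matrix with deterministic, disjunct matrices built from extractor-type objects.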
In particular, we will discuss explicit constructions of optimal or
nearly-optimal group testing schemes using "randomness-conducting" functions.
Among such developments are constructions of error-correcting group testing
schemes using randomness extractors and condensers, as well as threshold group
testing schemes from lossless condensers.
Comment: Invited Paper in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, 201
The Unified Theory of Pseudorandomness
Pseudorandomness is the theory of efficiently generating objects that look "random" despite being constructed with little or no randomness. One of the achievements of this research area has been the realization that a number of fundamental and widely studied "pseudorandom" objects are all almost equivalent when viewed appropriately. These objects include pseudorandom generators, expander graphs, list-decodable error-correcting codes, averaging samplers, and hardness amplifiers. In this survey, we describe the connections between all of these objects, showing how they can all be cast within a single "list-decoding framework" that brings out both their similarities and differences.
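The common syntax behind this framework is a single function f : [N] × [D] → [M] that can be read in several ways. The toy below (my own arbitrary map, with no pseudorandom properties, chosen only to show the two readings) views the same f once as a code, where the codeword of x lists its D "neighbors," and once as a sampler, where those neighbors are used to estimate the density of a test set.

```python
# One object f : [N] x [D] -> [M], read two ways (toy parameters, my own choice).
N, D, M = 16, 5, 8

def f(x, i):
    # An arbitrary toy map; real constructions put an expander/extractor here.
    return (x + i * i) % M

def codeword(x):
    """Code view: the codeword of message x is the list f(x, 0), ..., f(x, D-1)."""
    return tuple(f(x, i) for i in range(D))

def sampler_estimate(x, T):
    """Sampler view: estimate the density of T in [M] from the D samples f(x, i)."""
    return sum(f(x, i) in T for i in range(D)) / D

T = {0, 1, 2, 3}  # a test set of density 1/2 in [M]
estimates = [sampler_estimate(x, T) for x in range(N)]
```

In the framework, f is a good averaging sampler when, for most x, the estimate is close to |T|/M, and list-decodability of the code view captures essentially the same condition; the survey makes these translations precise.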
Computing the Volume of a Restricted Independent Set Polytope Deterministically
We construct a quasi-polynomial time deterministic approximation algorithm
for computing the volume of an independent set polytope with restrictions.
Randomized polynomial time approximation algorithms for computing the volume of
a convex body have been known for several decades, but the corresponding
deterministic counterparts are not available, and our algorithm is the first of
this kind. The class of polytopes for which our algorithm applies arises as the
linear programming relaxation of the independent set problem with the
additional restriction that each variable takes value in an interval [0, β]
for some β > 1/2. (We note that the case β ≤ 1/2 is trivial, since then every
edge constraint is automatically satisfied and the polytope is simply a box.)
We use the correlation decay method for this problem, applied to an
appropriate and natural discretization. The method works provided the
restriction parameter is suitably bounded in terms of Δ, the maximum degree of
the graph; in the sparsest non-trivial case, our method works under a weaker
requirement. Interestingly, the interpolation method, which is based on
analyzing complex roots of the associated partition functions, fails even in
the trivial case when the underlying graph is a singleton.
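Correlation decay, the engine of the algorithm above, can be illustrated numerically on the simplest example: the hard-core model on a path. The sketch below is a toy of my own, not the paper's volume algorithm; it iterates the standard one-dimensional recursion for the occupancy ratio and shows that the influence of the boundary condition vanishes with depth.

```python
def occupancy_ratio(lam, depth, boundary):
    """Ratio R = P(occupied)/P(unoccupied) at the root of a path of the given
    depth, iterating the hard-core recursion R <- lam / (1 + R_child) starting
    from a fixed boundary ratio."""
    r = boundary
    for _ in range(depth):
        r = lam / (1.0 + r)
    return r

lam = 1.0
# Two extreme boundary conditions: child surely unoccupied (R = 0)
# versus a heavily occupied boundary (R = 10); their effect should decay.
gap = [abs(occupancy_ratio(lam, d, 0.0) - occupancy_ratio(lam, d, 10.0))
       for d in (1, 5, 20)]
```

The gap between the two boundary conditions shrinks geometrically in the depth, which is exactly the property that lets a truncated local computation approximate the true marginal; the paper applies this idea to a discretization of the continuous volume problem.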
Pseudodeterministic constructions in subexponential time
We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n}_{n∈ℕ} of increasing primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^|p_n|, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often.
This result follows from a much more general theorem about pseudodeterministic constructions. A property Q ⊆ {0, 1}* is γ-dense if for large enough n, |Q ∩ {0, 1}^n| ≥ γ2^n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_n} of sets, H_n ⊆ {0, 1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H'_n} of sets, H'_n ⊆ {0, 1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H'_n ∩ Q ≠ ∅.
We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
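The defining property, same output on most computation paths, can be seen in a miniature example. The toy below is my own illustration, unrelated to the primes construction: when the input has a unique majority value, random sampling finds it, and because the answer is canonical, almost every run outputs the same thing.

```python
import random

def pseudodet_majority(xs, trials=101, seed=None):
    """Toy pseudodeterministic algorithm: assuming xs has a value occurring on
    more than 2/3 of positions, random sampling recovers it with high
    probability, and since that value is unique, most computation paths
    (i.e., most random seeds) output the same answer."""
    rng = random.Random(seed)
    sample = [rng.choice(xs) for _ in range(trials)]
    return max(set(sample), key=sample.count)

xs = ["a"] * 75 + ["b"] * 25   # "a" is the unique majority value
answers = {pseudodet_majority(xs, seed=s) for s in range(10)}
```

Here every seed almost surely yields the identical output "a". The paper's results concern the much harder setting where even defining a canonical object (such as a specific prime of each length) to converge on is the crux of the problem.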