A PCP Characterization of AM
We introduce a 2-round stochastic constraint-satisfaction problem, and show
that its approximation version is complete for (the promise version of) the
complexity class AM. This gives a `PCP characterization' of AM analogous to the
PCP Theorem for NP. Similar characterizations have been given for higher levels
of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the
result for AM might be of particular significance for attempts to derandomize
this class.
To test this notion, we pose some `Randomized Optimization Hypotheses'
related to our stochastic CSPs that (in light of our result) would imply
collapse results for AM. Unfortunately, the hypotheses appear over-strong, and
we present evidence against them. In the process we show that, if some language
in NP is hard-on-average against circuits of size 2^{Omega(n)}, then there
exist hard-on-average optimization problems of a particularly elegant form.
All our proofs use a powerful form of PCPs known as Probabilistically
Checkable Proofs of Proximity, and demonstrate their versatility. We also use
known results on randomness-efficient soundness- and hardness-amplification. In
particular, we make essential use of the Impagliazzo-Wigderson generator; our
analysis relies on a recent Chernoff-type theorem for expander walks.
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as the threshold model where a query returns
positive if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles to construct explicit capacity-achieving
codes.
[This is a shortened version of the actual abstract in the thesis.] (EPFL PhD thesis)
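The combinatorial group testing model described above can be sketched in a few lines: a query reports whether a pool contains at least one defective (or, in the threshold model, at least a given number), and in the noiseless case any item appearing in a negative pool can be ruled out. This is a minimal illustration only; the pools below are hypothetical, whereas the thesis derives them explicitly from randomness condensers.

```python
def run_queries(pools, defectives, threshold=1):
    """Answer each pooled query: positive iff the pool contains at
    least `threshold` defective items (threshold=1 is the classical
    model; larger thresholds give the threshold model)."""
    d = set(defectives)
    return [len(d & set(pool)) >= threshold for pool in pools]

def decode_naive(pools, outcomes, n):
    """Noiseless classical decoding: an item seen in any negative
    pool cannot be defective, so remove it from the candidate set."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)
    return candidates

n = 8
defectives = [2, 5]
# hypothetical hand-picked pools, not a condenser-based construction
pools = [[0, 1, 2], [3, 4, 5], [0, 3, 6], [1, 4, 7], [6, 7, 2],
         [0, 1, 3, 4, 6, 7]]
outcomes = run_queries(pools, defectives)
print(sorted(decode_naive(pools, outcomes, n)))  # → [2, 5]
```

With unreliable outcomes, as in the thesis, this naive elimination no longer suffices and the pooling design must tolerate errors.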
Covering Problems via Structural Approaches
The minimum set cover problem is, without question, among the most ubiquitous and well-studied problems in computer science. Its theoretical hardness has been fully characterized: logarithmic approximability has been established, and no sublogarithmic approximation exists unless P = NP. However, the gap between real-world instances and the theoretical worst case is often immense: many covering problems of practical relevance admit much better approximations, or even solvability in polynomial time. Simple combinatorial or geometric structure can often be exploited to obtain improved algorithms on a problem-by-problem basis, but there is no general method of determining the extent to which this is possible.
In this thesis, we aim to shed light on the relationship between the structure and the hardness of covering problems. We discuss several measures of structural complexity of set cover instances and prove new algorithmic and hardness results linking the approximability of a set cover problem to its underlying structure. In particular, we provide:
- An APX-hardness proof for a wide family of problems that encode a simple covering problem known as Special-3SC.
- A class of polynomial dynamic programming algorithms for a group of weighted geometric set cover problems having simple structure.
- A simplified quasi-uniform sampling algorithm that yields improved approximations for weighted covering problems having low cell complexity or geometric union complexity.
- Applications of the above to various capacitated covering problems via linear programming strengthening and rounding.
In total, we obtain new results for dozens of covering problems exhibiting geometric or combinatorial structure. We tabulate these problems and classify them according to their approximability.
Explicit two-source extractors and more
In this thesis we study the problem of extracting almost truly random bits from imperfect sources of randomness. This is motivated by the wide use of randomness in computer science, and by the fact that most accessible sources of randomness generate correlated bits and at best contain some amount of entropy. Following Chor and Goldreich [CG88] and Zuckerman [Z90], we model weak sources using min-entropy, where an (n,k)-source X is a distribution on n bits that takes any string x with probability at most 2^-k. It is known to be impossible to extract random bits from a single (n,k)-source, and Chor and Goldreich [CG88] raised the question of extracting randomness from two such independent (n,k)-sources. Existentially, such 2-source randomness extractors exist for min-entropy k >= log n + O(1), but the best known construction prior to the work in this thesis requires min-entropy k >= 0.499n [B2]. One of the main contributions of this thesis is an explicit 2-source extractor for min-entropy log^C n, for some constant C. Other results in this thesis include improved ways of extracting random bits from various other sources of randomness, as well as stronger notions of randomness extraction. Our results have applications in privacy amplification [BBR88, Mau92, BBCM95], a classical problem in information-theoretic cryptography, where they give protocols that achieve almost optimal parameters. Other applications include explicit constructions of non-malleable codes, which are a relaxation of error-detecting codes and have applications in tamper-resilient cryptography [DPW10].
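The min-entropy measure used above is simple to state concretely: H_inf(X) = -log2 of the largest point probability, and an (n,k)-source is any distribution over n-bit strings with H_inf(X) >= k. A small sketch with a hypothetical biased source:

```python
import math

def min_entropy(dist):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x]).
    An (n,k)-source is a distribution on {0,1}^n with H_inf >= k."""
    return -math.log2(max(dist.values()))

# hypothetical weak source on n = 2 bits: correlated, not uniform
X = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
print(min_entropy(X))  # → 1.0, so X is a (2,1)-source
```

Note that a single such source admits no deterministic extractor, which is exactly why the two-independent-source model is studied.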
On the Complexity of Approximating the VC dimension
We study the complexity of approximating the VC dimension of a collection of sets, when the sets are encoded succinctly by a small circuit. We show that this problem is: Σ_3^p-hard to approximate to within a factor 2-ε for any ε>0; approximable in AM to within a factor 2; and AM-hard to approximate to within a factor N^ε for some constant ε>0. To obtain the Σ_3^p-hardness results we solve a randomness extraction problem using list-decodable binary codes; for the positive results we utilize the Sauer-Shelah(-Perles) Lemma. The exact value of ε in the AM-hardness result depends on the degree achievable by explicit disperser constructions.
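For a small explicitly listed set system, the VC dimension can be computed by brute force: it is the size of the largest point set S that is shattered, i.e. every subset of S arises as S intersect C for some C in the collection. A sketch on a toy instance (intervals on a line, which famously have VC dimension 2); the hardness results above concern the much harder setting where the collection is given only as a succinct circuit:

```python
from itertools import combinations

def shatters(sets_, pts):
    """True iff every subset of pts equals pts & C for some C."""
    pts = frozenset(pts)
    traces = {pts & c for c in sets_}
    return len(traces) == 2 ** len(pts)

def vc_dimension(sets, universe):
    """Brute-force VC dimension: largest d with some shattered
    d-subset of the universe. Exponential time in |universe|."""
    frozen = [frozenset(c) for c in sets]
    d = 0
    for size in range(1, len(universe) + 1):
        if any(shatters(frozen, s) for s in combinations(universe, size)):
            d = size
        else:
            break  # shattering is monotone, so we can stop here
    return d

# toy collection: all contiguous intervals {i, ..., j-1} on points 0..3
universe = list(range(4))
intervals = [set(range(i, j)) for i in range(4) for j in range(i, 5)]
print(vc_dimension(intervals, universe))  # → 2
```

Intervals shatter any pair of points but never a triple (the two outer points cannot be covered without the middle one), hence dimension 2.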