Derandomized Graph Product Results using the Low Degree Long Code
In this paper, we address the question of whether the recent derandomization
results obtained by the use of the low-degree long code can be extended to
other product settings. We consider two settings: (1) the graph product results
of Alon, Dinur, Friedgut and Sudakov [GAFA, 2004] and (2) the "majority is
stablest" type of result obtained by Dinur, Mossel and Regev [SICOMP, 2009] and
Dinur and Shinkar [In Proc. APPROX, 2010] while studying the hardness of
approximate graph coloring.
In our first result, we show that there exists a considerably smaller
subgraph of which exhibits the following property (shown for
by Alon et al.): independent sets close in size to the
maximum independent set are well approximated by dictators.
The "majority is stablest" type of result of Dinur et al. and Dinur and
Shinkar shows that if there exist two sets of vertices and in
with very few edges with one endpoint in and another in
, then it must be the case that the two sets and share a single
influential coordinate. In our second result, we show that a similar "majority
is stablest" statement holds good for a considerably smaller subgraph of
. Furthermore using this result, we give a more efficient
reduction from Unique Games to the graph coloring problem, leading to improved
hardness of approximation results for coloring
A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian
In this work we show a barrier towards proving a randomness-efficient
parallel repetition, a promising avenue for achieving many tight
inapproximability results. Feige and Kilian (STOC'95) proved an impossibility
result for randomness-efficient parallel repetition for two-prover games with
small degree, i.e., when each prover has only a few possibilities for the
question of the other prover. In recent years, there have been indications that
randomness-efficient parallel repetition (also called derandomized parallel
repetition) might be possible for games with large degree, circumventing the
impossibility result of Feige and Kilian. In particular, Dinur and Meir
(CCC'11) construct games with large degree whose repetition can be derandomized
using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However,
obtaining derandomized parallel repetition theorems that would yield optimal
inapproximability results has remained elusive.
This paper presents an explanation for the current impasse in progress, by
proving a limitation on derandomized parallel repetition. We formalize two
properties which we call "fortification-friendliness" and "yields robust
embeddings." We show that any proof of derandomized parallel repetition
achieving almost-linear blow-up cannot both (a) be fortification-friendly and
(b) yield robust embeddings. Unlike Feige and Kilian, we do not require the
small degree assumption.
Given that virtually all existing proofs of parallel repetition, including
the derandomized parallel repetition result of Dinur and Meir, share these two
properties, our no-go theorem highlights a major barrier to achieving
almost-linear derandomized parallel repetition.
Sparser Johnson-Lindenstrauss Transforms
We give two different and simple constructions for dimensionality reduction
in via linear mappings that are sparse: only an
-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up
applications where dimensionality reduction is used.
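As an illustration, here is a minimal sparse-embedding sketch in the spirit of such constructions (not the paper's exact distribution; the matrix shape, the per-column sparsity `s`, and the ±1/√s entry values are assumptions for the demo):

```python
# Sketch of a sparse Johnson-Lindenstrauss-style embedding: each column of
# the m x d matrix has exactly s nonzero entries of value +-1/sqrt(s), so
# only an s/m fraction of each column is nonzero.
import numpy as np

def sparse_jl_matrix(m, d, s, rng):
    """Random m x d matrix with exactly s nonzeros per column."""
    A = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)   # positions of nonzeros
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
d, m, s = 10_000, 256, 8            # s/m ~ 3% of each column is nonzero
x = rng.standard_normal(d)
A = sparse_jl_matrix(m, d, s, rng)
y = A @ x                           # embed into 256 dimensions
distortion = abs(np.linalg.norm(y) / np.linalg.norm(x) - 1.0)
print(f"relative norm distortion: {distortion:.3f}")
```

For a single fixed vector the embedded norm concentrates around the original norm, which is the property such transforms guarantee for all vectors with high probability.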
Parallel Repetition From Fortification
The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games, "fortification", and show that for fortified games the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short.
As corollaries, we obtain: (1) Starting from a PCP Theorem with soundness error bounded away from 1, we get a PCP with arbitrarily small constant soundness error. In particular, starting with the combinatorial PCP of Dinur, we get a combinatorial PCP with low error. The latter can be used for hardness of approximation as in the work of Hastad. (2) Starting from the work of the author and Raz, we get a projection PCP theorem with the smallest soundness error known today. The theorem yields nearly a quadratic improvement in the size compared to previous work.
We then discuss the problem of derandomizing parallel repetition, and the limitations of the fortification idea in this setting. We point out a connection between the problem of derandomizing parallel repetition and the problem of composition. This connection could shed light on the so-called Projection Games Conjecture, which asks for projection PCPs with minimal error.
National Science Foundation (U.S.) (Grant 1218547)
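In symbols, the fortified-repetition bound described above can be stated as follows (notation assumed here: val(G) for the value of game G, G^{⊗k} for its k-fold repetition, and δ for the additive error):

```latex
\mathrm{val}\bigl(G^{\otimes k}\bigr) \;\le\; \mathrm{val}(G)^{k} + \delta ,
```

for any fortified game G, any number of repetitions k, and an arbitrarily small δ > 0 chosen in the fortification step.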
Finding the Minimum-Weight k-Path
Given a weighted -vertex graph with integer edge-weights taken from a
range , we show that the minimum-weight simple path visiting
vertices can be found in time \tilde{O}(2^k \poly(k) M n^\omega) = O^*(2^k
M). If the weights are reals in , we provide a
-approximation which has a running time of \tilde{O}(2^k
\poly(k) n^\omega(\log\log M + 1/\varepsilon)). For the more general problem
of -tree, in which we wish to find a minimum-weight copy of a -node tree
in a given weighted graph , under the same restrictions on edge weights
respectively, we give an exact solution of running time \tilde{O}(2^k \poly(k)
M n^3) and a -approximate solution of running time
\tilde{O}(2^k \poly(k) n^3(\log\log M + 1/\varepsilon)). All of the above
algorithms are randomized with a polynomially-small error probability.
Comment: To appear at WADS 201
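For intuition, here is a classic color-coding baseline for minimum-weight k-path in the style of Alon, Yuval, and Zwick, not the faster algebraic algorithm of the abstract; the graph encoding and the trial count are illustrative assumptions:

```python
# Color-coding baseline for minimum-weight simple k-path: randomly k-color
# the vertices, then find a min-weight "colorful" path (all k colors
# distinct) by dynamic programming over color subsets. A fixed simple path
# becomes colorful with probability k!/k^k per trial, so enough trials find
# the optimum with high probability.
import random

def min_weight_k_path(n, edges, k, trials=300, seed=1):
    """edges: dict {(u, v): w} on vertices 0..n-1, undirected.
    Returns the min weight of a simple path on k vertices, or None."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((v, w))
        adj[v].append((u, w))
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        color = [rng.randrange(k) for _ in range(n)]
        # dp[(S, v)]: min weight of a colorful path with color set S ending at v
        dp = {(1 << color[v], v): 0.0 for v in range(n)}
        for size in range(1, k):
            for (S, v), w in list(dp.items()):
                if bin(S).count("1") != size:
                    continue
                for u, wt in adj[v]:
                    c = 1 << color[u]
                    if S & c:          # color already used on this path
                        continue
                    key = (S | c, u)
                    if key not in dp or dp[key] > w + wt:
                        dp[key] = w + wt
        full = (1 << k) - 1
        for (S, v), w in dp.items():
            if S == full and (best is None or w < best):
                best = w
    return best

# Path graph 0-1-2-3 with edge weights 1, 2, 3: the cheapest 3-vertex path
# is 0-1-2 with weight 3.
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 3.0}
print(min_weight_k_path(4, edges, 3))
```

The running time here is O(trials · 2^k · m), exponentially worse in the polynomial factors than the abstract's \tilde{O}(2^k M n^\omega) bound, but it shows where the 2^k subset structure comes from.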
Inapproximability of Maximum Biclique Problems, Minimum -Cut and Densest At-Least--Subgraph from the Small Set Expansion Hypothesis
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly
states that it is NP-hard to distinguish between a graph with a small subset of
vertices whose edge expansion is almost zero and one in which all small subsets
of vertices have expansion almost one. In this work, we prove inapproximability
results for the following graph problems based on this hypothesis:
- Maximum Edge Biclique (MEB): given a bipartite graph , find a complete
bipartite subgraph of with maximum number of edges.
- Maximum Balanced Biclique (MBB): given a bipartite graph , find a
balanced complete bipartite subgraph of with maximum number of vertices.
- Minimum -Cut: given a weighted graph , find a set of edges with
minimum total weight whose removal partitions into connected
components.
- Densest At-Least--Subgraph (DALS): given a weighted graph , find a
set of at least vertices such that the induced subgraph on has
maximum density (the ratio between the total weight of edges and the number of
vertices).
We show that, assuming SSEH and NP ⊄ BPP, no polynomial-time
algorithm gives -approximation for MEB or MBB for every
constant . Moreover, assuming SSEH, we show that it is NP-hard
to approximate Minimum -Cut and DALS to within factor
of the optimum for every constant .
The ratios in our results are essentially tight since trivial algorithms give
-approximation to both MEB and MBB and efficient -approximation
algorithms are known for Minimum -Cut [SV95] and DALS [And07, KS09].
Our first result is proved by combining a technique developed by Raghavendra
et al. [RST12] to avoid locality of gadget reductions with a generalization of
Bansal and Khot's long code test [BK09] whereas our second result is shown via
elementary reductions.
Comment: A preliminary version of this work will appear at ICALP 2017 under a
different title, "Inapproximability of Maximum Edge Biclique, Maximum Balanced
Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis".
Deterministic parallel algorithms for bilinear objective functions
Many randomized algorithms can be derandomized efficiently using either the
method of conditional expectations or probability spaces with low independence.
A series of papers, beginning with work by Luby (1988), showed that in many
cases these techniques can be combined to give deterministic parallel (NC)
algorithms for a variety of combinatorial optimization problems, with low time-
and processor-complexity.
We extend and generalize a technique of Luby for efficiently handling
bilinear objective functions. One noteworthy application is an NC algorithm for
maximal independent set. On a graph with edges and vertices, this
takes time and processors, nearly
matching the best randomized parallel algorithms. Other applications include
reduced processor counts for algorithms of Berger (1997) for maximum acyclic
subgraph and Gale-Berlekamp switching games.
This bilinear factorization also gives better algorithms for problems
involving discrepancy. An important application of this is to automata-fooling
probability spaces, which are the basis of a notable derandomization technique
of Sivakumar (2002). Our method leads to a large reduction in processor
complexity for a number of derandomization algorithms based on
automata-fooling, including set discrepancy and the Johnson-Lindenstrauss
Lemma.
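For context, here is a sequential simulation of the round structure of the classic randomized parallel MIS algorithm (Luby-style), the randomized counterpart that such NC algorithms derandomize; the adjacency-dict representation is an assumption for the sketch:

```python
# Luby-style randomized MIS, simulated sequentially. In each round every
# surviving vertex draws a random priority; vertices that beat all surviving
# neighbors join the independent set, and they and their neighbors are
# removed. In the parallel model each round takes O(1) depth.
import random

def luby_mis(adj, seed=0):
    """adj: dict vertex -> set of neighbors. Returns a maximal independent set."""
    rng = random.Random(seed)
    alive = set(adj)
    mis = set()
    while alive:
        prio = {v: rng.random() for v in alive}
        # local minima among surviving vertices join the MIS
        winners = {v for v in alive
                   if all(prio[v] < prio[u] for u in adj[v] if u in alive)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive   # neighbors of MIS vertices drop out
        alive -= removed
    return mis

# 4-cycle: every maximal independent set is one pair of opposite vertices.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sorted(luby_mis(adj)))
```

Each round removes at least the globally minimal-priority vertex, so the loop terminates, and in expectation a constant fraction of edges disappears per round, which is what gives the polylogarithmic parallel time.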