Exact Bounds for Some Hypergraph Saturation Problems
Let W_n(p,q) denote the minimum number of edges in an n x n bipartite graph G
on vertex sets X,Y that satisfies the following condition: one can add the
edges between X and Y that do not belong to G one after the other so that
whenever a new edge is added, a new copy of K_{p,q} is created. The problem of
bounding W_n(p,q), and its natural hypergraph generalization, was introduced by
Balogh, Bollob\'as, Morris and Riordan. Their main result, specialized to
graphs, used algebraic methods to determine W_n(1,q).
Our main results in this paper give exact bounds for W_n(p,q), its hypergraph
analogue, as well as for a new variant of Bollob\'as's Two Families theorem. In
particular, we completely determine W_n(p,q), showing that if 1 <= p <= q <= n
then
W_n(p,q) = n^2 - (n-p+1)^2 + (q-p)^2.
Our proof applies a reduction to a multi-partite version of the Two Families
theorem obtained by Alon. While the reduction is combinatorial, the main idea
behind it is algebraic.
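As a quick sanity check, the closed-form expression above can be evaluated directly. The snippet below is an illustrative sketch (the helper name `W` is ours, not the paper's); note that setting p = 1 collapses the formula to (q-1)^2, consistent with the K_{1,q} case that Balogh, Bollob\'as, Morris and Riordan determined.

```python
def W(n, p, q):
    """Closed-form value of W_n(p,q) from the main theorem,
    valid for 1 <= p <= q <= n."""
    assert 1 <= p <= q <= n
    return n**2 - (n - p + 1)**2 + (q - p)**2

print(W(5, 1, 3))  # p = 1 case: (q - 1)^2 = 4
print(W(5, 2, 2))  # 25 - 16 + 0 = 9
```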
Entropy Samplers and Strong Generic Lower Bounds For Space Bounded Learning
With any hypothesis class one can associate a bipartite graph whose vertices are the hypotheses H on one side and all possible labeled examples X on the other side; a hypothesis is connected to all the labeled examples that are consistent with it. We call this graph the hypotheses graph. We prove that any hypothesis class whose hypotheses graph is mixing cannot be learned using fewer than Omega(log^2 |H|) memory bits unless the learner uses at least |H|^Omega(1) labeled examples. Our work builds on a combinatorial framework that we suggested in a previous work for proving lower bounds on space bounded learning. The strong lower bound is obtained by defining a new notion of pseudorandomness, the entropy sampler. Raz obtained a similar result using different ideas.
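To make the hypotheses-graph construction concrete, here is a minimal sketch for a toy class of threshold functions on a four-point domain; the class, the domain, and all variable names are our illustrative choices, not taken from the paper.

```python
# Toy domain and hypothesis class: thresholds t on {0, 1, 2, 3}.
domain = range(4)
hypotheses = [lambda x, t=t: int(x >= t) for t in range(5)]

# All possible labeled examples (x, label).
examples = [(x, b) for x in domain for b in (0, 1)]

# Hypotheses graph: hypothesis i is adjacent to every labeled
# example it is consistent with.
edges = {(i, (x, b))
         for i, h in enumerate(hypotheses)
         for (x, b) in examples
         if h(x) == b}

# Every hypothesis labels each point exactly one way, so each
# hypothesis has degree |domain| = 4.
print(len(edges))  # 5 hypotheses * 4 points = 20 edges
```

In this tiny example each hypothesis side vertex has the same degree; the paper's mixing condition is a much stronger requirement on how these edges are spread across the example side.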
Approximating Dense Max 2-CSPs
In this paper, we present a polynomial-time algorithm that approximates
sufficiently high-value Max 2-CSPs on sufficiently dense graphs to within
approximation ratio (1 - epsilon) for any constant epsilon > 0.
Using this algorithm, we also achieve similar results for free games,
projection games on sufficiently dense random graphs, and the Densest
k-Subgraph problem with sufficiently dense optimal solution. Note, however,
that algorithms with similar guarantees to the last algorithm were in fact
discovered prior to our work by Feige et al. and Suzuki and Tokuyama.
In addition, our idea for the above algorithms yields the following
by-product: a quasi-polynomial time approximation scheme (QPTAS) for
satisfiable dense Max 2-CSPs with better running time than the known
algorithms.
A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian
In this work we show a barrier towards proving a randomness-efficient
parallel repetition, a promising avenue for achieving many tight
inapproximability results. Feige and Kilian (STOC'95) proved an impossibility
result for randomness-efficient parallel repetition for two prover games with
small degree, i.e., when each prover has only a few possibilities for the
question of the other prover. In recent years, there have been indications that
randomness-efficient parallel repetition (also called derandomized parallel
repetition) might be possible for games with large degree, circumventing the
impossibility result of Feige and Kilian. In particular, Dinur and Meir
(CCC'11) construct games with large degree whose repetition can be derandomized
using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However,
obtaining derandomized parallel repetition theorems that would yield optimal
inapproximability results has remained elusive.
This paper presents an explanation for the current impasse in progress, by
proving a limitation on derandomized parallel repetition. We formalize two
properties which we call "fortification-friendliness" and "yields robust
embeddings." We show that any proof of derandomized parallel repetition
achieving almost-linear blow-up cannot both (a) be fortification-friendly and
(b) yield robust embeddings. Unlike Feige and Kilian, we do not require the
small degree assumption.
Given that virtually all existing proofs of parallel repetition, including
the derandomized parallel repetition result of Dinur and Meir, share these two
properties, our no-go theorem highlights a major barrier to achieving
almost-linear derandomized parallel repetition.