A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs
A (k x l)-birthday repetition of a two-prover game G is a game in which the two
provers are sent random sets of questions from G of sizes k and l respectively.
These two sets are sampled independently and uniformly among all sets of questions
of those particular sizes. We prove the following birthday repetition theorem:
when G satisfies some mild conditions, the value of the repeated game decreases
exponentially in kl/n, where n is the total number of questions. Our result
positively resolves an open question posed by Aaronson, Impagliazzo and
Moshkovitz (CCC 2014).
As an application of our birthday repetition theorem, we obtain new
fine-grained hardness of approximation results for dense CSPs. Specifically, we
establish a tight trade-off between running time and approximation ratio for
dense CSPs by showing conditional lower bounds, integrality gaps and
approximation algorithms. In particular, for any sufficiently large i and for
every k >= 2, we show the following results:
- We exhibit an O(q^{1/i})-approximation algorithm for dense Max k-CSPs
with alphabet size q via O_k(i) levels of the Sherali-Adams relaxation.
- Through our birthday repetition theorem, we obtain an integrality gap of
q^{1/i} for Omega_k(i)-level Lasserre relaxation for fully-dense Max
k-CSP.
- Assuming that there is a constant epsilon > 0 such that Max 3SAT cannot
be approximated to within a (1 - epsilon) factor of the optimal in
sub-exponential time, our birthday repetition theorem implies that any
algorithm that approximates fully-dense Max k-CSP to within a q^{1/i} factor
takes (nq)^{Omega_k(i)} time, almost tightly matching the algorithmic
result based on the Sherali-Adams relaxation.
Comment: 45 pages
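The phenomenon behind this theorem is the birthday paradox: two independently sampled question sets of sizes k and l out of n questions already intersect with constant probability once kl is comparable to n. A minimal sketch of that collision probability (our own illustration, not part of the paper's proof; function names are ours):

```python
import math

def no_collision_prob(n: int, k: int, l: int) -> float:
    """Exact probability that a uniform k-subset and an independent
    uniform l-subset of an n-element question set are disjoint:
    C(n-k, l) / C(n, l)."""
    return math.comb(n - k, l) / math.comb(n, l)

def collision_prob(n: int, k: int, l: int) -> float:
    """Probability that the two provers share at least one question."""
    return 1.0 - no_collision_prob(n, k, l)
```

For k = l ~ sqrt(n) the collision probability is already a constant, which is the "birthday" effect that drives the exponential decay of the repeated game's value.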
AM with Multiple Merlins
We introduce and study a new model of interactive proofs: AM(k), or
Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known
MIP, here the assumption is that each Merlin receives an independent random
challenge from Arthur. One motivation for this model (which we explore in
detail) comes from the close analogies between it and the quantum complexity
class QMA(k), but the AM(k) model is also natural in its own right.
We illustrate the power of multiple Merlins by giving an AM(2) protocol for
3SAT, in which the Merlins' challenges and responses consist of only
n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the
Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP
with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms
nearly matching this lower bound are known, but their running times had never
been previously explained. Brandao and Harrow have also recently used our 3SAT
protocol to show quasipolynomial hardness for approximating the values of
certain entangled games.
In the other direction, we give a simple quasipolynomial-time approximation
algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT
protocol is essentially optimal. More generally, we show that multiple Merlins
never provide more than a polynomial advantage over one: that is, AM(k)=AM for
all k=poly(n). The key to this result is a subsampling theorem for free games,
which follows from powerful results by Alon et al. and Barak et al. on
subsampling dense CSPs, and which says that the value of any free game can be
closely approximated by the value of a logarithmic-sized random subgame.
Comment: 48 pages
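A free game of the kind discussed here can be made concrete in a few lines: Arthur draws independent uniform questions for the two Merlins, and the classical value maximizes the acceptance probability over deterministic strategies. The brute-force evaluator below is a toy illustration of the objects involved (the subsampling theorem itself is not implemented; all names are ours):

```python
import itertools
import random

def free_game_value(X, Y, A, B, V):
    """Exact classical value of a free game: max over deterministic
    strategies f: X -> A, g: Y -> B of E[V(x, y, f(x), g(y))] under
    independent uniform x, y. Only feasible for tiny instances."""
    best = 0.0
    for f in itertools.product(A, repeat=len(X)):
        for g in itertools.product(B, repeat=len(Y)):
            wins = sum(V(x, y, f[i], g[j])
                       for i, x in enumerate(X)
                       for j, y in enumerate(Y))
            best = max(best, wins / (len(X) * len(Y)))
    return best

def subsampled_value(X, Y, A, B, V, s, rng):
    """Value of a random s-question-per-side subgame; the subsampling
    theorem says this approximates the value of the full game."""
    return free_game_value(rng.sample(X, s), rng.sample(Y, s), A, B, V)
```

For instance, on the two-question XOR game with predicate a XOR b = x AND y (the classical CHSH game), the evaluator returns the classical value 3/4.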
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be
the size of the input, is there an algorithm that runs in
t(OPT) * poly(N) time and outputs a solution of size
f(OPT), for any functions t and f that are independent of N (for
Clique, we want f(OPT) = omega(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial
FPT-approximation algorithm, i.e., there is no
o(OPT)-FPT-approximation algorithm for Clique and no
f(OPT)-FPT-approximation algorithm for DomSet, for any function f
(e.g., this holds even if f is the Ackermann function). In fact, our results
imply something even stronger: the best way to solve Clique and DomSet, even
approximately, is to essentially enumerate all possibilities. Our results hold
under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which
states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable
3SAT formula and one which is not even (1 - epsilon)-satisfiable for some
constant epsilon > 0.
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out
k^{o(1)}-FPT-approximation algorithms for Densest k-Subgraph, although this
ratio does not yet match the trivial O(k)-approximation algorithm.
Comment: 43 pages. To appear in FOCS'17
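The "essentially enumerate all possibilities" baseline for Clique is the n^{O(k)} scan over all vertex subsets of size at most k; the lower bounds above say that even approximation algorithms cannot beat this shape of running time. A toy version of that enumeration (our own illustration):

```python
from itertools import combinations

def max_clique_upto(adj, k):
    """Largest clique of size at most k, found by exhaustively checking
    all vertex subsets from size k downward -- the n^{O(k)} baseline.
    `adj` maps each vertex to the set of its neighbors."""
    vertices = list(adj)
    for s in range(k, 0, -1):
        for subset in combinations(vertices, s):
            # A subset is a clique iff every pair of its vertices is adjacent.
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return list(subset)
    return []
```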
ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network
We study the 2-ary constraint satisfaction problems (2-CSPs), which can be
stated as follows: given a constraint graph G = (V, E), an alphabet set Sigma
and, for each edge {u, v} in E, a constraint C_{uv} contained in Sigma x Sigma,
the goal is to find an assignment sigma: V -> Sigma that satisfies as many
constraints as possible, where a constraint C_{uv} is satisfied if
(sigma(u), sigma(v)) is in C_{uv}.
While the approximability of 2-CSPs is quite well understood when |Sigma|
is constant, many problems are still open when |Sigma| becomes super
constant. One such problem is whether it is hard to approximate 2-CSPs to
within a polynomial factor of |Sigma| |V|. Bellare et al. (1993) suggested
that the answer to this question might be positive. Alas, despite efforts to
resolve this conjecture, it remains open to this day.
In this work, we separate |V| and |Sigma| and ask a related but weaker
question: is it hard to approximate 2-CSPs to within a polynomial factor of
|V| (while |Sigma| may be super-polynomial in |V|)? Assuming the
exponential time hypothesis (ETH), we answer this question positively by
showing that no polynomial time algorithm can approximate 2-CSPs to within a
factor of |V|^{1 - o(1)}. Note that our ratio is almost linear, which is
almost optimal as a trivial algorithm gives a |V|-approximation for 2-CSPs.
Thanks to a known reduction, our result implies ETH-hardness of
approximating Directed Steiner Network with ratio k^{1/4 - o(1)}, where k is
the number of demand pairs. The ratio is roughly the square root of the best
known ratio achieved by polynomial time algorithms (Chekuri et al., 2011;
Feldman et al., 2012).
Additionally, under Gap-ETH, our reduction for 2-CSPs not only rules out
polynomial time algorithms, but also FPT algorithms parameterized by |V|.
A similar statement applies for DSN parameterized by k.
Comment: 36 pages. A preliminary version appeared in ITCS'18
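The 2-CSP definition is easy to state in code; the exact optimum below takes |Sigma|^|V| time, which is precisely the exhaustive regime that the hardness results address (identifiers are ours):

```python
from itertools import product

def max_2csp(vertices, edges, alphabet, constraints):
    """Exact optimum of a 2-CSP by enumerating all |Sigma|^|V|
    assignments. `constraints[(u, v)]` is the set of allowed value
    pairs (sigma(u), sigma(v)) for the edge {u, v}."""
    best = 0
    for values in product(alphabet, repeat=len(vertices)):
        sigma = dict(zip(vertices, values))
        satisfied = sum((sigma[u], sigma[v]) in constraints[(u, v)]
                        for (u, v) in edges)
        best = max(best, satisfied)
    return best
```

A trivial |V|-approximation, by contrast, only needs to satisfy a single constraint; the result above says that in polynomial time one cannot do substantially better than that.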
Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly
states that it is NP-hard to distinguish between a graph with a small subset of
vertices whose edge expansion is almost zero and one in which all small subsets
of vertices have expansion almost one. In this work, we prove inapproximability
results for the following graph problems based on this hypothesis:
- Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete
bipartite subgraph of G with the maximum number of edges.
- Maximum Balanced Biclique (MBB): given a bipartite graph G, find a
balanced complete bipartite subgraph of G with the maximum number of vertices.
- Minimum k-Cut: given a weighted graph G, find a set of edges with
minimum total weight whose removal partitions G into k connected
components.
- Densest At-Least-k-Subgraph (DALS): given a weighted graph G, find a
set S of at least k vertices such that the induced subgraph on S has
maximum density (the ratio between the total weight of edges and the number of
vertices).
We show that, assuming SSEH and that NP is not contained in BPP, no polynomial
time algorithm gives n^{1 - epsilon}-approximation for MEB or MBB for every
constant epsilon > 0. Moreover, assuming SSEH, we show that it is NP-hard
to approximate Minimum k-Cut and DALS to within a factor (2 - epsilon)
of the optimum for every constant epsilon > 0.
The ratios in our results are essentially tight since trivial algorithms give
n-approximation to both MEB and MBB and efficient 2-approximation
algorithms are known for Minimum k-Cut [SV95] and DALS [And07, KS09].
Our first result is proved by combining a technique developed by Raghavendra
et al. [RST12] to avoid locality of gadget reductions with a generalization of
Bansal and Khot's long code test [BK09], whereas our second result is shown via
elementary reductions.
Comment: A preliminary version of this work will appear at ICALP 2017 under a
different title, "Inapproximability of Maximum Edge Biclique, Maximum Balanced
Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis".
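The notion of edge expansion underlying SSEH can be pinned down as follows: the expansion of a vertex set S is the fraction of edge endpoints inside S whose edges leave S. A minimal sketch of that quantity (our own formulation; the hypothesis itself concerns near-regular graphs and small sets):

```python
def edge_expansion(adj, S):
    """Edge expansion of a vertex set S: cut size over volume, i.e.
    the fraction of edge endpoints in S whose edges leave S.
    `adj` maps each vertex to the set of its neighbors."""
    S = set(S)
    volume = sum(len(adj[v]) for v in S)          # total degree inside S
    cut = sum(1 for v in S for u in adj[v] if u not in S)
    return cut / volume
```

SSEH asserts, roughly, that distinguishing graphs containing a small set with expansion near 0 from graphs where every small set has expansion near 1 is NP-hard.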
New Tools and Connections for Exponential-Time Approximation
In this paper, we develop new tools and connections for exponential time approximation. In this setting, we are given a problem instance and an integer r>1, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of
1. r for maximum independent set in O*(exp(O~(n/(r log^2 r) + r log^2 r))) time,
2. r for chromatic number in O*(exp(O~(n/(r log r) + r log^2 r))) time,
3. (2 - 1/r) for minimum vertex cover in O*(exp(n/r^{Omega(r)})) time, and
4. (k - 1/r) for minimum k-hypergraph vertex cover in O*(exp(n/(kr)^{Omega(kr)})) time.
(Throughout, O~ and O* omit polyloglog(r) factors and factors polynomial in the input size, respectively.) The best known time bounds for all these problems were O*(2^{n/r}) (Bourgeois et al. in Discret Appl Math 159(17):1954-1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by exp(n^{1-o(1)}/r^{1+o(1)}) lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of Computer Science, FOCS, pp. 370-379, 2013; Laekhanukit in Inapproximability of Combinatorial Problems in Subexponential-Time, Ph.D. thesis, 2014). Our results show that the natural-looking O*(2^{n/r}) bounds are not tight for all these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, rules out the possibility of significantly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1-27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016).
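For context, the earlier O*(2^{n/r}) bounds that these results improve come from a simple scheme: split the vertices into r blocks of size about n/r, solve each block exactly, and return the best block solution. This is an r-approximation, since some block contains at least a 1/r fraction of an optimal solution. A sketch for maximum independent set (our own code, illustrating the baseline rather than the paper's algorithm):

```python
from itertools import combinations

def exact_mis(adj, vertices):
    """Maximum independent set within `vertices`, by brute force over
    subsets -- 2^{|vertices|} time."""
    for s in range(len(vertices), 0, -1):
        for subset in combinations(vertices, s):
            if all(v not in adj[u] for u, v in combinations(subset, 2)):
                return list(subset)
    return []

def r_approx_mis(adj, r):
    """The classical O*(2^{n/r})-time r-approximation baseline:
    partition V into r blocks, solve each block exactly, keep the best.
    Some block contains >= OPT/r vertices of an optimal solution."""
    vertices = list(adj)
    best = []
    for i in range(r):
        candidate = exact_mis(adj, vertices[i::r])
        if len(candidate) > len(best):
            best = candidate
    return best
```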
Imperfect Gaps in Gap-ETH and PCPs
We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r, q] is contained in PCP_{1,s'}[r + O(1), q + O(r)] for c - s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest.
We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1-epsilon, 1-delta), delta > epsilon, has 2^{o(n)} algorithms if and only if MAX 3SAT(1, 1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way, showing that T_2(n) <= T_1(n (log log n)^{O(1)}), where T_1(n) and T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness respectively.
Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small set of vertices whose expansion is almost zero and one in which all small sets of vertices have expansion almost one. In this work, we prove conditional inapproximability results for the following graph problems based on this hypothesis:
- Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete bipartite subgraph of G with maximum number of edges. We show that, assuming SSEH and that NP != BPP, no polynomial time algorithm gives n^{1 - epsilon}-approximation for MEB for every constant epsilon > 0.
- Maximum Balanced Biclique (MBB): given a bipartite graph G, find a balanced complete bipartite subgraph of G with maximum number of vertices. Similar to MEB, we prove n^{1 - epsilon} ratio inapproximability for MBB for every epsilon > 0, assuming SSEH and that NP != BPP.
- Minimum k-Cut: given a weighted graph G, find a set of edges with minimum total weight whose removal splits the graph into k components. We prove that this problem is NP-hard to approximate to within (2 - epsilon) factor of the optimum for every epsilon > 0, assuming SSEH.
The ratios in our results are essentially tight since trivial algorithms give n-approximation to both MEB and MBB and 2-approximation algorithms are known for Minimum k-Cut [Saran and Vazirani, SIAM J. Comput., 1995].
Our first two results are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani [Raghavendra et al., CCC, 2012] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [Bansal and Khot, FOCS, 2009], whereas our last result is shown via an elementary reduction.
Tight Hardness Results for Training Depth-2 ReLU Networks
We prove several hardness results for training depth-2 neural networks with
the ReLU activation function; these networks are simply weighted sums (that may
include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural
network that minimizes the square loss with respect to a given training set. We
prove that this problem is NP-hard already for a network with a single ReLU. We
also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing
the squared error (for k > 1) even in the realizable setting (i.e., when the
labels are consistent with an unknown depth-2 ReLU network). We are also able
to obtain lower bounds on the running time in terms of the desired additive
error epsilon. To obtain our lower bounds, we use the Gap Exponential Time
Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of
approximating the well-known Densest k-Subgraph problem in
subexponential time (these hypotheses are used separately in proving different
lower bounds). For example, we prove that under reasonable hardness
assumptions, any proper learning algorithm for finding the best fitting ReLU
must run in time exponential in 1/epsilon. Together with a previous work
regarding improperly learning a ReLU (Goel et al., COLT'17), this implies the
first separation between proper and improper algorithms for learning a ReLU. We
also study the problem of properly learning a depth-2 network of ReLUs with
bounded weights, giving new (worst-case) upper bounds on the running time needed
to learn such networks both in the realizable and agnostic settings. Our upper
bounds on the running time essentially match our lower bounds in terms of the
dependency on epsilon.
Comment: To appear in ITCS'21
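To make the training problem concrete: given samples (x_i, y_i), one seeks weights w minimizing the empirical square loss of x -> max(0, <w, x>). The hardness results concern worst-case exact minimization; the subgradient-descent heuristic below (our own sketch, not the paper's algorithm) merely illustrates the objective and can still succeed on easy instances:

```python
import numpy as np

def relu_sq_loss(w, X, y):
    """Empirical square loss of the single ReLU x -> max(0, w . x)."""
    return float(np.mean((np.maximum(X @ w, 0.0) - y) ** 2))

def fit_relu(X, y, lr=0.1, steps=500):
    """Subgradient descent on the (non-convex) single-ReLU square loss.
    A heuristic only: the NP-hardness result says no method can
    guarantee the optimum on all instances unless P = NP."""
    w = np.full(X.shape[1], 0.1)  # small nonzero start so the unit is active
    for _ in range(steps):
        pre = X @ w
        out = np.maximum(pre, 0.0)
        # The ReLU passes gradient only on samples where pre > 0.
        grad = 2.0 * X.T @ ((out - y) * (pre > 0)) / len(y)
        w -= lr * grad
    return w
```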