
    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
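    As a minimal sketch of the direct-product-feasibility interface described above, the two efficiency requirements can be written as the protocol below. The class and method names (aggregate, split_solution) are illustrative placeholders, not taken from the paper.

```python
# Illustrative sketch of "direct product feasible" as an interface; the names
# aggregate/split_solution are hypothetical, chosen only to mirror the two
# efficiency requirements in the definition above.
from typing import List, Protocol, TypeVar

Instance = TypeVar("Instance")
Solution = TypeVar("Solution")


class DirectProductFeasible(Protocol[Instance, Solution]):
    def aggregate(self, instances: List[Instance]) -> Instance:
        """Efficiently combine k size-n instances into one larger instance."""
        ...

    def split_solution(self, instances: List[Instance],
                       aggregate_solution: Solution) -> List[Solution]:
        """Given an optimal feasible solution to the aggregated instance,
        efficiently recover optimal feasible solutions to all k originals."""
        ...


def solve_all_via_aggregate(problem: DirectProductFeasible, solver, instances):
    # An optimal solver for the single large instance yields, through the
    # interface, optimal solutions to every one of the k small instances;
    # this is the property the amplification theorem exploits.
    return problem.split_solution(instances, solver(problem.aggregate(instances)))
```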

    A PCP Characterization of AM

    We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a `PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some `Randomized Optimization Hypotheses' related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size 2^{Omega(n)}, then there exist hard-on-average optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
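    The paper's formal definition of the 2-round stochastic CSP and its promise gap are not reproduced here. As a rough illustration only, one natural value notion for such a problem takes an expectation over a random first-round string and a maximum over second-round responses; the brute-force estimator below is a sketch under that assumption, with hypothetical predicates and bit lengths.

```python
import itertools
import random


def stochastic_csp_value(constraints, n_random_bits, n_proof_bits, samples=200):
    """Estimate E_y [ max_z (fraction of constraints satisfied by (y, z)) ],
    where y is a uniformly random first-round string and z ranges over all
    second-round responses.  `constraints` is a list of predicates c(y, z).
    This value notion is an assumption made for illustration; the paper's
    definition differs in its details (alphabet, queries, promise gap)."""
    total = 0.0
    for _ in range(samples):
        y = tuple(random.randint(0, 1) for _ in range(n_random_bits))
        total += max(
            sum(c(y, z) for c in constraints) / len(constraints)
            for z in itertools.product((0, 1), repeat=n_proof_bits)
        )
    return total / samples
```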

    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are:
    - For any r larger than some constant, any r-approximation algorithm for the maximum independent set problem must run in at least 2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of 2^{n/r} (Cygan et al., 2008); a folklore algorithm achieving roughly this bound is sketched below. It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013).
    - For any k larger than some constant, there is no polynomial-time min(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min(O(k), \tilde O(\sqrt{n})) (by Balcan and Blum, 2007 and an algorithm in this paper).
    We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts the problem in a rare approximability class in which the approximability threshold can be improved significantly by allowing algorithms to run in quasi-polynomial time.
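    The 2^{n/r} upper bound cited in the first result can be matched, up to polynomial factors, by a folklore partition-and-brute-force routine. The sketch below is that folklore argument, not necessarily the algorithm of Cygan et al.

```python
import itertools
from typing import Dict, List, Set


def mis_r_approx(adj: Dict[int, Set[int]], r: int) -> Set[int]:
    """Folklore r-approximation for Maximum Independent Set in ~ r * 2^(n/r) time.

    Split the vertices into r blocks of size about n/r, find a maximum
    independent set inside each block by exhaustive search, and return the
    largest one found.  Some block contains at least OPT/r vertices of an
    optimal solution, and those vertices remain independent within the block,
    so the returned set has size at least OPT/r."""
    vertices = sorted(adj)
    block_size = -(-len(vertices) // r)  # ceil(n / r)
    blocks = [vertices[i:i + block_size]
              for i in range(0, len(vertices), block_size)]

    def max_independent_in(block: List[int]) -> Set[int]:
        best: Set[int] = set()
        for bits in itertools.product((0, 1), repeat=len(block)):
            cand = {v for v, b in zip(block, bits) if b}
            if len(cand) > len(best) and all(
                u not in adj[v] for u in cand for v in cand if u != v
            ):
                best = cand
        return best

    return max((max_independent_in(b) for b in blocks), key=len)
```

    With r = 2, for instance, the routine inspects two halves of the vertex set in about 2^{n/2} steps and returns an independent set of size at least half the optimum.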

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P \neq NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
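    As a toy illustration of the survey's central object: a distributional problem pairs a decision problem with a samplable ensemble of instances, and "easy-on-average" asks for an algorithm that succeeds on all but a small fraction of instances drawn from it. The sampler below (uniformly random 3-CNF formulas, with hypothetical parameters) is one such ensemble.

```python
import random


def sample_random_3cnf(n_vars: int, n_clauses: int, seed=None):
    """A simple samplable ensemble: a uniformly random 3-CNF formula on
    n_vars variables with n_clauses clauses, each clause being three distinct
    variables with independent random signs.  Pairing 3-SAT with this sampler
    gives a distributional problem in the sense discussed above; the
    parameters are illustrative, not taken from the survey."""
    rng = random.Random(seed)
    return [
        tuple(v if rng.random() < 0.5 else -v
              for v in rng.sample(range(1, n_vars + 1), 3))
        for _ in range(n_clauses)
    ]
```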

    Hamming Approximation of NP Witnesses

    Given a satisfiable 3-SAT formula, how hard is it to find an assignment to the variables that has Hamming distance at most n/2 to a satisfying assignment? More generally, consider any polynomial-time verifier for any NP-complete language. A d(n)-Hamming-approximation algorithm for the verifier is one that, given any member x of the language, outputs in polynomial time a string a with Hamming distance at most d(n) to some witness w, where (x,w) is accepted by the verifier. Previous results have shown that, if P != NP, then every NP-complete language has a verifier for which there is no (n/2-n^(2/3+d))-Hamming-approximation algorithm, for various constants d > 0. Our main result is that, if P != NP, then every paddable NP-complete language has a verifier that admits no (n/2+O(sqrt(n log n)))-Hamming-approximation algorithm. That is, one cannot get even half the bits right. We also consider natural verifiers for various well-known NP-complete problems. They do have n/2-Hamming-approximation algorithms, but, if P != NP, have no (n/2-n^epsilon)-Hamming-approximation algorithms for any constant epsilon > 0. We show similar results for randomized algorithms.
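    To make the notion concrete: a d(n)-Hamming-approximation output need not itself be accepted by the verifier; it only has to agree with some accepted witness on all but d(n) positions. The checker below is a sketch of that definition with a brute-force witness search that is feasible only for toy instances; the function names are illustrative.

```python
from itertools import product


def hamming(a, b) -> int:
    """Number of positions on which the two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))


def is_hamming_approximation(verifier, x, a: str, d: int, witness_len: int) -> bool:
    """True iff string `a` is within Hamming distance d of SOME witness w with
    verifier(x, w) == True.  The exhaustive loop over witnesses is only for
    illustration; it takes 2^witness_len time."""
    return any(
        verifier(x, w) and hamming(a, w) <= d
        for w in ("".join(bits) for bits in product("01", repeat=witness_len))
    )
```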

    Gap Amplification for Small-Set Expansion via Random Walks

    In this work, we achieve gap amplification for the Small-Set Expansion problem. Specifically, we show that an instance of the Small-Set Expansion problem with completeness \epsilon and soundness 1/2 is at least as difficult as Small-Set Expansion with completeness \epsilon and soundness f(\epsilon), for any function f(\epsilon) which grows faster than \sqrt{\epsilon}. We achieve this amplification via random walks -- our gadget is the graph with adjacency matrix corresponding to a random walk on the original graph. An interesting feature of our reduction is that unlike gap amplification via parallel repetition, the size of the instances (number of vertices) produced by the reduction remains the same.
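    A sketch of the random-walk gadget, under the assumption that "adjacency matrix corresponding to a random walk" means the t-step walk matrix of the degree-normalized adjacency matrix; the number of steps and the exact normalization are parameters chosen in the paper, not here.

```python
import numpy as np


def random_walk_gadget(adjacency: np.ndarray, steps: int) -> np.ndarray:
    """Weighted adjacency matrix of the gadget graph: entry (u, v) is the
    probability that a `steps`-step random walk started at u ends at v.
    Assumes no isolated vertices.  The vertex set is unchanged, matching the
    remark above that the reduction does not blow up the number of vertices."""
    degrees = adjacency.sum(axis=1)
    one_step = adjacency / degrees[:, None]         # row-stochastic walk matrix
    return np.linalg.matrix_power(one_step, steps)  # t-step walk probabilities
```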

    Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis

    The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small subset of vertices whose edge expansion is almost zero and one in which all small subsets of vertices have expansion almost one. In this work, we prove inapproximability results for the following graph problems based on this hypothesis:
    - Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete bipartite subgraph of G with the maximum number of edges.
    - Maximum Balanced Biclique (MBB): given a bipartite graph G, find a balanced complete bipartite subgraph of G with the maximum number of vertices.
    - Minimum k-Cut: given a weighted graph G, find a set of edges with minimum total weight whose removal partitions G into k connected components.
    - Densest At-Least-k-Subgraph (DALkS): given a weighted graph G, find a set S of at least k vertices such that the induced subgraph on S has maximum density, i.e., the ratio between the total weight of edges and the number of vertices (a small computation of this objective is sketched below).
    We show that, assuming SSEH and NP \nsubseteq BPP, no polynomial-time algorithm gives an n^{1-\varepsilon}-approximation for MEB or MBB for every constant \varepsilon > 0. Moreover, assuming SSEH, we show that it is NP-hard to approximate Minimum k-Cut and DALkS to within a (2-\varepsilon) factor of the optimum for every constant \varepsilon > 0. The ratios in our results are essentially tight, since trivial algorithms give an n-approximation to both MEB and MBB and efficient 2-approximation algorithms are known for Minimum k-Cut [SV95] and DALkS [And07, KS09]. Our first result is proved by combining a technique developed by Raghavendra et al. [RST12] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [BK09], whereas our second result is shown via elementary reductions.
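    For concreteness, the density objective in the DALkS definition above (total weight of induced edges divided by the number of vertices) can be computed as follows; the edge-list representation is an assumption of this sketch.

```python
from typing import Iterable, Set, Tuple


def density(edges: Iterable[Tuple[int, int, float]], s: Set[int]) -> float:
    """Density of the subgraph induced by vertex set `s`: total weight of the
    edges with both endpoints in `s`, divided by |s|.  DALkS asks for the
    densest such `s` subject to |s| >= k."""
    if not s:
        return 0.0
    return sum(w for u, v, w in edges if u in s and v in s) / len(s)
```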

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^(NP) and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
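    The "hashing techniques" mentioned above refer, at least in spirit, to the classical Sipser/Stockmeyer route from approximate counting of NP-witnesses to NP queries: hash the witness space down to m bits with a pairwise-independent family and ask whether some witness hashes to zero; the largest m at which this keeps succeeding pins down the count up to constant factors. The toy version below replaces the NP oracle with exhaustive search and is a sketch of that classical idea, not of this paper's construction.

```python
import random
from itertools import product


def approx_count_witnesses(verifier, x, witness_len: int, trials: int = 30) -> int:
    """Toy Sipser/Stockmeyer-style estimate of |{w : verifier(x, w)}| up to a
    constant factor.  A real reduction would replace the exhaustive searches
    below with one NP-oracle query per sampled hash function."""
    witnesses = [w for w in product((0, 1), repeat=witness_len) if verifier(x, w)]
    if not witnesses:
        return 0
    for m in range(witness_len, 0, -1):
        hits = 0
        for _ in range(trials):
            # Pairwise-independent hash over GF(2): h(w) = A w + b with a random
            # m x witness_len bit matrix A and a random m-bit vector b.
            A = [[random.randint(0, 1) for _ in range(witness_len)] for _ in range(m)]
            b = [random.randint(0, 1) for _ in range(m)]
            some_witness_hashes_to_zero = any(
                all((sum(a * wi for a, wi in zip(row, w)) + bi) % 2 == 0
                    for row, bi in zip(A, b))
                for w in witnesses  # this existence check is the "NP query"
            )
            hits += some_witness_hashes_to_zero
        if hits > trials // 2:
            return 2 ** m  # roughly the number of accepted witnesses
    return 1
```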