A New View on Worst-Case to Average-Case Reductions for NP Problems
We study the result by Bogdanov and Trevisan (FOCS, 2003), who show that
under reasonable assumptions, there is no non-adaptive worst-case to
average-case reduction that bases the average-case hardness of an NP-problem on
the worst-case complexity of an NP-complete problem. We replace the hiding and
the heavy samples protocol in [BT03] by employing the histogram verification
protocol of Haitner, Mahmoody and Xiao (CCC, 2010), which proves to be very
useful in this context. Once the histogram is verified, our hiding protocol is
directly public-coin, whereas the intuition behind the original protocol
inherently relies on private coins.
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in
NP can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly
states that it is NP-hard to distinguish between a graph with a small subset of
vertices whose edge expansion is almost zero and one in which all small subsets
of vertices have expansion almost one. In this work, we prove inapproximability
results for the following graph problems based on this hypothesis:
- Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete
bipartite subgraph of G with the maximum number of edges.
- Maximum Balanced Biclique (MBB): given a bipartite graph G, find a
balanced complete bipartite subgraph of G with the maximum number of vertices.
- Minimum k-Cut: given a weighted graph G, find a set of edges with
minimum total weight whose removal partitions G into k connected
components.
- Densest At-Least-k-Subgraph (DALS): given a weighted graph G, find a
set S of at least k vertices such that the induced subgraph on S has
maximum density (the ratio between the total weight of edges and the number of
vertices).
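To make the quantities in these definitions concrete, here is a minimal Python sketch of edge expansion (the quantity SSEH reasons about) and induced-subgraph density (the DALS objective) for unweighted graphs. The function names and the degree-normalized expansion convention are my own choices; the paper itself works with weighted graphs and a normalized expansion measure.

```python
from typing import Dict, Set

def edge_expansion(adj: Dict[int, Set[int]], s: Set[int]) -> float:
    """Fraction of edge endpoints in S that cross to V \\ S:
    |E(S, V \\ S)| / vol(S), where vol(S) is the sum of degrees in S.
    SSEH (roughly) says it is hard to tell whether some small S has
    expansion close to 0 or every small S has expansion close to 1."""
    boundary = sum(1 for u in s for v in adj[u] if v not in s)
    volume = sum(len(adj[u]) for u in s)
    return boundary / volume

def density(adj: Dict[int, Set[int]], s: Set[int]) -> float:
    """DALS objective in the unweighted case: edges inside S divided by |S|.
    (Each internal edge is seen from both endpoints, hence the // 2.)"""
    internal = sum(1 for u in s for v in adj[u] if v in s) // 2
    return internal / len(s)
```

On a 4-cycle, the set {0, 1} has expansion 1/2 (two of its four edge endpoints leave the set) and density 1/2 (one internal edge over two vertices).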
We show that, assuming SSEH and NP ⊈ BPP, no polynomial-time
algorithm gives an n^(1-ε)-approximation for MEB or MBB for every
constant ε > 0. Moreover, assuming SSEH, we show that it is NP-hard
to approximate Minimum k-Cut and DALS to within a factor 2 - ε
of the optimum for every constant ε > 0.
The ratios in our results are essentially tight since trivial algorithms give
n-approximation to both MEB and MBB and efficient 2-approximation
algorithms are known for Minimum k-Cut [SV95] and DALS [And07, KS09].
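The abstract does not spell out the trivial n-approximation; for MEB, one standard way to achieve it is to output the largest star, which is itself a complete bipartite subgraph K_{1,d}. A hedged sketch (the adjacency-dict representation and names are mine, not the paper's):

```python
from typing import Dict, Set, Tuple

def star_biclique(left_adj: Dict[str, Set[str]]) -> Tuple[Set[str], Set[str]]:
    """Trivial n-approximation sketch for Maximum Edge Biclique: return the
    star K_{1,d} centered at a maximum-degree left vertex. Any biclique
    K_{a,b} in the graph satisfies ab <= a * d_max <= n * d_max, so the
    star's d_max edges are within a factor n of the optimum."""
    center = max(left_adj, key=lambda u: len(left_adj[u]))
    return {center}, set(left_adj[center])
```

The inapproximability result above says that, under SSEH, no polynomial-time algorithm can improve on this kind of bound by a polynomial factor.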
Our first result is proved by combining a technique developed by Raghavendra
et al. [RST12] to avoid locality of gadget reductions with a generalization of
Bansal and Khot's long code test [BK09] whereas our second result is shown via
elementary reductions. Comment: A preliminary version of this work will appear at ICALP 2017 under a
different title "Inapproximability of Maximum Edge Biclique, Maximum Balanced
Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis".
Separating Two-Round Secure Computation From Oblivious Transfer
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
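To fix terminology: a construction makes "black-box use" of a primitive when it interacts with it only through its input/output interface, never inspecting the code or circuit that implements it. A toy Python sketch of this distinction (the class and function names are illustrative inventions; the stand-in OT is non-interactive and has no security whatsoever):

```python
from abc import ABC, abstractmethod
from typing import Tuple

class OT(ABC):
    """Toy interface for 1-out-of-2 oblivious transfer: the receiver obtains
    exactly one of the sender's two messages, selected by `choice`."""
    @abstractmethod
    def transfer(self, m0: bytes, m1: bytes, choice: int) -> bytes:
        ...

class InsecureOT(OT):
    """Stand-in implementation. A real OT protocol is interactive and hides
    `choice` from the sender and the unchosen message from the receiver."""
    def transfer(self, m0: bytes, m1: bytes, choice: int) -> bytes:
        return m1 if choice else m0

def black_box_user(ot: OT, msgs: Tuple[bytes, bytes], choice: int) -> bytes:
    """Black-box use: this function only calls transfer() through the OT
    interface. The separation above says 2-round MPC cannot be built from
    2-round OT when OT is used only in this way."""
    return ot.transfer(msgs[0], msgs[1], choice)
```

By contrast, the non-black-box constructions of Garg-Srinivasan and Benhamouda-Lin use the OT protocol's own code (e.g., as input to other primitives), which this interface-only pattern forbids.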
Approaching MCSP from Above and Below: Hardness for a Conditional Variant and AC^0[p]
The Minimum Circuit Size Problem (MCSP) asks whether a given Boolean function has a circuit of at most a given size. MCSP has been studied for over a half-century and has deep connections throughout theoretical computer science including to cryptography, computational learning theory, and proof complexity. For example, we know (informally) that if MCSP is easy to compute, then most cryptography can be broken. Despite this cryptographic hardness connection and extensive research, we still know relatively little about the hardness of MCSP unconditionally. Indeed, until very recently it was unknown whether MCSP can be computed in AC^0[2] (Golovnev et al., ICALP 2019).
Our main contribution in this paper is to formulate a new "oracle" variant of circuit complexity and prove that this problem is NP-complete under randomized reductions. In more detail, we define the Minimum Oracle Circuit Size Problem (MOCSP) that takes as input the truth table of a Boolean function f, a size threshold s, and the truth table of an oracle Boolean function O, and determines whether there is a circuit with O-oracle gates and at most s wires that computes f. We prove that MOCSP is NP-complete under randomized polynomial-time reductions.
We also extend the recent AC^0[p] lower bound against MCSP by Golovnev et al. to a lower bound against the circuit minimization problem for depth-d formulas, (AC^0_d)-MCSP. We view this result as primarily a technical contribution. In particular, our proof takes a radically different approach from prior MCSP-related hardness results.
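Membership of such minimization problems in NP rests on a simple verifier: given a candidate circuit, evaluate it on all 2^n inputs against the truth table and check the size bound. A minimal Python sketch for plain MCSP (the gate encoding and the input-wire count used as the size measure are illustrative choices of mine; the paper's MOCSP verifier would additionally allow gates that apply the oracle function O):

```python
from itertools import product

def check_circuit(tt, n, gates, s):
    """MCSP witness check: tt is the truth table of an n-variable Boolean
    function (in lexicographic input order), gates is a list of
    (op, i, j) tuples over AND/OR/NOT whose operands i, j reference earlier
    wires (wires 0..n-1 are the inputs), and s bounds the number of wires.
    Returns True iff the circuit fits the bound and computes tt."""
    # One simple wire count: each gate consumes 1 (NOT) or 2 (AND/OR) wires.
    wires = sum(1 if op == "NOT" else 2 for op, *_ in gates)
    if wires > s:
        return False
    for k, x in enumerate(product([0, 1], repeat=n)):
        vals = list(x)
        for op, i, j in gates:
            if op == "AND":
                vals.append(vals[i] & vals[j])
            elif op == "OR":
                vals.append(vals[i] | vals[j])
            else:  # NOT; j is ignored
                vals.append(1 - vals[i])
        if vals[-1] != tt[k]:  # last wire is the output
            return False
    return True
```

The hard direction, of course, is deciding whether *any* small circuit exists, which is what MCSP and MOCSP ask.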
Resource Bounded Immunity and Simplicity
Revisiting the thirty-year-old notions of resource-bounded immunity and
simplicity, we investigate the structural characteristics of various immunity
notions: strong immunity, almost immunity, and hyperimmunity as well as their
corresponding simplicity notions. We also study limited immunity and
simplicity, called k-immunity and feasible k-immunity, and their simplicity
notions. Finally, we propose the k-immune hypothesis as a working hypothesis
that guarantees the existence of simple sets in NP. Comment: This is a complete version of the conference paper that appeared in
the Proceedings of the 3rd IFIP International Conference on Theoretical
Computer Science, Kluwer Academic Publishers, pp. 81-95, Toulouse, France,
August 23-26, 2004.
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be
the size of the input, is there an algorithm that runs in
t(OPT) · poly(N) time and outputs a solution of size
f(OPT), for any functions t and f that are independent of N (for
Clique, we want f(OPT) = ω(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial
FPT-approximation algorithm, i.e., there is no
o(OPT)-FPT-approximation algorithm for Clique and no
f(OPT)-FPT-approximation algorithm for DomSet, for any function f
(e.g., this holds even if f is the Ackermann function). In fact, our results
imply something even stronger: The best way to solve Clique and DomSet, even
approximately, is to essentially enumerate all possibilities. Our results hold
under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which
states that no 2^(o(n))-time algorithm can distinguish between a satisfiable
3SAT formula and one which is not even (1 - ε)-satisfiable for some
constant ε > 0.
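For intuition, "essentially enumerate all possibilities" refers to the trivial exhaustive algorithm, sketched here for Clique (a roughly n^k-time brute force; the adjacency-dict representation is my own choice):

```python
from itertools import combinations

def has_clique(adj, k):
    """Exhaustive k-clique check: try every k-subset of vertices and test
    whether all pairs inside it are adjacent, taking about n^k time. The
    Gap-ETH-based lower bounds say that even FPT *approximation* algorithms
    for Clique cannot substantially beat this kind of enumeration."""
    vertices = list(adj)
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(vertices, k))
```

A triangle contains a 3-clique; a path on three vertices does not.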
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out
a k^(o(1))-FPT-approximation algorithm for Densest k-Subgraph although this
ratio does not yet match the trivial O(k)-approximation algorithm. Comment: 43 pages. To appear in FOCS'17.