Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis
The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly
states that it is NP-hard to distinguish between a graph with a small subset of
vertices whose edge expansion is almost zero and one in which all small subsets
of vertices have expansion almost one. In this work, we prove inapproximability
results for the following graph problems based on this hypothesis:
- Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete
bipartite subgraph of G with the maximum number of edges.
- Maximum Balanced Biclique (MBB): given a bipartite graph G, find a
balanced complete bipartite subgraph of G with the maximum number of vertices.
- Minimum k-Cut: given a weighted graph G, find a set of edges with
minimum total weight whose removal partitions G into k connected
components.
- Densest At-Least-k-Subgraph (DALS): given a weighted graph G, find a
set S of at least k vertices such that the induced subgraph on S has
maximum density (the ratio between the total weight of edges and the number of
vertices).
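The definitions above can be made concrete on toy instances. The following sketch (helper names are ours, not the paper's) checks the complete-bipartite condition behind MEB/MBB and computes the density used by DALS:

```python
# Illustrative helpers for the problem definitions above (names are ours).

def is_biclique(edges, left, right):
    """Check that (left, right) induces a complete bipartite subgraph:
    every pair (u, v) with u in left and v in right must be an edge."""
    return all((u, v) in edges for u in left for v in right)

def density(weighted_edges, vertices):
    """Density used by DALS: total weight of the induced edges divided
    by the number of vertices in the set."""
    s = set(vertices)
    total = sum(w for (u, v), w in weighted_edges.items() if u in s and v in s)
    return total / len(s)

# Tiny bipartite graph: left side {0, 1}, right side {'a', 'b'}.
edges = {(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')}
print(is_biclique(edges, [0, 1], ['a', 'b']))   # True: K_{2,2} with 4 edges

weighted = {(1, 2): 1.0, (2, 3): 1.0, (1, 3): 1.0}
print(density(weighted, [1, 2, 3]))             # 3 unit edges / 3 vertices = 1.0
```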
We show that, assuming SSEH and NP ⊈ BPP, no polynomial time
algorithm gives n^{1-ε}-approximation for MEB or MBB for every
constant ε > 0. Moreover, assuming SSEH, we show that it is NP-hard
to approximate Minimum k-Cut and DALS to within a factor of (2 - ε)
of the optimum for every constant ε > 0.
The ratios in our results are essentially tight since trivial algorithms give
n-approximation to both MEB and MBB and efficient 2-approximation
algorithms are known for Minimum k-Cut [SV95] and DALS [And07, KS09].
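One simple way to realize a trivial n-approximation for MEB, sketched below: the star around a maximum-degree vertex is itself a biclique K_{1,d}, and any biclique K_{a,b} forces some vertex to have degree at least max(a, b), so its ab edges number at most n · d. (This folklore argument is our illustration, not necessarily the exact algorithm the authors have in mind.)

```python
# Hedged sketch of a star-based n-approximation for Maximum Edge Biclique.

def star_biclique(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Returns (center, neighbours): a K_{1,d} biclique with d edges."""
    center = max(adj, key=lambda v: len(adj[v]))
    return center, adj[center]

# Bipartite toy graph: left {u, v}, right {x, y, z}.
adj = {'u': {'x', 'y', 'z'}, 'v': {'x'},
       'x': {'u', 'v'}, 'y': {'u'}, 'z': {'u'}}
c, nbrs = star_biclique(adj)
print(c, len(nbrs))   # 'u' has the largest degree (3)
```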
Our first result is proved by combining a technique developed by Raghavendra
et al. [RST12] to avoid locality of gadget reductions with a generalization of
Bansal and Khot's long code test [BK09] whereas our second result is shown via
elementary reductions.
Comment: A preliminary version of this work will appear at ICALP 2017 under a
different title, "Inapproximability of Maximum Edge Biclique, Maximum Balanced
Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis".
Low-Rank Matrix Approximation with Weights or Missing Data is NP-hard
Weighted low-rank approximation (WLRA), a dimensionality reduction technique
for data analysis, has been successfully used in several applications, such as
in collaborative filtering to design recommender systems or in computer vision
to recover structure from motion. In this paper, we study the computational
complexity of WLRA and prove that it is NP-hard to find an approximate
solution, even when a rank-one approximation is sought. Our proofs are based on
a reduction from the maximum-edge biclique problem, and apply to strictly
positive weights as well as binary weights (the latter corresponding to
low-rank matrix approximation with missing data).
Comment: Proof of Lemma 4 (Lemma 3 in v1) has been corrected. Some remarks and
comments have been added. Accepted in SIAM Journal on Matrix Analysis and
Applications.
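For concreteness, the rank-one WLRA objective studied here can be written as the minimum over vectors u, v of Σ_ij W_ij (M_ij − u_i v_j)², with a binary W encoding missing entries. A minimal sketch of evaluating that objective (the function name is ours):

```python
import numpy as np

# Hedged sketch of the rank-one weighted low-rank approximation objective:
# score a candidate factorization u v^T against data M under weights W
# (binary W models missing entries, as in the abstract above).

def wlra_objective(M, W, u, v):
    """Weighted squared error: sum_ij W_ij * (M_ij - u_i * v_j)**2."""
    return float(np.sum(W * (M - np.outer(u, v)) ** 2))

M = np.array([[1.0, 2.0], [2.0, 4.0]])
W = np.array([[1.0, 1.0], [1.0, 0.0]])   # bottom-right entry "missing"
u = np.array([1.0, 2.0])
v = np.array([1.0, 2.0])
print(wlra_objective(M, W, u, v))        # exact fit on observed entries: 0.0
```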
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be
the size of the input, is there an algorithm that runs in t(OPT) · poly(N)
time and outputs a solution of size f(OPT), for any functions t and f
that are independent of N (for Clique, we want f(OPT) = ω(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial
FPT-approximation algorithm, i.e., there is no
o(OPT)-FPT-approximation algorithm for Clique and no
F(OPT)-FPT-approximation algorithm for DomSet, for any function F
(e.g., this holds even if F is the Ackermann function). In fact, our results
imply something even stronger: The best way to solve Clique and DomSet, even
approximately, is to essentially enumerate all possibilities. Our results hold
under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which
states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable
3SAT formula and one which is not even (1 - ε)-satisfiable for some
constant ε > 0.
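Gap-ETH concerns distinguishing fully satisfiable 3SAT formulas from ones where every assignment leaves a constant fraction of clauses unsatisfied. This brute-force sketch (the encoding and names are ours) computes the best satisfiable fraction on a toy formula:

```python
from itertools import product

# Hedged sketch: a literal is (variable_index, is_positive); a clause is a
# list of literals. Brute force over all assignments to find the largest
# fraction of clauses any assignment satisfies.

def max_sat_fraction(num_vars, clauses):
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        sat = sum(any(assignment[i] == pos for i, pos in clause)
                  for clause in clauses)
        best = max(best, sat)
    return best / len(clauses)

# (x0 or x1), (not x0 or x1), (x0 or not x1), (not x0 or not x1):
# every assignment satisfies exactly 3 of the 4 clauses.
clauses = [[(0, True), (1, True)], [(0, False), (1, True)],
           [(0, True), (1, False)], [(0, False), (1, False)]]
print(max_sat_fraction(2, clauses))   # 0.75
```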
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out
k^{o(1)}-FPT-approximation algorithms for Densest k-Subgraph, although this
ratio does not yet match the trivial O(k)-approximation algorithm.
Comment: 43 pages. To appear in FOCS'17.
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Parameterization and approximation are two popular ways of coping with
NP-hard problems. More recently, the two have also been combined to derive many
interesting results. We survey developments in the area both from the
algorithmic and hardness perspectives, with emphasis on new techniques and
potential future research directions.
The Strongish Planted Clique Hypothesis and Its Consequences
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^{Ω(log n)} (so that the state-of-the-art running time of n^{O(log n)} is optimal up to a constant in the exponent).
We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^o(1) factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k.
Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.
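For readers who want to experiment, the planted clique problem underlying SPCH starts from an Erdős–Rényi graph G(n, 1/2) and forces a random k-subset to be a clique; an algorithm must then find (or detect) the planted set. A small generator sketch (names are ours):

```python
import random

# Hedged sketch: generate a planted clique instance. Edges are stored as
# ordered pairs (i, j) with i < j over vertices 0..n-1.

def planted_clique(n, k, seed=0):
    rng = random.Random(seed)
    # Base graph G(n, 1/2): each pair is an edge with probability 1/2.
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < 0.5}
    # Plant: force every pair inside a random k-subset to be an edge.
    clique = sorted(rng.sample(range(n), k))
    edges |= {(a, b) for idx, a in enumerate(clique) for b in clique[idx + 1:]}
    return edges, clique

edges, clique = planted_clique(20, 5)
# Sanity check: every pair inside the planted set is indeed an edge.
print(all((a, b) in edges
          for i, a in enumerate(clique) for b in clique[i + 1:]))   # True
```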
Finding a Collective Set of Items: From Proportional Multirepresentation to Group Recommendation
We consider the following problem: There is a set of items (e.g., movies) and
a group of agents (e.g., passengers on a plane); each agent has some intrinsic
utility for each of the items. Our goal is to pick a set of items that
maximizes the total derived utility of all the agents (i.e., in our example we
are to pick movies that we put on the plane's entertainment system).
However, the actual utility that an agent derives from a given item is only a
fraction of its intrinsic one, and this fraction depends on how the agent ranks
the item among the chosen, available, ones. We provide a formal specification
of the model and provide concrete examples and settings where it is applicable.
We show that the problem is hard in general, but give a number of
tractability results for its natural special cases.
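A minimal sketch of the model above (the rank-discount vector `alpha` and all names are our illustrative assumptions, not the paper's notation): each agent ranks the chosen items by intrinsic utility, and the j-th best chosen item contributes an alpha[j] fraction of its intrinsic utility.

```python
# Hedged sketch of rank-discounted group utility for a chosen item set.

def total_derived_utility(utilities, chosen, alpha):
    """utilities: one dict per agent mapping item -> intrinsic utility.
    Each agent's j-th favourite chosen item contributes alpha[j] of its
    intrinsic utility; return the sum over all agents."""
    total = 0.0
    for agent in utilities:
        ranked = sorted((agent[item] for item in chosen), reverse=True)
        total += sum(a * u for a, u in zip(alpha, ranked))
    return total

agents = [{'A': 10, 'B': 6, 'C': 1}, {'A': 2, 'B': 9, 'C': 8}]
# Each agent derives full utility from their favourite chosen item and
# half from their second favourite:
print(total_derived_utility(agents, ['A', 'B'], [1.0, 0.5]))   # 23.0
```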