
    Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis

    The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small subset of vertices whose edge expansion is almost zero and one in which all small subsets of vertices have expansion almost one. In this work, we prove inapproximability results for the following graph problems based on this hypothesis:
    - Maximum Edge Biclique (MEB): given a bipartite graph $G$, find a complete bipartite subgraph of $G$ with the maximum number of edges.
    - Maximum Balanced Biclique (MBB): given a bipartite graph $G$, find a balanced complete bipartite subgraph of $G$ with the maximum number of vertices.
    - Minimum $k$-Cut: given a weighted graph $G$, find a set of edges with minimum total weight whose removal partitions $G$ into $k$ connected components.
    - Densest At-Least-$k$-Subgraph (DAL$k$S): given a weighted graph $G$, find a set $S$ of at least $k$ vertices such that the induced subgraph on $S$ has maximum density (the ratio between the total weight of edges and the number of vertices).
    We show that, assuming SSEH and NP $\nsubseteq$ BPP, no polynomial-time algorithm gives an $n^{1 - \varepsilon}$-approximation for MEB or MBB for every constant $\varepsilon > 0$. Moreover, assuming SSEH, we show that it is NP-hard to approximate Minimum $k$-Cut and DAL$k$S to within a $(2 - \varepsilon)$ factor of the optimum for every constant $\varepsilon > 0$. The ratios in our results are essentially tight, since trivial algorithms give an $n$-approximation for both MEB and MBB and efficient $2$-approximation algorithms are known for Minimum $k$-Cut [SV95] and DAL$k$S [And07, KS09]. Our first result is proved by combining a technique developed by Raghavendra et al. [RST12] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [BK09], whereas our second result is shown via elementary reductions.
    Comment: A preliminary version of this work will appear at ICALP 2017 under a different title, "Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis".
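    To make the DAL$k$S objective above concrete, here is a minimal Python sketch (an illustration, not taken from the paper) that computes the density used in the definition, i.e. total edge weight divided by the number of vertices, and finds a densest set of at least k vertices by brute force; the edge-list representation and function names are assumptions made for this example.

        from itertools import combinations

        def density(vertices, weighted_edges):
            """Density as defined above: total weight of edges inside `vertices`
            divided by the number of vertices."""
            s = set(vertices)
            total = sum(w for u, v, w in weighted_edges if u in s and v in s)
            return total / len(s)

        def dalks_brute_force(n, weighted_edges, k):
            """Exhaustively find a set of at least k vertices of maximum density.
            Exponential time; only meant to illustrate the objective on tiny graphs."""
            best_set, best_density = None, float("-inf")
            for size in range(k, n + 1):
                for subset in combinations(range(n), size):
                    d = density(subset, weighted_edges)
                    if d > best_density:
                        best_set, best_density = subset, d
            return best_set, best_density

        # Tiny example: a weighted triangle plus a pendant vertex.
        edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 0.1)]
        print(dalks_brute_force(4, edges, k=2))  # the triangle {0, 1, 2} has density 3.0 / 3 = 1.0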

    Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis

    The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small set of vertices whose expansion is almost zero and one in which all small sets of vertices have expansion almost one. In this work, we prove conditional inapproximability results for the following graph problems based on this hypothesis:
    - Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete bipartite subgraph of G with the maximum number of edges. We show that, assuming SSEH and that NP != BPP, no polynomial time algorithm gives an n^{1 - epsilon}-approximation for MEB for every constant epsilon > 0.
    - Maximum Balanced Biclique (MBB): given a bipartite graph G, find a balanced complete bipartite subgraph of G with the maximum number of vertices. Similar to MEB, we prove n^{1 - epsilon} ratio inapproximability for MBB for every epsilon > 0, assuming SSEH and that NP != BPP.
    - Minimum k-Cut: given a weighted graph G, find a set of edges with minimum total weight whose removal splits the graph into k components. We prove that this problem is NP-hard to approximate to within a (2 - epsilon) factor of the optimum for every epsilon > 0, assuming SSEH.
    The ratios in our results are essentially tight since trivial algorithms give an n-approximation to both MEB and MBB and 2-approximation algorithms are known for Minimum k-Cut [Saran and Vazirani, SIAM J. Comput., 1995]. Our first two results are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani [Raghavendra et al., CCC, 2012] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [Bansal and Khot, FOCS, 2009], whereas our last result is shown via an elementary reduction.
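    As a concrete reference point for the MEB definition above, the following Python sketch (an illustration only, not the paper's reduction) enumerates subsets of the left side of a small bipartite graph and pairs each with its common neighborhood on the right, returning the biclique with the most edges; the `adj` representation and function name are assumptions for the example.

        from itertools import combinations

        def max_edge_biclique(left, right, adj):
            """Brute-force MEB: for each subset S of `left`, take the right vertices
            adjacent to every vertex of S; (S, T) is then a complete bipartite
            subgraph with |S| * |T| edges. Exponential in |left|; illustration only.
            `adj[u]` is the set of right neighbors of the left vertex u."""
            best, best_edges = (set(), set()), 0
            for size in range(1, len(left) + 1):
                for s in combinations(left, size):
                    t = set(right)
                    for u in s:
                        t &= adj[u]
                    if len(s) * len(t) > best_edges:
                        best, best_edges = (set(s), t), len(s) * len(t)
            return best, best_edges

        # A 2 x 3 complete bipartite graph plus one extra edge.
        adj = {0: {"a", "b", "c"}, 1: {"a", "b", "c"}, 2: {"a"}}
        print(max_edge_biclique([0, 1, 2], ["a", "b", "c"], adj))
        # best biclique ({0, 1}, {'a', 'b', 'c'}) with 6 edges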

    New Tools and Connections for Exponential-Time Approximation

    In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and an integer $r > 1$, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of
    1. $r$ for maximum independent set in $O^*(\exp(\tilde{O}(n/(r \log^2 r) + r \log^2 r)))$ time,
    2. $r$ for chromatic number in $O^*(\exp(\tilde{O}(n/(r \log r) + r \log^2 r)))$ time,
    3. $(2 - 1/r)$ for minimum vertex cover in $O^*(\exp(n/r^{\Omega(r)}))$ time, and
    4. $(k - 1/r)$ for minimum $k$-hypergraph vertex cover in $O^*(\exp(n/(kr)^{\Omega(kr)}))$ time.
    (Throughout, $\tilde{O}$ and $O^*$ omit $\mathrm{polyloglog}(r)$ factors and factors polynomial in the input size, respectively.) The best known time bounds for all of these problems were $O^*(2^{n/r})$ (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by $\exp(n^{1-o(1)}/r^{1+o(1)})$ lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of Computer Science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of combinatorial problems in subexponential-time, Ph.D. thesis, 2014). Our results show that the natural-looking $O^*(2^{n/r})$ bounds are not tight for all these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, refutes the possibility of overly reducing the size of Chan’s PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016).
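    For context on the $O^*(2^{n/r})$ baseline that the abstract improves upon, here is a Python sketch of the classical folklore approach for maximum independent set (this is the previously known baseline, not the paper's algorithm); the graph representation is an assumption for the example.

        from itertools import combinations

        def mis_r_approx_baseline(vertices, edges, r):
            """Folklore O*(2^{n/r})-time r-approximation for maximum independent set:
            split the vertices into r blocks of size about n/r, solve each block
            exactly by brute force, and return the largest block-optimal solution.
            By pigeonhole, some block contains at least OPT/r vertices of an optimal
            solution, so the best block answer has size >= OPT/r."""
            n = len(vertices)
            block_size = -(-n // r)  # ceil(n / r)
            blocks = [vertices[i:i + block_size] for i in range(0, n, block_size)]
            edge_set = {frozenset(e) for e in edges}

            def is_independent(subset):
                return all(frozenset((u, v)) not in edge_set
                           for u, v in combinations(subset, 2))

            best = []
            for block in blocks:
                for size in range(len(block), 0, -1):  # try larger sets first
                    found = next((list(s) for s in combinations(block, size)
                                  if is_independent(s)), None)
                    if found:
                        if len(found) > len(best):
                            best = found
                        break
            return best

        # 5-cycle, r = 2: the exact optimum is 2; the baseline returns a set of size >= 1.
        print(mis_r_approx_baseline([0, 1, 2, 3, 4],
                                    [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], r=2))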

    The Complexity of Finding Dense Subgraphs in Graphs with Large Cliques

    The GapDensest-k-Subgraph(d) problem (GapDkS(d)) is defined as follows: given a graph G and parameters k,d, distinguish between the case that G contains a k-clique, and the case that every k-subgraph of G has density at most d. GapDkS(d) is a natural relaxation of the standard Clique problem, which is known to be NP-complete. For d very close to 1, the GapDkS(d) problem is equivalent to the Clique problem, and when d is very close to 0 the GapDkS(d) problem can easily be solved in polynomial time. However, despite much work on both the algorithmic and hardness fronts, the exact k and d parameter values for which GapDkS(d) can be solved in polynomial time are still unknown. In particular, the best polynomial-time algorithms can solve GapDkS(d) when d is an inverse polynomial in the number of vertices n, but there have been no NP-hardness results beyond the trivial result. This thesis attempts to understand the GapDkS(d) problem better by studying the case when k is restricted to be linear in n (where n is the number of vertices in G). In particular, we survey the GapDkS(d) algorithms and hardness results that best apply to this restriction in an attempt to determine the threshold for when the problem becomes NP-hard. With some modifications to the algorithms and proofs, we produce algorithms and hardness results for the GapDkS(d) problem with k linear in n. In addition, we study the connection between GapDkS(d) and MaxClique, and show that despite strong hardness results for MaxClique, reductions from MaxClique do not give strong hardness bounds for GapDkS(d).
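    The distinguishing task can be stated operationally. The Python sketch below (illustrative, exponential-time, tiny graphs only) checks the two cases, taking the "density" of a k-subgraph to mean the fraction of its C(k,2) possible edges that are present, which is the convention under which d close to 1 recovers Clique; the adjacency representation is an assumption for the example.

        from itertools import combinations

        def gap_dks(adjacency, k, d):
            """Brute-force GapDkS(d) distinguisher (exponential time, tiny graphs only).
            Returns "k-clique" if some k vertices form a clique, "low density" if every
            k-subgraph has density at most d, and None otherwise (the promise fails).
            Density of a k-subgraph = (#edges inside) / (k choose 2)."""
            n = len(adjacency)
            max_density = 0.0
            for subset in combinations(range(n), k):
                edges_inside = sum(1 for u, v in combinations(subset, 2)
                                   if v in adjacency[u])
                max_density = max(max_density, edges_inside / (k * (k - 1) / 2))
            if max_density == 1.0:
                return "k-clique"
            if max_density <= d:
                return "low density"
            return None  # input violates the promise

        # Triangle plus an isolated vertex: contains a 3-clique.
        adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
        print(gap_dks(adjacency, k=3, d=0.5))  # "k-clique"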

    Polynomial-time Approximation of Independent Set Parameterized by Treewidth

    We prove the following result about approximating the maximum independent set in a graph. Informally, we show that any approximation algorithm with a "non-trivial" approximation ratio (as a function of the number of vertices of the input graph $G$) can be turned into an approximation algorithm achieving almost the same ratio, albeit as a function of the treewidth of $G$. More formally, we prove that for any function $f$, the existence of a polynomial-time $(n/f(n))$-approximation algorithm yields the existence of a polynomial-time $O(tw \cdot \log f(tw)/f(tw))$-approximation algorithm, where $n$ and $tw$ denote the number of vertices and the width of a given tree decomposition of the input graph. By pipelining our result with the state-of-the-art $O(n \cdot (\log\log n)^2/\log^3 n)$-approximation algorithm by Feige (2004), this implies an $O(tw \cdot (\log\log tw)^3/\log^3 tw)$-approximation algorithm.
    Comment: To appear in the 31st Annual European Symposium on Algorithms (ESA 2023).
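    The ratio quoted in the last sentence follows by instantiating the general statement with Feige's algorithm; a short worked calculation in the abstract's own notation (a restatement of the claim above, not a new bound):

        \[
          f(n) = \frac{\log^3 n}{(\log\log n)^2}
          \quad\Longrightarrow\quad
          \log f(tw) = 3\log\log tw - 2\log\log\log tw = \Theta(\log\log tw),
        \]
        \[
          O\!\left(tw \cdot \frac{\log f(tw)}{f(tw)}\right)
          = O\!\left(tw \cdot \log\log tw \cdot \frac{(\log\log tw)^2}{\log^3 tw}\right)
          = O\!\left(\frac{tw\,(\log\log tw)^3}{\log^3 tw}\right).
        \]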

    From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More

    We consider questions that arise from the intersection between the areas of polynomial-time approximation algorithms, subexponential-time algorithms, and fixed-parameter tractable algorithms. The questions, which have been asked several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and Minimum Dominating Set (DomSet) problems parameterized by the size of the optimal solution. In particular, letting $\mathrm{OPT}$ be the optimum and $N$ be the size of the input, is there an algorithm that runs in $t(\mathrm{OPT}) \cdot \mathrm{poly}(N)$ time and outputs a solution of size $f(\mathrm{OPT})$, for any functions $t$ and $f$ that are independent of $N$ (for Clique, we want $f(\mathrm{OPT}) = \omega(1)$)? In this paper, we show that both Clique and DomSet admit no non-trivial FPT-approximation algorithm, i.e., there is no $o(\mathrm{OPT})$-FPT-approximation algorithm for Clique and no $f(\mathrm{OPT})$-FPT-approximation algorithm for DomSet, for any function $f$ (e.g., this holds even if $f$ is the Ackermann function). In fact, our results imply something even stronger: the best way to solve Clique and DomSet, even approximately, is to essentially enumerate all possibilities. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which states that no $2^{o(n)}$-time algorithm can distinguish between a satisfiable 3SAT formula and one which is not even $(1 - \epsilon)$-satisfiable for some constant $\epsilon > 0$. Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and Maximum Induced Matching in bipartite graphs. Additionally, we rule out a $k^{o(1)}$-FPT-approximation algorithm for Densest $k$-Subgraph, although this ratio does not yet match the trivial $O(k)$-approximation algorithm.
    Comment: 43 pages. To appear in FOCS'17.
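    For reference, the two notions at the heart of this abstract can be written out explicitly; the following is only a restatement of the text above in its own notation, not a new result.

        \textbf{Non-trivial FPT-approximation for Clique (ruled out above).}
        An algorithm that, on an input of size $N$ with optimum $\mathrm{OPT}$, runs in
        $t(\mathrm{OPT}) \cdot \mathrm{poly}(N)$ time and outputs a clique of size at least
        $f(\mathrm{OPT})$ with $f(\mathrm{OPT}) = \omega(1)$, for some functions $t, f$
        independent of $N$.

        \textbf{Gap-ETH.} For some constant $\epsilon > 0$, no $2^{o(n)}$-time algorithm
        can distinguish a satisfiable 3SAT formula from one that is not even
        $(1 - \epsilon)$-satisfiable.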

    Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!

    We study a graph coloring problem that is otherwise easy in the RAM model but becomes quite non-trivial in the one-pass streaming model. In contrast to previous graph coloring problems in streaming that try to find an assignment of colors to vertices, our main work is on estimating the number of conflicting or monochromatic edges given a coloring function that is streamed along with the graph; we call the problem Conflict-Est. The coloring function on a vertex can be read or accessed only when the vertex is revealed in the stream. If we need the color of a vertex that has streamed past, then that color, along with its vertex, has to be stored explicitly. We provide algorithms for graphs streamed in different variants of the vertex arrival, one-pass streaming model, viz. the Vertex Arrival (VA), Vertex Arrival With Degree Oracle (VAdeg), and Vertex Arrival in Random Order (VArand) models, with special focus on the random order model. We also provide matching lower bounds for most of the cases. The mainstay of our work is in showing that the properties of a random order stream can be exploited to design efficient streaming algorithms for estimating the number of monochromatic edges. We have also obtained a lower bound, though not matching the upper bound, for the random order model. Among all three models vis-a-vis this problem, we show a clear separation of power in favor of the VArand model.
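    For concreteness, here is a naive one-pass Python baseline for Conflict-Est in the vertex-arrival setting (not one of the paper's algorithms): it simply stores every color seen so far, which makes the count exact but uses Omega(n) space, whereas the paper aims at much smaller space with approximate counts; the stream format is an assumption made for this example.

        def conflict_est_naive(stream):
            """Naive one-pass Conflict-Est in a vertex-arrival stream.
            Each stream item is (v, color_v, neighbors_among_earlier_vertices).
            Stores all colors seen so far (Omega(n) space), so the count is exact."""
            colors = {}          # colors of vertices that have already streamed past
            monochromatic = 0
            for v, color_v, earlier_neighbors in stream:
                colors[v] = color_v
                monochromatic += sum(1 for u in earlier_neighbors
                                     if colors[u] == color_v)
            return monochromatic

        # Path 0-1-2 colored (red, red, blue): exactly one monochromatic edge.
        stream = [(0, "red", []), (1, "red", [0]), (2, "blue", [1])]
        print(conflict_est_naive(stream))  # 1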