    Feedback Vertex Set Inspired Kernel for Chordal Vertex Deletion

    Given a graph $G$ and a parameter $k$, the Chordal Vertex Deletion (CVD) problem asks whether there exists a subset $U \subseteq V(G)$ of size at most $k$ that hits all induced cycles of length at least 4. The existence of a polynomial kernel for CVD was a well-known open problem in the field of Parameterized Complexity. Recently, Jansen and Pilipczuk resolved this question affirmatively by designing a polynomial kernel for CVD of size $O(k^{161}\log^{58} k)$, and asked whether one can design a kernel of size $O(k^{10})$. While we do not completely resolve this question, we design a significantly smaller kernel of size $O(k^{12}\log^{10} k)$, inspired by the $O(k^2)$-size kernel for Feedback Vertex Set. Furthermore, we introduce the notion of the independence degree of a vertex, which is our main conceptual contribution.
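
    As a concrete illustration of the problem definition above (our own sketch, not part of the paper; it assumes the networkx library), the following checks whether a candidate set $U$ is a valid CVD solution, using the fact that a graph has no induced cycle of length at least 4 exactly when it is chordal.

    ```python
    # Illustrative checker for the CVD definition (not from the paper).
    # A graph is chordal iff it has no induced cycle of length >= 4,
    # so deleting a valid U must leave a chordal graph.
    import networkx as nx

    def is_cvd_solution(G: nx.Graph, U: set, k: int) -> bool:
        """True iff |U| <= k and G - U is chordal."""
        if len(U) > k:
            return False
        return nx.is_chordal(G.subgraph(set(G) - U))

    G = nx.cycle_graph(4)                # C4 is itself an induced 4-cycle
    print(is_cvd_solution(G, {0}, k=1))  # True: removing one vertex leaves a path
    ```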

    An Approximate Kernel for Connected Feedback Vertex Set

    The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph $G$ and an integer $k$ as input. The objective is to determine whether at most $k$ vertices can be deleted from $G$ such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed $0 < \epsilon < 1$, graph $G$, and $k \in \mathbb{N}$, the following holds: there is a polynomial-time computable graph $G'$ of size $k^{O(1)}$ such that for every $c \ge 1$, any $c$-approximate connected feedback vertex set of $G'$ of size at most $k$ is a $c(1+\epsilon)$-approximate connected feedback vertex set of $G$. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor $\min\{\mathrm{OPT}^{O(1)}, n^{1-\delta}\}$ in polynomial time for some $\delta > 0$.
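
    To make the solution concept concrete, here is a small verification sketch (our own illustration, assuming networkx, not code from the paper): a connected feedback vertex set $S$ must induce a connected subgraph while $G - S$ must be acyclic.

    ```python
    # Illustrative checker for connected feedback vertex sets: S induces
    # a connected subgraph and G - S is a forest (acyclic).
    import networkx as nx

    def is_connected_fvs(G: nx.Graph, S: set, k: int) -> bool:
        if not S or len(S) > k:
            return False
        if not nx.is_connected(G.subgraph(S)):
            return False
        rest = set(G) - S
        return not rest or nx.is_forest(G.subgraph(rest))

    G = nx.complete_graph(4)                 # K4 needs two deletions
    print(is_connected_fvs(G, {0, 1}, k=2))  # True: {0,1} induces an edge
    ```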

    Data Reductions and Combinatorial Bounds for Improved Approximation Algorithms

    Kernelization algorithms in the context of Parameterized Complexity are often based on a combination of reduction rules and combinatorial insights. In this paper we present a similar strategy for obtaining polynomial-time approximation algorithms. Our method features the use of approximation-preserving reductions, akin to the notion of parameterized reductions. We exemplify this method to obtain the currently best approximation algorithms for \textsc{Harmless Set}, \textsc{Differential} and \textsc{Multiple Nonblocker}, all of which can be considered in the context of securing networks or information propagation.
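
    The reduction-rule machinery referred to above follows a standard pattern; the skeleton below is a generic sketch of ours (the rules themselves are placeholders, not the paper's actual rules for \textsc{Harmless Set} and the other problems): rules are applied exhaustively until none fires.

    ```python
    # Generic reduce-to-fixpoint skeleton (illustrative only). Each rule
    # maps an instance to a smaller equivalent instance, or returns None
    # when it does not apply.
    def reduce_to_fixpoint(instance, rules):
        changed = True
        while changed:
            changed = False
            for rule in rules:
                reduced = rule(instance)
                if reduced is not None:
                    instance = reduced
                    changed = True
        return instance
    ```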

    A Linear Kernel for Planar Total Dominating Set

    A total dominating set of a graph $G=(V,E)$ is a subset $D \subseteq V$ such that every vertex in $V$ is adjacent to some vertex in $D$. Finding a total dominating set of minimum size is NP-hard on planar graphs and W[2]-complete on general graphs when parameterized by the solution size. By the meta-theorem of Bodlaender et al. [J. ACM, 2016], there exists a linear kernel for Total Dominating Set on graphs of bounded genus. Nevertheless, it is not clear how such a kernel can be effectively constructed, and how to obtain explicit reduction rules with reasonably small constants. Following the approach of Alber et al. [J. ACM, 2004], we provide an explicit kernel for Total Dominating Set on planar graphs with at most $410k$ vertices, where $k$ is the size of the solution. This result complements several known constructive linear kernels on planar graphs for other domination problems such as Dominating Set, Edge Dominating Set, Efficient Dominating Set, Connected Dominating Set, or Red-Blue Dominating Set.
    Comment: 33 pages, 13 figures
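
    Note the difference from ordinary domination: in a total dominating set, the vertices of $D$ must themselves be dominated. A small checker of ours (assuming networkx, not code from the paper) makes this explicit:

    ```python
    # Illustrative checker: every vertex of G, including those in D,
    # needs at least one neighbor inside D.
    import networkx as nx

    def is_total_dominating_set(G: nx.Graph, D: set) -> bool:
        return all(any(u in D for u in G[v]) for v in G)

    G = nx.cycle_graph(4)
    print(is_total_dominating_set(G, {0, 2}))  # False: 0 has no neighbor in D
    print(is_total_dominating_set(G, {0, 1}))  # True
    ```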

    Scalable Nonlinear Embeddings for Semantic Category-based Image Retrieval

    We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low-dimensional Euclidean space. We work in the challenging setting where supervision is given as constraints on similar and dissimilar pairs during training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and the test-time evaluation complexity of the proposed method are $O(dD)$, where $D$ is the dimensionality of the input features and $d$ is the dimension of the projection space. This is in contrast to the usual kernelization methods, whose complexity scales linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples) while remaining nonlinear. We train the method with up to half a million training pairs of 4096-dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low-dimensional semantic category-based image retrieval.
    Comment: ICCV 2015 preprint
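
    The abstract does not spell out the exact model, so the following is only a rough sketch under our own assumptions: a single nonlinear projection $f(x) = \tanh(Wx)$ with $W \in \mathbb{R}^{d \times D}$ (hence $O(dD)$ parameters, independent of the training set size), trained by SGD on a margin-based pairwise loss over similar/dissimilar constraints. The specific loss and update rule are our illustrative choices, not the paper's.

    ```python
    # Rough sketch (our assumptions, not the paper's exact method):
    # embedding f(x) = tanh(W x) with W of shape (d, D), trained by SGD
    # on a margin hinge loss over pair constraints. s = +1 marks a
    # similar pair, s = -1 a dissimilar pair.
    import numpy as np

    rng = np.random.default_rng(0)
    D, d, margin, lr = 4096, 128, 1.0, 1e-3
    W = rng.normal(scale=1.0 / np.sqrt(D), size=(d, D))

    def embed(W, x):
        return np.tanh(W @ x)

    def sgd_step(W, x, y, s):
        """One step on loss = max(0, s * (||f(x)-f(y)||^2 - margin)):
        similar pairs (s=+1) are pulled inside the margin, dissimilar
        pairs (s=-1) pushed outside it."""
        fx, fy = embed(W, x), embed(W, y)
        diff = fx - fy
        if s * (diff @ diff - margin) <= 0:
            return W                   # constraint satisfied, no update
        # gradient of ||f(x)-f(y)||^2 w.r.t. W, via tanh' = 1 - tanh^2
        gx = np.outer(2 * diff * (1 - fx ** 2), x)
        gy = np.outer(-2 * diff * (1 - fy ** 2), y)
        return W - lr * s * (gx + gy)

    x, y = rng.normal(size=D), rng.normal(size=D)
    W = sgd_step(W, x, y, s=+1)        # one update on a similar pair
    ```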