
    Twin-Width and Polynomial Kernels

    We study the existence of polynomial kernels, for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. It was previously observed in [Bonnet et al., ICALP '21] that k-Independent Set admits no polynomial kernel on graphs of bounded twin-width, by a very simple argument that extends to several other problems such as k-Independent Dominating Set, k-Path, k-Induced Path, and k-Induced Matching. In this work, we examine k-Dominating Set and variants of k-Vertex Cover for the existence of polynomial kernels. As a main result, we show that k-Dominating Set does not admit a polynomial kernel on graphs of twin-width at most 4, under a standard complexity-theoretic assumption. The reduction is intricate, especially due to the effort to bring the twin-width down to 4, and it can be tweaked to work for Connected k-Dominating Set and Total k-Dominating Set with a slightly worse bound on the twin-width. On the positive side, we obtain a simple quadratic vertex kernel for Connected k-Vertex Cover and Capacitated k-Vertex Cover on graphs of bounded twin-width. These kernels rely on the fact that graphs of bounded twin-width have Vapnik-Chervonenkis (VC) density 1; that is, for any vertex set X, the number of distinct neighborhoods in X is at most c·|X|, where c is a constant depending only on the twin-width. Interestingly, the kernel applies to any graph class of VC density 1 and does not require a witness sequence. We also present a more intricate O(k^{1.5}) vertex kernel for Connected k-Vertex Cover. Finally, we show that deciding whether a graph has twin-width at most 1 can be done in polynomial time, and we observe that most graph optimization/decision problems can be solved in polynomial time on graphs of twin-width at most 1.
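
    To make the VC density 1 condition concrete, the following is a minimal Python sketch (the function name and toy graph are ours, not the paper's) that counts the distinct neighborhood traces N(v) ∩ X over all vertices v; on a class of VC density 1 this count grows at most linearly in |X|.

    def neighborhood_traces(adj, X):
        """Count distinct traces N(v) ∩ X over all vertices v.

        adj maps each vertex to the set of its neighbors; X is any
        vertex set.  On a graph class of VC density 1 (e.g. bounded
        twin-width), this count is at most c * |X| for a constant c
        depending only on the class.
        """
        X = set(X)
        return len({frozenset(adj[v] & X) for v in adj})

    # Toy check on the 4-cycle 0-1-2-3-0: the traces of X = {0, 1} are
    # {1} (from vertices 0 and 2) and {0} (from vertices 1 and 3).
    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(neighborhood_traces(adj, {0, 1}))  # prints 2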

    Data Reductions and Combinatorial Bounds for Improved Approximation Algorithms

    Kernelization algorithms in the context of Parameterized Complexity are often based on a combination of reduction rules and combinatorial insights. In this paper, we present a similar strategy for obtaining polynomial-time approximation algorithms. Our method features the use of approximation-preserving reductions, akin to the notion of parameterized reductions. We exemplify this method by obtaining the currently best approximation algorithms for \textsc{Harmless Set}, \textsc{Differential}, and \textsc{Multiple Nonblocker}, all of which can be considered in the context of securing networks or information propagation.

    Several fitness functions and entanglement gates in quantum kernel generation

    Quantum machine learning (QML) represents a promising frontier in the realm of quantum technologies. In this pursuit of quantum advantage, the quantum kernel method for support vector machines has emerged as a powerful approach. Entanglement, a fundamental concept in quantum mechanics, plays a central role in quantum computing. In this paper, we study the necessity of entanglement gates in quantum kernel methods. We present several fitness functions for a multi-objective genetic algorithm that simultaneously maximizes classification accuracy while minimizing both the local and non-local gate costs of the quantum feature map's circuit. We conduct comparisons with classical classifiers to gain insights into the benefits of employing entanglement gates. Surprisingly, our experiments reveal that the optimal configuration of quantum circuits for the quantum kernel method incorporates a proportional number of non-local gates for entanglement, contrary to previous literature in which non-local gates were largely suppressed. Furthermore, we demonstrate that the separability indexes of the data can be effectively leveraged to determine the number of non-local gates required for the quantum support vector machine's feature maps. This insight can significantly aid in selecting appropriate parameters, such as the entanglement parameter, in various quantum programming packages like https://qiskit.org/ based on data analysis. Our findings offer valuable guidance for enhancing the efficiency and accuracy of quantum machine learning algorithms.
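
    As a rough illustration of the kind of objective involved, here is a Python sketch of a weighted-sum surrogate for such a fitness function: it rewards kernel-SVM accuracy while penalizing local and non-local gate counts. The function name and weights are hypothetical, not values from the paper, which uses a genuinely multi-objective genetic algorithm rather than a fixed scalarization.

    def kernel_fitness(accuracy, n_local, n_nonlocal,
                       w_local=0.01, w_nonlocal=0.02):
        """Weighted-sum surrogate for the multi-objective fitness.

        Rewards validation accuracy of the quantum-kernel SVM while
        penalizing circuit cost; non-local (entangling) gates carry a
        larger weight since they are costlier on hardware.
        """
        return accuracy - w_local * n_local - w_nonlocal * n_nonlocal

    # A genetic algorithm would score each candidate feature map like this:
    print(kernel_fitness(accuracy=0.92, n_local=10, n_nonlocal=4))  # 0.74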

    Parameterized algorithms of fundamental NP-hard problems: a survey

    Parameterized computation theory has developed rapidly over the last two decades. In theoretical computer science, it has attracted considerable attention for its theoretical value and for the significant guidance it offers in many practical applications. We give an overview of parameterized algorithms for some fundamental NP-hard problems, including MaxSAT, Maximum Internal Spanning Tree, Maximum Internal Out-Branching, Planar (Connected) Dominating Set, Feedback Vertex Set, Hyperplane Cover, Vertex Cover, and Packing and Matching problems. All of these problems have been widely applied in various areas, such as the Internet of Things, Wireless Sensor Networks, Artificial Intelligence, Bioinformatics, Big Data, and so on. In this paper, we focus on the main ideas and algorithmic techniques of these algorithms and omit their details.

    A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

    Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and hardness perspectives, with an emphasis on new techniques and potential future research directions.

    On the Parameterized Complexity of the Expected Coverage Problem

    The MAXIMUM COVERING LOCATION PROBLEM (MCLP) is a well-studied problem in the field of operations research. Given a network with positive or negative demands on the nodes and a positive integer k, the MCLP seeks k potential facility centers in the network such that the neighborhood coverage is maximized. We study the variant of MCLP where the edges of the network are subject to random failures due to disruptive events. One popular model capturing the unreliable nature of facility location is the linear reliability ordering (LRO) model. In this model, we associate with every edge e of the network its survival probability 0 ≤ p_e ≤ 1, or equivalently, its failure probability 1 − p_e. The failure correlation in LRO is the following: if an edge e fails, then every edge e′ with p_{e′} ≤ p_e surely fails. The task is to identify the positions of k facilities that maximize the expected coverage. We refer to this problem as the EXPECTED COVERAGE problem. We study the EXPECTED COVERAGE problem from the parameterized complexity perspective and obtain the following results.
    1. For the parameter pathwidth, we show that the EXPECTED COVERAGE problem is W[1]-hard. We find this result somewhat surprising, because the variant of the problem with non-negative demands is fixed-parameter tractable (FPT) parameterized by the treewidth of the input graph.
    2. We complement the lower bound by proving that EXPECTED COVERAGE is FPT when parameterized by the treewidth and the maximum vertex degree. We give an algorithm that solves the problem in time 2^{O(tw·log Δ)}·n^{O(1)}, where tw is the treewidth, Δ is the maximum vertex degree, and n is the number of vertices of the input graph. In particular, since Δ ≤ n, the problem is solvable in time n^{O(tw)}, that is, it is in XP parameterized by treewidth.
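
    For intuition, the LRO correlation is equivalent to drawing one uniform threshold U in [0, 1] and keeping exactly the edges with p_e ≥ U, so m distinct probabilities yield only m + 1 failure scenarios. Below is a minimal Python sketch that exploits this; it assumes coverage means the total demand of the closed neighborhood of the facilities in the surviving graph, which is our reading of the objective rather than the paper's formal definition.

    def coverage(alive, demand, facilities):
        """Total demand of the closed neighborhood of the facilities."""
        covered = set(facilities)
        for u, v in alive:
            if u in facilities:
                covered.add(v)
            if v in facilities:
                covered.add(u)
        return sum(demand[x] for x in covered)

    def expected_coverage(edges, demand, facilities):
        """Expected coverage under LRO; edges is a list of (u, v, p_e).

        Edges fail in order of increasing survival probability: for a
        uniform threshold U in (t_prev, t], exactly the edges with
        p_e >= t survive, so each distinct probability is one scenario.
        """
        total, lo = 0.0, 0.0
        for t in sorted({p for _, _, p in edges}):
            alive = [(u, v) for u, v, p in edges if p >= t]
            total += (t - lo) * coverage(alive, demand, facilities)
            lo = t
        # U above the largest survival probability: every edge fails.
        return total + (1.0 - lo) * coverage([], demand, facilities)

    # Star with center 0 as the only facility; demands may be negative.
    demand = {0: 1, 1: 5, 2: -2}
    print(expected_coverage([(0, 1, 0.9), (0, 2, 0.3)], demand, {0}))  # 4.9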

    Resiliency Policies in Access Control Revisited


    Spectral tensor-train decomposition

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting \textit{spectral tensor-train decomposition} combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open-source software and examples presented in this work are available online.
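
    For concreteness, here is a minimal Python sketch of evaluating a functional TT whose cores have Chebyshev-polynomial entries, i.e. f(x_1, ..., x_d) = G_1(x_1) G_2(x_2) ... G_d(x_d) with matrix-valued cores G_k; the array shapes and names are illustrative, not the interface of the released software.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def eval_functional_tt(cores, x):
        """Evaluate f(x_1, ..., x_d) = G_1(x_1) @ ... @ G_d(x_d).

        cores[k] has shape (r_{k-1}, r_k, n_k): each entry of the k-th
        matrix-valued core is a univariate Chebyshev polynomial with
        n_k coefficients, in the spirit of the spectral TT's polynomial
        cores.  With r_0 = r_d = 1 the product collapses to a scalar.
        """
        v = np.ones((1, 1))
        for G, xk in zip(cores, x):
            # chebval expects the coefficient axis first; the result is
            # the core evaluated entrywise at xk, an r_{k-1} x r_k matrix.
            v = v @ C.chebval(xk, np.moveaxis(G, -1, 0))
        return float(v[0, 0])

    # Rank-1 toy example: f(x, y) = T_1(x) * T_2(y) = x * (2*y**2 - 1).
    G1 = np.zeros((1, 1, 2)); G1[0, 0, 1] = 1.0  # single entry is T_1
    G2 = np.zeros((1, 1, 3)); G2[0, 0, 2] = 1.0  # single entry is T_2
    print(eval_functional_tt([G1, G2], [0.5, 0.3]))  # 0.5 * (2*0.09 - 1) = -0.41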

    Parameterized Algorithms for Maximum Cut with Connectivity Constraints

    In this paper, we study two variants of Maximum Cut, which we call Connected Maximum Cut and Maximum Minimal Cut. In these problems, given an unweighted graph, the goal is to compute a maximum cut satisfying some connectivity requirement. Both problems are known to be NP-complete even on planar graphs, whereas Maximum Cut on planar graphs is solvable in polynomial time. We first show that these problems are NP-complete even on planar bipartite graphs and split graphs. Then we give parameterized algorithms using graph parameters such as clique-width, tree-width, and twin-cover number. Finally, we obtain FPT algorithms with respect to the solution size.
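
    To pin down the first definition, here is a brute-force Python sketch of Connected Maximum Cut, under the common convention that one side of the cut must induce a connected subgraph; it runs in exponential time, is purely illustrative, and the names are ours.

    from itertools import combinations

    def connected_max_cut(n, edges):
        """Brute-force Connected Maximum Cut on vertices 0..n-1.

        Enumerates every nonempty proper vertex set S, keeps those whose
        induced subgraph G[S] is connected, and returns the maximum
        number of edges crossing the cut (S, V - S).
        """
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def connected(S):
            stack, seen = [next(iter(S))], set()
            while stack:
                w = stack.pop()
                if w not in seen:
                    seen.add(w)
                    stack.extend((adj[w] & S) - seen)
            return seen == S

        best, best_side = -1, None
        for r in range(1, n):
            for S in map(set, combinations(range(n), r)):
                if connected(S):
                    cut = sum((u in S) != (v in S) for u, v in edges)
                    if cut > best:
                        best, best_side = cut, S
        return best, best_side

    # On the path 0-1-2-3 the optimum is S = {1}, cutting 2 edges.
    print(connected_max_cut(4, [(0, 1), (1, 2), (2, 3)]))  # (2, {1})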