    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate: the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are not connected. In particular, the objective of the $k$-forest problem can be viewed as removing at most $m-k$ demands and finding a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon>0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands.
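
    A compact way to state the bicriteria guarantee given in the abstract (the notation $\mathrm{OPT}_{t}$ is ours, introduced only for this restatement):

        \[
          \mathrm{cost}(\mathrm{ALG}) \;\le\; O\!\left(\tfrac{1}{\epsilon^{2}}\right) \cdot \mathrm{OPT}_{(1-\epsilon)(m-k)},
          \qquad \text{while ALG removes at most } m-k \text{ demands},
        \]

        where $\mathrm{OPT}_{t}$ denotes the minimum cost of a subgraph that connects all demands remaining after at most $t$ of them are removed.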

    A Much Faster Algorithm for Finding a Maximum Clique

    We present improvements to MCS (WALCOM 2010, LNCS 5942, pp. 191–203), a branch-and-bound maximum-clique-finding algorithm that was shown to be fast. First, we employ an efficient approximation algorithm for finding a maximum clique. Second, we make use of appropriate sorting of vertices only near the root of the search tree. Third, we employ a lightened approximate coloring mainly near the leaves of the search tree. The new algorithm obtained from MCS with the above improvements is named MCT. Extensive computational experiments show that MCT is much faster than MCS; in particular, MCT is faster than MCS on gen400_p0.9_75 and gen400_p0.9_65 by over 328,000 and 77,000 times, respectively.
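
    The abstract does not spell out MCT itself, but the underlying branch-and-bound pattern (grow a candidate clique, prune with an approximate-coloring upper bound) can be sketched as below; the greedy coloring bound is a generic stand-in, not the paper's tailored variant.

        # Minimal branch-and-bound maximum-clique sketch (illustrative only, not MCT).
        # A greedy coloring of the candidate set yields an upper bound on how much
        # the current clique can still grow, which is used for pruning.

        def greedy_coloring_bound(candidates, adj):
            """Number of colors used by a greedy coloring; upper bound on the clique size."""
            color_classes = []
            for v in candidates:
                for cls in color_classes:
                    if all(u not in adj[v] for u in cls):  # v is non-adjacent to the whole class
                        cls.append(v)
                        break
                else:
                    color_classes.append([v])
            return len(color_classes)

        def max_clique(adj):
            """adj: dict mapping each vertex to the set of its neighbours."""
            best = []

            def expand(clique, candidates):
                nonlocal best
                if not candidates:
                    if len(clique) > len(best):
                        best = clique[:]
                    return
                # Prune: even the optimistic coloring bound cannot beat the incumbent.
                if len(clique) + greedy_coloring_bound(candidates, adj) <= len(best):
                    return
                for v in list(candidates):
                    expand(clique + [v], candidates & adj[v])
                    candidates.discard(v)

            expand([], set(adj))
            return best

        # Example: a triangle {1, 2, 3} with a pendant vertex 4.
        # max_clique({1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}) returns [1, 2, 3] in some order.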

    From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz

    The next few years will be exciting as prototype universal quantum processors emerge, enabling implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation, and which have the potential to significantly expand the breadth of quantum computing applications. A leading candidate is Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates between applying a cost-function-based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the Quantum Alternating Operator Ansatz, is the consideration of general parametrized families of unitaries rather than only those corresponding to the time evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach to a wide variety of approximate optimization, exact optimization, and sampling problems. Here, we introduce the Quantum Alternating Operator Ansatz, lay out design criteria for mixing operators, detail mappings for eight problems, and provide brief descriptions of mappings for diverse problems. Comment: 51 pages, 2 figures. Revised to match the journal paper.
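
    In the original QAOA of Farhi et al., which this work generalizes, the prepared state alternates a phase-separation unitary with a mixing unitary; the form below uses conventional QAOA notation that is not taken verbatim from the abstract.

        \[
          |\psi(\boldsymbol{\beta},\boldsymbol{\gamma})\rangle
          \;=\; U_M(\beta_p)\, U_P(\gamma_p) \cdots U_M(\beta_1)\, U_P(\gamma_1)\, |s\rangle,
        \]

        where, in the original formulation, $U_P(\gamma) = e^{-i\gamma H_C}$ for a cost Hamiltonian $H_C$ and $U_M(\beta) = e^{-i\beta H_M}$ for a fixed mixing Hamiltonian $H_M$. The Quantum Alternating Operator Ansatz instead allows $U_P$ and $U_M$ to be drawn from more general parametrized families of unitaries, e.g. mixers that act only within a feasible subspace defined by hard constraints.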

    Efficient Subgraph Similarity Search on Large Probabilistic Graph Databases

    Many studies have been conducted on seeking efficient solutions for subgraph similarity search over certain (deterministic) graphs due to its wide application in many fields, including bioinformatics, social network analysis, and Resource Description Framework (RDF) data management. All these works assume that the underlying data are certain. However, in reality, graphs are often noisy and uncertain due to various factors, such as errors in data extraction, inconsistencies in data integration, and privacy-preserving purposes. Therefore, in this paper, we study subgraph similarity search on large probabilistic graph databases. Different from previous works assuming that edges in an uncertain graph are independent of each other, we study uncertain graphs whose edge occurrences are correlated. We formally prove that subgraph similarity search over probabilistic graphs is #P-complete; thus, we employ a filter-and-verify framework to speed up the search. In the filtering phase, we develop tight lower and upper bounds of the subgraph similarity probability based on a probabilistic matrix index, PMI. PMI is composed of discriminative subgraph features associated with tight lower and upper bounds of the subgraph isomorphism probability. Based on PMI, we can filter out a large number of probabilistic graphs and maximize the pruning capability. During the verification phase, we develop an efficient sampling algorithm to validate the remaining candidates. The efficiency of our proposed solutions has been verified through extensive experiments. Comment: VLDB201
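
    The filter-and-verify pipeline described above can be illustrated schematically; the bound and verification routines below (lower_bound, upper_bound, sample_verify) are hypothetical placeholders standing in for the paper's PMI-based bounds and sampling-based verifier.

        # Schematic filter-and-verify loop for probabilistic subgraph similarity search.
        # lower_bound / upper_bound stand in for the PMI-derived probability bounds and
        # sample_verify for the sampling-based verification; all three are placeholders.

        def filter_and_verify(query, database, threshold,
                              lower_bound, upper_bound, sample_verify):
            """Return graphs whose subgraph-similarity probability meets the threshold."""
            answers, candidates = [], []
            for g in database:
                lb, ub = lower_bound(query, g), upper_bound(query, g)
                if ub < threshold:           # cannot qualify: pruned by the upper bound
                    continue
                if lb >= threshold:          # certainly qualifies: no verification needed
                    answers.append(g)
                else:                        # undecided: defer to the expensive check
                    candidates.append(g)
            for g in candidates:
                if sample_verify(query, g, threshold):   # e.g. a Monte Carlo estimate
                    answers.append(g)
            return answers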

    A branch-and-cut algorithm for the Edge Interdiction Clique Problem

    Given a graph G and an interdiction budget k ∈ ℕ, the Edge Interdiction Clique Problem (EICP) asks to find a subset of at most k edges to remove from G so that the size of the maximum clique in the interdicted graph is minimized. The EICP belongs to the family of interdiction problems that aim at reducing the clique number of a graph. Its optimal solutions, called optimal interdiction policies, identify the most vital edges of the graph, i.e., those crucial for preserving its clique number. We propose a new set-covering-based Integer Linear Programming (ILP) formulation for the EICP with an exponential number of constraints, called the clique-covering inequalities. We design a new branch-and-cut algorithm which is enhanced by a tailored separation procedure and by an effective heuristic initialization phase. Thanks to the new exact algorithm, we manage to solve the EICP on several sets of instances from the literature. Extensive tests show that the new exact algorithm greatly outperforms the state-of-the-art approaches for the EICP.
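
    The clique-covering inequalities are left implicit in the abstract; one plausible reading (a sketch of ours, not necessarily the paper's exact formulation) is that certifying that the interdicted clique number falls below a target $\tau$ requires every clique of size $\tau$ to lose at least one edge:

        \[
          \sum_{e \in E(C)} x_e \;\ge\; 1 \quad \text{for every clique } C \text{ with } |C| = \tau,
          \qquad \sum_{e \in E} x_e \;\le\; k,
          \qquad x_e \in \{0,1\},
        \]

        where $x_e = 1$ means that edge $e$ is interdicted. Since there are exponentially many such cliques, inequalities of this kind would be separated on the fly inside the branch-and-cut, consistent with the tailored separation procedure mentioned in the abstract.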