    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem and outputs another instance $(I',k')$ of the same problem, such that $|I'| + k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
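
    Written out, the guarantee composes as sketched below, using the abstract's notation and assuming a minimization objective; the names cost and OPT are our shorthand for the problem's own solution cost and optimum, not notation from the paper itself.

```latex
% Sketch of the alpha-approximate kernel guarantee in the abstract's notation.
% The reduction algorithm maps (I,k) to (I',k'); the solution-lifting algorithm
% maps any c-approximate solution s' of (I',k') to a solution s of (I,k).
\[
  |I'| + k' \le k^{O(1)},
  \qquad
  \frac{\mathrm{cost}(I', s')}{\mathrm{OPT}(I', k')} \le c
  \;\Longrightarrow\;
  \frac{\mathrm{cost}(I, s)}{\mathrm{OPT}(I, k)} \le c \cdot \alpha .
\]
```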

    The Minimum Wiener Connector

    The Wiener index of a graph is the sum of all pairwise shortest-path distances between its vertices. In this paper we study the novel problem of finding a minimum Wiener connector: given a connected graph $G=(V,E)$ and a set $Q \subseteq V$ of query vertices, find a subgraph of $G$ that connects all query vertices and has minimum Wiener index. We show that the Minimum Wiener Connector admits a polynomial-time (albeit impractical) exact algorithm for the special case where the number of query vertices is bounded. We show that in general the problem is NP-hard and has no PTAS unless $\mathbf{P} = \mathbf{NP}$. Our main contribution is a constant-factor approximation algorithm running in time $\widetilde{O}(|Q||E|)$. A thorough experimental evaluation on a large variety of real-world graphs confirms that our method returns smaller and denser solutions than other methods, and does so by adding to the query set $Q$ a small number of important vertices (i.e., vertices with high centrality). Comment: Published in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data.
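
    To make the objective concrete, here is a minimal sketch that computes the Wiener index of a candidate connector subgraph. It is only an illustration of the quantity being minimized, not the paper's $\widetilde{O}(|Q||E|)$ algorithm; the use of networkx and the toy graph are our own assumptions.

```python
# Illustrative sketch (not the paper's algorithm): evaluate the Wiener index of a
# candidate connector subgraph. Graph and query set below are made-up toy data.
from itertools import combinations
import networkx as nx

def wiener_index(H: nx.Graph) -> int:
    """Sum of shortest-path distances over all unordered vertex pairs of H."""
    dist = dict(nx.all_pairs_shortest_path_length(H))
    return sum(dist[u][v] for u, v in combinations(H.nodes, 2))

G = nx.path_graph(6)                        # toy graph: 0-1-2-3-4-5
Q = {0, 3}                                  # query vertices
H = G.subgraph(nx.shortest_path(G, 0, 3))   # one simple candidate connector
assert all(q in H for q in Q) and nx.is_connected(H)
print(wiener_index(H))                      # objective value of this candidate
```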

    An Approximate Kernel for Connected Feedback Vertex Set

    The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph G and an integer k as input. The objective is to determine whether at most k vertices can be deleted from G such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed $0 < \epsilon < 1$, graph G, and $k \in \mathbb{N}$, the following holds: there is a polynomial time computable graph G' of size $k^{O(1)}$ such that for every $c \geq 1$, any $c$-approximate connected feedback vertex set of G' of size at most $k$ is a $c \cdot (1+\epsilon)$-approximate connected feedback vertex set of G. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor $\min\{OPT^{O(1)}, n^{1-\delta}\}$ in polynomial time for some $\delta > 0$.
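
    In the lossy-kernelization terminology above, the guarantee can be restated compactly as follows; this display only rephrases the statement in the abstract, with S' denoting a vertex set and "CFVS" abbreviating connected feedback vertex set.

```latex
% Compact restatement of the kernel guarantee: G' is computable in polynomial
% time, has size k^{O(1)}, and for every c >= 1 the same set S' carries over.
\[
  S' \text{ is a } c\text{-approximate CFVS of } G', \;\; |S'| \le k
  \;\Longrightarrow\;
  S' \text{ is a } c(1+\epsilon)\text{-approximate CFVS of } G .
\]
```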

    Approximation Algorithms for Distributionally Robust Stochastic Optimization with Black-Box Distributions

    Two-stage stochastic optimization is a framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we make first-stage decisions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A criticism of this model is that the underlying probability distribution is itself often imprecise! To address this, a versatile approach that has been proposed is the {\em distributionally robust 2-stage model}: given a collection of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in this collection. We provide a framework for designing approximation algorithms in such settings when the collection is a ball around a central distribution and the central distribution is accessed {\em only via a sampling black box}. We first show that one can utilize the {\em sample average approximation} (SAA) method to reduce the problem to the case where the central distribution has {\em polynomial-size} support. We then show how to approximately solve a fractional relaxation of the SAA (i.e., polynomial-scenario central-distribution) problem. By complementing this via LP-rounding algorithms that provide {\em local} (i.e., per-scenario) approximation guarantees, we obtain the {\em first} approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within $O(1)$-factors of the guarantees known for the deterministic version of the problem.
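
    The sketch below illustrates only the sample-average-approximation step described above: the black-box sampler for the central distribution is replaced by the empirical distribution over the drawn scenarios, yielding a polynomial-size support. All names are illustrative assumptions, not the paper's code; the downstream two-stage solver is deliberately left abstract.

```python
# Minimal SAA sketch: draw n scenarios from a black-box sampler and return the
# empirical (polynomial-size) support that a downstream two-stage solver would use.
import random
from collections import Counter
from typing import Callable, Hashable, List, Tuple

def saa_support(sample: Callable[[], Hashable], n: int) -> List[Tuple[Hashable, float]]:
    """Return (scenario, empirical probability) pairs from n black-box samples."""
    counts = Counter(sample() for _ in range(n))
    return [(scenario, cnt / n) for scenario, cnt in counts.items()]

# Toy black box: three scenarios with hidden probabilities.
def sample_scenario() -> str:
    return random.choices(["low", "medium", "high"], weights=[0.5, 0.3, 0.2])[0]

support = saa_support(sample_scenario, n=10_000)
print(support)  # feed this finite support into the polynomial-scenario problem
```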

    On trip planning queries in spatial databases

    In this paper we discuss a new type of query in Spatial Databases, called Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places and at the same time minimize the total travelling cost, e.g. what is the shortest travelling plan for me to visit an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known TSP problem and therefore is NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider the problem of points that lie on the edges of a spatial network (e.g. road network) and the distance between two points is defined using the shortest distance over the network. A simple greedy heuristic in this spirit is sketched below. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
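
    The following sketch illustrates the TPQ setting with one natural greedy heuristic: starting from s, repeatedly walk to the closest point of any unvisited category, then finish at e. It is an illustration only, not necessarily the approximation algorithms analyzed in the paper; Euclidean distance stands in for a general metric, and all names and data are our own.

```python
# Greedy nearest-neighbor illustration of a trip planning query (TPQ):
# visit exactly one point per category, starting at s and ending at e.
from math import dist
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def greedy_trip(s: Point, e: Point, categories: Dict[str, List[Point]]) -> List[Point]:
    """Pick, at each step, the closest point among all unvisited categories."""
    trip, current, remaining = [s], s, dict(categories)
    while remaining:
        cat, point = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(current, cp[1]),
        )
        trip.append(point)
        current = point
        del remaining[cat]          # each category is visited exactly once
    trip.append(e)
    return trip

route = greedy_trip((0, 0), (10, 0), {"pharmacy": [(2, 1), (8, 3)], "shop": [(5, -2)]})
print(route)
```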

    Routing for analog chip designs at NXP Semiconductors

    During the study week 2011 we worked on the question of how to automate certain aspects of the design of analog chips. Here we focused on the task of connecting different blocks with electrical wiring, which is particularly tedious to do by hand. For digital chips there is a wealth of research available for this, as in this situation the number of blocks makes it hopeless to do the design by hand. Hence, we set ourselves the task of finding solutions that are based on the previous research, as well as being tailored to the specific setting given by NXP. This resulted in a heuristic approach, which we presented at the end of the week in the form of a prototype tool. In this report we give a detailed account of the ideas we used, and describe possibilities to extend the approach.