
    Approximating the Held-Karp Bound for Metric TSP in Nearly Linear Time

    We give a nearly linear time randomized approximation scheme for the Held-Karp bound [Held and Karp, 1970] for metric TSP. Formally, given an undirected edge-weighted graph $G$ on $m$ edges and $\epsilon > 0$, the algorithm outputs in $O(m \log^4 n / \epsilon^2)$ time, with high probability, a $(1+\epsilon)$-approximation to the Held-Karp bound on the metric TSP instance induced by the shortest path metric on $G$. The algorithm can also be used to output a corresponding solution to the Subtour Elimination LP. We substantially improve upon the $O(m^2 \log^2(m) / \epsilon^2)$ running time achieved previously by Garg and Khandekar. The LP solution can be used to obtain a fast randomized $(\frac{3}{2} + \epsilon)$-approximation for metric TSP, which improves upon the running time of previous implementations of Christofides' algorithm.
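
    For reference, the Held-Karp bound is the optimal value of the Subtour Elimination LP over the shortest-path metric $c$; this is the textbook formulation, not quoted from the paper:

        \begin{align*}
        \min\ & \sum_{e \in E} c_e x_e \\
        \text{s.t.}\ & x(\delta(v)) = 2 && \forall v \in V, \\
        & x(\delta(S)) \ge 2 && \forall\, \emptyset \neq S \subsetneq V, \\
        & 0 \le x_e \le 1 && \forall e \in E,
        \end{align*}

    where $x(\delta(S))$ is the total fractional weight of edges crossing the cut $(S, V \setminus S)$.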

    Maximum weight cycle packing in directed graphs, with application to kidney exchange programs

    Centralized matching programs have been established in several countries to organize kidney exchanges between incompatible patient-donor pairs. At the heart of these programs are algorithms to solve kidney exchange problems, which can be modelled as cycle packing problems in a directed graph, involving cycles of length 2, 3, or even longer. Usually, the goal is to maximize the number of transplants, but sometimes the total benefit is maximized by considering the differences between suitable kidneys. These problems correspond to computing cycle packings of maximum size or maximum weight in directed graphs. Here we prove the APX-completeness of the problem of finding a maximum size exchange involving only 2-cycles and 3-cycles. We also present an approximation algorithm and an exact algorithm for the problem of finding a maximum weight exchange involving cycles of bounded length. The exact algorithm has been used to provide optimal solutions to real kidney exchange problems arising from the National Matching Scheme for Paired Donation run by NHS Blood and Transplant, and we describe practical experience based on this collaboration.
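
    As a concrete illustration of the underlying combinatorial problem (not the paper's algorithm), the following minimal Python sketch enumerates the 2-cycles and 3-cycles of a small directed graph and finds a maximum weight vertex-disjoint packing by exhaustive search; the weight map `w` is a hypothetical input format:

        import itertools

        def short_cycles(w):
            """Enumerate 2-cycles and 3-cycles of a directed graph.
            w: dict mapping an arc (u, v) to the weight of u -> v."""
            nodes = sorted({x for arc in w for x in arc})
            cycles = []
            for u, v in itertools.combinations(nodes, 2):
                if (u, v) in w and (v, u) in w:
                    cycles.append(((u, v), w[u, v] + w[v, u]))
            for u, v, x in itertools.permutations(nodes, 3):
                # u smallest: keep one representative per rotation class
                if u < v and u < x and (u, v) in w and (v, x) in w and (x, u) in w:
                    cycles.append(((u, v, x), w[u, v] + w[v, x] + w[x, u]))
            return cycles

        def max_weight_packing(w):
            """Exhaustive search over vertex-disjoint sets of short cycles.
            Exponential time; only meant for tiny instances."""
            cycles = short_cycles(w)
            best = (0.0, [])

            def go(i, used, weight, chosen):
                nonlocal best
                if weight > best[0]:
                    best = (weight, chosen)
                for j in range(i, len(cycles)):
                    verts, cw = cycles[j]
                    if used.isdisjoint(verts):
                        go(j + 1, used | set(verts), weight + cw, chosen + [verts])

            go(0, frozenset(), 0.0, [])
            return best

        # Example: max_weight_packing({("A", "B"): 1, ("B", "A"): 1}) -> (2.0, [("A", "B")])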

    Squarepants in a Tree: Sum of Subtree Clustering and Hyperbolic Pants Decomposition

    We provide efficient constant factor approximation algorithms for the problems of finding a hierarchical clustering of a point set in any metric space, minimizing the sum of minimum spanning tree lengths within each cluster, and in the hyperbolic or Euclidean planes, minimizing the sum of cluster perimeters. Our algorithms for the hyperbolic and Euclidean planes can also be used to provide a pants decomposition, that is, a set of disjoint simple closed curves partitioning the plane minus the input points into subsets with exactly three boundary components, with approximately minimum total length. In the Euclidean case, these curves are squares; in the hyperbolic case, they combine our Euclidean square pants decomposition with our tree clustering method for general metric spaces. (Comment: 22 pages, 14 figures. This version replaces the proof of what is now Lemma 5.2, as the previous proof was erroneous.)
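
    To make the first objective concrete, here is a minimal sketch (an illustration of the cost function only, not the paper's approximation algorithm) that evaluates the sum of MST lengths for a given flat collection of clusters, using Prim's algorithm in an arbitrary metric:

        import math

        def mst_length(points, dist):
            """Prim's algorithm: total edge length of a minimum spanning tree."""
            if len(points) <= 1:
                return 0.0
            best = {p: dist(points[0], p) for p in points[1:]}
            total = 0.0
            while best:
                p = min(best, key=best.get)   # cheapest point to attach next
                total += best.pop(p)
                for q in best:
                    best[q] = min(best[q], dist(p, q))
            return total

        def clustering_cost(clusters, dist):
            """Sum of within-cluster MST lengths (a flat-clustering
            simplification of the paper's hierarchical objective)."""
            return sum(mst_length(list(c), dist) for c in clusters)

        # Example with the Euclidean plane:
        # clustering_cost([[(0, 0), (0, 1)], [(5, 5), (6, 5), (6, 6)]], math.dist)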

    Walking Through Waypoints

    We initiate the study of a fundamental combinatorial problem: Given a capacitated graph $G=(V,E)$, find a shortest walk ("route") from a source $s \in V$ to a destination $t \in V$ that includes all vertices specified by a set $\mathscr{W} \subseteq V$: the \emph{waypoints}. This waypoint routing problem finds immediate applications in the context of modern networked distributed systems. Our main contribution is an exact polynomial-time algorithm for graphs of bounded treewidth. We also show that if the number of waypoints is logarithmically bounded, exact polynomial-time algorithms exist even for general graphs. Our two algorithms provide an almost complete characterization of what can be solved exactly in polynomial time: we show that more general problems (e.g., on grid graphs of maximum degree 3, with slightly more waypoints) are computationally intractable.
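
    For intuition only (exponential in the number of waypoints, and not one of the paper's algorithms), the brute-force sketch below tries every waypoint order, stitches BFS shortest paths together in an unweighted graph, and returns the shortest stitched walk that respects the edge capacities; unlisted edges are assumed to have capacity 1:

        from collections import deque
        from itertools import permutations

        def bfs_path(adj, s, t):
            """Shortest path from s to t in an unweighted graph, or None."""
            prev = {s: None}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if u == t:
                    path = []
                    while u is not None:
                        path.append(u)
                        u = prev[u]
                    return path[::-1]
                for v in adj[u]:
                    if v not in prev:
                        prev[v] = u
                        queue.append(v)
            return None

        def waypoint_walk(adj, cap, s, t, waypoints):
            """Best capacity-respecting walk found over all waypoint orders.
            cap: dict mapping frozenset({u, v}) to the capacity of edge uv.
            Stitching per-leg shortest paths is a heuristic: it can miss
            feasible walks that deviate from shortest paths."""
            best = None
            for order in permutations(waypoints):
                stops, walk = (s, *order, t), [s]
                for a, b in zip(stops, stops[1:]):
                    seg = bfs_path(adj, a, b)
                    if seg is None:
                        walk = None
                        break
                    walk += seg[1:]
                if walk is None:
                    continue
                used = {}
                for u, v in zip(walk, walk[1:]):
                    e = frozenset((u, v))
                    used[e] = used.get(e, 0) + 1
                if all(k <= cap.get(e, 1) for e, k in used.items()) and \
                        (best is None or len(walk) < len(best)):
                    best = walk
            return best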

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem and outputs another instance $(I',k')$ of the same problem, such that $|I'|+k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. (Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.)
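
    To fix ideas, a minimal interface sketch of what an $\alpha$-approximate kernel provides; the class and method names are my own framing of the definition, not code from the paper:

        from typing import Tuple

        Instance = object   # problem-specific instance encoding (hypothetical)
        Solution = object   # problem-specific solution encoding (hypothetical)

        class ApproximateKernel:
            """An alpha-approximate kernel as defined in the abstract."""

            alpha: float

            def reduce(self, I: Instance, k: int) -> Tuple[Instance, int]:
                """Polynomial-time preprocessing: returns (I', k') for the
                same problem with |I'| + k' bounded by a polynomial in k."""
                raise NotImplementedError

            def lift(self, I: Instance, k: int, I2: Instance, k2: int,
                     s2: Solution) -> Solution:
                """Turns a c-approximate solution s2 of (I', k') into a
                (c * alpha)-approximate solution of (I, k), in poly time."""
                raise NotImplementedError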

    Distributed Connectivity Decomposition

    We present time-efficient distributed algorithms for decomposing graphs with large edge or vertex connectivity into multiple spanning or dominating trees, respectively. As their primary applications, these decompositions allow us to achieve information flow with size close to the connectivity by parallelizing it along the trees. More specifically, our distributed decomposition algorithms are as follows: (I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D+\sqrt{n})$ rounds. (II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil\frac{\lambda-1}{2}\rceil(1-\varepsilon)$, in $\widetilde{O}(D+\sqrt{n\lambda})$ rounds. We also show round complexity lower bounds of $\tilde{\Omega}(D+\sqrt{\frac{n}{k}})$ and $\tilde{\Omega}(D+\sqrt{\frac{n}{\lambda}})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\tilde{O}(m)$. As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex connectivity decomposition leads to near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized $\widetilde{O}(m)$ and distributed $\tilde{O}(D+\sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm, while the latter is the first distributed vertex connectivity approximation.
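
    As a small centralized illustration of edge-disjoint spanning tree packing (greedy peeling with union-find; not the paper's distributed, fractional, or weighted construction, and not optimal in general):

        def find(parent, x):
            """Union-find root with path halving."""
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def peel_spanning_trees(n, edges):
            """Greedily peel edge-disjoint spanning trees of a graph on
            vertices 0..n-1, returning the list of trees found."""
            if n < 2:
                return []
            remaining, trees = list(edges), []
            while True:
                parent = list(range(n))
                tree, rest = [], []
                for u, v in remaining:
                    ru, rv = find(parent, u), find(parent, v)
                    if ru != rv and len(tree) < n - 1:
                        parent[ru] = rv
                        tree.append((u, v))
                    else:
                        rest.append((u, v))
                if len(tree) < n - 1:
                    return trees  # leftovers no longer span the graph
                trees.append(tree)
                remaining = rest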

    General Bounds for Incremental Maximization

    We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value $k \in \mathbb{N}$ that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all $k$ between the incremental solution after $k$ steps and an optimum solution of cardinality $k$. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general $2.618$-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below $2.18$ in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be $1.58$-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) ($b$-)matching and a variant of the maximum flow problem. We show that the greedy algorithm has competitive ratio (exactly) $2.313$ for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems. (Comment: fixed typo.)
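
    The greedy algorithm studied in the second part is easy to state; here is a generic sketch with a hypothetical interface, growing the solution by the element of largest marginal gain in each step:

        def incremental_greedy(elements, f, k_max):
            """Build nested solutions S_1 ⊆ S_2 ⊆ ... ⊆ S_{k_max}, adding in
            each step the element that increases the objective f the most.
            f maps a frozenset of elements to a real objective value."""
            solution, prefixes = frozenset(), []
            for _ in range(k_max):
                candidates = [e for e in elements if e not in solution]
                if not candidates:
                    break
                solution |= {max(candidates, key=lambda e: f(solution | {e}))}
                prefixes.append(set(solution))  # incremental solution after this step
            return prefixes

        # Example: incremental_greedy(range(5), lambda S: sum(S), 3) -> [{4}, {3, 4}, {2, 3, 4}]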

    Efficient Approximation Algorithms for Multi-Antennae Largest Weight Data Retrieval

    In a mobile network, wireless data broadcast over $m$ channels (frequencies) is a powerful means for distributed dissemination of data to clients who access the channels through multi-antennae equipped on their mobile devices. The $\delta$-antennae largest weight data retrieval ($\delta$ALWDR) problem is to compute a schedule for downloading a subset of data items that has a maximum total weight using $\delta$ antennae in a given time interval. In this paper, we propose a ratio $1-\frac{1}{e}-\epsilon$ approximation algorithm for the $\delta$ALWDR problem that has the same ratio as the known result but a significantly improved time complexity of $O(2^{\frac{1}{\epsilon}}\frac{1}{\epsilon}m^{7}T^{3.5}L)$, down from $O(\epsilon^{3.5}m^{\frac{3.5}{\epsilon}}T^{3.5}L)$ when $\delta=1$ \cite{lu2014data}. To our knowledge, our algorithm represents the first ratio $1-\frac{1}{e}-\epsilon$ approximation solution to $\delta$ALWDR for the general case of arbitrary $\delta$. To achieve this, we first give a ratio $1-\frac{1}{e}$ algorithm for the $\gamma$-separated $\delta$ALWDR ($\delta$A$\gamma$LWDR) with runtime $O(m^{7}T^{3.5}L)$, under the assumption that every data item appears at most once in each segment of $\delta$A$\gamma$LWDR, for any input of maximum length $L$ on $m$ channels in $T$ time slots. Then, we show that we can retain the same ratio for $\delta$A$\gamma$LWDR without this assumption at the cost of increasing the time complexity to $O(2^{\gamma}m^{7}T^{3.5}L)$. This result immediately yields an approximation solution of the same ratio and time complexity for $\delta$ALWDR, presenting a significant improvement over the known time complexity of the ratio $1-\frac{1}{e}-\epsilon$ approximation to the problem.
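
    In a much-simplified model (my illustration, not the paper's technique): each data item is an interval on some channel, retrieving an item occupies one antenna for its whole interval, and a set of items is feasible with $\delta$ antennae iff at every moment at most $\delta$ chosen intervals overlap (interval graphs are perfect, so pointwise overlap equals the number of antennae needed). A tiny brute force over subsets:

        from itertools import combinations

        def feasible(items, delta):
            """items: iterable of (start, end, weight), end exclusive.
            Feasible iff no time point lies inside more than delta items;
            it suffices to check overlap at interval start points."""
            starts = {s for s, _, _ in items}
            return all(sum(s <= t < e for s, e, _ in items) <= delta
                       for t in starts)

        def best_retrieval(items, delta):
            """Exhaustive max-weight feasible subset; tiny inputs only."""
            best_w, best_set = 0, ()
            for r in range(1, len(items) + 1):
                for subset in combinations(items, r):
                    w = sum(it[2] for it in subset)
                    if w > best_w and feasible(subset, delta):
                        best_w, best_set = w, subset
            return best_w, best_set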