
    Fault Tolerant Clustering Revisited

    In discrete k-center and k-median clustering, we are given a set of points P in a metric space M, and the task is to output a set C \subseteq P, |C| = k, such that the cost of clustering P using C is as small as possible. For k-center, the cost is the furthest a point has to travel to its nearest center, whereas for k-median, the cost is the sum of all point-to-nearest-center distances. In the fault-tolerant versions of these problems, we are given an additional parameter 1 \leq \ell \leq k, such that when computing the cost of clustering, points are assigned to their \ell-th nearest neighbor in C, instead of their nearest neighbor. We provide constant factor approximation algorithms for these problems that are both conceptually simple and highly practical from an implementation standpoint.
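
    Both objectives and their fault-tolerant variants are easy to state in code. The Python sketch below is illustrative only (not the paper's algorithm; the point representation and metric are assumptions): it computes the fault-tolerant k-center and k-median costs of a given center set by charging each point the distance to its \ell-th nearest center.

    ```python
    # Minimal sketch of the fault-tolerant objectives described above (illustrative,
    # not the paper's algorithm): each point is charged its distance to the l-th
    # nearest center instead of its nearest one (l = 1 recovers the usual costs).
    def fault_tolerant_costs(points, centers, l, dist):
        assert 1 <= l <= len(centers)
        per_point = []
        for p in points:
            d = sorted(dist(p, c) for c in centers)
            per_point.append(d[l - 1])            # distance to the l-th nearest center
        return max(per_point), sum(per_point)     # (k-center cost, k-median cost)

    # Example on the real line with two centers; l = 2 sends every point to its
    # second-closest center.
    line = lambda a, b: abs(a - b)
    print(fault_tolerant_costs([0.0, 1.0, 5.0], [0.0, 5.0], l=2, dist=line))  # (5.0, 14.0)
    ```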

    Robust Fault Tolerant uncapacitated facility location

    In the uncapacitated facility location problem, given a graph, a set of demands, and opening costs, it is required to find a set of facilities R, so as to minimize the sum of the cost of opening the facilities in R and the cost of assigning all node demands to open facilities. This paper concerns the robust fault-tolerant version of the uncapacitated facility location problem (RFTFL). In this problem, one or more facilities might fail, and each demand should be supplied by the closest open facility that did not fail. It is required to find a set of facilities R, so as to minimize the sum of the cost of opening the facilities in R and the cost of assigning all node demands to open facilities that did not fail, after the failure of up to \alpha facilities. We present a polynomial-time algorithm that yields a 6.5-approximation for this problem with at most one failure and a (1.5 + 7.5\alpha)-approximation for the problem with at most \alpha > 1 failures. We also show that the RFTFL problem is NP-hard even on trees, and even in the case of a single failure.
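
    For concreteness, the following Python sketch evaluates the RFTFL objective for a fixed facility set R: opening cost plus the worst-case assignment cost over every failure pattern of at most \alpha facilities. It is a hedged illustration only; the brute-force enumeration, the dictionary-based distances, and the choice to keep at least one facility alive are assumptions made for the example, not the paper's algorithm.

    ```python
    from itertools import combinations

    def rftfl_cost(R, open_cost, demand, dist, alpha):
        """dist[v][f]: distance from demand node v to facility f; demand[v]: its demand."""
        opening = sum(open_cost[f] for f in R)
        worst = 0.0
        for k in range(min(alpha, len(R) - 1) + 1):   # assume at least one facility survives
            for failed in combinations(R, k):
                alive = [f for f in R if f not in failed]
                assign = sum(demand[v] * min(dist[v][f] for f in alive) for v in demand)
                worst = max(worst, assign)
        return opening + worst

    # Tiny example: two facilities and one possible failure (alpha = 1).
    dist = {'a': {0: 1.0, 1: 4.0}, 'b': {0: 3.0, 1: 1.0}}
    print(rftfl_cost(R=[0, 1], open_cost={0: 2.0, 1: 2.0},
                     demand={'a': 1.0, 'b': 1.0}, dist=dist, alpha=1))  # 9.0
    ```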

    Matroid and Knapsack Center Problems

    In the classic k-center problem, we are given a metric graph, and the objective is to open k nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of k-center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows: 1. We consider the matroid center problem in which the centers are required to form an independent set of a given matroid. We show this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem where a given number of vertices can be excluded as outliers from the solution. We present a 7-approximation for the outlier version. 2. We consider the (multi-)knapsack center problem in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of 1+\epsilon. We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by 1+\epsilon. Comment: A preliminary version of this paper is accepted to IPCO 201
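
    To make the base objective concrete, here is a Python sketch of the classic farthest-point greedy for plain k-center, a well-known 2-approximation on any metric. It is not the paper's matroid or knapsack algorithm; the dict-of-dicts metric and node labels are illustrative assumptions.

    ```python
    # Classic farthest-point greedy (Gonzalez) for plain k-center, shown only as a
    # reference for the base problem; not the matroid/knapsack algorithms above.
    def greedy_k_center(nodes, dist, k):
        centers = [nodes[0]]                      # arbitrary first center
        while len(centers) < k:
            # pick the node farthest from its current nearest center
            nxt = max(nodes, key=lambda v: min(dist[v][c] for c in centers))
            centers.append(nxt)
        radius = max(min(dist[v][c] for c in centers) for v in nodes)
        return centers, radius                    # radius is within a factor 2 of optimal

    nodes = ['a', 'b', 'c', 'd']
    dist = {u: {v: abs(ord(u) - ord(v)) for v in nodes} for u in nodes}  # a line metric
    print(greedy_k_center(nodes, dist, k=2))      # (['a', 'd'], 1)
    ```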

    Fault-Tolerant Hotelling Games

    The n-player Hotelling game calls for each player to choose a point on the line segment, so as to maximize the size of his Voronoi cell. This paper studies fault-tolerant versions of the Hotelling game. Two fault models are studied: line faults and player faults. The first model assumes that the environment is prone to failure: with some probability, a disconnection occurs at a random point on the line, splitting it into two separate segments and modifying each player's Voronoi cell accordingly. A complete characterization of the Nash equilibria of this variant is provided for every n. Additionally, a one-to-one correspondence is shown between equilibria of this variant and of the Hotelling game with no faults. The second fault model assumes the players are prone to failure: each player is independently removed from the game with some probability, changing the payoffs of the remaining players accordingly. It is shown that for n \geq 3 this variant of the game has no Nash equilibria.
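
    The payoff structure of the fault-free game is simple to compute. The Python sketch below is illustrative only (not from the paper): it returns each player's payoff on the unit segment as the length of their Voronoi cell, using the midpoint rule between adjacent players; coincident players are not handled.

    ```python
    # Payoffs in the n-player Hotelling game on [0, 1]: each player's payoff is the
    # length of the set of points closer to them than to any other player.
    def hotelling_payoffs(positions):
        order = sorted(range(len(positions)), key=lambda i: positions[i])
        xs = [positions[i] for i in order]
        payoffs = [0.0] * len(positions)
        for rank, i in enumerate(order):
            left = 0.0 if rank == 0 else (xs[rank - 1] + xs[rank]) / 2
            right = 1.0 if rank == len(xs) - 1 else (xs[rank] + xs[rank + 1]) / 2
            payoffs[i] = right - left             # length of player i's Voronoi cell
        return payoffs

    # With two players at 1/4 and 3/4, each captures half the segment.
    print(hotelling_payoffs([0.25, 0.75]))        # [0.5, 0.5]
    ```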

    Structural Parameters, Tight Bounds, and Approximation for (k,r)-Center

    In (k,r)-Center we are given a (possibly edge-weighted) graph and are asked to select at most k vertices (centers), so that all other vertices are at distance at most r from a center. In this paper we provide a number of tight fine-grained bounds on the complexity of this problem with respect to various standard graph parameters. Specifically:
    - For any r >= 1, we show an algorithm that solves the problem in O*((3r+1)^cw) time, where cw is the clique-width of the input graph, as well as a tight SETH lower bound matching this algorithm's performance. As a corollary, for r=1, this closes the gap that previously existed on the complexity of Dominating Set parameterized by cw.
    - We strengthen previously known FPT lower bounds, by showing that (k,r)-Center is W[1]-hard parameterized by the input graph's vertex cover (if edge weights are allowed), or feedback vertex set, even if k is an additional parameter. Our reductions imply tight ETH-based lower bounds. Finally, we devise an algorithm parameterized by vertex cover for unweighted graphs.
    - We show that the complexity of the problem parameterized by tree-depth is 2^Theta(td^2) by showing an algorithm of this complexity and a tight ETH-based lower bound.
    We complement these mostly negative results by providing FPT approximation schemes parameterized by clique-width or treewidth which work efficiently independently of the values of k,r. In particular, we give algorithms which, for any epsilon>0, run in time O*((tw/epsilon)^O(tw)), O*((cw/epsilon)^O(cw)) and return a (k,(1+epsilon)r)-center, if a (k,r)-center exists, thus circumventing the problem's W-hardness.
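
    As a small illustration of the problem definition (unrelated to the parameterized algorithms above), the following Python sketch checks whether a candidate set of centers is a valid (k,r)-center of an unweighted graph, using a multi-source BFS truncated at depth r. The adjacency-list representation is an assumption for the example.

    ```python
    from collections import deque

    # Verify a candidate (k, r)-center in an unweighted graph: every vertex must lie
    # within r hops of some chosen center.
    def is_kr_center(graph, centers, k, r):
        if len(centers) > k:
            return False
        dist = {c: 0 for c in centers}            # multi-source BFS from all centers
        queue = deque(centers)
        while queue:
            u = queue.popleft()
            if dist[u] == r:                      # do not expand past depth r
                continue
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return all(v in dist for v in graph)

    # Path a-b-c-d: {b} is a (1,2)-center but not a (1,1)-center.
    g = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
    print(is_kr_center(g, ['b'], k=1, r=2), is_kr_center(g, ['b'], k=1, r=1))  # True False
    ```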

    Dependent randomized rounding for clustering and partition systems with knapsack constraints

    Clustering problems are fundamental to unsupervised learning. There is an increased emphasis on fairness in machine learning and AI; one representative notion of fairness is that no single demographic group should be over-represented among the cluster-centers. This, and much more general clustering problems, can be formulated with "knapsack" and "partition" constraints. We develop new randomized algorithms targeting such problems, and study two in particular: multi-knapsack median and multi-knapsack center. Our rounding algorithms give new approximation and pseudo-approximation algorithms for these problems. One key technical tool, which may be of independent interest, is a new tail bound analogous to Feige (2006) for sums of random variables with unbounded variances. Such bounds are very useful in inferring properties of large networks using few samples.
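
    A hedged illustration in Python of how the fairness requirement mentioned above can be phrased as knapsack constraints: each candidate center carries a 0/1 weight per demographic group, and each group has a cap on how many open centers it may contribute. The group labels and caps below are invented for the example and are not from the paper.

    ```python
    # Check a chosen center set against per-group caps (a simple multi-knapsack
    # constraint capturing "no group is over-represented among the centers").
    def satisfies_multi_knapsack(centers, group_of, caps):
        counts = {}
        for c in centers:
            g = group_of[c]
            counts[g] = counts.get(g, 0) + 1
        return all(counts.get(g, 0) <= cap for g, cap in caps.items())

    group_of = {1: 'A', 2: 'A', 3: 'B', 4: 'B'}
    print(satisfies_multi_knapsack({1, 2, 3}, group_of, {'A': 2, 'B': 2}))  # True
    print(satisfies_multi_knapsack({1, 2, 3}, group_of, {'A': 1, 'B': 2}))  # False
    ```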

    Clustering with Faulty Centers

    In this paper we introduce and formally study the problem of k-clustering with faulty centers. Specifically, we study the faulty versions of k-center, k-median, and k-means clustering, where centers have some probability of not existing, as opposed to prior work where clients had some probability of not existing. For all three problems we provide fixed parameter tractable algorithms, in the parameters k, d, and \epsilon, that (1+\epsilon)-approximate the minimum expected cost solutions for points in d-dimensional Euclidean space. For Faulty k-center we additionally provide a 5-approximation for general metrics. Significantly, all of our algorithms have a small dependence on n. Specifically, our Faulty k-center algorithms have only linear dependence on n, while for our algorithms for Faulty k-median and Faulty k-means the dependence is still only n^(1 + o(1)).
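
    The expected-cost objective can be spelled out directly. The brute-force Python sketch below is illustrative only (not the paper's fixed-parameter algorithms): it computes the expected Faulty k-center cost when each chosen center independently fails with probability p by enumerating all survival patterns; treating the all-failed pattern as cost 0 is a modeling choice made here.

    ```python
    from itertools import product

    # Expected Faulty k-center cost under independent center failures; exponential in
    # the number of centers, so only sensible for very small k.
    def expected_faulty_kcenter_cost(points, centers, p, dist):
        exp_cost = 0.0
        for pattern in product([True, False], repeat=len(centers)):   # True = survives
            alive = [c for c, up in zip(centers, pattern) if up]
            prob = 1.0
            for up in pattern:
                prob *= (1 - p) if up else p
            cost = 0.0 if not alive else max(min(dist(q, c) for c in alive) for q in points)
            exp_cost += prob * cost                # weight cost by the pattern's probability
        return exp_cost

    d = lambda a, b: abs(a - b)
    print(expected_faulty_kcenter_cost([0.0, 10.0], centers=[0.0, 10.0], p=0.1, dist=d))  # ~1.8
    ```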