    Approximate Clustering via Metric Partitioning

    In this paper we consider two metric covering/clustering problems: the \textit{Minimum Cost Covering Problem} (MCC) and $k$-clustering. In the MCC problem, we are given two point sets $X$ (clients) and $Y$ (servers), and a metric on $X \cup Y$. We would like to cover the clients by balls centered at the servers. The objective function to minimize is the sum of the $\alpha$-th powers of the radii of the balls. Here $\alpha \geq 1$ is a parameter of the problem (but not of a problem instance). MCC is closely related to the $k$-clustering problem. The main difference between $k$-clustering and MCC is that in $k$-clustering one needs to select $k$ balls to cover the clients. For any $\varepsilon > 0$, we describe quasi-polynomial time $(1+\varepsilon)$-approximation algorithms for both problems. However, in the case of $k$-clustering the algorithm uses $(1+\varepsilon)k$ balls. Prior to our work, a $3^{\alpha}$-approximation and a $c^{\alpha}$-approximation were achieved by polynomial-time algorithms for MCC and $k$-clustering, respectively, where $c > 1$ is an absolute constant. These two problems are thus interesting examples of metric covering/clustering problems that admit a $(1+\varepsilon)$-approximation (using $(1+\varepsilon)k$ balls in the case of $k$-clustering), if one is willing to settle for quasi-polynomial time. In contrast, for the variant of MCC where $\alpha$ is part of the input, we show under standard assumptions that no polynomial-time algorithm can achieve an approximation factor better than $O(\log |X|)$ for $\alpha \geq \log |X|$.
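    To make the objective concrete, here is a minimal Python sketch (not from the paper) that evaluates the MCC cost of a candidate solution; the helper name `mcc_cost`, the Euclidean metric, and the toy instance are illustrative assumptions.

```python
import math

# Hypothetical helper illustrating the MCC objective: cover the clients by
# balls centered at servers, paying the sum of the alpha-th powers of radii.
def mcc_cost(clients, balls, alpha=1.0, dist=math.dist):
    # Feasibility: every client must lie in at least one ball.
    for x in clients:
        if not any(dist(x, center) <= r for center, r in balls):
            return None  # not a feasible cover
    return sum(r ** alpha for _, r in balls)

# Toy instance on the real line: one server ball of radius 2 covers both clients.
clients = [(0.0,), (4.0,)]
balls = [((2.0,), 2.0)]                      # (center, radius)
print(mcc_cost(clients, balls, alpha=2.0))   # 2**2 = 4.0
```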

    Parameterized k-Clustering: Tractability Island

    In k-Clustering we are given a multiset of n vectors X subset Z^d and a nonnegative number D, and we need to decide whether X can be partitioned into k clusters C_1, ..., C_k such that the cost sum_{i=1}^k min_{c_i in R^d} sum_{x in C_i} |x-c_i|_p^p <= D, where |*|_p is the Minkowski (L_p) norm of order p. For p=1, k-Clustering is the well-known k-Median. For p=2, the case of the Euclidean distance, k-Clustering is k-Means. We study k-Clustering from the perspective of parameterized complexity. The problem is known to be NP-hard for k=2, and it is also NP-hard for d=2. It is a long-standing open question whether the problem is fixed-parameter tractable (FPT) for the combined parameter d+k. In this paper, we focus on the parameterization by D. We complement the known negative results by showing that for p=0 and p=infty, k-Clustering is W[1]-hard when parameterized by D. Interestingly, the complexity landscape of the problem appears to be more intricate than expected. We discover a tractability island of k-Clustering: for every p in (0,1], k-Clustering is solvable in time 2^O(D log D) (nd)^O(1).
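    As a concrete illustration of the cost function (my sketch, not the paper's): the objective separates across coordinates, and for p=2 and p=1 the optimal cluster center is the coordinatewise mean and median, respectively. The function names below are assumptions.

```python
import numpy as np

def cluster_cost(points, p):
    # min over c in R^d of sum_x |x - c|_p^p for one cluster; the optimum
    # is the coordinatewise mean for p = 2 and the coordinatewise median
    # for p = 1 (this sketch covers only those two cases).
    points = np.asarray(points, dtype=float)
    if p == 2:
        c = points.mean(axis=0)
    elif p == 1:
        c = np.median(points, axis=0)
    else:
        raise NotImplementedError("illustration covers only p in {1, 2}")
    return (np.abs(points - c) ** p).sum()

def k_clustering_cost(clusters, p):
    # Total cost of a fixed partition into clusters C_1, ..., C_k.
    return sum(cluster_cost(C, p) for C in clusters)

# k-Means (p = 2) cost of a fixed 2-clustering of four points in Z^2.
clusters = [[(0, 0), (0, 2)], [(5, 0), (7, 0)]]
print(k_clustering_cost(clusters, p=2))  # 2.0 + 2.0 = 4.0
```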

    Approximating min-max k-clustering

    We consider the problem of partitioning a set into $k$ clusters so as to minimize the maximum cost of a cluster. The cost function is given by an oracle, and we assume that it satisfies some natural structural constraints: it is monotone, the cost of a singleton is zero, and for all $S, S'$ with $S \cap S' \neq \emptyset$ we have $c(S) + c(S') \geq c(S \cup S')$. For this problem we present a $(2k-1)$-approximation algorithm for $k \geq 3$ and a 2-approximation algorithm for $k = 2$, and we also show a lower bound of $k$ on the performance guarantee of any polynomial-time algorithm. We then consider special cases of this problem arising in vehicle routing problems, and present improved results.
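    For intuition, here is a brute-force reference sketch (not the paper's algorithm) that computes an optimal min-max partition for tiny instances, with the cost oracle passed in as a function; the diameter oracle in the example satisfies the stated constraints (monotone, zero on singletons, subadditive on intersecting sets).

```python
from itertools import product

# Exhaustive reference implementation (exponential in n): find a partition
# into at most k clusters minimizing the maximum oracle cost of a cluster.
def min_max_k_clustering(items, k, cost):
    best_val, best_part = float("inf"), None
    for labels in product(range(k), repeat=len(items)):
        clusters = [[x for x, lab in zip(items, labels) if lab == j]
                    for j in range(k)]
        clusters = [C for C in clusters if C]   # drop empty clusters
        val = max(cost(C) for C in clusters)
        if val < best_val:
            best_val, best_part = val, clusters
    return best_val, best_part

# Example oracle: the diameter of a set of points on a line.
diameter = lambda S: max(S) - min(S)
print(min_max_k_clustering([0, 1, 9, 10], k=2, cost=diameter))
# -> (1, [[0, 1], [9, 10]])
```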

    Self-stabilizing k-clustering in mobile ad hoc networks

    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a maximal independent set, and finally constructs a k-clustering. Finding a minimal k-clustering is known to be NP-hard. If the network is a unit disk graph in the plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph.
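    The feasibility condition behind a k-clustering here is that every process lies within k hops of some clusterhead. Below is a centralized Python sketch of that check via a depth-bounded multi-source BFS; this is not the distributed, self-stabilizing algorithms themselves, and the function name is an assumption.

```python
from collections import deque

def is_k_clustering(adj, heads, k):
    # Multi-source BFS from the clusterheads, truncated at depth k;
    # feasible iff every node of the graph is reached.
    dist = {h: 0 for h in heads}
    queue = deque(heads)
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(v in dist for v in adj)

# Path a-b-c-d-e with head c: every node is within k = 2 hops of c.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(is_k_clustering(adj, heads={"c"}, k=2))  # True
```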

    Approximation Schemes for Min-Sum k-Clustering

    We consider the Min-Sum k-Clustering (k-MSC) problem. Given a set of points in a metric which is represented by an edge-weighted graph G = (V, E) and a parameter k, the goal is to partition the points V into k clusters such that the sum of distances between all pairs of points within the same cluster is minimized. The k-MSC problem is known to be APX-hard on general metrics. The best known approximation algorithms for the problem, obtained by Behsaz, Friggstad, Salavatipour and Sivakumar [Algorithmica 2019], achieve an approximation ratio of O(log |V|) in polynomial time for general metrics and an approximation ratio of 2+ε in quasi-polynomial time for metrics with bounded doubling dimension. No approximation scheme for k-MSC (when k is part of the input) was known for any non-trivial metric prior to our work. In fact, most of the previous works rely on the simple fact that there is a 2-approximate reduction from k-MSC to the balanced k-median problem and design approximation algorithms for the latter to obtain an approximation for k-MSC. In this paper, we obtain the first Quasi-Polynomial Time Approximation Schemes (QPTAS) for the problem on metrics induced by graphs of bounded treewidth, graphs of bounded highway dimension, graphs of bounded doubling dimension (including fixed-dimensional Euclidean metrics), and planar and minor-free graphs. We bypass the barrier of 2 for k-MSC by introducing a new clustering problem, which we call min-hub clustering, a generalization of balanced k-median that is a trade-off between center-based clustering problems (such as balanced k-median) and pairwise clustering (such as Min-Sum k-Clustering). We then show how one can find approximation schemes for min-hub clustering on certain classes of metrics.
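    The objective itself is easy to state in code. Here is a short sketch (mine, not the paper's) of the k-MSC cost of a fixed partition; in the paper the metric would come from shortest paths in the edge-weighted graph G, while the example below substitutes a Euclidean metric for brevity.

```python
import math
from itertools import combinations

def msc_cost(clusters, dist):
    # Min-Sum k-Clustering objective: total distance over all pairs of
    # points that end up in the same cluster.
    return sum(dist(u, v)
               for C in clusters
               for u, v in combinations(C, 2))

clusters = [[(0, 0), (0, 1), (1, 0)], [(5, 5), (6, 5)]]
print(msc_cost(clusters, dist=math.dist))
# first cluster: 1 + 1 + sqrt(2); second cluster: 1; total ~= 4.414
```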