1,060 research outputs found

    Approximating k-Median via Pseudo-Approximation

    We present a novel approximation algorithm for $k$-median that achieves an approximation guarantee of $1+\sqrt{3}+\epsilon$, improving upon the decade-old ratio of $3+\epsilon$. Our approach is based on two components, each of which, we believe, is of independent interest. First, we show that in order to give an $\alpha$-approximation algorithm for $k$-median, it is sufficient to give a \emph{pseudo-approximation algorithm} that finds an $\alpha$-approximate solution by opening $k+O(1)$ facilities. This is a rather surprising result, as there exist instances for which opening $k+1$ facilities may lead to a significantly smaller cost than if only $k$ facilities were opened. Second, we give such a pseudo-approximation algorithm with $\alpha = 1+\sqrt{3}+\epsilon$. Prior to our work, it was not even known whether opening $k + o(k)$ facilities would help improve the approximation ratio.
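
    As a quick illustration of why the pseudo-approximation result is surprising, the following self-contained Python sketch (a made-up toy instance, not taken from the paper) builds three tight, far-apart clusters and brute-forces the k-median objective: with k = 2 one whole cluster is stranded, while k + 1 = 3 facilities drop the cost by roughly two orders of magnitude.

        # Toy instance (made up) showing that opening k+1 facilities can be far
        # cheaper than the best k-facility solution, as noted in the abstract.
        from itertools import combinations

        points = [0, 1, 2, 100, 101, 102, 200, 201, 202]  # three tight, far-apart clusters

        def kmedian_cost(centers):
            # k-median objective: each point pays the distance to its nearest open facility
            return sum(min(abs(p - c) for c in centers) for p in points)

        def best_cost(k):
            # brute force over all ways to open k facilities at input points
            return min(kmedian_cost(S) for S in combinations(points, k))

        print(best_cost(2))  # ~300: one whole cluster must connect across distance ~100
        print(best_cost(3))  # 6: one facility per cluster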

    An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem

    We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a {\em $(\lambda_f,\lambda_c)$-approximation algorithm} if the solution it produces has total cost at most $\lambda_f \cdot F^* + \lambda_c \cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the $(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a (1.6774, 1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve $(\gamma_f, 1+2e^{-\gamma_f})$ established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11, 1.7764)-approximation algorithm proposed by Jain et al., and later analyzed by Mahdian et al., we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.
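
    A minimal sketch, with made-up instance data, of the two cost components the bifactor notation refers to, plus a numerical check that the (1.6774, 1.3738) point quoted above lies on the limit curve $(\gamma_f, 1+2e^{-\gamma_f})$; this is only the bookkeeping around the guarantee, not the paper's algorithm.

        # Evaluate a UFL solution's facility cost F and connection cost C; a bifactor
        # (lambda_f, lambda_c) guarantee bounds the total by lambda_f*F_opt + lambda_c*C_opt.
        # Instance data below is made up.
        import math

        open_cost = {"a": 3.0, "b": 5.0}        # facility opening costs
        dist = {                                # client -> facility distances (metric)
            "c1": {"a": 1.0, "b": 4.0},
            "c2": {"a": 2.0, "b": 1.0},
        }

        def ufl_costs(opened):
            F = sum(open_cost[f] for f in opened)
            C = sum(min(d[f] for f in opened) for d in dist.values())
            return F, C

        print(ufl_costs({"a"}))                 # (3.0, 3.0): open only facility "a"

        # The (1.6774, 1.3738) point from the abstract sits on the limit curve
        # (gamma_f, 1 + 2*exp(-gamma_f)) up to rounding:
        print(1 + 2 * math.exp(-1.6774))        # ~1.3737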

    Efficiently Approximating Vertex Cover on Scale-Free Networks with Underlying Hyperbolic Geometry

    Finding a minimum vertex cover in a network is a fundamental NP-complete graph problem. One way to deal with its computational hardness is to trade the qualitative performance of an algorithm (allowing non-optimal outputs) for an improved running time. For the vertex cover problem, there is a gap between theory and practice when it comes to understanding this tradeoff. On the one hand, it is known that it is NP-hard to approximate a minimum vertex cover within a factor of √2. On the other hand, a simple greedy algorithm yields close to optimal approximations in practice. A promising approach towards understanding this discrepancy is to recognize the differences between theoretical worst-case instances and real-world networks. Following this direction, we close the gap between theory and practice by providing an algorithm that efficiently computes nearly optimal vertex cover approximations on hyperbolic random graphs, a network model that closely resembles real-world networks in terms of degree distribution, clustering, and the small-world property. More precisely, our algorithm computes a (1 + o(1))-approximation, asymptotically almost surely, and has a running time of O(m log(n)). The proposed algorithm is an adaptation of the successful greedy approach, enhanced with a procedure that improves on parts of the graph where greedy is not optimal. This makes it possible to introduce a parameter that can be used to tune the tradeoff between approximation performance and running time. Our empirical evaluation on real-world networks shows that this allows for improving over the near-optimal results of the greedy approach.
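
    For reference, the greedy baseline the abstract refers to is the standard highest-degree heuristic; the sketch below (with a made-up toy graph) is that baseline only, not the paper's enhanced algorithm.

        # Standard greedy vertex cover: repeatedly take a highest-degree vertex.
        # This is only the baseline the paper improves on; the graph is made up.
        def greedy_vertex_cover(edges):
            adj = {}
            for u, v in edges:                      # build adjacency sets
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
            cover = set()
            while any(adj.values()):                # while uncovered edges remain
                u = max(adj, key=lambda x: len(adj[x]))
                cover.add(u)
                for v in adj.pop(u):                # remove u and its incident edges
                    adj[v].discard(u)
            return cover

        edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)]
        print(greedy_vertex_cover(edges))           # a valid (not necessarily minimum) cover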

    Approximation algorithms for stochastic clustering

    We consider stochastic settings for clustering, and develop provably good approximation algorithms for a number of these notions. These algorithms yield better approximation ratios compared to the usual deterministic clustering setting. Additionally, they offer a number of advantages, including clustering which is fairer and has better long-term behavior for each user. In particular, they ensure that *every user* is guaranteed to get good service (on average). We also complement some of these with impossibility results.
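
    One way to read the "good service on average" guarantee is as a distribution over clusterings rather than a single clustering; the tiny sketch below (made-up points and solutions, not the paper's method) shows every client having a small expected connection cost even though no single solution serves everyone well.

        # Toy stochastic clustering: a lottery over two center sets so that EVERY
        # point has small expected connection cost. All numbers are made up.
        points = [0.0, 1.0, 9.0, 10.0]
        solutions = [({0.0, 9.0}, 0.5),    # (open centers, probability of this solution)
                     ({1.0, 10.0}, 0.5)]

        def expected_cost(p):
            # expectation, over the random solution, of p's distance to its nearest center
            return sum(prob * min(abs(p - c) for c in centers) for centers, prob in solutions)

        for p in points:
            print(p, expected_cost(p))     # every point has expected cost 0.5 here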

    Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners

    In real-world applications of education, an effective teacher adaptively chooses the next example to teach based on the learner's current state. However, most existing work in algorithmic machine teaching focuses on the batch setting, where adaptivity plays no role. In this paper, we study the case of teaching consistent, version space learners in an interactive setting. At any time step, the teacher provides an example, the learner performs an update, and the teacher observes the learner's new state. We highlight that adaptivity does not speed up the teaching process when considering existing models of version space learners, such as "worst-case" (the learner picks the next hypothesis randomly from the version space) and "preference-based" (the learner picks hypotheses according to some global preference). Inspired by human teaching, we propose a new model where the learner picks hypotheses according to some local preference defined by the current hypothesis. We show that our model exhibits several desirable properties, e.g., adaptivity plays a key role, and the learner's transitions over hypotheses are smooth/interpretable. We develop efficient teaching algorithms and demonstrate our results via simulation and user studies.
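
    The interactive protocol described above can be mocked up in a few lines; the sketch below uses 1-D threshold hypotheses, a local preference (jump to the nearest consistent hypothesis), and a hand-picked example sequence, purely as an illustration of the teacher-learner loop rather than the paper's teaching algorithms.

        # Toy interactive teaching loop for a version space learner over thresholds 0..10.
        # Teacher shows a labeled example; the learner discards inconsistent hypotheses and,
        # by "local preference", moves to the consistent hypothesis nearest its current one.
        target = 7                         # ground truth: label(x) = 1 iff x >= 7
        version_space = set(range(0, 11))  # candidate thresholds
        current = 0                        # learner's initial hypothesis

        def label(x, h):
            return int(x >= h)

        teaching_points = [10, 3, 8, 6, 7]             # hand-chosen teaching sequence
        for x in teaching_points:
            y = label(x, target)                        # teacher labels with the ground truth
            version_space = {h for h in version_space if label(x, h) == y}
            current = min(version_space, key=lambda h: abs(h - current))
            print(f"x={x}, y={y} -> hypothesis {current}, |version space| = {len(version_space)}")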

    The Traveling Salesman Problem: Low-Dimensionality Implies a Polynomial Time Approximation Scheme

    The Traveling Salesman Problem (TSP) is among the most famous NP-hard optimization problems. We design for this problem a randomized polynomial-time algorithm that computes a $(1+\epsilon)$-approximation to the optimal tour, for any fixed $\epsilon > 0$, in TSP instances that form an arbitrary metric space with bounded intrinsic dimension. The celebrated results of Arora (A-98) and Mitchell (M-99) prove that the above result holds in the special case of TSP in a fixed-dimensional Euclidean space. Thus, our algorithm demonstrates that the algorithmic tractability of metric TSP depends on the dimensionality of the space and not on its specific geometry. This result resolves a problem that has been open since the quasi-polynomial time algorithm of Talwar (T-04).
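
    For orientation only, the classic MST-doubling heuristic below gives the textbook 2-approximation for any metric TSP instance; it is emphatically not the paper's $(1+\epsilon)$-approximation for low-dimensional metrics, just a runnable baseline on a made-up point set.

        # Textbook 2-approximation for metric TSP: preorder walk of a minimum spanning
        # tree with shortcutting. The point set below is made up.
        import math

        points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]

        def d(a, b):
            return math.dist(a, b)

        def mst_edges(pts):
            # Prim's algorithm over the complete metric graph
            in_tree, edges = {0}, []
            while len(in_tree) < len(pts):
                u, v = min(((i, j) for i in in_tree for j in range(len(pts)) if j not in in_tree),
                           key=lambda e: d(pts[e[0]], pts[e[1]]))
                in_tree.add(v)
                edges.append((u, v))
            return edges

        def tour_via_mst(pts):
            children = {}
            for u, v in mst_edges(pts):
                children.setdefault(u, []).append(v)
            order, stack = [], [0]
            while stack:                      # preorder (depth-first) walk of the MST
                u = stack.pop()
                order.append(u)
                stack.extend(reversed(children.get(u, [])))
            return order

        tour = tour_via_mst(points)
        length = sum(d(points[tour[i]], points[tour[(i + 1) % len(tour)]]) for i in range(len(tour)))
        print(tour, round(length, 2))   # a tour of length at most twice the optimum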