2,149 research outputs found

    Greedy Randomized Adaptive Search and Variable Neighbourhood Search for the minimum labelling spanning tree problem

    This paper studies heuristics for the minimum labelling spanning tree (MLST) problem, whose purpose is to find a spanning tree that uses edges that are as similar as possible. Given an undirected, labelled, connected graph, the MLST problem seeks a spanning tree whose edges carry the smallest number of distinct labels; the problem is known to be NP-hard. A Greedy Randomized Adaptive Search Procedure (GRASP) and a Variable Neighbourhood Search (VNS) are proposed and compared with other algorithms recommended in the literature: the Modified Genetic Algorithm and the Pilot Method. Nonparametric statistical tests show that the heuristics based on GRASP and VNS outperform the other algorithms tested. Furthermore, a comparison with the results of an exact approach shows that the proposed heuristics quickly obtain optimal or near-optimal solutions.
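    As a concrete illustration of the kind of construction these heuristics rely on, the sketch below shows a GRASP-style randomized greedy that adds labels until the chosen edges connect the graph. The graph encoding, the alpha parameter, and the helper names are assumptions for illustration, not the authors' implementation.

    # Minimal GRASP-style construction sketch for the MLST problem:
    # repeatedly pick a label (from a restricted candidate list) that most
    # reduces the number of connected components of the chosen-label subgraph.
    import random

    def count_components(n, edges):
        """Number of connected components of a graph on n vertices (union-find)."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
        return comps

    def grasp_construct(n, labelled_edges, alpha=0.3, rng=random):
        """Greedily pick labels until the chosen edges connect all n vertices.

        labelled_edges: dict mapping label -> list of (u, v) edges.
        alpha: greediness parameter (0 = pure greedy, 1 = pure random).
        """
        chosen, edges = set(), []
        while count_components(n, edges) > 1:
            # How many components would remain after adding each unused label?
            scores = {
                lab: count_components(n, edges + labelled_edges[lab])
                for lab in labelled_edges if lab not in chosen
            }
            best, worst = min(scores.values()), max(scores.values())
            threshold = best + alpha * (worst - best)
            rcl = [lab for lab, s in scores.items() if s <= threshold]
            pick = rng.choice(rcl)          # randomized greedy choice
            chosen.add(pick)
            edges += labelled_edges[pick]
        return chosen  # any spanning tree of the chosen-label subgraph works

    # Toy instance: a 4-vertex graph where two labels suffice to connect it.
    edges_by_label = {
        'a': [(0, 1), (1, 2)],
        'b': [(2, 3)],
        'c': [(0, 3)],
    }
    print(grasp_construct(4, edges_by_label))   # e.g. {'a', 'b'} or {'a', 'c'}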

    Scalable Algorithms for Community Detection in Very Large Graphs


    Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications

    We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions. As in [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We show, both empirically and theoretically, that the per-iteration cost of our algorithms is much lower than that of [30], and that our algorithms can efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial-time computable lower bound on the minima. Finally, we show how a number of machine learning problems can be modelled as minimizing the difference between submodular functions, and we validate our algorithms experimentally on the problem of feature selection with submodular cost features. Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc. Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 201
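    The kind of monotone-descent procedure the abstract refers to can be sketched as follows: replace f by a modular upper bound and g by a modular lower bound, both tight at the current set, then minimize the resulting modular surrogate exactly. This is only a minimal sketch under those assumptions; the function names and the particular bounds used are illustrative, not the paper's exact algorithms.

    # Sketch of a modular-modular style descent for minimizing f(X) - g(X)
    # with f, g submodular. Each iteration solves a modular surrogate exactly,
    # so the true objective never increases.

    def modular_upper_weights(f, ground, X):
        """Per-element weights of a modular upper bound of f tight at X."""
        w = {}
        for j in ground:
            if j in X:
                w[j] = f(X) - f(X - {j})      # f(j | X \ {j})
            else:
                w[j] = f({j}) - f(set())      # f(j | empty set)
        return w

    def modular_lower_weights(g, ground, X):
        """Per-element weights of a modular lower bound of g tight at X
        (greedy gains along a permutation listing the elements of X first)."""
        order = sorted(ground, key=lambda j: j not in X)
        w, S = {}, set()
        for j in order:
            w[j] = g(S | {j}) - g(S)
            S.add(j)
        return w

    def diff_submodular_min(f, g, ground, X=frozenset(), max_iters=50):
        """Monotonically reduce f(X) - g(X); returns a local solution."""
        X = set(X)
        for _ in range(max_iters):
            wf = modular_upper_weights(f, ground, X)
            wg = modular_lower_weights(g, ground, X)
            Y = {j for j in ground if wf[j] - wg[j] < 0}  # exact min of surrogate
            if Y == X:
                break
            X = Y
        return X

    # Toy example (illustrative): f concave-of-cardinality, g modular; the
    # result of this local descent depends on the starting set.
    ground = {0, 1, 2, 3}
    f = lambda S: len(S) ** 0.5          # submodular
    g = lambda S: 0.6 * len(S)           # modular, hence submodular
    print(diff_submodular_min(f, g, ground, X=ground))
    # stays at {0, 1, 2, 3}, where f - g = -0.4 (the minimum for this toy instance)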

    Constrained Non-Monotone Submodular Maximization: Offline and Secretary Algorithms

    Constrained submodular maximization problems have long been studied, with near-optimal results known under a variety of constraints when the submodular function is monotone. The non-monotone case is less well understood: the first approximation algorithms, even for the unconstrained setting, were given by Feige et al. (FOCS '07). More recently, Lee et al. (STOC '09, APPROX '09) showed how to approximately maximize non-monotone submodular functions when the constraints are given by the intersection of p matroids; their algorithm is based on local-search procedures that consider p-swaps, so the running time may be n^Omega(p), making it polynomial only for constantly many matroids. In this paper, we give algorithms for p-independence systems (which generalize intersections of p matroids) whose running time is poly(n, p). Our algorithm essentially reduces the non-monotone maximization problem to multiple runs of the greedy algorithm previously used in the monotone case. The same idea of reusing algorithms for monotone functions also works for maximizing a submodular function subject to a knapsack constraint, yielding a simple greedy-based constant-factor approximation. With these simpler algorithms, we adapt our approach for constrained non-monotone submodular maximization to the (online) secretary setting, where elements arrive one at a time in random order and the algorithm must make an irrevocable decision about each element as it arrives. We give constant-factor approximations in this secretary setting when the algorithm is constrained by a uniform matroid or a partition matroid, and an O(log k) approximation when it is constrained by a general matroid of rank k. Comment: In the Proceedings of WINE 201
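    The monotone-case building block that the abstract says is reused is the classic greedy; a minimal sketch under a cardinality constraint (a uniform matroid of rank k) is given below. The toy coverage objective and function names are assumptions for illustration, not the paper's algorithm.

    # Classic greedy for submodular maximization under a cardinality constraint:
    # at each step add the element with the largest marginal gain.

    def greedy_submodular_max(f, ground, k):
        """Pick up to k elements, each time adding the largest marginal gain."""
        S = set()
        for _ in range(k):
            gains = {e: f(S | {e}) - f(S) for e in ground - S}
            best = max(gains, key=gains.get)
            if gains[best] <= 0:        # stop early if nothing helps
                break
            S.add(best)
        return S

    # Toy coverage instance: each element covers a subset of a universe,
    # f(S) = size of the union covered; f is monotone submodular.
    cover = {
        'a': {1, 2, 3},
        'b': {3, 4},
        'c': {4, 5, 6},
        'd': {1, 6},
    }
    f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
    print(greedy_submodular_max(f, set(cover), k=2))   # e.g. {'a', 'c'}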