
    A nonmonotone GRASP

    A greedy randomized adaptive search procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative improvement until a locally optimal solution is found: during this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that uses a new "nonmonotone" strategy to explore the neighborhood of the current solution. We formally state the convergence of the nonmonotone local search to a locally optimal solution and illustrate the effectiveness of the resulting Nonmonotone GRASP on three classical hard combinatorial optimization problems: the maximum cut problem (MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the quadratic assignment problem (QAP).
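    To make the two-phase structure concrete, here is a minimal Python sketch of a GRASP loop with a nonmonotone acceptance rule. The function names (`construct`, `neighbors`, `cost`) and the windowed acceptance test are illustrative assumptions, not necessarily the paper's exact strategy: a neighbor is accepted if it beats the worst objective value in a short memory of recent iterates, rather than the current value.

```python
import random

def grasp(construct, neighbors, cost, iters=100, memory=10,
          max_steps=1000, seed=0):
    """Sketch of a GRASP loop with a nonmonotone local search (minimization)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        s = construct(rng)               # phase 1: greedy randomized construction
        recent = [cost(s)]               # window of recent objective values
        for _ in range(max_steps):       # phase 2: nonmonotone local search
            moved = False
            for t in neighbors(s):
                # nonmonotone acceptance: t only has to beat the worst
                # value in the recent window, not the current value
                if cost(t) < max(recent):
                    s, moved = t, True
                    recent = (recent + [cost(s)])[-memory:]
                    break
            if not moved:                # no acceptable neighbor: stop
                break
        if cost(s) < best_cost:          # multistart: keep best overall
            best, best_cost = s, cost(s)
    return best, best_cost
```

    Note that with memory=1 the acceptance test degenerates to ordinary monotone iterative improvement, so the window size controls how much non-improving exploration is allowed.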

    Greedy MAXCUT Algorithms and their Information Content

    MAXCUT defines a classical NP-hard problem for graph partitioning, and it serves as a typical case of the symmetric non-monotone Unconstrained Submodular Maximization (USM) problem. Applications of MAXCUT are abundant in machine learning, computer vision, and statistical physics. Greedy algorithms to approximately solve MAXCUT rely on greedy vertex labelling or on an edge contraction strategy. These algorithms have been studied by measuring their approximation ratios in the worst-case setting, but very little is known about their robustness to noise contamination of the input data in the average case. Adapting the framework of Approximation Set Coding, we present a method to exactly measure the cardinality of the algorithmic approximation sets of five greedy MAXCUT algorithms. Their information contents are explored for graph instances generated by two different noise models: the edge reversal model and the Gaussian edge weights model. The results provide insights into the robustness of different greedy heuristics and techniques for MAXCUT, which can be used for algorithm design of general USM problems. Comment: This is a longer version of the paper published in the 2015 IEEE Information Theory Workshop (ITW).
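    As a reference point for the vertex-labelling family studied here, the following is a minimal sketch of one standard greedy MAXCUT heuristic (the weight-dictionary representation and names are illustrative assumptions; the paper analyzes five such algorithms, not this one specifically). Each vertex joins the side that cuts more weight to its already-labelled neighbors.

```python
def greedy_maxcut(n, weight):
    """Greedy vertex labelling for MAXCUT (sketch).

    n      -- vertices are 0..n-1
    weight -- dict: frozenset({u, v}) -> nonnegative edge weight
    """
    side = {}                       # vertex -> 0 or 1
    for v in range(n):
        gain = [0.0, 0.0]           # cut weight gained if v joins side 0 / 1
        for u, su in side.items():
            # the edge (u, v) is cut only if v lands opposite u
            gain[1 - su] += weight.get(frozenset((u, v)), 0.0)
        side[v] = 0 if gain[0] >= gain[1] else 1
    return side

# Tiny example: a unit-weight triangle; the heuristic cuts 2 of 3 edges,
# which is optimal for this instance
w = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (0, 2)]}
print(greedy_maxcut(3, w))          # {0: 0, 1: 1, 2: 0}
```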

    Cut Tree Construction from Massive Graphs

    The construction of cut trees (also known as Gomory-Hu trees) for a given graph enables the minimum-cut size of the original graph to be obtained for any pair of vertices. Cut trees are a powerful back-end for graph management and mining, as they support various procedures related to the minimum cut, maximum flow, and connectivity. However, the crucial drawback with cut trees is the computational cost of their construction. In theory, a cut tree is built by applying a maximum-flow algorithm $n$ times, where $n$ is the number of vertices. Therefore, naive implementations of this approach result in cubic time complexity, which is obviously too slow for today's large-scale graphs. To address this issue, in the present study, we propose a new cut-tree construction algorithm tailored to real-world networks. Using a series of experiments, we demonstrate that the proposed algorithm is several orders of magnitude faster than previous algorithms and that it can construct cut trees for billion-scale graphs. Comment: Short version will appear at ICDM'1
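    For a sense of what a cut tree provides, here is a small usage sketch with networkx, whose gomory_hu_tree routine implements the classical construction via repeated maximum-flow computations (not the accelerated algorithm proposed in this paper). The minimum u-v cut value in G is the smallest edge weight on the u-v path in the tree.

```python
import networkx as nx

# A tiny capacitated graph
G = nx.Graph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=2)
G.add_edge("a", "c", capacity=1)

T = nx.gomory_hu_tree(G)            # cut tree via n-1 max-flow calls

def min_cut_value(T, u, v):
    """Minimum u-v cut in G, read off the cut tree T."""
    path = nx.shortest_path(T, u, v)
    return min(T[x][y]["weight"] for x, y in zip(path, path[1:]))

print(min_cut_value(T, "a", "c"))   # 3: separating {c} from {a, b}
```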

    Probabilistic Analysis of Optimization Problems on Generalized Random Shortest Path Metrics

    Simple heuristics often show a remarkable performance in practice for optimization problems. Worst-case analysis often falls short of explaining this performance. Because of this, "beyond worst-case analysis" of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean. Little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained by Bringmann et al. (Algorithmica, 2013), who used random shortest path metrics on complete graphs to analyze heuristics. The goal of this paper is to generalize these findings to non-complete graphs, especially Erdős–Rényi random graphs. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances, we prove that the greedy heuristic for the minimum-distance maximum matching problem, the nearest neighbor and insertion heuristics for the traveling salesman problem, and a trivial heuristic for the $k$-median problem all achieve a constant expected approximation ratio. Additionally, we show a polynomial upper bound for the expected number of iterations of the 2-opt heuristic for the traveling salesman problem. Comment: An extended abstract appeared in the proceedings of WALCOM 201
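    The metric construction and one of the analyzed heuristics are easy to state in code. Below is a minimal sketch; the Exp(1) weight distribution and the graph parameters are illustrative assumptions, and the sampled graph must be connected for the metric to be finite.

```python
import random
import networkx as nx

def random_shortest_path_metric(n, p, seed=0):
    """Random shortest path metric on an Erdős–Rényi graph G(n, p) (sketch).

    Draw i.i.d. random weights on the edges and define d(u, v) as the
    shortest-path distance; assumes the sampled graph is connected.
    """
    rng = random.Random(seed)
    G = nx.gnp_random_graph(n, p, seed=seed)
    for u, v in G.edges:
        G[u][v]["w"] = rng.expovariate(1.0)     # illustrative Exp(1) weights
    return dict(nx.all_pairs_dijkstra_path_length(G, weight="w"))

def nearest_neighbor_tour(d, start=0):
    """Nearest neighbor TSP heuristic on the metric d[u][v]."""
    unvisited = set(d) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited, key=lambda v: d[cur][v])   # closest unvisited city
        unvisited.remove(cur)
        tour.append(cur)
    return tour

d = random_shortest_path_metric(20, 0.5)
print(nearest_neighbor_tour(d))
```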

    Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning

    Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams (DDs) have brought a new perspective on obtaining upper and lower bounds that can be significantly better than classical bounding mechanisms, such as linear relaxations. It is well known that the quality of the bounds achieved through this flexible bounding method is highly reliant on the ordering of variables chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem. In this paper, we propose an innovative and generic approach based on deep reinforcement learning for obtaining an ordering that tightens the bounds obtained with relaxed and restricted DDs. We apply the approach to both the Maximum Independent Set Problem and the Maximum Cut Problem. Experimental results on synthetic instances show that the deep reinforcement learning approach, by achieving tighter objective function bounds, generally outperforms ordering methods commonly used in the literature when the distribution of instances is known. To the best of the authors' knowledge, this is the first paper to apply machine learning to directly improve relaxation bounds obtained by general-purpose bounding mechanisms for combinatorial optimization problems. Comment: Accepted and presented at AAAI'1
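    To illustrate why the variable ordering matters, here is a minimal sketch of a width-limited relaxed DD bound for the Maximum Independent Set Problem. The state representation and union-merge rule follow the standard DD literature, but the merge heuristic and all names are illustrative assumptions; the paper's contribution is learning the `order` argument with deep RL, not this particular construction.

```python
def misp_relaxed_dd_bound(adj, order, max_width=4):
    """Upper bound on the maximum independent set via a relaxed DD (sketch).

    adj       -- dict: vertex -> set of neighbors
    order     -- variable ordering; bound quality depends heavily on it
    max_width -- max states kept per layer; merging states relaxes the DD
    """
    layer = {frozenset(adj): 0}         # state (eligible vertices) -> best value
    for v in order:
        nxt = {}
        for state, val in layer.items():
            s0 = state - {v}            # transition: leave v out
            nxt[s0] = max(nxt.get(s0, -1), val)
            if v in state:              # transition: take v, if eligible
                s1 = state - ({v} | adj[v])
                nxt[s1] = max(nxt.get(s1, -1), val + 1)
        while len(nxt) > max_width:     # relax: merge the two worst states
            (a, va), (b, vb) = sorted(nxt.items(), key=lambda kv: kv[1])[:2]
            del nxt[a], nxt[b]
            m = a | b                   # union only enlarges eligible sets
            nxt[m] = max(nxt.get(m, -1), va, vb)
        layer = nxt
    return max(layer.values())

# 5-cycle: the true optimum is 2, and the relaxed bound is always >= 2
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(misp_relaxed_dd_bound(adj, order=list(range(5))))
```

    A restricted DD is the mirror image of this sketch: instead of merging surplus states, it discards them, which yields a feasible solution and hence a lower bound.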

    Exact Localisations of Feedback Sets

    The feedback arc (vertex) set problem, abbreviated FASP (FVSP), is to transform a given multi-digraph $G=(V,E)$ into an acyclic graph by deleting as few arcs (vertices) as possible. Due to the results of Richard M. Karp in 1972, it is one of the classic NP-complete problems. An important contribution of this paper is that the subgraphs $G_{\mathrm{el}}(e)$ and $G_{\mathrm{si}}(e)$ of all elementary cycles or simple cycles running through some arc $e \in E$ can be computed in $\mathcal{O}(|E|^2)$ and $\mathcal{O}(|E|^4)$, respectively. We use this fact and introduce the notions of the essential minor and isolated cycles, which yield a priori problem-size reductions and, in the special case of so-called resolvable graphs, an exact solution in $\mathcal{O}(|V||E|^3)$. We show that weighted versions of the FASP and FVSP possess a Bellman decomposition, which yields exact solutions using a dynamic programming technique in times $\mathcal{O}\big(2^{m}|E|^4\log(|V|)\big)$ and $\mathcal{O}\big(2^{n}\Delta(G)^4|V|^4\log(|E|)\big)$, where $m \leq |E|-|V|+1$ and $n \leq (\Delta(G)-1)|V|-|E|+1$, respectively. The parameters $m$ and $n$ can be computed in $\mathcal{O}(|E|^3)$ and $\mathcal{O}(\Delta(G)^3|V|^3)$, respectively, and denote the maximal dimension of the cycle space of all appearing meta graphs, decoding the intersection behavior of the cycles. Consequently, $m$ and $n$ equal zero if all meta graphs are trees. Moreover, we deliver several heuristics and discuss how to control their deviation from the optimum. Summarizing, the presented results allow us to suggest a strategy for an implementation of a fast and accurate FASP/FVSP solver.
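    For orientation, the object $G_{\mathrm{el}}(e)$ admits a naive description: an elementary cycle through an arc $e=(u,v)$ is exactly $e$ plus a simple $v$-$u$ path. The sketch below (using networkx; names are illustrative) makes that definition concrete, but it enumerates paths in exponential time in the worst case, whereas the paper computes the subgraph of all such cycles in $\mathcal{O}(|E|^2)$ without enumerating them.

```python
import networkx as nx

def elementary_cycles_through_arc(G, e):
    """Yield every elementary cycle of digraph G containing arc e (naive sketch).

    An elementary cycle through e = (u, v) is the arc e followed by a
    simple path from v back to u, so we enumerate those paths.
    """
    u, v = e
    for path in nx.all_simple_paths(G, source=v, target=u):
        yield [u] + path            # cycle: u -(e)-> v -> ... -> u

G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (1, 0)])
for cycle in elementary_cycles_through_arc(G, (0, 1)):
    print(cycle)                    # [0, 1, 0] and [0, 1, 2, 0]
```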

    How to make a greedy heuristic for the asymmetric traveling salesman problem competitive

    It is widely confirmed by many computational experiments that greedy-type heuristics for the Traveling Salesman Problem (TSP) produce rather poor solutions, except for the Euclidean TSP. The selection of arcs to be included by a greedy heuristic is usually done on the basis of cost values. We propose to use the upper tolerances of an optimal solution to one of the relaxations of the Asymmetric TSP (ATSP) to guide the selection of an arc to be included in the final greedy solution. Even though it takes time to calculate tolerances, our computational experiments for a wide range of ATSP instances show that tolerance-based greedy heuristics are much more accurate and faster than previously reported greedy-type algorithms.
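    A standard relaxation used for this purpose is the Assignment Problem (AP), and the upper tolerance of an arc in the optimal AP solution is the amount by which the optimum deteriorates when that arc is forbidden. The sketch below (scipy-based; the forbid-and-resolve loop is the naive way to obtain tolerances, and all names are illustrative assumptions) computes these values; a tolerance-based greedy then ranks candidate arcs by tolerance instead of by raw cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_upper_tolerances(C):
    """Upper tolerances of the arcs in an optimal assignment (naive sketch).

    C -- ATSP cost matrix with a large penalty on the diagonal (no self-loops).
    Returns {(i, j): tolerance} for every arc in the optimal AP solution.
    """
    rows, cols = linear_sum_assignment(C)   # optimal AP relaxation
    base = C[rows, cols].sum()
    tol = {}
    for i, j in zip(rows, cols):
        D = C.copy()
        D[i, j] = C.max() * len(C) + 1      # effectively forbid arc (i, j)
        r2, c2 = linear_sum_assignment(D)   # re-solve without that arc
        tol[(i, j)] = D[r2, c2].sum() - base
    return tol

rng = np.random.default_rng(0)
C = rng.integers(1, 100, size=(6, 6)).astype(float)
np.fill_diagonal(C, 1e9)                    # forbid self-loops
print(ap_upper_tolerances(C))
```

    Intuitively, an arc with a large upper tolerance is one the relaxation "insists" on, so preferring such arcs is a better-informed greedy criterion than the arc costs alone.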