
    Finding Near-Optimal Independent Sets at Scale

    The independent set problem is NP-hard and particularly difficult to solve in large sparse graphs. In this work, we develop an advanced evolutionary algorithm, which incorporates kernelization techniques to compute large independent sets in huge sparse networks. A recent exact algorithm has shown that large networks can be solved exactly by employing a branch-and-reduce technique that recursively kernelizes the graph and performs branching. However, one major drawback of that algorithm is that, for huge graphs, branching can still take exponential time. To avoid this problem, we recursively choose vertices that are likely to be in a large independent set (using an evolutionary approach), and then further kernelize the graph. We show that identifying and removing vertices likely to be in large independent sets opens up the reduction space, which not only speeds up the computation of large independent sets drastically, but also enables us to compute high-quality independent sets on much larger instances than previously reported in the literature. (Comment: 17 pages, 1 figure, 8 tables. arXiv admin note: text overlap with arXiv:1502.0168)
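
    As a rough illustration of the reduce-and-peel scheme the abstract describes, the following sketch alternates simple kernelization with peeling. It is not the authors' algorithm: the evolutionary selection of vertices likely to be in a large independent set is replaced by a plain minimum-degree rule, and only the two always-safe reductions (degree zero and degree one) are applied.

        def reduce_and_peel_mis(adj):
            # adj: dict mapping each vertex to the set of its neighbours.
            adj = {v: set(ns) for v, ns in adj.items()}
            solution = set()

            def remove(v):
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]

            while adj:
                # Kernelization: degree-0 and degree-1 reductions are always safe.
                reduced = True
                while reduced:
                    reduced = False
                    for v in list(adj):
                        if v not in adj:
                            continue
                        if len(adj[v]) == 0:        # isolated vertex: take it
                            solution.add(v)
                            remove(v)
                            reduced = True
                        elif len(adj[v]) == 1:      # degree-1 vertex: take it,
                            u = next(iter(adj[v]))  # drop its only neighbour
                            solution.add(v)
                            remove(v)
                            remove(u)
                            reduced = True
                if not adj:
                    break
                # Peeling: fix a vertex deemed likely to be in a large
                # independent set (minimum degree here, standing in for the
                # evolutionary choice) and delete its neighbourhood, which
                # opens up further reductions.
                v = min(adj, key=lambda x: len(adj[x]))
                solution.add(v)
                for u in list(adj[v]):
                    remove(u)
                remove(v)
            return solution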

    A conceptual heuristic for solving the maximum clique problem

    The maximum clique problem (MCP) is the problem of finding the clique of maximum cardinality in a graph. It has been intensively studied for years by computer scientists and mathematicians, has many practical applications, and often forms the computational bottleneck in those applications. Due to the complexity of the problem, exact solutions can be very computationally expensive. In the scope of this thesis, a polynomial-time heuristic based on Formal Concept Analysis has been developed. The heuristic has three variations that apply different algorithm design techniques: a greedy algorithm, a backtracking algorithm, and a branch-and-bound algorithm. The parameters of the branch-and-bound algorithm are tuned in a training phase, and the tuned parameters are tested on the BHOSLIB benchmark graphs. The developed approach is tested on all instances of the DIMACS benchmark graphs, and the results show that the maximum clique is obtained for 70% of the graph instances. The developed approach is also compared to several of the most effective recent algorithms. (Funded by NPRP grant #06-1220-1-233 from the Qatar National Research Fund, a member of Qatar Foundation.)
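
    The greedy variation can be pictured with the following generic sketch, which grows a clique from every start vertex by repeatedly adding the candidate with the most neighbours among the remaining candidates. It runs in polynomial time like the thesis's heuristic, but the Formal Concept Analysis machinery itself is not reproduced here.

        def greedy_clique(adj):
            # adj: dict mapping each vertex to the set of its neighbours.
            best = set()
            for start in adj:                  # one greedy run per start vertex
                clique = {start}
                cand = set(adj[start])         # vertices adjacent to the clique
                while cand:
                    # Add the candidate with most neighbours among candidates.
                    v = max(cand, key=lambda x: len(adj[x] & cand))
                    clique.add(v)
                    cand &= adj[v]             # keep only common neighbours
                if len(clique) > len(best):
                    best = clique
            return best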

    Dynamic Local Search for the Maximum Clique Problem

    In this paper, we introduce DLS-MC, a new stochastic local search algorithm for the maximum clique problem. DLS-MC alternates between phases of iterative improvement, during which suitable vertices are added to the current clique, and plateau search, during which vertices of the current clique are swapped with vertices not contained in the current clique. The selection of vertices is based solely on vertex penalties that are dynamically adjusted during the search, and a perturbation mechanism is used to overcome search stagnation. The behaviour of DLS-MC is controlled by a single parameter, the penalty delay, which determines how frequently vertex penalties are reduced. We show empirically that DLS-MC achieves substantial performance improvements over state-of-the-art algorithms for the maximum clique problem across a large range of the commonly used DIMACS benchmark instances.
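
    The description above translates into a compact search loop. The following is a schematic rendering under that description, not the authors' implementation; details such as the exact perturbation mechanism and incremental candidate bookkeeping are simplified.

        import random

        def dls_mc_sketch(adj, max_steps=100000, penalty_delay=2, seed=0):
            # adj: dict mapping each vertex to the set of its neighbours.
            rng = random.Random(seed)
            penalty = {v: 0 for v in adj}
            clique = {rng.choice(list(adj))}
            best = set(clique)
            updates = 0
            for _ in range(max_steps):
                # Iterative improvement: vertices adjacent to the whole clique.
                add = [v for v in adj if v not in clique and clique <= adj[v]]
                if add:
                    clique.add(min(add, key=lambda x: (penalty[x], rng.random())))
                else:
                    # Plateau search: swap in a vertex adjacent to all but
                    # one clique member; penalise the vertex swapped out.
                    swaps = [(v, next(iter(clique - adj[v])))
                             for v in adj
                             if v not in clique and len(clique - adj[v]) == 1]
                    if swaps:
                        v, u = min(swaps,
                                   key=lambda s: (penalty[s[0]], rng.random()))
                        clique.discard(u)
                        clique.add(v)
                        penalty[u] += 1
                    else:
                        # Stagnation: penalise the clique and perturb.
                        for u in clique:
                            penalty[u] += 1
                        clique = {rng.choice(list(adj))}
                    updates += 1
                    # The penalty-delay parameter controls how often
                    # penalties decay back towards zero.
                    if updates % penalty_delay == 0:
                        for u in penalty:
                            penalty[u] = max(0, penalty[u] - 1)
                if len(clique) > len(best):
                    best = set(clique)
            return best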

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice, the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and about as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa, both in terms of solution quality and running time.
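
    For orientation, here is a toy, self-contained rendering of the three-phase multi-level skeleton for k = 2 and the cut-net objective, contracting one vertex pair per level in loose analogy to the n-level idea. Every heuristic in it is a deliberate placeholder; nothing here reflects KaHyPar's actual coarsening, initial partitioning, or FM refinement algorithms.

        import random

        def multilevel_bipartition(n, hyperedges, seed=0):
            # n: number of vertices; hyperedges: list of sets of vertices.
            rng = random.Random(seed)
            edges = [frozenset(e) for e in hyperedges]

            def cut(part):                      # cut-net objective
                return sum(1 for e in edges
                           if len({part[v] for v in e}) > 1)

            # Coarsening: contract one pair sharing a hyperedge per level.
            rep = list(range(n))                # union-find style parents
            def find(v):
                while rep[v] != v:
                    v = rep[v]
                return v
            contractions = []
            coarse = set(range(n))
            while len(coarse) > 4:
                e = next((e for e in edges
                          if len({find(v) for v in e}) >= 2), None)
                if e is None:
                    break
                a, b = sorted({find(v) for v in e})[:2]
                rep[b] = a                      # merge b into a
                contractions.append(b)
                coarse.discard(b)

            # Initial partitioning: random balanced split of the coarse graph.
            vs = sorted(coarse)
            rng.shuffle(vs)
            block = {v: (0 if i < len(vs) // 2 else 1)
                     for i, v in enumerate(vs)}
            part = {v: block[find(v)] for v in range(n)}

            # Uncoarsening + refinement: reveal one vertex per level and
            # keep a move only if it improves the cut (a crude stand-in
            # for FM-style local search).
            for b in reversed(contractions):
                before = cut(part)
                part[b] ^= 1
                if cut(part) >= before:
                    part[b] ^= 1                # revert non-improving move
            return part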

    Algorithms for Computing Edge-Connected Subgraphs

    This thesis concentrates on algorithms for finding all maximal k-edge-connected components in a given graph G = (V, E), where V and E denote the set of vertices and the set of edges, respectively; these components are then used to develop a scale-reduction procedure for the maximum clique problem. The proposed scale-reduction approach is based on the observation that a subset C of k + 1 vertices is a clique if and only if at least k edges must be removed to disconnect the corresponding induced subgraph G[C] (that is, G[C] is k-edge-connected). Thus, any clique consisting of k + 1 or more vertices must be a subset of a single k-edge-connected component of the graph. This motivates us to look for subgraphs with edge connectivity at least k in a given graph G, for an appropriately selected value of k. To find all maximal k-edge-connected subgraphs, we employ a method previously proposed in the literature that is based on the concept of the auxiliary graph. This method processes the input graph G to construct a tree-like structure A that shares the vertex set V with G and stores the edge connectivity between each pair of vertices of G; from it, the maximal k-edge-connected components can be obtained for every possible k. With the information from the auxiliary graph, we implement the scale-reduction procedure for the maximum clique problem on sparse graphs, based on the k-edge-connected subgraphs with appropriately selected values of k. Furthermore, we performed computational experiments to evaluate the performance of the proposed scale reduction and compared it to the previously used k-core method. The comparison demonstrates the advantage of the scale reduction based on k-edge-connected subgraphs. Even though our scale-reduction algorithm has higher time complexity, it is still of interest and deserves further investigation.
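
    The reduction principle can be demonstrated with the simpler k-core baseline that the experiments compare against: every member of a clique on k + 1 vertices has at least k neighbours, so vertices that peel out of the k-core can be discarded when searching for such cliques. The thesis's k-edge-connected reduction is strictly stronger, but the auxiliary-graph construction behind it is not reproduced in this short sketch.

        def k_core_reduction(adj, k):
            # adj: dict mapping each vertex to the set of its neighbours.
            # Returns the subgraph induced by the k-core: a vertex of
            # degree < k can never lie in a clique on k + 1 vertices.
            adj = {v: set(ns) for v, ns in adj.items()}
            changed = True
            while changed:
                changed = False
                for v in [v for v in adj if len(adj[v]) < k]:
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    changed = True
            return adj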

    Engineering an Efficient Branch-and-Reduce Algorithm for the Minimum Vertex Cover Problem

    The Minimum Vertex Cover problem asks us to find a minimum set of vertices in a graph such that each edge of the graph is incident to at least one vertex of the set. It is a classical NP-hard problem, and in the past researchers have suggested both exact algorithms and heuristic approaches to tackle it. In this thesis, we improve Akiba and Iwata's branch-and-reduce algorithm, one of the fastest exact algorithms in the field, by developing three techniques: dependency checking, caching of solutions, and feeding an initial high-quality solution to the algorithm to accelerate its performance. We achieve speedups of up to 3.5 on graphs where the algorithm of Akiba and Iwata is slow. On one such graph, the Stanford web graph, our techniques are especially effective, reducing the runtime from 16 hours to only 4.6 hours.
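
    To make the setting concrete, the sketch below shows a minimal branch-and-reduce skeleton for Minimum Vertex Cover, illustrating only the third technique: an initial solution (a naive greedy cover here, standing in for the thesis's high-quality heuristic) whose size prunes the search. Akiba and Iwata's full reduction suite, as well as the dependency-checking and caching techniques, are not reproduced.

        def min_vertex_cover(adj):
            # adj: dict mapping each vertex to the set of its neighbours.
            # Initial greedy cover: its size is an upper bound that lets
            # the branch-and-reduce search prune whole subtrees.
            g = {v: set(ns) for v, ns in adj.items()}
            greedy = set()
            while any(g.values()):
                v = max(g, key=lambda x: len(g[x]))  # highest-degree vertex
                for u in g[v]:
                    g[u].discard(v)
                g[v] = set()
                greedy.add(v)
            best = [greedy]

            def remove(h, vs):                  # delete the vertices vs from h
                return {v: ns - vs for v, ns in h.items() if v not in vs}

            def solve(h, cover):
                if len(cover) >= len(best[0]):  # bound: cannot improve best
                    return
                # Reduction: a degree-1 vertex is covered optimally by
                # taking its single neighbour into the cover.
                v1 = next((v for v in h if len(h[v]) == 1), None)
                if v1 is not None:
                    u = next(iter(h[v1]))
                    solve(remove(h, {u}), cover | {u})
                    return
                e = next(((v, u) for v in h for u in h[v]), None)
                if e is None:                   # no edges left: valid cover
                    best[0] = cover
                    return
                v = e[0]
                # Branch: either v is in the cover, or all of N(v) is.
                solve(remove(h, {v}), cover | {v})
                solve(remove(h, set(h[v])), cover | set(h[v]))

            solve({v: set(ns) for v, ns in adj.items()}, set())
            return best[0]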

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose of this thesis is to follow the data along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and information representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, additional themes are discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and the Topology Optimization of lightweight concrete structures.