
    Finding Near-Optimal Independent Sets at Scale

    The independent set problem is NP-hard and particularly difficult to solve in large sparse graphs. In this work, we develop an advanced evolutionary algorithm that incorporates kernelization techniques to compute large independent sets in huge sparse networks. A recent exact algorithm has shown that large networks can be solved exactly by employing a branch-and-reduce technique that recursively kernelizes the graph and performs branching. However, one major drawback of this algorithm is that, for huge graphs, branching can still take exponential time. To avoid this problem, we recursively choose vertices that are likely to be in a large independent set (using an evolutionary approach) and then further kernelize the graph. We show that identifying and removing vertices likely to be in large independent sets opens up the reduction space---which not only speeds up the computation of large independent sets drastically, but also enables us to compute high-quality independent sets on much larger instances than previously reported in the literature.
    Comment: 17 pages, 1 figure, 8 tables. arXiv admin note: text overlap with arXiv:1502.0168
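
A minimal sketch of the reduce-and-peel loop the abstract describes, in Python. The degree-0 and degree-1 reductions are standard folklore rules; the greedy min-degree pick stands in for the paper's evolutionary vertex selection, so everything here is illustrative rather than the authors' implementation.

```python
# Sketch: alternate kernelization with peeling off "likely" vertices.
# The min-degree choice below is a stand-in for the paper's evolutionary
# selection (an assumption, not the authors' method).

def remove_vertex(adj, v):
    for u in adj.pop(v, ()):
        adj[u].discard(v)

def kernelize(adj, solution):
    """Apply degree-0 and degree-1 reductions until none fire."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) == 0:          # isolated vertex: always take it
                solution.add(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:        # degree-1: taking v is always safe
                (u,) = adj[v]
                solution.add(v)
                remove_vertex(adj, u)
                remove_vertex(adj, v)
                changed = True

def reduce_and_peel(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    solution = set()
    while adj:
        kernelize(adj, solution)
        if not adj:
            break
        # Stand-in for evolutionary selection: pick a min-degree vertex,
        # add it to the solution, and delete its closed neighborhood.
        v = min(adj, key=lambda x: len(adj[x]))
        solution.add(v)
        for u in list(adj[v]):
            remove_vertex(adj, u)
        remove_vertex(adj, v)
    return solution

# Example: a path a-b-c-d plus an isolated vertex e.
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}, "e": set()}
print(reduce_and_peel(graph))  # {'a', 'c', 'e'} (order may vary)
```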

    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms---or by kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel. Current algorithms are either slow but produce a small kernel, or fast but give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two further accelerations: dependency checking, which prunes reductions that cannot be applied, and reduction tracking, which allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than those of the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances.
    Comment: Extended version
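
The dependency-checking idea lends itself to a short sketch: instead of rescanning every vertex after each successful reduction, keep a worklist of vertices whose neighborhood changed, since only those can newly satisfy a rule. The sketch below applies this to the same simple degree-0/degree-1 rules as above; all names are ours, and the paper's actual rule set is much richer.

```python
# Sketch of dependency checking: only vertices whose neighborhood changed
# are re-examined, pruning reduction attempts that cannot fire.

from collections import deque

def kernelize_with_dependencies(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    solution = set()
    queue = deque(adj)            # initially, every vertex is a candidate
    queued = set(adj)

    def touch(v):                 # re-enqueue a vertex whose degree changed
        if v in adj and v not in queued:
            queue.append(v)
            queued.add(v)

    def delete(v):
        for u in adj.pop(v, ()):
            adj[u].discard(v)
            touch(u)

    while queue:
        v = queue.popleft()
        queued.discard(v)
        if v not in adj:
            continue
        if len(adj[v]) == 0:      # isolated vertex rule
            solution.add(v)
            delete(v)
        elif len(adj[v]) == 1:    # degree-1 (pendant) rule
            solution.add(v)
            (u,) = adj[v]
            delete(u)
            delete(v)
    return solution, adj          # adj is the remaining kernel

sol, kernel = kernelize_with_dependencies(
    {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}})
print(sol, kernel)                # {'a', 'c'} {}  (the path fully reduces)
```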

    Doing More for Less -- Cache-Aware Parallel Contraction Hierarchies Preprocessing

    Contraction Hierarchies is a successful speedup technique for Dijkstra's seminal shortest-path algorithm that offers a convenient trade-off between preprocessing and query times. We investigate a shared-memory parallel implementation that uses O(n+m) space for storing the graph and O(1) space for each core during preprocessing. The presented data structures and algorithms consequently exploit cache locality and thus exhibit competitive preprocessing times. The presented implementation is especially suitable for preprocessing graphs of planet-wide scale in practice. Our experiments also show that data structures that are optimal in the PRAM model can be beaten in practice by exploiting memory cache hierarchies.
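
For orientation, here is a minimal, sequential sketch of the core preprocessing operation: contracting nodes in some order and inserting a shortcut whenever no "witness" path certifies it redundant. The paper's contribution is the cache-aware, low-extra-space parallel organization of this process, which the sketch does not attempt to reproduce; a plain Dijkstra serves as the witness search, and all names are ours.

```python
# Sketch of CH node contraction with a Dijkstra-based witness search.

import heapq

def witness_dist(adj, source, target, skip, cutoff):
    """Shortest source->target distance avoiding `skip`, capped at cutoff."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue
        for v, w in adj[u].items():
            if v == skip:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def contract(adj, order):
    """Contract nodes in `order`, returning the inserted shortcut edges."""
    shortcuts = []
    for v in order:
        nbrs = list(adj[v].items())
        for i, (u, wu) in enumerate(nbrs):
            for x, wx in nbrs[i + 1:]:
                via = wu + wx
                # Insert a shortcut only if no witness path is as short.
                if witness_dist(adj, u, x, v, via) > via:
                    adj[u][x] = min(adj[u].get(x, float("inf")), via)
                    adj[x][u] = adj[u][x]
                    shortcuts.append((u, x, adj[u][x]))
        for u, _ in nbrs:         # remove v from the remaining graph
            del adj[u][v]
        del adj[v]
    return shortcuts

adj = {1: {2: 1.0}, 2: {1: 1.0, 3: 1.0}, 3: {2: 1.0}}
print(contract(adj, [2]))         # [(1, 3, 2.0)]: shortcut replacing node 2
```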

    A fast and tight heuristic for A∗ in road networks

    We study exact, efficient and practical algorithms for route planning in large road networks. Routing applications often require integrating the current traffic situation, planning ahead with traffic predictions for the future, respecting forbidden turns, and many other features depending on the exact application. While Dijkstra's algorithm can be used to solve these problems, it is too slow for many applications. A* is a classical approach to accelerating Dijkstra's algorithm, and it can support many extended scenarios without much additional implementation complexity. However, A*'s performance depends on the availability of a good heuristic that estimates distances, and computing tight distance estimates is a challenge in its own right. On road networks, shortest paths can also be computed quickly using hierarchical speedup techniques. These achieve speed and exactness but sacrifice A*'s flexibility, and extending them to certain practical applications can be hard. In this paper, we present an algorithm to efficiently extract distance estimates for A* from Contraction Hierarchies (CH), a hierarchical technique. We call our heuristic CH-Potentials. Our approach allows decoupling the supported extensions from the hierarchical speed-up technique. Additionally, we describe A* optimizations to accelerate the processing of low-degree nodes, which occur frequently in road networks.
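
The interface this enables is easy to show: A* only needs the distance estimate (potential) as a plug-in, and CH-Potentials supplies it from a Contraction Hierarchy. The sketch below shows plain A* with that plug-in point, using the trivial zero potential (which reduces A* to Dijkstra) since the CH machinery itself is out of scope here; this is our illustration, not the paper's code.

```python
# Sketch: A* with the heuristic h as a plug-in. CH-Potentials would supply
# h(v) as a lower bound on the distance from v to the target, computed from
# a Contraction Hierarchy; here h defaults to the trivial zero potential.

import heapq

def a_star(adj, source, target, h=lambda v: 0.0):
    """A* search; h must be a lower bound on the distance to `target`."""
    dist = {source: 0.0}
    pq = [(h(source), source)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == target:
            return dist[u]
        if f > dist[u] + h(u):            # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")

adj = {"s": [("a", 2.0), ("b", 5.0)], "a": [("t", 2.0)],
       "b": [("t", 1.0)], "t": []}
print(a_star(adj, "s", "t"))              # 4.0, via s -> a -> t
```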

    Recent Advances in Graph Partitioning

    We survey recent trends in practical algorithms for balanced graph partitioning, together with applications and future research directions.

    Dynamic and stochastic routing for multimodal transportation systems

    The authors present a case study of a multimodal routing system that takes into account both dynamic and stochastic travel time information. A multimodal network model is presented that makes it possible to model the travel time information of each transportation mode differently. This travel time information can be either static or dynamic, and either deterministic or stochastic. Next, a Dijkstra-based routing algorithm is presented that deals with this variety of travel time information in a uniform way. This research focuses on a practical implementation of the system, which means that a number of assumptions were made, e.g., about how the stochastic distributions are modelled and compared. A trade-off had to be made between the performance of the system and the accuracy of the results. Experiments have shown that the proposed system produces realistic routes in a short amount of time. It is demonstrated that routing dynamically indeed results in a travel time gain compared to routing statically. By making use of the additional stochastic travel time information, even better (i.e. faster) and more reliable routes can be calculated. Moreover, it is shown that routing in the multimodal network can have advantages over routing in a unimodal network, especially during rush hours.
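
A sketch can make the "uniform way" concrete: if each edge's travel time is a callable of the departure time, then static costs, dynamic (time-dependent) costs, and expected values of stochastic distributions all fit the same Dijkstra loop. The model below, including the FIFO assumption and all names, is our illustration, not the paper's exact formulation.

```python
# Sketch: time-dependent Dijkstra where every edge cost is a function of the
# departure time. A constant function models a static edge; the examples
# below model a rush-hour road link and a scheduled train (assumed FIFO).

import heapq

def earliest_arrival(adj, source, target, depart):
    best = {source: depart}
    pq = [(depart, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, travel_time in adj[u]:
            arrival = t + travel_time(t)      # cost depends on when we leave
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(pq, (arrival, v))
    return float("inf")

rush_hour = lambda t: 10.0 if 8 * 60 <= t < 9 * 60 else 4.0   # road link
train = lambda t: (t // 30 + 1) * 30 - t + 20.0               # wait + ride
adj = {"home": [("station", rush_hour)], "station": [("work", train)]}
print(earliest_arrival(adj, "home", "work", depart=8 * 60))   # 530.0
```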

    Targeted Branching for the Maximum Independent Set Problem

    Finding a maximum independent set is a fundamental NP-hard problem that arises in many real-world applications. Given an unweighted graph, this problem asks for a maximum-cardinality set of pairwise non-adjacent vertices. In recent years, some of the most successful algorithms for solving this problem have been based on the branch-and-bound or branch-and-reduce paradigms. In particular, branch-and-reduce algorithms, which combine branch-and-bound with reduction rules, have achieved substantial results, solving many previously infeasible real-world instances. These results were in large part achieved by developing new, more practical reduction rules. However, other components that have been shown to have a significant impact on the performance of these algorithms have not received as much attention. One of these is the branching strategy, which determines which vertex is included or excluded in a potential solution. Even now, the most commonly used strategy selects vertices solely based on their degree and does not take into account other factors that contribute to the performance of the algorithm. In this work, we develop and evaluate several novel branching strategies for both branch-and-bound and branch-and-reduce algorithms. Our strategies follow one of two approaches motivated by existing research: they either (1) aim to decompose the graph into two or more connected components, which can then be solved independently, or (2) try to remove vertices that hinder the application of a reduction rule, which can lead to smaller graphs. Our experimental evaluation on a large set of real-world instances indicates that our strategies improve the performance of the state-of-the-art branch-and-reduce algorithm by Akiba and Iwata. More specifically, our reduction-based packing branching rule outperforms the default strategy of selecting a vertex of highest degree on 65% of all instances tested. Furthermore, our decomposition-based strategy based on edge cuts achieves a speedup of 2.29 on sparse networks (1.22 on all instances).
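
To illustrate where a branching strategy enters, here is a bare branching recursion for the maximum independent set problem with the strategy as a plug-in (bounding and reductions are omitted for brevity). The max-degree rule is the classical default the paper compares against; the paper's packing- and cut-based rules would slot into the same parameter. This is an illustration, not Akiba and Iwata's solver.

```python
# Sketch: branching on a chosen vertex v, either including it (and deleting
# its closed neighborhood) or excluding it. The `strategy` parameter is the
# component the paper's new rules replace.

def max_degree(adj):
    return max(adj, key=lambda v: len(adj[v]))   # classical default rule

def mis_size(adj, strategy=max_degree):
    """Size of a maximum independent set via exhaustive branching."""
    if not adj:
        return 0
    v = strategy(adj)
    # Branch 1: include v, deleting its closed neighborhood N[v].
    removed = adj[v] | {v}
    sub_in = {u: adj[u] - removed for u in adj if u not in removed}
    # Branch 2: exclude v, deleting only v itself.
    sub_out = {u: adj[u] - {v} for u in adj if u != v}
    return max(1 + mis_size(sub_in, strategy), mis_size(sub_out, strategy))

# A 4-cycle: the maximum independent set has size 2.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(mis_size(cycle))  # 2
```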