3 research outputs found

    Multilevel Hypergraph Partitioning with Vertex Weights Revisited


    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice, the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution.

    With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach, and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar.

    In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa, both in terms of solution quality and running time.
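    To make the two objectives concrete, the following minimal Python sketch computes both for a given partition. It assumes a hypergraph represented as a list of hyperedges (each a list of vertex ids) and a partition given as a vertex-to-block list; the names are illustrative and not part of KaHyPar's API.

        # Cut-net and connectivity metrics for a k-way partition.
        # `hyperedges` is a list of hyperedges (lists of vertex ids);
        # `part[v]` is the block id of vertex v.

        def cut_net(hyperedges, part):
            """Number of hyperedges spanning more than one block."""
            return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

        def connectivity(hyperedges, part):
            """Sum over hyperedges e of (lambda(e) - 1), where lambda(e)
            is the number of distinct blocks that e touches."""
            return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

        # Tiny example: 4 vertices split into k = 2 blocks.
        edges = [[0, 1, 2], [2, 3], [0, 3]]
        part = [0, 0, 1, 1]
        print(cut_net(edges, part), connectivity(edges, part))  # prints: 2 2

    Note that for k = 2 the two metrics coincide; they only differ once a hyperedge can touch three or more blocks.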
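    The three-phase structure itself can be summarized in a short, hedged sketch; `coarsen_step`, `initial_partition`, and `refine` are hypothetical placeholders for the concrete components discussed in the dissertation, not KaHyPar's real interface.

        # Skeleton of the multi-level paradigm: coarsen, partition the
        # smallest instance, then uncontract level by level and refine.

        def multilevel_partition(hg, k, coarsen_step, initial_partition,
                                 refine, min_size=200):
            mementos = []
            while hg.num_vertices() > min_size:        # coarsening phase
                hg, memento = coarsen_step(hg)         # contract vertices
                mementos.append(memento)
            partition = initial_partition(hg, k)       # smallest hypergraph
            for memento in reversed(mementos):         # uncoarsening phase
                hg, partition = memento.uncontract(hg, partition)
                partition = refine(hg, partition, k)   # improve at each level
            return partition

    In an n-level scheme, each `coarsen_step` contracts only a single pair of vertices, so the loop runs roughly n times and consecutive levels differ by just one contraction.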

    Iterative Partitioning With Varying Node Weights

    The balanced partitioning problem divides the nodes of a [hyper]graph into groups of approximately equal weight (i.e., satisfying balance constraints) while minimizing the number of [hyper]edges that are cut (i.e., adjacent to nodes in different groups). Classic iterative algorithms use the pass paradigm [24], performing single-node moves [16, 13] to improve the initial solution. To satisfy particular balance constraints, it is usual to require that intermediate solutions satisfy the constraints. Hence, many possible moves are rejected. Hypergraph partitioning heuristics have traditionally been proposed for and evaluated on hypergraphs with unit node weights only. Nevertheless, many real-world applications entail varying node weights, e.g., VLSI circuit partitioning, where node weight typically represents cell area. Even when multilevel partitioning [3] is performed on unit-node-weight hypergraphs, intermediate clustered hypergraphs have varying node weights. Nothing prevents the use of conventional move-based heuristics when node weights vary, but their performance deteriorates, as shown by our analysis of partitioning results in [1]. We describe two effects that cause this deterioration and propose simple modifications of well-known algorithms to address them. Our baseline implementations achieve dramatic improvements over previously reported results (by factors of up to 25); explicitly addressing the described harmful effects provides further improvement. Overall results are superior to those of the PROP-REX est algorithm reported in [14], which addresses similar problems.
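    As an illustration of the balance problem described above, the sketch below shows why classic single-node moves degrade when node weights vary: since intermediate solutions must satisfy the balance constraint, any move of a heavy node that would overfill the target block is rejected outright, so heavy nodes are effectively frozen. This is a hypothetical Python fragment, not the paper's algorithm.

        # Reject a single-node move if it would violate the balance
        # constraint on the intermediate solution; with varying node
        # weights, heavy nodes are rejected far more often.

        def try_move(v, target, part, weight, block_weight, max_block_weight):
            if block_weight[target] + weight[v] > max_block_weight:
                return False                       # move rejected
            block_weight[part[v]] -= weight[v]     # update source block
            block_weight[target] += weight[v]      # update target block
            part[v] = target
            return True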