6 research outputs found

    Memetic Multilevel Hypergraph Partitioning

    Hypergraph partitioning has a wide range of important applications, such as VLSI design and scientific computing. With a focus on solution quality, we develop the first multilevel memetic algorithm to tackle the problem. Key components of our contribution are new, effective multilevel recombination and mutation operations that provide a large amount of diversity. We perform a wide range of experiments on a benchmark set containing instances from application areas such as VLSI design, SAT solving, social networks, and scientific computing. Compared to the state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our new algorithm computes the best result on almost all instances.
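
    The memetic loop the abstract describes can be sketched at a high level. The skeleton below is a hedged illustration, not the paper's algorithm: the operator callables (initial_partition, recombine, mutate, local_search, quality) are hypothetical placeholders standing in for the multilevel operations the paper actually defines.

    import random

    def memetic_partition(hypergraph, k, initial_partition, recombine,
                          mutate, local_search, quality,
                          pop_size=10, generations=50, mutation_rate=0.3):
        # Seed the population with refined multilevel partitions.
        population = [local_search(initial_partition(hypergraph, k))
                      for _ in range(pop_size)]
        for _ in range(generations):
            p1, p2 = random.sample(population, 2)
            child = recombine(p1, p2)        # multilevel recombination
            if random.random() < mutation_rate:
                child = mutate(child)        # mutation preserves diversity
            child = local_search(child)      # memetic step: refine offspring
            # Steady-state replacement: evict the worst individual
            # (lower quality value = better partition).
            worst = max(range(pop_size), key=lambda i: quality(population[i]))
            if quality(child) < quality(population[worst]):
                population[worst] = child
        return min(population, key=quality)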

    Synthesis of IDDQ-Testable Circuits: Integrating Built-in Current Sensors

    "On-Chip" I_{DDQ} testing by the incorporation of Built-In Current (BIC) sensors has some advantages over "off-chip" techniques. However, the integration of sensors poses analog design problems which are hard to be solved by a digital designer. The automatic incorporation of the sensors using parameterized BIC cells could be a promising alternative. The work reported here identifies partitioning criteria to guide the synthesis of I_{DDQ}-testable circuits. The circuit must be partitioned, such that the defective I_{DDQ} is observable, and the power supply voltage perturbation is within specified limits. In addition to these constraints, also cost criteria are considered: circuit extra delay, area overhead of the BIC sensors, connectivity costs of the test circuitry, and the test application time. The parameters are estimated based on logical as well as electrical level information of the target cell library to be used in the technology mapping phase of the synthesis process. The resulting cost function is optimized by an evolution-based algorithm. When run over large benchmark circuits our method gives significantly superior results to those obtained using simpler and less comprehensive partitioning methods.Postprint (published version

    Evolutionary Hypergraph Partitioning


    Graph and hypergraph partitioning

    Ankara: Department of Computer Engineering and Information Science, Institute of Engineering and Science, Bilkent University, 1993. Thesis (Master's), Bilkent University, 1993. Includes bibliographical references (leaves 120-123). Daşdan, Ali. M.S.

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice - the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution.

    With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early-stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality.

    Moreover, we present a portfolio-based initial partitioning approach and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space.

    All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system, KaFFPa, both in terms of solution quality and running time.
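
    The two objective functions named in the abstract are simple to state concretely. The sketch below is a minimal illustration with assumed data structures (hyperedges as sets of vertex ids, a dict mapping each vertex to its block); it is not KaHyPar's implementation.

    def cut_net(hyperedges, part):
        # Cut-net metric: count hyperedges that span more than one block.
        return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

    def connectivity(hyperedges, part):
        # Connectivity metric: sum of (lambda(e) - 1) over all hyperedges,
        # where lambda(e) is the number of blocks hyperedge e touches.
        return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

    # Example: four vertices split into k = 2 blocks; only {0, 1, 2} is cut.
    edges = [{0, 1}, {0, 1, 2}, {2, 3}]
    part = {0: 0, 1: 0, 2: 1, 3: 1}
    print(cut_net(edges, part))       # 1
    print(connectivity(edges, part))  # 1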