10 research outputs found
Aggregative Coarsening for Multilevel Hypergraph Partitioning
Algorithms for many hypergraph problems, including partitioning, use multilevel frameworks to achieve a good trade-off between performance and quality of results. In this paper we introduce two novel aggregative coarsening schemes and incorporate them within the state-of-the-art hypergraph partitioner Zoltan. Our coarsening schemes are inspired by algebraic multigrid and stable matching approaches. We demonstrate the effectiveness of the developed schemes as part of a multilevel hypergraph partitioning framework on a wide range of problems.
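The aggregative idea behind such coarsening can be sketched in a few lines. The following is a minimal greedy stand-in, not Zoltan's implementation: it scores vertex pairs by the number of shared hyperedges (where the paper's schemes use algebraic-multigrid or stable-matching criteria), matches greedily, and contracts each matched pair into one coarse vertex:

```python
from collections import defaultdict

def coarsen_by_matching(num_vertices, hyperedges):
    """One coarsening pass: greedily pair vertices that share many
    hyperedges, then contract each pair into a single coarse vertex.
    (Illustrative only; real schemes use algebraic-multigrid or
    stable-matching criteria rather than this greedy rule.)"""
    # Similarity: number of hyperedges shared by each vertex pair.
    similarity = defaultdict(int)
    for edge in hyperedges:
        for i, u in enumerate(edge):
            for v in edge[i + 1:]:
                similarity[(min(u, v), max(u, v))] += 1
    # Greedy matching on descending similarity.
    matched = {}
    for (u, v), _ in sorted(similarity.items(), key=lambda kv: -kv[1]):
        if u not in matched and v not in matched:
            matched[u] = v
            matched[v] = u
    # Map each fine vertex to its coarse vertex.
    coarse_id, mapping = 0, {}
    for u in range(num_vertices):
        if u in mapping:
            continue
        mapping[u] = coarse_id
        if u in matched:
            mapping[matched[u]] = coarse_id
        coarse_id += 1
    # Project hyperedges; drop those that collapse to a single vertex.
    coarse_edges = []
    for edge in hyperedges:
        projected = sorted({mapping[v] for v in edge})
        if len(projected) > 1:
            coarse_edges.append(projected)
    return coarse_id, coarse_edges, mapping
```

Repeating such passes yields the hierarchy of successively smaller hypergraphs that the multilevel framework operates on.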
Hypergraph Partitioning With Embeddings
Problems in scientific computing, such as distributing large sparse matrix operations, have analogous formulations as hypergraph partitioning problems. A hypergraph is a generalization of a traditional graph wherein "hyperedges" may connect any number of nodes. As a result, hypergraph partitioning is NP-hard both to solve and to approximate. State-of-the-art algorithms for this problem follow the multilevel paradigm, which begins by iteratively "coarsening" the input hypergraph into smaller problem instances that share key structural features. Once an approximate problem small enough to be solved directly has been identified, its solution can be interpolated and refined back to the original problem. While this strategy represents an excellent trade-off between quality and running time, it is sensitive to the coarsening strategy. In this work we propose using graph embeddings of the initial hypergraph to ensure that coarsened problem instances retain key structural features. Our approach prioritizes coarsening within self-similar regions of the input hypergraph and leads to significantly improved solution quality across a range of considered hypergraphs. Reproducibility: all source code, plots, and experimental data are available at https://sybrandt.com/2019/partition
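The embedding-aware prioritization can be illustrated with a pairwise scoring function. The 50/50 blend of structural overlap and embedding cosine similarity below is an assumption for illustration only, not the paper's actual formula:

```python
import math

def embedding_match_score(u, v, emb, shared_nets):
    """Score a candidate contraction (u, v) by blending a structural
    signal (number of shared hyperedges) with the cosine similarity of
    embedding vectors, so that contractions inside self-similar regions
    are preferred. The equal weighting is an illustrative assumption."""
    dot = sum(a * b for a, b in zip(emb[u], emb[v]))
    norm = (math.sqrt(sum(a * a for a in emb[u]))
            * math.sqrt(sum(b * b for b in emb[v])))
    cosine = dot / norm if norm else 0.0
    return 0.5 * shared_nets + 0.5 * cosine
```

A coarsening pass would then contract the highest-scoring pairs first, breaking ties between structurally identical candidates in favor of embedding-similar ones.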
Quantum and Classical Multilevel Algorithms for (Hyper)Graphs
Combinatorial optimization problems on (hyper)graphs are ubiquitous in science and industry. Because many of these problems are NP-hard, the development of sophisticated heuristics is of utmost importance for practical problems. In recent years, the emergence of Noisy Intermediate-Scale Quantum (NISQ) computers has opened up the opportunity to dramatically speed up combinatorial optimization. However, the adoption of NISQ devices is impeded by their severe limitations, both in the number of qubits and in their quality. NISQ devices are widely expected to have no more than hundreds to thousands of qubits with very limited error correction, imposing a strict limit on the size and structure of the problems that can be tackled directly. A natural solution to this issue is hybrid quantum-classical algorithms that combine a NISQ device with a classical machine with the goal of capturing “the best of both worlds”.
Motivated by the lack of high-quality optimization solvers for hypergraph partitioning, in this thesis we begin by discussing classical multilevel approaches for this problem. We present a novel relaxation-based vertex similarity measure for hypergraphs, termed algebraic distance, and coarsening schemes based on it. Extending the multilevel method to include quantum optimization routines, we present Quantum Local Search (QLS), a hybrid iterative improvement approach inspired by classical local search. Next, we introduce Multilevel Quantum Local Search (ML-QLS), which incorporates the quantum-enhanced iterative improvement scheme of QLS within the multilevel framework, as well as several techniques to further understand and improve the effectiveness of the Quantum Approximate Optimization Algorithm used throughout our work.
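The QLS loop described above can be sketched classically. In the sketch below, brute-force enumeration over a small vertex window stands in for the quantum subsolver, the window size plays the role of the qubit budget, and the sliding-window schedule is an illustrative assumption:

```python
from itertools import product

def cut_size(edges, part):
    """Number of graph edges crossing the bipartition."""
    return sum(1 for u, v in edges if part[u] != part[v])

def quantum_local_search(edges, part, budget=3, rounds=5):
    """Schematic QLS loop (classical stand-in): repeatedly fix most of
    the current balanced bipartition, pick a window of at most `budget`
    vertices (mimicking a limited qubit count), and solve that
    subproblem exactly -- here by brute force where a NISQ device would
    be invoked -- keeping any strictly better balanced reassignment."""
    verts = sorted(part)
    half = len(verts) // 2
    for r in range(rounds):
        window = verts[(2 * r) % len(verts):][:budget]  # sliding window
        best = dict(part)
        for assign in product((0, 1), repeat=len(window)):
            trial = dict(part)
            trial.update(zip(window, assign))
            if sum(trial.values()) != half:             # keep balance
                continue
            if cut_size(edges, trial) < cut_size(edges, best):
                best = trial
        part = best
    return part
```

On two triangles joined by a bridge edge, the loop recovers the optimal balanced cut of one edge from a poor starting assignment.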
High-Quality Hypergraph Partitioning
This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric.
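Both objectives are straightforward to evaluate for a given partition; the functions below are a direct transcription of the standard definitions, assuming unit hyperedge weights:

```python
def cut_net(partition, hyperedges):
    """Cut-net metric: number (total weight, with unit weights) of
    hyperedges that span more than one block."""
    return sum(1 for e in hyperedges
               if len({partition[v] for v in e}) > 1)

def connectivity(partition, hyperedges):
    """Connectivity metric: sum over hyperedges of (lambda - 1), where
    lambda is the number of distinct blocks the hyperedge touches."""
    return sum(len({partition[v] for v in e}) - 1 for e in hyperedges)
```

The two metrics coincide for bipartitions; for k > 2 the connectivity metric penalizes a hyperedge once for every additional block it touches.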
Since the problem is computationally intractable, heuristics are used in practice, the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, the contractions are undone and, at each level, refinement algorithms try to improve the current solution.
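The three phases can be sketched as follows. This is a didactic skeleton on a plain graph with unit weights, random-matching coarsening, and a single greedy refinement pass without a balance constraint, far simpler than the algorithms in the dissertation:

```python
import random

def cut(edges, part):
    return sum(1 for u, v in edges if part[u] != part[v])

def multilevel_bipartition(edges, n, small=4, seed=0):
    """Didactic sketch of the three-phase multilevel paradigm."""
    rng = random.Random(seed)
    levels = []                                    # (fine_edges, mapping)
    # Phase 1: coarsen by random edge matching until the graph is small.
    while n > small:
        shuffled = edges[:]
        rng.shuffle(shuffled)
        matched, mapping, nxt = set(), {}, 0
        for u, v in shuffled:
            if u not in matched and v not in matched:
                matched |= {u, v}
                mapping[u] = mapping[v] = nxt
                nxt += 1
        for u in range(n):
            if u not in mapping:
                mapping[u] = nxt
                nxt += 1
        if nxt == n:                               # nothing contracted
            break
        levels.append((edges, mapping))
        edges = [(mapping[u], mapping[v]) for u, v in edges
                 if mapping[u] != mapping[v]]
        n = nxt
    # Phase 2: initial partitioning (balanced random split).
    verts = list(range(n))
    rng.shuffle(verts)
    part = {v: int(i >= n // 2) for i, v in enumerate(verts)}
    # Phase 3: undo contractions; refine greedily at every level
    # (balance constraint omitted for brevity).
    for fine_edges, mapping in reversed(levels):
        part = {u: part[c] for u, c in mapping.items()}
        for v in sorted(mapping):
            flipped = dict(part)
            flipped[v] = 1 - part[v]
            if cut(fine_edges, flipped) < cut(fine_edges, part):
                part = flipped
    return part
```

Production partitioners replace every piece of this skeleton: similarity-driven coarsening, strong initial partitioners, and gain-based FM refinement under a balance constraint.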
With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and developing lazy-evaluation techniques, caching mechanisms and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach, and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space.
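The core quantity driving FM-style refinement is the move gain. A minimal cut-net version for bipartitions (connectivity-metric variants instead maintain per-block pin counts incrementally) might look like:

```python
def fm_gain(v, part, hyperedges):
    """Fiduccia-Mattheyses move gain for vertex v in a bipartition:
    cut nets that moving v would make internal, minus internal nets
    that the move would cut. This is the classic cut-net gain with
    unit net weights, recomputed from scratch for clarity."""
    gain = 0
    for e in (e for e in hyperedges if v in e):
        src = [u for u in e if part[u] == part[v]]
        dst = [u for u in e if part[u] != part[v]]
        if not dst:             # net currently internal -> becomes cut
            gain -= 1
        elif len(src) == 1:     # v is the only pin on its side -> uncut
            gain += 1
    return gain
```

An FM pass repeatedly moves the highest-gain vertex subject to the balance constraint, allowing temporarily negative gains to escape local optima, and rolls back to the best prefix of moves.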
All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa, both in terms of solution quality and running time.
Multilevel Combinatorial Optimization Across Quantum Architectures
Emerging quantum processors provide an opportunity to explore new approaches for solving traditional problems in the post-Moore's-law supercomputing era. However, the limited number of qubits makes it infeasible to tackle massive real-world datasets directly in the near future, leading to new challenges in utilizing these quantum processors for practical purposes. Hybrid quantum-classical algorithms that leverage both quantum and classical devices are considered one of the main strategies for applying quantum computing to large-scale problems. In this paper, we advocate the use of multilevel frameworks for combinatorial optimization as a promising general paradigm for designing hybrid quantum-classical algorithms. To demonstrate this approach, we apply the method to two well-known combinatorial optimization problems, namely the Graph Partitioning Problem and the Community Detection Problem. We develop hybrid multilevel solvers with quantum local search on D-Wave's quantum annealer and IBM's gate-model-based quantum processor. We carry out experiments on graphs that are orders of magnitude larger than the current quantum hardware size and observe results comparable to state-of-the-art solvers in terms of solution quality.
Advances in knowledge discovery and data mining Part II
19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II
MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications
Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
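As a small concrete instance of such rules, the two-point Gauss-Legendre rule integrates polynomials up to degree three exactly:

```python
import math

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b]: nodes at
    +/- 1/sqrt(3) on the reference interval [-1, 1], both with unit
    weight, mapped affinely to [a, b]. Exact for cubics."""
    mid, half = (a + b) / 2, (b - a) / 2
    node = 1 / math.sqrt(3)
    return half * (f(mid - half * node) + f(mid + half * node))
```

With only two function evaluations this matches the exact integral of any cubic, which is the hallmark property of Gaussian rules: n nodes yield exactness up to degree 2n - 1.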