
    Distinguishing partitions of complete multipartite graphs

    A distinguishing partition of a group X with automorphism group aut(X) is a partition of X that is fixed by no nontrivial element of aut(X). In the event that X is a complete multipartite graph with its automorphism group, the existence of a distinguishing partition is equivalent to the existence of an asymmetric hypergraph with prescribed edge sizes. An asymptotic result is proven on the existence of a distinguishing partition when X is a complete multipartite graph with m_1 parts of size n_1 and m_2 parts of size n_2, for small n_1, m_2 and large m_1, n_2. A key tool in making the estimate is counting the number of trees of particular classes.
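
    The definition is easy to verify directly on small examples. Below is a minimal brute-force sketch (not from the paper; the graph and partitions are hypothetical illustrations) that enumerates all automorphisms of a small graph by testing vertex permutations and checks whether a candidate partition is distinguishing, i.e. fixed by no nontrivial automorphism.

        from itertools import permutations

        def automorphisms(vertices, edges):
            """Yield all adjacency-preserving permutations of a small graph (brute force)."""
            edge_set = {frozenset(e) for e in edges}
            for perm in permutations(vertices):
                sigma = dict(zip(vertices, perm))
                if {frozenset((sigma[u], sigma[v])) for u, v in edge_set} == edge_set:
                    yield sigma

        def is_distinguishing(vertices, edges, partition):
            """A partition is distinguishing if no nontrivial automorphism maps it onto itself."""
            blocks = {frozenset(b) for b in partition}
            for sigma in automorphisms(vertices, edges):
                if all(sigma[v] == v for v in vertices):
                    continue  # skip the identity
                if {frozenset(sigma[v] for v in b) for b in blocks} == blocks:
                    return False  # this nontrivial automorphism fixes the partition
            return True

        # Hypothetical example: the path 0-1-2-3, whose only nontrivial automorphism is the reversal.
        V = [0, 1, 2, 3]
        E = [(0, 1), (1, 2), (2, 3)]
        print(is_distinguishing(V, E, [{0}, {1, 2, 3}]))  # True: the reversal moves the block {0}
        print(is_distinguishing(V, E, [{0, 3}, {1, 2}]))  # False: the reversal fixes both blocks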

    Strongly Refuting Random CSPs Below the Spectral Threshold

    Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with n variables and m clauses, there is a value of m = \Omega(n) beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when m/n = \omega(1)). Intuitively, strong refutation should become easier as the clause density m/n grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as k-SAT and k-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, m/n \ge \widetilde O(n^{k/2-1}), and the clause density at which instances become unsatisfiable with high probability, m/n = \omega(1). In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random k-XOR instances with clause density m/n \ge \widetilde O(n^{(k/2-1)(1-\delta)}) in time \exp(\widetilde O(n^{\delta})) or in \widetilde O(n^{\delta}) rounds of the sum-of-squares hierarchy, for any \delta \in [0,1) and any integer k \ge 3. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at \delta = 0, and brute-force refutation at the satisfiability threshold when \delta = 1. We also leverage our k-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. Additionally, we extend our techniques to give new results for certifying upper bounds on the injective tensor norm of random tensors.
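
    As a toy illustration of the spectral approach (a drastically simplified sketch of the general idea, not the algorithm of the paper, which targets k >= 3), the following certifies an upper bound for random 2-XOR: writing each variable as +/-1, the number of satisfied clauses equals m/2 + x^T A x / 2 for a clause matrix A, so the maximum satisfiable fraction is at most 1/2 + n*lambda_max(A)/(2m), a quantity computable in polynomial time. The instance below is randomly generated for demonstration.

        import numpy as np

        def refute_2xor(n, clauses):
            """Certified upper bound on the fraction of simultaneously satisfiable 2-XOR clauses.

            Each clause (i, j, b) demands x_i XOR x_j = b.  In +/-1 notation it demands
            x_i * x_j = c with c = 1 - 2*b, and the number of satisfied clauses equals
            m/2 + x^T A x / 2 for the symmetric matrix A built below.
            """
            m = len(clauses)
            A = np.zeros((n, n))
            for i, j, b in clauses:
                c = 1 - 2 * b
                A[i, j] += c / 2.0
                A[j, i] += c / 2.0
            lam_max = np.linalg.eigvalsh(A)[-1]      # largest eigenvalue of the symmetric A
            return 0.5 + n * lam_max / (2.0 * m)     # valid for every +/-1 assignment x

        # Hypothetical usage: a dense random instance, for which the bound drops well below 1.
        rng = np.random.default_rng(0)
        n, m = 200, 20000
        clauses = [(i, j, b) for i, j, b in zip(rng.integers(0, n, m).tolist(),
                                                rng.integers(0, n, m).tolist(),
                                                rng.integers(0, 2, m).tolist()) if i != j]
        print(refute_2xor(n, clauses))               # prints a certified bound well below 1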

    Multilevel Hypergraph Partitioning with Vertex Weights Revisited

    The balanced hypergraph partitioning problem (HGP) is to partition the vertex set of a hypergraph into k disjoint blocks of bounded weight, while minimizing an objective function defined on the hyperedges. Whereas real-world applications often use vertex and edge weights to accurately model the underlying problem, the HGP research community commonly works with unweighted instances. In this paper, we argue that, in the presence of vertex weights, current balance constraint definitions either yield infeasible partitioning problems or allow unnecessarily large imbalances, and we propose a new definition that overcomes these problems. We show that state-of-the-art hypergraph partitioners often struggle considerably with weighted instances and tight balance constraints (even with our new balance definition). Thus, we present a recursive-bipartitioning technique that is able to reliably compute balanced (and hence feasible) solutions. The proposed method balances the partition by pre-assigning a small subset of the heaviest vertices to the two blocks of each bipartition (using an algorithm originally developed for the job scheduling problem) and optimizes the actual partitioning objective on the remaining vertices. We integrate our algorithm into the multilevel hypergraph partitioner KaHyPar and show that our approach is able to compute balanced partitions of high quality on a diverse set of benchmark instances.
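
    One natural instantiation of the pre-assignment step is the classical longest-processing-time (LPT) greedy rule from two-machine scheduling; the sketch below illustrates that idea under this assumption and is not the exact procedure of the paper. It assigns the heaviest vertices one by one to whichever of the two blocks is currently lighter.

        def preassign_heaviest(vertex_weights, num_heaviest):
            """LPT-style pre-assignment of the heaviest vertices to two blocks.

            vertex_weights: dict mapping vertex id -> weight.
            num_heaviest:   how many of the heaviest vertices to pre-assign.
            Returns (block0, block1, remaining_vertices).
            """
            by_weight = sorted(vertex_weights, key=vertex_weights.get, reverse=True)
            heavy, remaining = by_weight[:num_heaviest], by_weight[num_heaviest:]
            blocks, weights = ([], []), [0, 0]
            for v in heavy:
                target = 0 if weights[0] <= weights[1] else 1  # put v on the lighter block
                blocks[target].append(v)
                weights[target] += vertex_weights[v]
            return blocks[0], blocks[1], remaining

        # Hypothetical example: three heavy vertices dominate the balance constraint.
        w = {"a": 50, "b": 45, "c": 40, "d": 5, "e": 4, "f": 3, "g": 2}
        print(preassign_heaviest(w, 3))  # (['a'], ['b', 'c'], ['d', 'e', 'f', 'g'])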

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice - the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms, and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa - both in terms of solution quality and running time.
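
    The two objectives mentioned above have short formal definitions: a hyperedge is cut if it spans more than one block, and its connectivity lambda(e) is the number of blocks it touches; the connectivity metric sums lambda(e) - 1 over all hyperedges. A minimal sketch with unit hyperedge weights (illustrative names, not KaHyPar's API):

        def cut_net(hyperedges, block_of):
            """Number of hyperedges spanning more than one block (unit hyperedge weights)."""
            return sum(1 for e in hyperedges if len({block_of[v] for v in e}) > 1)

        def connectivity(hyperedges, block_of):
            """Sum of (lambda(e) - 1) over all hyperedges, lambda(e) = number of touched blocks."""
            return sum(len({block_of[v] for v in e}) - 1 for e in hyperedges)

        # Hypothetical example: five vertices in two blocks, three hyperedges.
        edges = [{0, 1, 2}, {2, 3}, {3, 4}]
        blocks = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
        print(cut_net(edges, blocks))       # 1  (only {2, 3} is cut)
        print(connectivity(edges, blocks))  # 1  ({2, 3} touches 2 blocks, the others 1)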

    On the method of typical bounded differences

    Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that random functions are near their means. Of particular importance is the case where f(X) is a function of independent random variables X = (X_1, ..., X_n). Here the well-known bounded differences inequality (also called McDiarmid's or Hoeffding-Azuma inequality) establishes sharp concentration if the function f does not depend too much on any of the variables. One attractive feature is that it relies on a very simple Lipschitz condition (L): it suffices to show that |f(X)-f(X')| \leq c_k whenever X, X' differ only in X_k. While this is easy to check, the main disadvantage is that it considers worst-case changes c_k, which often makes the resulting bounds too weak to be useful. In this paper we prove a variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small although (ii) the worst-case changes might be very large. One key aspect of this inequality is that it relies on a simple condition that (a) is easy to check and (b) coincides with heuristic considerations of why concentration should hold. Indeed, given an event \Gamma that holds with very high probability, we essentially relax the Lipschitz condition (L) to situations where \Gamma occurs. The point is that the resulting typical changes c_k are often much smaller than the worst-case ones. To illustrate its application we consider the reverse H-free process, where H is 2-balanced. We prove that the final number of edges in this process is concentrated, and also determine its likely value up to constant factors. This answers a question of Bollobás and Erdős.
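
    For reference, the classical bounded differences inequality referred to above can be stated as follows (a standard formulation, not the refined variant proved in the paper); the paper's variant essentially replaces the worst-case constants c_k by their typical values on the high-probability event \Gamma, at the cost of an extra error term. If X_1, ..., X_n are independent and |f(x) - f(x')| \leq c_k whenever x and x' differ only in the k-th coordinate, then for all t > 0,

        \Pr\big( |f(X) - \mathbb{E} f(X)| \geq t \big) \;\leq\; 2 \exp\!\left( - \frac{2 t^2}{\sum_{k=1}^{n} c_k^2} \right).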

    Extremal and Ramsey Type Questions for Graphs and Ordered Graphs

    In this thesis we study graphs and ordered graphs from an extremal point of view. In the first part of the thesis we prove that there are ordered forests H and ordered graphs of arbitrarily large chromatic number not containing such H as an ordered subgraph. In the second part we study pairs of graphs that have the same set of Ramsey graphs. We support a negative answer to the question of whether there are pairs of non-isomorphic connected graphs that have this property. Finally, we initiate the study of minimal ordered Ramsey graphs. For large families of ordered graphs we determine whether their members have finitely or infinitely many minimal ordered Ramsey graphs.