37 research outputs found

    Decomposition of multiple packings with subquadratic union complexity

    Get PDF
    Suppose $k$ is a positive integer and $\mathcal{X}$ is a $k$-fold packing of the plane by infinitely many arc-connected compact sets, which means that every point of the plane belongs to at most $k$ sets. Suppose there is a function $f(n)=o(n^2)$ with the property that any $n$ members of $\mathcal{X}$ determine at most $f(n)$ holes, which means that the complement of their union has at most $f(n)$ bounded connected components. We use tools from extremal graph theory and the topological Helly theorem to prove that $\mathcal{X}$ can be decomposed into at most $p$ (1-fold) packings, where $p$ is a constant depending only on $k$ and $f$.
    Comment: Small generalization of the main result, improvements in the proofs, minor corrections
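
    The "holes" quantity has a simple computational reading. Below is a minimal sketch (not from the paper; the grid discretization, the function name, and the annulus example are all illustrative assumptions) that rasterizes a family of sets as pixel masks and counts the bounded connected components of the complement of their union.

```python
# Hedged sketch: count the "holes" of a union of sets, i.e. the bounded
# connected components of the complement, on a coarse pixel grid.
# Everything here is illustrative, not taken from the paper.
from collections import deque

def count_holes(masks, width, height):
    """masks: list of sets of (x, y) pixels; returns the number of bounded
    components of the complement of their union (4-connectivity)."""
    union = set().union(*masks)
    free = {(x, y) for x in range(width) for y in range(height)} - union
    seen, holes = set(), 0
    for start in free:
        if start in seen:
            continue
        seen.add(start)
        queue, bounded = deque([start]), True
        while queue:  # BFS over one complement component
            x, y = queue.popleft()
            if x in (0, width - 1) or y in (0, height - 1):
                bounded = False  # touches the frame => unbounded component
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in free and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        holes += bounded
    return holes

# A square annulus (a ring of pixels) has exactly one hole.
ring = {(x, y) for x in range(2, 8) for y in range(2, 8)} \
     - {(x, y) for x in range(4, 6) for y in range(4, 6)}
print(count_holes([ring], 10, 10))  # -> 1
```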

    Note on the number of edges in families with linear union-complexity

    Full text link
    We give a simple argument showing that the number of edges in the intersection graph $G$ of a family of $n$ sets in the plane with linear union complexity is $O(\omega(G)n)$. In particular, we prove $\chi(G)\leq \mathrm{col}(G) < 19\omega(G)$ for the intersection graph $G$ of a family of pseudo-discs, which improves a previous bound.
    Comment: background and related work are now more complete; presentation improved
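
    The chain $\chi(G)\leq \mathrm{col}(G)$ is algorithmic: greedy colouring in a degeneracy order never needs more than $\mathrm{col}(G)$ colours, so a bound like $\mathrm{col}(G) < 19\omega(G)$ immediately bounds the chromatic number. A minimal sketch of computing $\mathrm{col}(G)$, assuming a dict-of-sets graph representation (the function name and example are illustrative, not from the paper):

```python
# Hedged sketch: the colouring number col(G) equals one plus the degeneracy,
# obtained by repeatedly deleting a vertex of minimum degree.
def coloring_number(adj):
    """adj: dict vertex -> set of neighbours. Returns col(G) = degeneracy + 1."""
    adj = {v: set(ns) for v, ns in adj.items()}   # local copy, mutated below
    degeneracy = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        degeneracy = max(degeneracy, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return degeneracy + 1

# A 5-cycle: every deletion step sees degree <= 2, so col = 3 and chi <= 3.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(coloring_number(c5))  # -> 3
```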

    Dependent k-Set Packing on Polynomoids

    Get PDF
    Specialized hereditary systems, e.g., matroids, are known to have many applications in algorithm design. We define a new notion called a d-polynomoid as a hereditary system (E, ℱ ⊆ 2^E) such that every two maximal sets in ℱ have fewer than d elements in common. We study the problem that, given a d-polynomoid (E, ℱ), asks whether the ground set E contains ℓ disjoint k-subsets that are not in ℱ, and obtain a complexity trichotomy result for all pairs of k ≄ 1 and d ≄ 0. Our algorithmic result yields a necessary and sufficient condition that decides whether each hypergraph in certain classes of r-uniform hypergraphs has a perfect matching, which has a number of algorithmic applications.
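
    For intuition about the decision problem, here is a brute-force sketch (purely illustrative and exponential-time; the paper's contribution is the complexity trichotomy, not this algorithm, and the parameter name ell is an assumption):

```python
# Hedged sketch: decide whether the ground set E contains ell pairwise
# disjoint k-subsets that are NOT in the hereditary family F.
from itertools import combinations

def has_dependent_packing(E, F, k, ell):
    forbidden = {frozenset(s) for s in F}
    candidates = [frozenset(c) for c in combinations(E, k)
                  if frozenset(c) not in forbidden]   # "dependent" k-subsets

    def extend(chosen, used, start):
        if chosen == ell:
            return True
        for i in range(start, len(candidates)):
            c = candidates[i]
            if used.isdisjoint(c) and extend(chosen + 1, used | c, i + 1):
                return True
        return False

    return extend(0, frozenset(), 0)

# Tiny example: the independent sets of a graph form a hereditary family F;
# a "dependent" 2-subset is then exactly an edge, so ell disjoint dependent
# 2-subsets form a matching of size ell.
E = range(4)
F = [(), (0,), (1,), (2,), (3,), (0, 2), (1, 3)]   # independent sets of C4
print(has_dependent_packing(E, F, 2, 2))  # -> True: {0,1} and {2,3} are edges
```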

    Graph and Hypergraph Decompositions for Exact Algorithms

    Get PDF
    This thesis studies exact exponential and fixed-parameter algorithms for hard graph and hypergraph problems. Specifically, we study two techniques that can be used in the development of such algorithms: (i) combinatorial decompositions of both the input instance and the solution, and (ii) evaluation of multilinear forms over semirings. In the first part of the thesis we develop new algorithms for graph and hypergraph problems based on techniques (i) and (ii). While these techniques are independently both useful, the work presented in this part is largely characterised by their joint application. That is, combining results from different pieces of the decompositions often takes the form of a multilinear-form evaluation task, and on the other hand, decompositions offer the basic structure for dynamic-programming-style algorithms for the evaluation of multilinear forms. As the main positive results of the first part, we give algorithms for three different problem families. First, we give a fast evaluation algorithm for linear forms defined by a disjointness matrix of small sets. This can be applied to obtain faster algorithms for counting maximum-weight objects of small size, such as k-paths in graphs. Second, we give a general framework for exponential-time algorithms for finding maximum-weight subgraphs of bounded tree-width, based on the theory of tree decompositions. Besides basic combinatorial problems, this framework has applications in learning Bayesian network structures. Third, we give a fixed-parameter algorithm for finding unbalanced vertex cuts, that is, vertex cuts that separate a small number of vertices from the rest of the graph.
    In the second part of the thesis we consider aspects of the complexity theory of linear forms over semirings, in order to better understand technique (ii). Specifically, we study how the presence of different algebraic catalysts in the ground semiring affects the complexity. As the main result, we show that there are linear forms that are easy to compute over semirings with idempotent addition, but difficult to compute over rings, unless the strong exponential time hypothesis fails.
    One of the fundamental goals of computer science is the development of efficient algorithms. From a theoretical point of view, an algorithm is usually considered efficient if its running time depends polynomially on the size of the input. There are, however, computational problems that admit no polynomial-time algorithms. For example, NP-hard problems cannot be solved in polynomial time if the common complexity assumption P ≠ NP holds. Nevertheless, we often want to solve such hard problems anyway. Two common approaches to solving hard, not polynomial-time solvable problems exactly are (i) exponential-time algorithmics and (ii) parameterized algorithmics. Exponential-time algorithmics develops algorithms whose running time is still exponential in the size of the input but which avoid traversing the entire solution space; in other words, the aim is less exponential algorithms. Parameterized algorithmics, in turn, aims to isolate the exponential part of the running time into a parameter that is independent of the size of the input. This dissertation presents exponential-time and parameterized algorithms for the exact solution of various hard graph and hypergraph problems. The presented algorithms are based on two algorithmic techniques: (i) the evaluation of multilinear forms over various semirings and (ii) the use of combinatorial decompositions. In addition to the algorithms, the thesis examines complexity-theoretic questions related to these techniques, which helps to understand their limitations and as yet unexploited possibilities.
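
    To make technique (ii) concrete: the sketch below (illustrative, not from the thesis) evaluates the same matrix-product recurrence over two semirings. Over the counting ring it counts fixed-length walks; over the idempotent min-plus semiring it computes cheapest fixed-length walks, echoing the thesis's point that idempotent addition changes the algebraic, and potentially the computational, character of a linear form.

```python
# Hedged sketch: one bilinear recurrence, two semirings, two problems.
from functools import reduce

INF = float("inf")

def matmul(A, B, add, mul, zero):
    """Matrix product over an arbitrary semiring (add, mul, zero)."""
    n = len(A)
    return [[reduce(add, (mul(A[i][k], B[k][j]) for k in range(n)), zero)
             for j in range(n)] for i in range(n)]

# Adjacency/weight matrix of the 3-vertex path 0 - 1 - 2 (unit weights).
A_count = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
A_dist  = [[INF, 1, INF], [1, INF, 1], [INF, 1, INF]]

# Counting ring (+, *): entry (i, j) of A^2 counts length-2 walks i -> j.
C2 = matmul(A_count, A_count, lambda x, y: x + y, lambda x, y: x * y, 0)
print(C2[0][2])  # -> 1 (the walk 0 -> 1 -> 2)

# Min-plus semiring (min, +), idempotent addition: cheapest length-2 walks.
D2 = matmul(A_dist, A_dist, min, lambda x, y: x + y, INF)
print(D2[0][2])  # -> 2 (cost of 0 -> 1 -> 2)
```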

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Geometric optimization problems in quantum computation and discrete mathematics: Stabilizer states and lattices

    Get PDF
    This thesis consists of two parts. Part I deals with properties of stabilizer states and their convex hull, the stabilizer polytope. Stabilizer states, Pauli measurements and Clifford unitaries are the three building blocks of the stabilizer formalism, whose computational power is limited by the Gottesman-Knill theorem. This model is usually enriched by a magic state to obtain a universal model for quantum computation, referred to as quantum computation with magic states (QCM). The first part of this thesis investigates the role of stabilizer states within QCM from three different angles. The first quantity considered is the stabilizer extent, which provides a tool to measure the non-stabilizerness or magic of a quantum state. It assigns to each state a quantity that roughly measures how many stabilizer states are required to approximate the state. It has been shown that the extent is multiplicative under tensor products when the considered state is a product state whose components consist of at most three qubits. In Chapter 2, we prove that this property does not hold in general; more precisely, the stabilizer extent is strictly submultiplicative. We obtain this result as a consequence of rather general properties of stabilizer states. Informally, our result implies that one should not expect a dictionary to be multiplicative under tensor products whenever the dictionary size grows subexponentially in the dimension.
    In Chapter 3, we consider QCM from a resource-theoretic perspective. The resource theory of magic is based on two types of quantum channels: completely stabilizer-preserving maps and stabilizer operations. Both classes have the property that they cannot generate additional magic resources. We show that these two classes of quantum channels do not coincide; specifically, stabilizer operations are a strict subset of the set of completely stabilizer-preserving channels. This might have the consequence that certain tasks which are usually realized by stabilizer operations could in principle be performed better by completely stabilizer-preserving maps.
    In Chapter 4, the last one of Part I, we consider QCM via the polar dual stabilizer polytope (also called the Λ-polytope). This polytope is a superset of the quantum state space, and every quantum state can be written as a convex combination of its vertices. A way to classically simulate quantum computing with magic states is based on simulating Pauli measurements and Clifford unitaries on the vertices of the Λ-polytope. The complexity of classical simulation with respect to the polytope is determined by classically simulating the updates of vertices under Clifford unitaries and Pauli measurements. However, a complete description of this polytope as a convex hull of its vertices is only known in low dimensions (for up to two qubits, or one qudit when odd-dimensional systems are considered). We make progress on this question by characterizing a certain class of operators that live on the boundary of the Λ-polytope when the underlying dimension is an odd prime. This class encompasses, for instance, Wigner operators, which have been shown to be vertices of Λ. We conjecture that this class contains even more vertices of Λ. Finally, we briefly sketch why applying Clifford unitaries and Pauli measurements to this class of operators can be efficiently classically simulated.
    Part II of this thesis deals with lattices.
    Lattices are discrete subgroups of Euclidean space. They occur in various areas of mathematics, physics and computer science. We investigate two types of optimization problems related to lattices. In Chapter 6 we are concerned with optimization within the space of lattices; that is, we want to compare the Gaussian potential energy of different lattices. To make the energies of lattices comparable, we focus on lattices with point density one. In particular, we focus on even unimodular lattices and show that, up to dimension 24, they are all critical points of the Gaussian potential energy. Furthermore, we find that all n-dimensional even unimodular lattices with n ≀ 24 are local minima or saddle points. In contrast, in dimension 32 there are even unimodular lattices which are local maxima and others which are not even critical.
    In Chapter 7 we consider flat tori R^n/L, where L is an n-dimensional lattice. A flat torus comes with a metric, and our goal is to approximate this metric with a Hilbert space metric. To achieve this, we derive an infinite-dimensional semidefinite optimization program that computes the least distortion embedding of the metric space R^n/L into a Hilbert space. This program allows us to make several interesting statements about the nature of least distortion embeddings of flat tori. In particular, we give a simple proof of a lower bound that gives a constant-factor improvement over the previously best lower bound on the minimal distortion of an embedding of an n-dimensional flat torus. Furthermore, we show that there is always an optimal embedding into a finite-dimensional Hilbert space. Finally, we construct optimal least distortion embeddings for the standard torus R^n/Z^n and all 2-dimensional flat tori.
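
    The energy functional of Chapter 6 is easy to probe numerically. A minimal sketch (illustrative; the exact normalization, the truncation radius, and the α values are assumptions) comparing the Gaussian potential energy of the unit-density square and hexagonal lattices:

```python
# Hedged numerical sketch: the Gaussian potential energy
#   E_alpha(L) = sum over nonzero v in L of exp(-pi * alpha * |v|^2),
# truncated to a finite window of lattice points, for unit-density lattices.
import math

def gaussian_energy(basis, alpha, radius=12):
    (ax, ay), (bx, by) = basis
    total = 0.0
    for m in range(-radius, radius + 1):
        for n in range(-radius, radius + 1):
            if m == n == 0:
                continue
            x, y = m * ax + n * bx, m * ay + n * by
            total += math.exp(-math.pi * alpha * (x * x + y * y))
    return total

square = [(1.0, 0.0), (0.0, 1.0)]               # determinant 1
a = math.sqrt(2.0 / math.sqrt(3.0))             # scale so determinant is 1
hexagonal = [(a, 0.0), (a / 2.0, a * math.sqrt(3.0) / 2.0)]

for alpha in (0.5, 1.0, 2.0):
    print(alpha, gaussian_energy(square, alpha), gaussian_energy(hexagonal, alpha))
# The hexagonal values come out strictly smaller, consistent with the
# hexagonal lattice's known optimality among unit-density planar lattices.
```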

    Maximum Matchings in Geometric Intersection Graphs

    Get PDF
    Let G be an intersection graph of n geometric objects in the plane. We show that a maximum matching in G can be found in O(ρ^{3ω/2} n^{ω/2}) time with high probability, where ρ is the density of the geometric objects and ω > 2 is a constant such that n×n matrices can be multiplied in O(n^ω) time. The same result holds for any subgraph of G, as long as a geometric representation is at hand. For this, we combine algebraic methods, namely computing the rank of a matrix via Gaussian elimination, with the fact that geometric intersection graphs have small separators. We also show that in many interesting cases, the maximum matching problem in a general geometric intersection graph can be reduced to the case of bounded density. In particular, a maximum matching in the intersection graph of any family of translates of a convex object in the plane can be found in O(n^{ω/2}) time with high probability, and a maximum matching in the intersection graph of a family of planar disks with radii in [1, Κ] can be found in O(Κ^6 log^{11} n + Κ^{12ω} n^{ω/2}) time with high probability.
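
    The algebraic ingredient here is classical: by Lovász's theorem, the size of a maximum matching of G equals half the rank of its Tutte matrix after a random substitution of the indeterminates, with high probability. A minimal sketch of just that ingredient (the paper's separator-based speedup is not shown; the function name and the floating-point rank shortcut are illustrative):

```python
# Hedged sketch: maximum matching size via the rank of a randomly
# evaluated Tutte matrix (Lovasz).  Real-valued rank is used here for
# simplicity; a careful implementation works over a large finite field.
import numpy as np

def matching_size(n, edges, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(trials):             # repeat to shrink the error probability
        T = np.zeros((n, n))
        for u, v in edges:
            x = rng.uniform(1.0, 2.0)   # random evaluation of x_{uv}
            T[u, v], T[v, u] = x, -x    # skew-symmetric Tutte matrix
        best = max(best, np.linalg.matrix_rank(T) // 2)
    return best

# A 4-cycle has a perfect matching (size 2); a triangle has matching size 1.
print(matching_size(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 2
print(matching_size(3, [(0, 1), (1, 2), (2, 0)]))          # -> 1
```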

    Massively Parallel Algorithms for Small Subgraph Counting

    Get PDF

    Combinatorics, Probability and Computing

    Get PDF
    The main theme of this workshop was the use of probabilistic methods in combinatorics and theoretical computer science. Although these methods have been around for decades, they are continually being refined and have grown ever more sophisticated and powerful. Another theme was the study of random combinatorial structures, either for their own sake or to tackle extremal questions. The workshop also emphasized connections between probabilistic combinatorics and discrete probability.