1,823 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Clones over Finite Sets and Minor Conditions

    Get PDF
    Achieving a classification of all clones of operations over a finite set is one of the goals at the heart of universal algebra. In 1921 Post provided a full description of the lattice of all clones over a two-element set. However, over the following years it became clear that a similar classification is hardly within reach even for clones over three-element sets: in 1959 Janov and Mučnik proved that there exists a continuum of clones over a k-element set for every k > 2. Subsequent research in universal algebra therefore focused on understanding particular aspects of clone lattices over finite domains. Remarkable results in this direction are the descriptions of the maximal and the minimal clones. One might still hope to classify all operation clones on finite domains up to some equivalence relation such that equivalent clones share many of the properties that are of interest in universal algebra. In a recent turn of events, a weakening of the notion of clone homomorphism was introduced: a minor-preserving map from a clone C to a clone D is a map which preserves arities and composition with projections. The minor-equivalence relation on clones over finite sets gained importance both in universal algebra and in computer science: minor-equivalent clones satisfy the same set of identities of the form f(x_1,...,x_n) = g(y_1,...,y_m), also known as minor identities. Moreover, it was proved that the complexity of the CSP of a finite structure A only depends on the set of minor identities satisfied by the polymorphism clone of A. Throughout this dissertation we focus on the poset that arises by considering clones over finite sets with the following order: we write C ≤_m D if there exists a minor-preserving map from C to D. It has been proved that ≤_m is a preorder; we call the poset arising from ≤_m the pp-constructability poset. We initiate a systematic study of the pp-constructability poset. To this end, we distinguish two qualitatively distinct cases: when considering clones over a finite set A, one can either bound the cardinality of A or not. We denote by P_n the pp-constructability poset restricted to clones over a set A with |A| = n, and by P_{fin} we denote the whole pp-constructability poset, i.e., we only require A to be finite. First, we prove that P_{fin} is a semilattice and that it has no atoms. Moreover, we provide a complete description of P_2 and describe a significant part of P_3: we prove that P_3 has exactly three submaximal elements and present a full description of the ideal generated by one of these submaximal elements. As a byproduct, we prove that there are only countably many clones of self-dual operations over {0,1,2} up to minor-equivalence.
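
    For a concrete illustration (an example added here, not taken from the abstract): a minor of an operation is obtained by composing it with projections, i.e., by identifying, permuting, or duplicating variables, and a minor identity equates two such minors.

    % Illustrative only: a ternary operation f and a binary operation g
    % satisfy the minor identity below exactly when g is the minor of f
    % obtained by identifying the first and third variables.
    \[
        f(x, y, x) \approx g(x, y)
    \]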

    Planar Disjoint Paths, Treewidth, and Kernels

    Full text link
    In the Planar Disjoint Paths problem, one is given an undirected planar graph with a set of k vertex pairs (s_i, t_i) and the task is to find k pairwise vertex-disjoint paths such that the i-th path connects s_i to t_i. We study the problem through the lens of kernelization, aiming at efficiently reducing the input size in terms of a parameter. We show that Planar Disjoint Paths does not admit a polynomial kernel when parameterized by k unless coNP ⊆ NP/poly, resolving an open problem by [Bodlaender, Thomassé, Yeo, ESA'09]. Moreover, we rule out the existence of a polynomial Turing kernel unless the WK-hierarchy collapses. Our reduction carries over to the setting of edge-disjoint paths, where the kernelization status remained open even in general graphs. On the positive side, we present a polynomial kernel for Planar Disjoint Paths parameterized by k + tw, where tw denotes the treewidth of the input graph. As a consequence of both our results, we rule out the possibility of a polynomial-time (Turing) treewidth reduction to tw = k^{O(1)} under the same assumptions. To the best of our knowledge, this is the first hardness result of this kind. Finally, combining our kernel with the known techniques [Adler, Kolliopoulos, Krause, Lokshtanov, Saurabh, Thilikos, JCTB'17; Schrijver, SICOMP'94] yields an alternative (and arguably simpler) proof that Planar Disjoint Paths can be solved in time 2^{O(k^2)} · n^{O(1)}, matching the result of [Lokshtanov, Misra, Pilipczuk, Saurabh, Zehavi, STOC'20]. Comment: To appear at FOCS'23, 82 pages, 30 figures.
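
    To make the problem statement concrete, here is a minimal brute-force sketch (added for illustration only; it runs in exponential time, assumes the networkx library, and is unrelated to the paper's kernelization techniques).

    from itertools import product
    import networkx as nx

    def has_disjoint_paths(G, pairs):
        """Return True iff there are pairwise vertex-disjoint paths
        connecting s_i to t_i for every pair (s_i, t_i) in `pairs`."""
        # Enumerate all simple s_i-t_i paths for every pair (exponential).
        candidates = [list(nx.all_simple_paths(G, s, t)) for s, t in pairs]
        for choice in product(*candidates):
            used = [set(path) for path in choice]
            # The chosen paths must not share any vertex, endpoints included.
            if all(used[i].isdisjoint(used[j])
                   for i in range(len(used)) for j in range(i + 1, len(used))):
                return True
        return False

    # Example: in a 4-cycle, the pairs (0, 1) and (2, 3) can be connected disjointly.
    print(has_disjoint_paths(nx.cycle_graph(4), [(0, 1), (2, 3)]))  # True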

    Efficient parameterized algorithms on structured graphs

    Get PDF
    In classical complexity theory, the worst-case running time of an algorithm is typically stated solely as a function of the input size. In parameterized complexity the goal is to refine this analysis by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then exploits the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value. In the first part of this thesis, we carry this direction of research forward and investigate the influence of several parameters on the running times of well-known tractable problems. Several of the presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it already outperforms the best general algorithm for slightly non-trivial parameter values. As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms with respect to multiple parameters, each describing a different kind of structure. In the second part of this thesis, we explore how to combine such homogeneous structures into more general, heterogeneous structures. Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes whose running times match the homogeneous cases throughout.
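
    To make one of the parameters discussed above concrete, the following sketch (added for illustration, assuming networkx; it is not one of the thesis' algorithms) evaluates tree-depth by its standard exponential-time recursion: the empty graph has tree-depth 0, a disconnected graph takes the maximum over its components, and a connected graph G has tree-depth 1 + min over vertices v of td(G - v).

    import networkx as nx
    from functools import lru_cache

    def tree_depth(G):
        @lru_cache(maxsize=None)
        def td(vertices):
            if not vertices:
                return 0
            H = G.subgraph(vertices)
            components = list(nx.connected_components(H))
            if len(components) > 1:
                # Disconnected: take the maximum over the components.
                return max(td(frozenset(c)) for c in components)
            # Connected: delete the best vertex and recurse.
            return 1 + min(td(vertices - {v}) for v in vertices)
        return td(frozenset(G.nodes))

    # A path on four vertices has tree-depth 3.
    print(tree_depth(nx.path_graph(4)))  # 3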

    The Potts model and the independence polynomial: Uniqueness of the Gibbs measure and distributions of complex zeros

    Get PDF
    Part 1 of this dissertation studies the antiferromagnetic Potts model, which originates in statistical physics. In particular, the transition from multiple Gibbs measures to a unique Gibbs measure for the antiferromagnetic Potts model on the infinite regular tree is studied; this is called the uniqueness phase transition. A folklore conjecture about the parameter at which the uniqueness phase transition occurs is partly confirmed. The proof uses a geometric condition, which comes from analysing an associated dynamical system. Part 2 of this dissertation concerns zeros of the independence polynomial. The independence polynomial originates in statistical physics as the partition function of the hard-core model. The location of the complex zeros of the independence polynomial is related to phase transitions in terms of the analyticity of the free energy, and it plays an important role in the design of efficient algorithms for approximately computing evaluations of the independence polynomial. Chapter 5 directly relates the location of the complex zeros of the independence polynomial to the computational hardness of approximating evaluations of the independence polynomial. This is done by moreover relating the set of zeros of the independence polynomial to chaotic behaviour of a naturally associated family of rational functions: the occupation ratios. Chapter 6 studies boundedness of the zeros of the independence polynomial of tori for sequences of tori converging to the integer lattice. It is shown that the zeros are bounded for sequences of balanced tori, but unbounded for sequences of highly unbalanced tori.
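
    For orientation (a standard definition, added here rather than quoted from the dissertation), the independence polynomial of a graph G is the partition function of the hard-core model at fugacity lambda:

    \[
        Z_G(\lambda) \;=\; \sum_{\substack{I \subseteq V(G) \\ I \text{ independent}}} \lambda^{|I|} .
    \]
    % For a single edge uv this gives Z(\lambda) = 1 + 2\lambda.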

    Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

    Full text link
    The great behavioral heterogeneity observed between individuals with the same psychiatric disorder, and even within one individual over time, complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Data from existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer data, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof-of-concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture. Comment: PhD thesis cop

    Parameterized Graph Modification Beyond the Natural Parameter

    Get PDF

    Sum-of-squares representations for copositive matrices and independent sets in graphs

    Get PDF
    A polynomial optimization problem asks for minimizing a polynomial function (cost) subject to a set of constraints (rules) represented by polynomial inequalities and equations. Many hard problems in combinatorial optimization and applications in operations research can be naturally encoded as polynomial optimization problems. A common approach for addressing such computationally hard problems is to consider variations of the original problem that give an approximate solution and that can be solved efficiently. One such approach for attacking hard combinatorial problems and, more generally, polynomial optimization problems, is given by the so-called sum-of-squares approximations. This thesis focuses on studying whether these approximations find the optimal solution of the original problem. We investigate this question in two main settings: 1) copositive programs and 2) parameters dealing with independent sets in graphs. Among our main new results, we characterize the matrix sizes for which the sum-of-squares approximations are able to capture all copositive matrices. In addition, we show finite convergence of the sum-of-squares approximations for maximum independent sets in graphs, based on their continuous copositive reformulations. We also study sum-of-squares approximations for parameters asking for maximum balanced independent sets in bipartite graphs. In particular, we find connections with the Lovász theta number and we design eigenvalue bounds for several related parameters when the graphs satisfy some symmetry properties.
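
    For orientation (standard definitions, added here as a sketch rather than quoted from the thesis): a symmetric n x n matrix M is copositive when x^T M x is nonnegative on the nonnegative orthant, and one common sum-of-squares relaxation of this condition is the Parrilo-type hierarchy below.

    % Copositivity of a symmetric matrix M:
    \[
        x^{\top} M x \ge 0 \quad \text{for all } x \in \mathbb{R}^{n}_{\ge 0}.
    \]
    % Order-r sum-of-squares certificate (Parrilo-type): M is accepted at level r if
    \[
        \Big( \sum_{i=1}^{n} x_i^{2} \Big)^{\! r}
        \sum_{i,j=1}^{n} M_{ij}\, x_i^{2} x_j^{2}
        \quad \text{is a sum of squares of polynomials.}
    \]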

    A Structural Approach to the Design of Domain Specific Neural Network Architectures

    Full text link
    This is a master's thesis concerning the theoretical ideas of geometric deep learning. Geometric deep learning aims to provide a structured characterization of neural network architectures, specifically focused on the ideas of invariance and equivariance of data with respect to given transformations. This thesis aims to provide a theoretical evaluation of geometric deep learning, compiling theoretical results that characterize the properties of invariant neural networks with respect to learning performance. Comment: 94 pages and 16 figures. Upload of my Master's thesis. Not peer reviewed and potentially contains errors.
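
    As a toy illustration of the invariance idea mentioned above (an example added here, assuming numpy; it is unrelated to the thesis' specific constructions), a model that aggregates set elements by summation is invariant under permutations of its input.

    import numpy as np

    rng = np.random.default_rng(0)
    W_phi = rng.normal(size=(3, 8))   # per-element feature map
    w_rho = rng.normal(size=8)        # readout applied after pooling

    def model(X):
        """X has shape (set_size, 3); summing over rows makes the output
        independent of the order in which the elements are listed."""
        features = np.tanh(X @ W_phi)   # apply the feature map to every element
        pooled = features.sum(axis=0)   # permutation-invariant pooling
        return pooled @ w_rho

    X = rng.normal(size=(5, 3))
    perm = rng.permutation(5)
    print(np.isclose(model(X), model(X[perm])))  # True: the output is invariant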
