65 research outputs found

    Weak and strong fillability of higher dimensional contact manifolds

    For contact manifolds in dimension three, the notions of weak and strong symplectic fillability and tightness are all known to be inequivalent. We extend these facts to higher dimensions: in particular, we define a natural generalization of weak fillings and prove that it is indeed weaker (at least in dimension five), while also being obstructed by all known manifestations of "overtwistedness". We also find the first examples of contact manifolds in all dimensions that are not symplectically fillable but also cannot be called overtwisted in any reasonable sense. These depend on a higher-dimensional analogue of Giroux torsion, which we define via the existence in all dimensions of exact symplectic manifolds with disconnected contact boundary.
    Comment: 68 pages, 5 figures. v2: some attributions clarified and other minor edits. v3: exposition improved following the referee's comments. Published by Invent. Math.

    Families of contact 3-manifolds with arbitrarily large Stein fillings

    We show that there are vast families of contact 3-manifolds, each member of which admits infinitely many Stein fillings with arbitrarily large Euler characteristics and arbitrarily small signatures, which disproves a conjecture of Stipsicz and Ozbagci. To produce our examples, we set up a framework that generalizes the construction of Stein structures on allowable Lefschetz fibrations over the 2-disk to those over any orientable base surface, along with the construction of contact structures via open books on 3-manifolds to the spinal open books introduced in [24].
    Comment: 36 pages, 9 figures, with an appendix by Samuel Lisi and Chris Wendl.

    The Singular Value Decomposition, Applications and Beyond

    The singular value decomposition (SVD) is not only a classical theory in matrix computation and analysis but also a powerful tool in machine learning and modern data analysis. In this tutorial we first review the basic notion of the SVD and then show its central role in matrix analysis. Using majorization theory, we consider variational principles of singular values and eigenvalues. Building on the SVD and the theory of symmetric gauge functions, we discuss unitarily invariant norms, which are then used to formulate general results for matrix low-rank approximation. We study the subdifferentials of unitarily invariant norms. These results are potentially useful in many machine learning problems such as matrix completion and matrix data classification. Finally, we discuss matrix low-rank approximation and its recent developments, such as randomized SVD, approximate matrix multiplication, CUR decomposition, and Nyström approximation. Randomized algorithms are important approaches to large-scale SVD as well as fast matrix computations.
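    As a concrete illustration of two topics this tutorial covers, the sketch below shows best rank-k approximation via the truncated SVD (Eckart-Young-Mirsky) and a basic randomized SVD of the Halko-Martinsson-Tropp flavour. It is a minimal NumPy sketch; the function names, oversampling, and power-iteration choices are ours, not taken from the paper.

```python
import numpy as np

def truncated_svd_approx(A, k):
    """Best rank-k approximation of A in any unitarily invariant norm
    (Eckart-Young-Mirsky), obtained from the full SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def randomized_svd(A, k, oversample=10, n_iter=2, seed=None):
    """Basic randomized SVD sketch: sample the range of A with a random
    test matrix, orthonormalize, then take the SVD of the small projection."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega                                     # sample the range of A
    for _ in range(n_iter):                           # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for the sampled range
    B = Q.T @ A                                       # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    A = np.random.default_rng(0).standard_normal((500, 80))
    A_k = truncated_svd_approx(A, 10)
    U, s, Vt = randomized_svd(A, 10, seed=0)
    print(np.linalg.norm(A - A_k, "fro"),
          np.linalg.norm(A - U @ np.diag(s) @ Vt, "fro"))
```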

    Estimating quantum chromatic numbers

    We develop further the new versions of quantum chromatic numbers of graphs introduced by the first and fourth authors. We prove that the problem of computing the commuting quantum chromatic number of a graph is solvable by an SDP algorithm and describe a hierarchy of variants of the commuting quantum chromatic number which converge to it. We introduce the tracial rank of a graph, a parameter that gives a lower bound for the commuting quantum chromatic number and parallels the projective rank, and prove that it is multiplicative. We describe the tracial rank, the projective rank and the fractional chromatic number in a unified manner that clarifies their connection with the commuting quantum chromatic number, the quantum chromatic number and the classical chromatic number, respectively. Finally, we present a new SDP algorithm that yields a parameter larger than the Lovász number and is yet a lower bound for the tracial rank of the graph. We determine the precise value of the tracial rank of an odd cycle.
    Comment: 34 pages; v2 has improved presentation following the referees' comments; published version.
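    For background on the SDP formulations the abstract refers to, here is a minimal cvxpy sketch of the classical Lovász theta SDP, i.e. maximize the sum of the entries of X subject to tr(X) = 1, X_ij = 0 on edges, and X positive semidefinite. This is the standard ϑ(G) only, not the authors' new algorithm or their hierarchy, and the function name is ours.

```python
import cvxpy as cp

def lovasz_theta(n, edges):
    """Classical Lovász theta SDP:
        maximize  sum of entries of X
        s.t.      tr(X) = 1, X[i, j] = 0 for every edge {i, j}, X PSD."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for (i, j) in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return problem.value

if __name__ == "__main__":
    # 5-cycle C5: theta(C5) = sqrt(5) ~ 2.236
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(lovasz_theta(5, edges))
```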

    Subgroup discovery for structured target concepts

    The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data, i.e., named sub-populations, whose behaviour with respect to a specified target concept is exceptional when compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness which improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups for i.i.d. target concepts with virtually any kind of structure. Importantly, our framework additionally provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. For use within kernelised subgroup discovery, but also with any other kind of kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of the random walks, and we also propose meaningful structure-aware vertex labels to exploit this new capability. With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately re-define it as a kernel method.
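    To make the basic subgroup discovery setting concrete, the following self-contained Python sketch scores subgroups described by a single condition "attribute == value" with a standard mean-shift quality function. It illustrates only the classical framework, not the representative, graph-based, or kernelised variants contributed by the thesis; the quality function, data, and names are our own illustrative assumptions.

```python
import numpy as np

def mean_shift_quality(target, mask):
    """Standard mean-shift quality: sqrt(coverage) * |subgroup mean - overall mean|."""
    n = mask.sum()
    if n == 0:
        return 0.0
    return np.sqrt(n) * abs(target[mask].mean() - target.mean())

def best_single_condition_subgroup(data, target):
    """Exhaustively score subgroups described by one condition 'attribute == value'
    and return the best description together with its quality."""
    target = np.asarray(target, dtype=float)
    best_description, best_quality = None, 0.0
    for attribute, values in data.items():
        values = np.asarray(values)
        for v in np.unique(values):
            mask = values == v
            q = mean_shift_quality(target, mask)
            if q > best_quality:
                best_description, best_quality = (attribute, v), q
    return best_description, best_quality

if __name__ == "__main__":
    # Toy dataset: the subgroup 'colour == red' has an exceptionally high target mean.
    data = {"colour": ["red", "red", "blue", "blue", "green", "red"],
            "size":   ["S",   "L",   "S",    "L",    "S",     "L"]}
    target = [0.9, 0.8, 0.1, 0.2, 0.15, 0.95]
    print(best_single_condition_subgroup(data, target))
```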