
    Groupoids and Faà di Bruno Formulae for Green functions in bialgebras of trees

    We prove a Faà di Bruno formula for the Green function in the bialgebra of P-trees, for any polynomial endofunctor P. The formula appears as the relative homotopy cardinality of an equivalence of groupoids. For suitable choices of P, the result also implies formulae for Green functions in bialgebras of graphs.
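
    For orientation (standard background, not a statement from the paper): the classical Faà di Bruno formula that such bialgebraic versions generalise gives the higher derivatives of a composite function,

    \[
    \frac{d^n}{dx^n} f(g(x)) \;=\; \sum_{\substack{m_1,\dots,m_n \ge 0 \\ m_1 + 2m_2 + \cdots + n m_n = n}} \frac{n!}{m_1!\, m_2! \cdots m_n!}\; f^{(m_1+\cdots+m_n)}(g(x)) \prod_{j=1}^{n} \left( \frac{g^{(j)}(x)}{j!} \right)^{m_j},
    \]

    where the sum runs over the partitions of n encoded by the multiplicities m_j. The paper's version can be read as recasting this combinatorics in terms of (homotopy cardinalities of) groupoids of P-trees rather than partitions.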

    Polynomial functors and trees

    We explore the relationship between polynomial functors and trees. In the first part we characterise trees as certain polynomial functors and obtain a completely formal but at the same time conceptual and explicit construction of two categories of rooted trees, whose main properties we describe in terms of some factorisation systems. The second category is the category Ω of Moerdijk and Weiss. Although the constructions are motivated and explained in terms of polynomial functors, they all amount to elementary manipulations with finite sets. The first part also includes an explicit construction of the free monad on a polynomial endofunctor, given in terms of trees. In the second part we describe polynomial endofunctors and monads as structures built from trees, characterising the images of several nerve functors from polynomial endofunctors and monads into presheaves on categories of trees. Polynomial endofunctors and monads over a base are characterised by a sheaf condition on categories of decorated trees. In the absolute case, one further condition is needed, a projectivity condition, which also serves to characterise polynomial endofunctors and monads among (coloured) collections and operads.
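
    For concreteness (standard background rather than a claim of the paper): a one-variable polynomial functor on sets is determined by a map of sets f : E → B and acts by

    \[
    P(X) \;=\; \coprod_{b \in B} X^{E_b}, \qquad E_b = f^{-1}(b),
    \]

    i.e. a coproduct of representables indexed by B. The polynomial endofunctors and monads over a base referred to above are given by similar diagrams of (finite) sets, and the categories of trees mentioned in the abstract are built from such data.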

    Learning Neural Graph Representations in Non-Euclidean Geometries

    The success of Deep Learning methods is heavily dependent on the choice of the data representation. For that reason, much of the actual effort goes into Representation Learning, which seeks to design preprocessing pipelines and data transformations that can support effective learning algorithms. The aim of Representation Learning is to facilitate the task of extracting useful information for classifiers and other predictor models. In this regard, graphs arise as a convenient data structure that serves as an intermediary representation in a wide range of problems. The predominant approach to working with graphs has been to embed them in a Euclidean space, due to the power and simplicity of this geometry. Nevertheless, data in many domains exhibit non-Euclidean features, making embeddings into Riemannian manifolds with a richer structure necessary. The choice of the metric space in which to embed the data imposes a geometric inductive bias, with a direct impact on the performance of the models. This thesis is about learning neural graph representations in non-Euclidean geometries and showcasing their applicability in different downstream tasks. We introduce a toolkit formed by different graph metrics with the goal of characterizing the topology of the data. In that way, we can choose a suitable target embedding space aligned to the shape of the dataset. By virtue of the geometric inductive bias provided by the structure of the non-Euclidean manifolds, neural models can achieve higher performance with a reduced parameter footprint.

    As a first step, we study graphs with hierarchical structures. We develop different techniques to derive hierarchical graphs from large label inventories. Noticing the capacity of hyperbolic spaces to represent tree-like arrangements, we incorporate this information into an NLP model through hyperbolic graph embeddings and showcase the higher performance that they enable. Second, we tackle the question of how to learn hierarchical representations suited for different downstream tasks. We introduce a model that jointly learns task-specific graph embeddings from a label inventory and performs classification in hyperbolic space. The model achieves state-of-the-art results on very fine-grained labels, with a remarkable reduction in parameter size.

    Next, we move to matrix manifolds to work on graphs with diverse structures and properties. We propose a general framework to implement the mathematical tools required to learn graph embeddings on symmetric spaces. These spaces are of particular interest given that they have a compound geometry that simultaneously contains Euclidean as well as hyperbolic subspaces, allowing them to automatically adapt to dissimilar features in the graph. We demonstrate a concrete implementation of the framework on Siegel spaces, showcasing their versatility on different tasks. Finally, we focus on multi-relational graphs. We devise the means to translate Euclidean and hyperbolic multi-relational graph embedding models into the space of symmetric positive definite (SPD) matrices. To do so, we develop gyrocalculus in this geometry and integrate it with the aforementioned framework.
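
    As a minimal sketch of the hyperbolic-embedding idea discussed above (illustrative only; the function and variable names are not from the thesis): graph embeddings in the Poincaré ball are typically trained against its geodesic distance, which has a simple closed form.

    import numpy as np

    def poincare_distance(x, y, eps=1e-9):
        # Geodesic distance between two points strictly inside the unit ball:
        # arccosh(1 + 2*||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))
        sq_diff = np.sum((x - y) ** 2)
        sq_x = np.sum(x ** 2)
        sq_y = np.sum(y ** 2)
        return np.arccosh(1.0 + 2.0 * sq_diff / ((1.0 - sq_x) * (1.0 - sq_y) + eps))

    # Distances grow rapidly near the boundary, which is what lets the ball
    # host tree-like graphs with low distortion.
    print(poincare_distance(np.array([0.0, 0.0]), np.array([0.9, 0.0])))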

    Quasi-Isometry Invariance of Group Splittings over Coarse Poincaré Duality Groups

    We show that if $G$ is a group of type $FP_{n+1}^{\mathbb{Z}_2}$ that is coarsely separated into three essential, coarsely disjoint, coarsely complementary components by a coarse $PD_n^{\mathbb{Z}_2}$ space $W$, then $W$ is at finite Hausdorff distance from a subgroup $H$ of $G$; moreover, $G$ splits over a subgroup commensurable to a subgroup of $H$. We use this to deduce that splittings of the form $G = A *_H B$, where $G$ is of type $FP_{n+1}^{\mathbb{Z}_2}$ and $H$ is a coarse $PD_n^{\mathbb{Z}_2}$ group such that both $|\mathrm{Comm}_A(H) : H|$ and $|\mathrm{Comm}_B(H) : H|$ are greater than two, are invariant under quasi-isometry.
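
    For context (a standard definition, not part of the abstract): a group $H$ of type $FP$ over $\mathbb{Z}_2$ is a $PD_n^{\mathbb{Z}_2}$ group when its cohomology with group-ring coefficients is concentrated in degree $n$,

    \[
    H^k\bigl(H;\, \mathbb{Z}_2 H\bigr) \;\cong\;
    \begin{cases}
    \mathbb{Z}_2, & k = n,\\
    0, & k \neq n,
    \end{cases}
    \]

    and the coarse $PD_n^{\mathbb{Z}_2}$ spaces appearing in the abstract are metric spaces satisfying a coarse analogue of this duality.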

    Hyperbolic Deep Neural Networks: A Survey

    Recently, there has been a rising surge of momentum for deep representation learning in hyperbolic spaces due to their high capacity for modeling data with hierarchical structure, such as knowledge graphs or synonym hierarchies. We refer to such models as hyperbolic deep neural networks in this paper. Such a hyperbolic neural architecture can potentially lead to drastically more compact models with much more physical interpretability than their counterparts in Euclidean space. To stimulate future research, this paper presents a coherent and comprehensive review of the literature on the neural components used in the construction of hyperbolic deep neural networks, as well as the generalization of the leading deep approaches to hyperbolic space. It also presents current applications across various machine learning tasks on several publicly available datasets, together with insightful observations, open questions, and promising future directions.
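
    As a sketch of the kind of neural building block such a survey covers (illustrative only; not code from the paper): Möbius addition, the Poincaré-ball analogue of vector addition on which many hyperbolic layers are built.

    import numpy as np

    def mobius_add(x, y):
        # Mobius addition in the Poincare ball of curvature -1; it reduces to
        # ordinary addition near the origin and keeps the result inside the ball.
        xy = np.dot(x, y)
        sq_x = np.dot(x, x)
        sq_y = np.dot(y, y)
        num = (1.0 + 2.0 * xy + sq_y) * x + (1.0 - sq_x) * y
        den = 1.0 + 2.0 * xy + sq_x * sq_y
        return num / den

    print(mobius_add(np.array([0.3, 0.1]), np.array([0.2, -0.4])))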