    Beyond topological persistence: Starting from networks

    Persistent homology enables fast and computable comparison of topological objects. However, it is naturally limited to the analysis of topological spaces. We extend the theory of persistence, guaranteeing robustness and computability for significant data types such as simple graphs and quivers. We focus on categorical persistence functions that allow us to study, in full generality, strong kinds of connectedness such as clique communities, k-vertex and k-edge connectedness, directly on simple graphs and monic coherent categories. Comment: arXiv admin note: text overlap with arXiv:1707.0967
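
    As a concrete illustration of a persistence function computed directly on a simple graph, the sketch below counts k-clique communities along an edge-weight filtration. This is a minimal example in the spirit of the abstract, not the paper's construction; the choice of feature (3-clique communities via networkx's k_clique_communities) and the toy graph are assumptions.

import networkx as nx
from networkx.algorithms.community import k_clique_communities

def sublevel_subgraph(G, t):
    """Subgraph of G on all vertices, keeping only edges of weight <= t."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from((u, v) for u, v, w in G.edges(data="weight") if w <= t)
    return H

def clique_community_persistence(G, k=3):
    """For each edge-weight threshold, count the k-clique communities of the sublevel subgraph."""
    thresholds = sorted({w for _, _, w in G.edges(data="weight")})
    return [(t, sum(1 for _ in k_clique_communities(sublevel_subgraph(G, t), k)))
            for t in thresholds]

if __name__ == "__main__":
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("a", "b", 1), ("b", "c", 1), ("a", "c", 2),   # first triangle closes at t = 2
        ("c", "d", 3), ("d", "e", 3), ("c", "e", 3),   # second triangle appears at t = 3
    ])
    print(clique_community_persistence(G, k=3))        # [(1, 0), (2, 1), (3, 2)]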

    Parametric machines: a fresh approach to architecture search

    Using tools from category theory, we provide a framework where artificial neural networks, and their architectures, can be formally described. We first define the notion of machine in a general categorical context, and show how simple machines can be combined into more complex ones. We explore finite- and infinite-depth machines, which generalize neural networks and neural ordinary differential equations. Borrowing ideas from functional analysis and kernel methods, we build complete, normed, infinite-dimensional spaces of machines, and discuss how to find optimal architectures and parameters -- within those spaces -- to solve a given computational problem. In our numerical experiments, these kernel-inspired networks can outperform classical neural networks when the training dataset is small. Comment: 31 pages, 4 figures
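
    One way to read "finite-depth machines generalise neural networks" is to view a depth-n feedforward network as a nilpotent operator on the direct sum of its layer spaces, whose forward pass is the unique solution of a fixed-point equation. The sketch below makes that reading concrete; the layer widths, nonlinearity, and names are assumptions, and this is not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                                   # hypothetical layer widths
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def machine(x):
    """Nilpotent operator on the direct sum of layer spaces: sends layer i to layer i+1."""
    out = [np.zeros(s) for s in sizes]
    for i, W in enumerate(weights):
        out[i + 1] = np.tanh(W @ x[i])
    return out

def forward(inp):
    """Solve x = x0 + machine(x) by iteration; it stabilises after len(sizes) - 1 steps."""
    x0 = [np.zeros(s) for s in sizes]
    x0[0] = inp
    x = x0
    for _ in range(len(sizes) - 1):
        x = [a + b for a, b in zip(x0, machine(x))]
    return x[-1]                                       # activations of the final layer

print(forward(rng.normal(size=4)).shape)               # (2,)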

    Steady and ranging sets in graph persistence

    Generalised persistence functions (gp-functions) are defined on (ℝ, ≤)-indexed diagrams in a given category. A sufficient condition for stability is also introduced. In the category of graphs, a standard way of producing gp-functions is proposed: steady and ranging sets for a given feature. The example of steady and ranging hubs is studied in depth; their meaning is investigated in three concrete networks.
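
    To fix ideas, the sketch below computes steady and ranging sets along an edge-weight filtration for a toy "hub" feature. Both the hub definition used here (a vertex whose degree exceeds the mean degree of its neighbours) and the reading of "ranging" (the feature holds at some threshold at or below u and at some threshold at or above v) are illustrative assumptions and may differ in detail from the paper.

import networkx as nx

def sublevel_subgraph(G, t):
    """Subgraph of G on all vertices, keeping only edges of weight <= t."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from((a, b) for a, b, w in G.edges(data="weight") if w <= t)
    return H

def hubs(H):
    """Toy feature: vertices whose degree exceeds the mean degree of their neighbours."""
    return {v for v in H
            if H.degree(v) > 0
            and H.degree(v) > sum(H.degree(u) for u in H[v]) / len(H[v])}

def steady_set(G, thresholds, u, v):
    """Vertices that are hubs at every threshold t with u <= t <= v."""
    steps = [t for t in thresholds if u <= t <= v]
    return set(G.nodes).intersection(*(hubs(sublevel_subgraph(G, t)) for t in steps))

def ranging_set(G, thresholds, u, v):
    """Vertices that are hubs at some threshold <= u and at some threshold >= v."""
    before = set().union(*(hubs(sublevel_subgraph(G, t)) for t in thresholds if t <= u))
    after = set().union(*(hubs(sublevel_subgraph(G, t)) for t in thresholds if t >= v))
    return before & after

# the corresponding gp-function values on the interval [u, v] would then be
# len(steady_set(G, thresholds, u, v)) and len(ranging_set(G, thresholds, u, v))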

    Persistence-based operators in machine learning

    Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines that are fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow users to easily inject knowledge about symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
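
    The sketch below is an illustrative stand-in for such a layer, not the construction defined in the paper: a permutation-invariant pooling that sorts per-point scores and mixes the order statistics with learnable weights, so the symmetry (invariance under reordering of the input points) is built in and the module composes with ordinary PyTorch architectures. All names and shapes are assumptions.

import torch
import torch.nn as nn

class SortedPooling(nn.Module):
    """Permutation-invariant pooling: sort per-point scores, mix order statistics."""
    def __init__(self, num_points: int, out_features: int):
        super().__init__()
        # one learnable weight per order statistic and output feature
        self.mix = nn.Linear(num_points, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points) per-point scores; sorting removes the ordering
        sorted_x, _ = torch.sort(x, dim=-1, descending=True)
        return self.mix(sorted_x)

if __name__ == "__main__":
    points = torch.randn(8, 100, 3)                     # batch of 100-point clouds
    score = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
    layer = SortedPooling(num_points=100, out_features=32)
    out = layer(score(points).squeeze(-1))              # (8, 32), permutation-invariant
    print(out.shape)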

    Towards a topological-geometrical theory of group equivariant non-expansive operators for data analysis and machine learning

    The aim of this paper is to provide a general mathematical framework for group equivariance in the machine learning context. The framework builds on a synergy between persistent homology and the theory of group actions. We define group-equivariant non-expansive operators (GENEOs), which are maps between function spaces associated with groups of transformations. We study the topological and metric properties of the space of GENEOs to evaluate their approximating power and to set the basis for general strategies to initialise and compose operators. We begin by defining suitable pseudo-metrics for the function spaces, the equivariance groups, and the set of non-expansive operators. Based on these pseudo-metrics, we prove that the space of GENEOs is compact and convex, under the assumption that the function spaces are compact and convex. These results provide fundamental guarantees from a machine learning perspective. We show examples on the MNIST and fashion-MNIST datasets. By considering isometry-equivariant non-expansive operators, we describe a simple strategy to select and sample operators, and show how the selected and sampled operators can be used to perform both classical metric learning and an effective initialisation of the kernels of a convolutional neural network. Comment: Added references. Extended Section 7. Added 3 figures. Corrected typos. 42 pages, 7 figures.
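
    For a minimal concrete GENEO, consider signals sampled on a discrete circle and the group of cyclic translations: circular convolution with a non-negative kernel whose entries sum to at most 1 commutes with translations (equivariance) and is non-expansive in the sup norm. The kernel and signal sizes below are illustrative; this is a sketch of the general notion, not code from the paper.

import numpy as np

def geneo(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Circular convolution with a kernel k >= 0 whose entries sum to at most 1."""
    assert np.all(kernel >= 0) and kernel.sum() <= 1 + 1e-12
    n, m = len(signal), len(kernel)
    return np.array([sum(kernel[j] * signal[(i - j) % n] for j in range(m))
                     for i in range(n)])

rng = np.random.default_rng(1)
f, g = rng.normal(size=32), rng.normal(size=32)
k = np.array([0.25, 0.5, 0.25])

# non-expansiveness: the output distance never exceeds the input distance
assert np.abs(geneo(f, k) - geneo(g, k)).max() <= np.abs(f - g).max() + 1e-12

# equivariance: translating the input translates the output
shift = 5
assert np.allclose(geneo(np.roll(f, shift), k), np.roll(geneo(f, k), shift))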