
    Sparsity in Variational Autoencoders

    In high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes naturally sparse. We discuss this known but controversial phenomenon, sometimes referred to as overpruning to emphasize the under-use of the model capacity. In fact, it is an important form of self-regularization, with all the typical benefits associated with sparsity: it forces the model to focus on the really important features, greatly reducing the risk of overfitting. In particular, it provides a major methodological guide for the correct tuning of the model capacity, progressively augmenting it until sparsity appears, or conversely reducing the dimension of the network by removing links to zeroed-out neurons. The degree of sparsity crucially depends on the network architecture: for instance, convolutional networks typically show less sparsity, likely due to the tighter relation of features to different spatial regions of the input.
    Comment: An Extended Abstract of this survey will be presented at the 1st International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI' 2019), 20-22 March 2019, Barcelona, Spain
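
    The tuning recipe sketched above hinges on detecting which latent units the encoder actually uses. Below is a minimal diagnostic sketch, assuming a standard Gaussian-prior VAE whose encoder returns per-dimension means and log-variances; the threshold and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def active_latent_dimensions(mu, logvar, threshold=0.01):
    """Estimate which latent dimensions a Gaussian VAE actually uses.

    mu, logvar: arrays of shape (num_samples, latent_dim) produced by the
    encoder on a held-out batch. A dimension whose average KL divergence
    from the N(0, 1) prior is near zero carries no information and is a
    candidate for pruning; many such dimensions suggest over-capacity.
    """
    # Per-sample, per-dimension KL( N(mu, sigma^2) || N(0, 1) )
    kl = 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0)
    mean_kl = kl.mean(axis=0)        # average over samples
    active = mean_kl > threshold     # boolean mask of "alive" dimensions
    return active, mean_kl

# Toy usage: with 64 latent dimensions, a sparse model may keep only a few active.
mu = np.random.randn(512, 64) * np.concatenate([np.ones(8), np.zeros(56)])
logvar = np.concatenate([np.full(8, -1.0), np.zeros(56)]) * np.ones((512, 64))
mask, kl_per_dim = active_latent_dimensions(mu, logvar)
print(f"{mask.sum()} of {mask.size} latent dimensions are active")
```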

    Smart matching

    One of the most annoying aspects of the formalization of mathematics is the need to transform notions to match a given, existing result. These transformations, often based on substantial background knowledge of the given scientific domain (mostly expressed in the form of equalities or isomorphisms), are usually implicit in the mathematical discourse, and it would be highly desirable to obtain a similar behavior in interactive provers. The paper describes the superposition-based implementation of this feature inside the Matita interactive theorem prover, focusing in particular on the so-called smart application tactic, which supports smart matching between a goal and a given result.
    Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010
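
    As a loose illustration of the gap such matching has to bridge (written in Lean 4 syntax purely as an analogy, not Matita's language): a plain application fails when the goal and the lemma differ only by a background equality, and the user must supply the rewriting step that a smart application tactic would find automatically.

```lean
-- Hypothetical analogue in Lean 4 (not Matita syntax): the lemma speaks of
-- `0 + n`, the goal of `m`, so a direct `exact h m` does not typecheck.
-- A "smart" application would discover the background equality by itself;
-- here we supply it manually.
example (P : Nat → Prop) (h : ∀ n, P (0 + n)) (m : Nat) : P m := by
  rw [← Nat.zero_add m]  -- bridge the gap with the known equality 0 + n = n
  exact h m
```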

    Variational Autoencoders and the Variable Collapse Phenomenon

    In Variational Autoencoders, when working in high-dimensional latent spaces, there is a natural collapse of latent variables with minor significance, which are altogether neglected by the generator. We discuss this known but controversial phenomenon, sometimes referred to as overpruning, to emphasize the under-use of the model capacity. In fact, it is an important form of self-regularization, with all the typical benefits associated with sparsity: it forces the model to focus on the really important features, enhancing their disentanglement and reducing the risk of overfitting. In this article, we discuss the issue, survey past works, and focus in particular on the exploitation of the variable collapse phenomenon as a methodological guideline for the correct tuning of the model capacity and of the loss function parameters.
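
    As a rough sketch of how the collapse phenomenon can drive capacity tuning, the loop below shrinks the latent space until (almost) no dimension is collapsed. It is an illustrative recipe, not the procedure from the article; `train_vae` and `count_active_dims` are assumed helpers (the latter could be based on the per-dimension KL diagnostic sketched earlier).

```python
def tune_latent_capacity(train_vae, count_active_dims, start_dim=128, min_dim=2):
    """Illustrative capacity-tuning loop driven by variable collapse.

    `train_vae(latent_dim)` is assumed to train a model and return it;
    `count_active_dims(model)` is assumed to report how many latent
    dimensions carry non-negligible KL. We retrain with roughly the
    capacity the model actually uses, until no variable is collapsed.
    """
    dim = start_dim
    while dim > min_dim:
        model = train_vae(dim)
        active = count_active_dims(model)
        if active >= dim:            # no collapsed variables: capacity is tight
            return model, dim
        dim = max(min_dim, active)   # retry with roughly the used capacity
    return train_vae(min_dim), min_dim
```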

    Comparing the latent space of generative models

    Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data. Many recent works have been devoted to the exploration of the latent space of specific models, mostly focused on studying how features are disentangled and how trajectories producing desired alterations of data in the visible space can be found. In this work we address the more general problem of comparing the latent spaces of different models, looking for transformations between them. We confine the investigation to the familiar and widely investigated case of generative models for the data manifold of human faces. The surprising, preliminary result reported in this article is that (provided the models have not been taught or explicitly conceived to act differently) a simple linear mapping is enough to pass from one latent space to another while preserving most of the information.
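
    A minimal sketch of this kind of comparison, assuming one can encode the same images with two models to obtain paired latent codes Z1 and Z2; the function names and the variance-explained score are illustrative choices, not the paper's exact evaluation.

```python
import numpy as np

def fit_linear_map(Z1, Z2):
    """Fit a linear map (with bias) from latent space 1 to latent space 2.

    Z1, Z2: arrays of shape (num_samples, d1) and (num_samples, d2) holding
    the codes of the *same* datapoints in the two latent spaces. Returns the
    map and the fraction of variance of Z2 it explains, a rough measure of
    how much information the linear transformation preserves.
    """
    X = np.hstack([Z1, np.ones((Z1.shape[0], 1))])    # add bias column
    W, *_ = np.linalg.lstsq(X, Z2, rcond=None)        # least-squares fit
    Z2_hat = X @ W
    residual = np.sum((Z2 - Z2_hat) ** 2)
    total = np.sum((Z2 - Z2.mean(axis=0)) ** 2)
    return W, 1.0 - residual / total                  # R^2-style score

# Toy usage with synthetic "paired" codes (real codes would come from encoders).
rng = np.random.default_rng(0)
Z1 = rng.normal(size=(1000, 64))
Z2 = Z1 @ rng.normal(size=(64, 32)) + 0.05 * rng.normal(size=(1000, 32))
_, score = fit_linear_map(Z1, Z2)
print(f"variance explained by the linear map: {score:.3f}")
```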

    A Web Interface for Matita

    This article describes a prototype implementation of a web interface for the Matita proof assistant. The interface supports all basic functionalities of the local Gtk interface, but takes advantage of the markup to enrich the document with several kinds of annotations or active elements. Annotations may have either a presentational/hypertextual nature, aimed at improving the quality of the proof script as a human-readable document, or a more semantic nature, aimed at helping the system in its processing of the script. The latter kind comprises information automatically generated by the proof assistant during previous compilations and stored to improve the performance of re-executing expensive operations like disambiguation or automation.
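
    Purely as a hypothetical sketch of that last idea (reusing expensive results from previous compilations); none of the names below come from Matita, and real results would have to be serializable.

```python
import hashlib
import json

class CompilationCache:
    """Hypothetical cache of expensive per-statement results.

    Results of costly steps (e.g. disambiguation or automation) are stored
    keyed by a hash of the statement text, so that re-executing an unchanged
    script can reuse them instead of recomputing.
    """

    def __init__(self, path="cache.json"):
        self.path = path
        try:
            with open(path) as f:
                self.entries = json.load(f)
        except FileNotFoundError:
            self.entries = {}

    def _key(self, statement: str) -> str:
        return hashlib.sha256(statement.encode()).hexdigest()

    def get_or_compute(self, statement: str, expensive_step):
        key = self._key(statement)
        if key not in self.entries:
            self.entries[key] = expensive_step(statement)  # assumed JSON-serializable
            with open(self.path, "w") as f:
                json.dump(self.entries, f)
        return self.entries[key]
```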

    Dissecting FLOPs Along Input Dimensions for GreenAI Cost Estimations

    The term GreenAI refers to a novel approach to Deep Learning that is more aware of the ecological impact and the computational efficiency of its methods. The promoters of GreenAI suggested the use of Floating Point Operations (FLOPs) as a measure of the computational cost of Neural Networks; however, that measure does not correlate well with the energy consumption of hardware equipped with massively parallel processing units like GPUs or TPUs. In this article, we propose a simple refinement of the formula used to compute floating point operations for convolutional layers, called alpha-FLOPs, which explains and corrects the traditional discrepancy across different layers and brings the estimate closer to reality. The notion of alpha-FLOPs relies on the crucial insight that, in the case of inputs with multiple dimensions, there is no reason to believe that the speedup offered by parallelism will be uniform along all the different axes.
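
    A minimal sketch of the quantity being refined, assuming a standard 2D convolution. The discounted variant below only illustrates the idea of non-uniform per-axis scaling; it is not the actual alpha-FLOPs formula, which the abstract does not state.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w):
    """Standard FLOPs count for a 2D convolution (multiply-add = 2 ops)."""
    return 2 * h_out * w_out * c_out * c_in * k_h * k_w

def discounted_conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w, alpha=0.5):
    """Illustrative (hypothetical) variant: discount the spatial factors,
    reflecting that parallel hardware scales better along some axes than
    others. The real alpha-FLOPs definition is given in the paper, not here."""
    return 2 * (h_out * w_out) ** alpha * c_out * c_in * k_h * k_w

# Example: a 3x3 convolution mapping 64 to 128 channels on a 56x56 feature map.
print(conv2d_flops(56, 56, 64, 128, 3, 3))            # naive count
print(discounted_conv2d_flops(56, 56, 64, 128, 3, 3))  # illustrative discount
```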