3 research outputs found

    Computational Advantages of Deep Prototype-Based Learning

    We present a deep prototype-based learning architecture which achieves performance competitive with a conventional, shallow prototype-based model, but at a fraction of the computational cost, especially with respect to memory requirements. As prototype-based classification and regression methods are typically plagued by the exploding number of prototypes needed to solve complex problems, this is an important step towards efficient prototype-based classification and regression. We demonstrate these claims by benchmarking our deep prototype-based model on the well-known MNIST dataset.
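
    The abstract does not give implementation details; as a rough illustration only, a prototype-based decision in a learned feature space can be reduced to a nearest-prototype rule, as in the following NumPy sketch (names, shapes, and values are assumptions, not the authors' code).

        # Minimal sketch (not the authors' implementation): nearest-prototype
        # classification in a learned feature space; names and shapes are illustrative.
        import numpy as np

        def prototype_predict(features, prototypes, prototype_labels):
            # features: (n_samples, d), e.g. outputs of a deep encoder
            # prototypes: (n_prototypes, d) learned prototype vectors
            # prototype_labels: (n_prototypes,) class label of each prototype
            dists = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
            nearest = dists.argmin(axis=1)        # index of the closest prototype
            return prototype_labels[nearest]      # winner-take-all class decision

        # Example: three prototypes in a 2-D feature space, one per class.
        protos = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
        labels = np.array([0, 1, 2])
        x = np.array([[0.2, -0.1], [4.8, 5.3]])
        print(prototype_predict(x, protos, labels))   # -> [0 1]

    Keeping a small set of prototypes in a compact deep feature space, rather than a large number of prototypes in the raw input space, is one way to read the memory saving the abstract refers to.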

    Using prototypes to improve convolutional networks interpretability

    We propose a method that allows the interpretation of the data representation obtained by a CNN by introducing prototypes into the feature space, which are later classified into a certain category. This way we can see how the feature space is structured in relation to the categories and the related task.
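
    As a hypothetical illustration of such an interpretation step (the encoder, the prototype set, and the category assignment below are assumptions, not the paper's API), one can inspect which prototypes, and hence which categories, lie closest to a sample in the feature space:

        # Hypothetical sketch: report the k prototypes nearest to a sample's
        # feature vector, with the category each prototype was classified into.
        import numpy as np

        def explain(sample_feature, prototypes, prototype_categories, k=3):
            dists = np.linalg.norm(prototypes - sample_feature, axis=1)
            order = dists.argsort()[:k]
            return [(int(i), prototype_categories[int(i)], float(dists[i])) for i in order]

        # Usage idea: feature = cnn_encoder(image)   # hypothetical feature extractor
        # explain(feature, P, P_categories) then shows how the neighbourhood of the
        # sample in the feature space is structured with respect to the categories.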

    Bio-inspired analysis of deep learning on not-so-big data using data-prototypes

    Deep artificial neural networks are feed-forward architectures capable of very impressive performance in diverse domains. Indeed, stacking multiple layers allows a hierarchical composition of local functions, providing efficient, compact mappings. Compared to the brain, however, such architectures are closer to a single pipeline and require huge amounts of data, while concrete cases for either human or machine learning systems are often restricted to not-so-big data sets. Furthermore, interpretability of the obtained results is a key issue: since deep learning applications are increasingly present in society, it is important that the underlying processes be accessible and understandable to everyone. In order to address these challenges, in this contribution we analyze how considering prototypes in a rather generalized sense (with respect to the state of the art) allows us to work reasonably well with small data sets while providing an interpretable view of the obtained results. Some mathematical interpretation of this proposal is discussed. Sensitivity to hyperparameters is a key issue for reproducible deep learning results, and is carefully considered in our methodology. Performance and limitations of the proposed setup are explored in detail, under different hyperparameter sets, in a way analogous to how biological experiments are conducted. We obtain a rather simple architecture, easy to explain, which, combined with a standard method, allows us to target both performance and interpretability.