
    A Statistical Model for Predicting Generalization in Few-Shot Classification

    Full text link
    The estimation of the generalization error of classifiers often relies on a validation set. Such a set is rarely available in few-shot learning scenarios, a shortcoming that is largely overlooked in the field. In these scenarios, it is common to rely on features extracted from pre-trained neural networks combined with distance-based classifiers such as nearest class mean. In this work, we introduce a Gaussian model of the feature distribution. By estimating the parameters of this model, we are able to predict the generalization error on new classification tasks with few samples. We observe that accurate estimates of the distances between class-conditional densities are the key to accurate estimates of the generalization performance. Therefore, we propose an unbiased estimator for these distances and integrate it into our numerical analysis. We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy in few-shot settings.
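
    As a rough illustration of the idea, the sketch below fits a spherical Gaussian model to few-shot features, corrects the naive squared distance between class means for sampling noise, and turns the corrected distance into a predicted error for a nearest-class-mean classifier. The bias correction and the error formula are simplifying assumptions made for this sketch, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import norm

def corrected_squared_distance(xa, xb):
    """Estimate the squared distance between two class means from few samples.

    The naive estimate ||mean(xa) - mean(xb)||^2 is biased upwards by the
    sampling noise of each empirical mean; subtracting the variance of the
    empirical means removes that bias (an illustrative correction, not the
    paper's exact estimator).
    """
    mu_a, mu_b = xa.mean(axis=0), xb.mean(axis=0)
    naive = np.sum((mu_a - mu_b) ** 2)
    bias = xa.var(axis=0, ddof=1).sum() / len(xa) + xb.var(axis=0, ddof=1).sum() / len(xb)
    return naive - bias

def predicted_error_two_way(xa, xb, sigma2=1.0):
    """Predicted error of a nearest-class-mean classifier for two classes,
    assuming spherical Gaussian class-conditional densities with common
    variance sigma2 (a simplifying assumption for this sketch)."""
    d = np.sqrt(max(corrected_squared_distance(xa, xb), 1e-12))
    # For two spherical Gaussians, the error of the mid-point rule is Phi(-d / (2 sigma)).
    return norm.cdf(-d / (2.0 * np.sqrt(sigma2)))

# Toy usage: 5-shot features for two classes in a 64-dimensional feature space.
rng = np.random.default_rng(0)
xa = rng.normal(loc=0.0, scale=1.0, size=(5, 64))
xb = rng.normal(loc=0.5, scale=1.0, size=(5, 64))
print(predicted_error_two_way(xa, xb))
```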

    Traitement et apprentissage des réseaux de neurones profonds sur puce (On-Chip Processing and Learning of Deep Neural Networks)

    No full text
    In the field of machine learning, deep neural networks have become the inescapable reference for a very large number of problems. These systems are made of an assembly of layers, performing elementary operations, and using a large number of tunable variables. Using data available during a learning phase, these variables are adjusted such that the neural network addresses the given task. It is then possible to process new data. To achieve state-of-the-art performance, in many cases these methods rely on a very large number of parameters, and thus incur large memory and computational costs. Therefore, they are often ill-suited to hardware implementation on resource-constrained systems. Moreover, the learning process requires reusing the training data several times, making it difficult to adapt to scenarios where new information appears on the fly. In this thesis, we are first interested in methods that reduce the computational and memory impact of deep neural networks. Secondly, we propose techniques for learning on the fly, in an embedded context.

    The New Small Wheel

    No full text
    The E-Link is a serial link; support for it has to be added to the firmware in order to allow communication and configuration through it. In this step we have to integrate the E-Link: the data must be encoded using 8b/10b encoding and serialized before being sent out through the E-Link. We use the Ethernet FIFO as the source of data, which will be used to communicate with the L1DDC (Level-1 Data Driver Card).
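
    A minimal Python sketch of this data path is given below: bytes are drained from a FIFO, passed through an 8b/10b encoder and shifted out bit by bit, as they would be on the serial E-Link. The encode_8b10b function here is only a placeholder; a real encoder uses the standard 5b/6b and 3b/4b code tables and tracks running disparity.

```python
from collections import deque

def encode_8b10b(byte, running_disparity):
    """Placeholder for a real 8b/10b encoder.

    A real implementation maps each 8-bit value to a 10-bit code word chosen
    so that the running disparity stays balanced. Here the byte is simply
    zero-extended to 10 bits, which keeps the sketch runnable but is NOT
    DC-balanced.
    """
    return byte & 0xFF, running_disparity

def serialize_fifo(fifo):
    """Drain an Ethernet-style FIFO, 8b/10b-encode each byte and emit the
    bits MSB-first, as they would be shifted out on the serial link."""
    bits = []
    disparity = -1
    while fifo:
        code, disparity = encode_8b10b(fifo.popleft(), disparity)
        bits.extend((code >> i) & 1 for i in range(9, -1, -1))
    return bits

# Toy usage: a few bytes queued in the FIFO, destined for the L1DDC.
fifo = deque([0xBC, 0x3C, 0xA5])
print(serialize_fifo(fifo))
```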

    Sparse Clustered Neural Networks for the Assignment Problem

    No full text
    The linear assignment problem is a fundamental combinatorial problem and a classical linear programming problem. It consists of assigning agents to tasks on a one-to-one basis, while minimizing the total assignment cost. The assignment problem appears recurrently in major applications involving optimal decision-making. However, the use of classical solving methods for large-size problems is increasingly prohibitive, as it requires high computation and processing cost. In this paper, a biologically inspired algorithm using an artificial neural network (ANN) is proposed. The artificial neural network model involved in this contribution is a sparse clustered neural network (SCN), which is a generalization of the Palm-Willshaw neural network. The presented algorithm provides a lower complexity compared to the classically used Hungarian algorithm and allows parallel computation, at the cost of a fair approximation of the optimal assignment. Illustrative applications through practical examples are given for analysis and evaluation purposes.
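
    For reference, the exact baseline mentioned in the abstract (the Hungarian algorithm) is readily available in SciPy; the SCN approach trades this exactness for lower complexity and parallel computation. A minimal example of the exact solver on a small cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost of assigning each of 4 agents (rows) to each of 4 tasks (columns).
cost = np.array([
    [4, 1, 3, 2],
    [2, 0, 5, 3],
    [3, 2, 2, 4],
    [4, 2, 3, 0],
])

rows, cols = linear_sum_assignment(cost)  # Hungarian-style exact solver
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```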

    ThriftyNets: Convolutional Neural Networks with Tiny Parameter Budget

    No full text
    Deep neural networks are state-of-the-art in a large number of challenges in machine learning. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations are performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network at its maximum, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to a maximal parameter factorization. In complement, normalization, non-linearities, downsampling and shortcuts ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40k parameters in total, 74.3% on CIFAR-100 with less than 600k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15M parameters. However, the proposed method typically requires more computations than existing counterparts.
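
    The core idea of reusing a single convolution can be illustrated with a short PyTorch sketch. This is only an approximation of the principle described above (a shared convolution applied repeatedly, with per-step normalization, a non-linearity, periodic downsampling and a shortcut), not the authors' exact ThriftyNet architecture; the channel count, number of iterations and pooling schedule are arbitrary choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveConvNet(nn.Module):
    """Minimal sketch of the 'one convolution used recursively' idea."""

    def __init__(self, channels=64, iterations=12, num_classes=10, pool_every=4):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.shared = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norms = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(iterations)])
        self.head = nn.Linear(channels, num_classes)
        self.iterations = iterations
        self.pool_every = pool_every

    def forward(self, x):
        h = self.embed(x)
        for t in range(self.iterations):
            # Same convolution weights at every step; per-step normalization.
            h = h + F.relu(self.norms[t](self.shared(h)))
            if (t + 1) % self.pool_every == 0:
                h = F.max_pool2d(h, 2)  # periodic downsampling
        return self.head(h.mean(dim=(2, 3)))  # global average pooling

# Toy usage on a CIFAR-sized input.
model = RecursiveConvNet()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```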

    Quantized Guided Pruning for Efficient Hardware Implementations of Deep Neural Networks

    No full text

    Finding All Matches in a Database using Binary Neural Networks

    No full text
    The most efficient architectures of associative memories are based on binary neural networks. For example, Sparse Clustered Networks (SCNs) are able to achieve almost optimal memory efficiency while providing robust indexation of pieces of information through cliques in a neural network. In the canonical formulation of the associative memory problem, the unique stored message matching a given input probe is to be retrieved. In this paper, we focus on the more general problem of finding all messages matching the given probe. We consider real datasets in which many different messages can match a given probe, something that is unlikely with uniformly distributed messages, since such messages rarely share large common parts with one another. Namely, we implement a crossword dictionary containing 8-letter English words, and a chess endgame dataset, using associative memories based on binary neural networks. We explain how to adapt the SCN architecture to these challenging datasets and introduce a backtracking procedure to retrieve all completions of the given input. We assess the performance of the proposed method using different measures and discuss the importance of its parameters.
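
    The clique-plus-backtracking idea can be sketched in a few lines of Python. The code below is a simplified illustration, not the paper's SCN implementation: messages are stored as cliques over (position, symbol) units in a binary connection set, and a backtracking search enumerates every completion of a partial probe that forms a full clique. A toy 4-letter vocabulary stands in for the 8-letter crossword dictionary.

```python
from itertools import combinations

def store(messages):
    """Build a binary 'clique' memory: an edge links two (position, symbol)
    units whenever some stored message contains both."""
    edges = set()
    for msg in messages:
        for u, v in combinations(enumerate(msg), 2):
            edges.add((u, v))
            edges.add((v, u))
    return edges

def complete(probe, edges, alphabet):
    """Return every completion of a partial probe (None = unknown position)
    that forms a full clique in the memory, via backtracking with pruning."""
    results = []

    def backtrack(pos, chosen):
        if pos == len(probe):
            results.append("".join(sym for _, sym in chosen))
            return
        candidates = [probe[pos]] if probe[pos] is not None else alphabet
        for sym in candidates:
            unit = (pos, sym)
            # Prune: the new unit must be connected to every unit chosen so far.
            if all((unit, other) in edges for other in chosen):
                backtrack(pos + 1, chosen + [unit])

    backtrack(0, [])
    return results

# Toy usage: a tiny 4-letter "dictionary" instead of the paper's 8-letter one.
edges = store(["word", "ward", "wood", "cord"])
print(complete(["w", None, "r", "d"], edges, "abcdefghijklmnopqrstuvwxyz"))
# -> ['ward', 'word']
```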