85 research outputs found

    Multi-View Data Generation Without View Supervision

    The development of high-dimensional generative models has recently seen a surge of interest with the introduction of variational auto-encoders and generative adversarial networks. Different variants have been proposed in which the underlying latent space is structured, for example based on attributes describing the data to generate. We focus on the particular problem of generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. We therefore propose a generative model, and a conditional variant, built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a wide variety of views. Unlike many multi-view approaches, our model needs no supervision on the views, only on the content. Compared to other conditional generation approaches, which are mostly based on binary or categorical attributes, we make no such assumption about the factors of variation, so our model can be used on problems with a huge, potentially infinite, number of categories. We experiment with it on four image datasets, demonstrating the effectiveness of the model and its ability to generalize. Comment: Published as a conference paper at ICLR 201
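The content/view factorization described above can be sketched with a toy decoder. Everything here (the dimensions, and the fixed random linear map standing in for the learned generator) is illustrative, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative, not those used in the paper.
CONTENT_DIM, VIEW_DIM, IMG_DIM = 8, 4, 64

# Stand-in decoder: a fixed random linear map playing the role of the
# learned generator G(content, view) -> sample.
W = rng.normal(size=(CONTENT_DIM + VIEW_DIM, IMG_DIM))

def generate(content, view):
    """Decode one (content, view) pair into a flat 'image'."""
    z = np.concatenate([content, view])
    return np.tanh(z @ W)

# One content code per object, one view code per observation setting.
contents = rng.normal(size=(3, CONTENT_DIM))   # 3 objects
views = rng.normal(size=(5, VIEW_DIM))         # 5 views

# Generating every object under every view yields a 3x5 grid of samples:
# the object-by-view matrix typically shown in multi-view generation work.
grid = np.stack([[generate(c, v) for v in views] for c in contents])
print(grid.shape)  # (3, 5, 64)
```

Because content and view enter the decoder independently, walking along a row of the grid varies only the view while walking down a column varies only the content, which is the behaviour a disentangled latent space is designed to provide.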

    Learning Model Structure from Data : an Application to On-Line Handwriting

    We present a learning strategy for Hidden Markov Models that may be used to cluster handwriting sequences or to learn a character model by identifying its main writing styles. Our approach aims at learning both the structure and the parameters of a Hidden Markov Model (HMM) from the data. A byproduct of this learning strategy is the ability to cluster signals and identify allographs. We provide experimental results on artificial data demonstrating that both HMM parameters and topology can be learned from data. For a given topology, our approach outperforms the standard Maximum Likelihood learning scheme in cases that we identify. We also apply our unsupervised learning scheme to on-line handwritten signals, both for allograph clustering and for learning HMM models for handwritten digit recognition.
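As background for the clustering application, the quantity such a scheme relies on is the likelihood of a sequence under each candidate HMM. A minimal sketch using the standard forward algorithm, with all model parameters below made up for illustration:

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm entirely in log space."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha_t(j) = logsum_i( alpha_{t-1}(i) + log A[i, j] ) + log B[j, o]
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def logs(*arrays):
    return [np.log(np.asarray(a)) for a in arrays]

# Two toy 2-state HMMs over a binary alphabet, standing in for the
# per-cluster models learned from data (parameters are invented).
hmm_a = logs([0.9, 0.1], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]])
hmm_b = logs([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.9], [0.2, 0.8]])

seq = [0, 0, 0, 1, 0]
ll_a = log_forward(seq, *hmm_a)
ll_b = log_forward(seq, *hmm_b)

# Cluster assignment: the sequence goes to the model that explains it best.
cluster = 'a' if ll_a > ll_b else 'b'
print(cluster)  # a
```

Model `a` mostly emits 0 from its likely start state, so the mostly-zero sequence is assigned to it; in the paper's setting the per-cluster HMMs are learned rather than fixed by hand.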

    Transferring Style in Motion Capture Sequences with Adversarial Learning

    We focus on style transfer for sequential data in a supervised setting. Assuming sequential data includes both content and style information, we want to learn models able to transform a sequence into another one that carries the same content but the style of a different sequence, from a training dataset where content and style labels are available. Following work on image generation and editing with adversarial learning, we explore the design of neural network architectures for this sequence-editing task, which we apply to motion capture sequences.

    Deepström: An Emulsion of Kernels and Deep Learning

    Kernel methods and deep learning have, until now, mostly been studied separately. Recent work has focused on combining these two approaches in order to get the best of both worlds. With this in mind, we introduce a new neural network architecture that benefits from the low space and time cost of the Nyström approximation. We show that this architecture reaches state-of-the-art performance on image classification on the MNIST and CIFAR10 datasets while requiring only a reduced number of parameters.
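The Nyström approximation mentioned above can be sketched in a few lines. The feature map below is the standard construction; the data, sizes, and kernel bandwidth are illustrative, and the deep classifier that would sit on top in the Deepström idea is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma=0.1):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data standing in for image features (sizes are illustrative).
X = rng.normal(size=(100, 16))

# Nyström approximation: pick m landmark points, so the kernel feature
# map costs O(n*m) instead of the O(n^2) full kernel matrix.
m = 10
landmarks = X[rng.choice(len(X), size=m, replace=False)]

K_mm = rbf(landmarks, landmarks)   # m x m landmark kernel
K_nm = rbf(X, landmarks)           # n x m cross-kernel

# Feature map Phi = K_nm @ K_mm^{-1/2}; Phi @ Phi.T approximates the
# full n x n kernel matrix.
U, s, _ = np.linalg.svd(K_mm)
K_mm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(s + 1e-8)) @ U.T
Phi = K_nm @ K_mm_inv_sqrt

# Here Phi would feed a neural classifier; we only check the
# approximation error against the exact kernel matrix.
err = np.abs(Phi @ Phi.T - rbf(X, X)).mean()
print(Phi.shape)  # (100, 10)
```

The appeal for a neural architecture is that `Phi` is a fixed-width, differentiable feature layer, so the landmark kernel map can be trained end-to-end with whatever network follows it.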

    Learning HMM Structure for On-line Handwriting Modelization

    We present a hidden Markov model-based approach to modelling on-line handwriting sequences. The problem is addressed in terms of learning both the structure and the parameters of hidden Markov models (HMMs) from data. We iteratively simplify an initial HMM that consists of a mixture of as many left-right HMMs as there are training sequences. There are two main applications of our approach: allograph identification and classification. We provide experimental results on these two tasks.

    Hybrid HMM and HCRF model for sequence classification

    We propose a hybrid model combining a generative model and a discriminative model for signal labelling and classification tasks, aiming to take the best of both worlds. The idea is to focus the learning of the discriminative model on the most likely state sequences output by the generative model. This takes advantage of the usually higher accuracy of generative models on small training datasets and of discriminative models on large training datasets. We instantiate this framework with Hidden Markov Models and Hidden Conditional Random Fields, and validate the model on financial time series and on handwriting data.

    A probabilistic prior knowledge integration method: Application to generative and discriminative models


    An application of Bayesian model averaging to histograms


    Clustering of On-Line Handwritten Traces Using Markov Models
