
    Domain Generalization and Adaptation with Generative Modeling and Representation Learning

    Despite the success of deep learning methods on object recognition tasks, one of the challenges deep learning systems face in the real world is performing well on visually different data samples, i.e., under a distribution shift in which samples belong to the same object category but come from a significantly different visual domain. Many approaches have been proposed in both of these settings; however, few works focus on generative modeling in this context or on studying the structure of the hidden representations learned by deep learning models. We hypothesize that learning the generative factors and studying the structure of the features learned by the models can allow us to develop a new methodology for the domain generalization and domain adaptation settings. In this work, we propose a new methodology by designing a Variational Autoencoder (VAE) based model with a structured three-part latent code representing specific aspects of the data. We also make use of adversarial approaches to make the model robust to changes in visual domain, improving domain generalization performance. For domain adaptation, we use semi-supervised learning as the primary tool to adapt model parameters to the new data distribution of the target domain. We propose a novel variation of the data augmentation used in semi-supervised methods, based on latent code sampling. We also propose a new adversarial constraint for domain adaptation that does not require explicit information about the 'domain' of a new data sample. In empirical evaluation, our method performs on par with other state-of-the-art methods in the domain generalization setting, while improving on the state of the art for multiple datasets in the domain adaptation setting.
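    The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch sketch of what a VAE with a structured three-part latent code could look like. The part names and sizes (z_dims), the network shapes, and the Bernoulli reconstruction term are all illustrative assumptions, not details taken from the paper; the adversarial and semi-supervised components are omitted.

```python
# Minimal sketch (assumed details, not the paper's implementation) of a VAE
# whose latent code is split into three parts, each with its own (mu, logvar)
# head; the three samples are concatenated before decoding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreePartVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dims=(16, 16, 32)):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        # One (mu, logvar) head per latent part.
        self.heads = nn.ModuleList([nn.Linear(h_dim, 2 * d) for d in z_dims])
        self.dec = nn.Sequential(
            nn.Linear(sum(z_dims), h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim),
        )

    def encode(self, x):
        h = self.enc(x)
        # Split each head's output into a mean and a log-variance.
        return [head(h).chunk(2, dim=-1) for head in self.heads]

    @staticmethod
    def reparameterize(mu, logvar):
        std = (0.5 * logvar).exp()
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        stats = self.encode(x)
        zs = [self.reparameterize(mu, lv) for mu, lv in stats]
        x_hat = self.dec(torch.cat(zs, dim=-1))
        return x_hat, stats

def elbo_loss(x, x_hat, stats):
    # Bernoulli reconstruction term plus one KL term per latent part.
    rec = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = sum(-0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp())
             for mu, lv in stats)
    return rec + kl
```

    With a split code like this, the latent-code-sampling augmentation mentioned in the abstract could plausibly be realized by resampling one latent part while holding the others fixed and decoding; that reading is an inference from the abstract, not a confirmed detail.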

    Transfer Learning for Speech and Language Processing

    Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and has traditionally been studied under the name 'model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with the high-level abstract features learned by deep models, and that the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research in this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.
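    To make the cross-lingual example concrete, here is a minimal, hypothetical PyTorch sketch of one common transfer recipe: reuse the lower layers of an acoustic model trained on a source language as a frozen feature extractor, and fine-tune only a new output layer for the target language's phone inventory. All names, dimensions, and phone counts are assumptions for illustration; the paper itself surveys many variants of this idea.

```python
# Sketch (assumed setup) of cross-lingual acoustic-model transfer: freeze the
# shared lower layers and train only a fresh classifier for the new language.
import torch
import torch.nn as nn

def make_acoustic_model(feat_dim=40, hidden=512, n_phones=100):
    # A small per-frame feed-forward acoustic model.
    return nn.Sequential(
        nn.Linear(feat_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, n_phones),  # per-frame phone logits
    )

# Pretend this model was trained on the source language.
source_model = make_acoustic_model(n_phones=100)

# Copy the source weights, freeze everything except the classifier, and
# attach a new output layer sized for the target language (60 phones assumed).
target_model = make_acoustic_model(n_phones=100)
target_model.load_state_dict(source_model.state_dict())
for p in target_model[:-1].parameters():
    p.requires_grad = False
target_model[-1] = nn.Linear(512, 60)

# Only the new layer is trained, which is why little target data is needed.
optimizer = torch.optim.Adam(target_model[-1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(32, 40)          # dummy batch of acoustic frames
labels = torch.randint(0, 60, (32,))  # dummy target-language phone labels
loss = criterion(target_model(frames), labels)
loss.backward()
optimizer.step()
```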