
    Deep Latent Variable Model for Longitudinal Group Factor Analysis

    In many scientific problems such as video surveillance, modern genomic analysis, and clinical studies, data are collected from diverse domains across time and exhibit time-dependent, heterogeneous properties. It is important not only to integrate data from multiple sources (so-called multi-view data), but also to incorporate time dependency for a deep understanding of the underlying system. Latent factor models are popular tools for exploring multi-view data. However, these models frequently perform poorly on complex systems and are not applicable to time-series data. Therefore, we propose a generative model based on a variational autoencoder and a recurrent neural network to infer latent dynamic factors for multivariate time-series data. This approach allows us to identify disentangled latent embeddings across multiple modalities while accounting for the time factor. We apply the proposed model to three datasets, on which we demonstrate its effectiveness and interpretability.
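    As a rough illustration of this kind of architecture, the sketch below (PyTorch; module names, dimensions, and the loss weighting are assumptions, not the authors' implementation) shows how a recurrent encoder can produce a separate latent factor for every time step of a multivariate series, trained with the usual VAE objective.

```python
# Minimal sketch (not the authors' code): a VAE whose encoder is a GRU,
# so each time step of a multivariate series gets its own latent factor z_t.
import torch
import torch.nn as nn

class RecurrentVAE(nn.Module):
    def __init__(self, x_dim=32, h_dim=64, z_dim=8):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)   # shared temporal backbone
        self.mu = nn.Linear(h_dim, z_dim)                    # per-step latent mean
        self.logvar = nn.Linear(h_dim, z_dim)                # per-step latent log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))    # reconstruct each time step

    def forward(self, x):                                    # x: (batch, time, x_dim)
        h, _ = self.rnn(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp() # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, model):
    x_hat, mu, logvar = model(x)
    recon = ((x - x_hat) ** 2).mean()                              # Gaussian reconstruction term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()     # KL to a standard normal prior
    return recon + kl
```

    In a multi-view setting, each view would typically receive its own decoder while sharing the per-step latent factors; that detail is omitted here for brevity.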

    Unsupervised Controllable Generation with Self-Training

    Recent generative adversarial networks (GANs) are able to generate impressive photo-realistic images. However, controllable generation with GANs remains a challenging research problem. Achieving controllable generation requires semantically interpretable and disentangled factors of variation, which is difficult with simple fixed distributions such as the Gaussian distribution. Instead, we propose an unsupervised framework that learns a distribution of latent codes controlling the generator through self-training. Self-training provides iterative feedback during GAN training, from the discriminator to the generator, and progressively improves the proposed latent codes as training proceeds. The latent codes are sampled from a latent variable model learned in the feature space of the discriminator. We consider a normalized independent component analysis model and learn its parameters through tensor factorization of higher-order moments. Our framework exhibits better disentanglement than other variants such as the variational autoencoder and is able to discover semantically meaningful latent codes without any supervision. We demonstrate empirically, on both the cars and faces datasets, that each group of elements in the learned code controls a mode of variation with a semantic meaning, e.g., pose or background changes. We also show with quantitative metrics that our method generates better results than other approaches.
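    The moment-based estimation step can be illustrated with a small sketch. The code below (NumPy; illustrative assumptions, not the paper's algorithm) whitens discriminator features and runs a tensor power iteration on the empirical third-order moment tensor to recover one independent component; the paper's normalized ICA model and its full self-training loop involve more machinery than is shown here.

```python
# Illustrative sketch: recover one independent component from whitened features
# via a power iteration on the empirical third-order moment tensor, in the
# spirit of moment-based tensor factorization (assumes skewed sources).
import numpy as np

def whiten(f):
    """Center and ZCA-whiten feature vectors f of shape (n_samples, dim)."""
    f = f - f.mean(axis=0)
    cov = np.cov(f, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return f @ vecs @ np.diag(vals ** -0.5) @ vecs.T

def tensor_power_iteration(w, n_iter=50):
    """Find a direction u maximizing E[(w @ u)^3] for whitened features w."""
    u = np.random.randn(w.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        # T(I, u, u) for the empirical moment tensor T = E[w ⊗ w ⊗ w]
        u = (w * (w @ u)[:, None] ** 2).mean(axis=0)
        u /= np.linalg.norm(u)
    return u
```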

    Physically Interpretable Feature Learning and Inverse Design of Supercritical Airfoils

    Machine-learning models have demonstrated a strong ability to learn complex patterns and make predictions. In high-dimensional nonlinear problems of fluid dynamics, the data representation often greatly affects the performance and interpretability of machine-learning algorithms. With the increasing application of machine learning in fluid dynamics, the need for physically explainable models continues to grow. This paper proposes a feature learning algorithm based on variational autoencoders, which assigns physical features to some of the latent variables. In addition, it is proven theoretically that the remaining latent variables are independent of the physical features. The proposed algorithm is trained to include shock-wave features in its latent variables for the reconstruction of supercritical pressure distributions. The reconstruction accuracy and physical interpretability are also compared with those of other variational autoencoders. The proposed algorithm is then used for the inverse design of supercritical airfoils, enabling the generation of airfoil geometries from physical features rather than complete pressure distributions. It also demonstrates the ability to manipulate certain pressure-distribution features of the airfoil without changing the others.
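    A minimal sketch of how a few latent dimensions can be tied to known physical features is shown below (PyTorch; the loss weights, the number of supervised dimensions k, and variable names are assumptions, not the paper's formulation). It adds a regression term on the first k latent means to the standard VAE objective; the paper's independence guarantee for the remaining latent variables requires additional structure not reproduced here.

```python
# Hedged sketch, illustrative only: a VAE loss in which the first k latent
# dimensions are regressed onto known physical features y (e.g., shock-wave
# descriptors), leaving the remaining dimensions to encode everything else.
import torch

def supervised_vae_loss(x, y, x_hat, mu, logvar, k, beta=1.0, gamma=10.0):
    # x, x_hat: pressure distributions and their reconstructions, (batch, n_points)
    # y: known physical features, (batch, k); mu, logvar: latent parameters, (batch, z_dim)
    recon = ((x - x_hat) ** 2).mean()                              # reconstruction term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()     # KL regularizer
    feat = ((mu[:, :k] - y) ** 2).mean()                           # tie first k latents to physical features
    return recon + beta * kl + gamma * feat
```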

    Factorized Variational Autoencoders for Modeling Audience Reactions to Movies

    Matrix and tensor factorization methods are often used to find underlying low-dimensional patterns in noisy data. In this paper, we study non-linear tensor factorization methods based on deep variational autoencoders. Our approach is well suited to settings where the relationship between the latent representation to be learned and the raw data representation is highly complex. We apply it to a large dataset of facial expressions of movie-watching audiences (over 16 million faces). Our experiments show that, compared to conventional linear factorization methods, our method achieves better reconstruction of the data and also discovers interpretable latent factors.
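    The factorized latent structure can be sketched as follows (PyTorch; layer sizes and names are assumptions, not the paper's implementation): the latent code for audience member i at time t is formed from a per-person factor and a per-time factor and then passed through a shared nonlinear decoder, giving a non-linear analogue of tensor factorization.

```python
# Minimal sketch (assumptions, not the paper's code): a factorized decoder in
# which the latent code for person i at time t is the element-wise product of
# a per-person factor U[i] and a per-time factor V[t], decoded by a shared
# nonlinear network.
import torch
import torch.nn as nn

class FactorizedDecoder(nn.Module):
    def __init__(self, n_people, n_times, z_dim=16, x_dim=136):
        super().__init__()
        self.U = nn.Embedding(n_people, z_dim)    # audience-member factors
        self.V = nn.Embedding(n_times, z_dim)     # time factors
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, person_idx, time_idx):
        z = self.U(person_idx) * self.V(time_idx) # factorized latent code
        return self.dec(z)                        # e.g., facial-landmark reconstruction
```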