
    A dynamic latent variable model for source separation

    We propose a novel latent variable model for learning latent bases for time-varying non-negative data. Our model uses a mixture multinomial as the likelihood function, together with a Dirichlet distribution with dynamic parameters as the prior, which we call the dynamic Dirichlet prior. An expectation-maximization (EM) algorithm is developed for estimating the parameters of the proposed model. Furthermore, we connect the proposed dynamic Dirichlet latent variable model (dynamic DLVM) to two popular latent basis learning methods: probabilistic latent component analysis (PLCA) and non-negative matrix factorization (NMF). We show that (i) PLCA is a special case of the dynamic DLVM, and (ii) the dynamic DLVM can be interpreted as a dynamic version of NMF. The effectiveness of the proposed model is demonstrated through extensive experiments on speaker source separation and speech-noise separation. In both cases, our method outperforms relevant and competitive baselines. For speaker separation, the dynamic DLVM yields a 1.38 dB improvement in source-to-interference ratio and a 1 dB improvement in source-to-artifact ratio.
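    The abstract stops short of the generative equations. As a rough sketch in our own notation (none of the symbols below are the paper's), the model would pair a mixture-multinomial likelihood for the non-negative observation at frame t with a Dirichlet prior whose parameters are tied to the previous frame's mixture weights:

        % Hypothetical notation: v_t = non-negative data at frame t,
        % theta_t = mixture weights over latent components z,
        % phi_z = latent basis for component z, alpha = concentration.
        p(v_t \mid \theta_t, \phi) = \mathrm{Mult}\!\Big(v_t \;\Big|\; \textstyle\sum_z \theta_{t,z}\, \phi_z\Big),
        \qquad
        p(\theta_t \mid \theta_{t-1}) = \mathrm{Dir}(\theta_t \mid \alpha\, \theta_{t-1}),

    so the prior at frame t is centered on the weights inferred at frame t-1, with alpha controlling how strongly temporal continuity is enforced.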

    Latent Variable Model for Multi-modal Translation

    In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and K\'ad\'ar, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the minimum amount of information encoded in the latent variable, and (iii) training on additional target-language image descriptions (i.e. synthetic data).
    Comment: Paper accepted at ACL 2019. Contains 8 pages (11 including references, 13 including appendix), 6 figures.
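    As an illustration of point (ii), a minimum-information constraint on a Gaussian latent is commonly implemented as a "free bits" floor on the KL term of the ELBO. The sketch below is our own generic illustration, not the paper's code; recon_nll stands in for the combined translation and image-feature prediction losses, and the function name and min_kl parameter are assumptions.

        import torch

        def elbo_with_free_bits(recon_nll, mu, logvar, min_kl=1.0):
            """Negative ELBO with a floor on the KL term ("free bits").

            recon_nll: decoder negative log-likelihood per example (e.g. the
                       translation loss plus an image-feature prediction loss).
            mu, logvar: diagonal-Gaussian posterior parameters q(z | inputs).
            min_kl: minimum nats the latent must encode before KL is penalized.
            """
            # KL( N(mu, diag(exp(logvar))) || N(0, I) ), one value per example.
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
            # Clamping removes the incentive to push KL below the floor;
            # a common variant applies the floor per latent dimension instead.
            kl = torch.clamp(kl, min=min_kl)
            return (recon_nll + kl).mean()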

    Dirichlet latent variable model: a dynamic model based on Dirichlet prior for audio processing

    We propose a dynamic latent variable model for learning latent bases from time-varying, non-negative data. We take a probabilistic approach to modeling the temporal dependence in the data by introducing a dynamic Dirichlet prior – a Dirichlet distribution with dynamic parameters. This new distribution allows us to ensure non-negativity and to avoid the intractability otherwise encountered when performing sequential updates under a Dirichlet prior. We refer to the proposed model as the Dirichlet latent variable model (DLVM). We develop an expectation-maximization algorithm for the proposed model, and also derive a maximum a posteriori estimate of the parameters. Furthermore, we connect the proposed DLVM to two popular latent basis learning methods: probabilistic latent component analysis (PLCA) and non-negative matrix factorization (NMF). We show that (i) PLCA is a special case of our DLVM, and (ii) DLVM can be interpreted as a dynamic version of NMF. The usefulness of DLVM is demonstrated in three audio processing applications: speaker source separation, denoising, and bandwidth expansion. To this end, a new algorithm for source separation is also proposed. Through extensive experiments on benchmark databases, we show that the proposed model outperforms several relevant existing methods in all three applications.
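    The abstract does not reproduce the update rules. As one hedged sketch of how a Dirichlet prior typically enters a PLCA-style EM iteration, as pseudo-counts in the M-step MAP update (our own illustration; the array names, shapes, and exact update form are assumptions, and a dynamic prior would derive alpha_prior[:, t] from the weights at frame t-1):

        import numpy as np

        def em_step(V, basis, weights, alpha_prior, eps=1e-12):
            """One EM iteration for V[f, t] ~ sum_z basis[f, z] * weights[z, t].

            alpha_prior[z, t]: Dirichlet pseudo-counts on the weights; a dynamic
            prior would tie these to the previous frame's weights (assumption).
            """
            # E-step: posterior responsibility of each latent component z
            # for each (f, t) cell of the non-negative data V.
            approx = basis @ weights                                   # (F, T)
            post = (basis[:, :, None] * weights[None, :, :]) / (approx[:, None, :] + eps)
            counts = V[:, None, :] * post                              # (F, Z, T)

            # M-step: maximum-likelihood update for the bases ...
            basis_new = counts.sum(axis=2)
            basis_new /= basis_new.sum(axis=0, keepdims=True) + eps
            # ... and a MAP update for the weights, where the Dirichlet
            # prior contributes (alpha - 1) additive pseudo-counts.
            weights_new = np.maximum(counts.sum(axis=0) + alpha_prior - 1.0, eps)
            weights_new /= weights_new.sum(axis=0, keepdims=True)
            return basis_new, weights_new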

    Rethinking Recurrent Latent Variable Model for Music Composition

    We present a model for capturing musical features and creating novel sequences of music, called the Convolutional Variational Recurrent Neural Network. To generate sequential data, the model uses an encoder-decoder architecture with latent probabilistic connections to capture the hidden structure of music. Using the sequence-to-sequence model, our generative model can exploit samples from a prior distribution and generate longer sequences of music. We compare the performance of our proposed model with other types of neural networks using the criterion of Information Rate, implemented via the Variable Markov Oracle, a method that allows statistical characterization of musical information dynamics and detection of motifs in a song. Our results suggest that the proposed model bears a closer statistical resemblance to the musical structure of the training data, which improves the creation of new sequences of music in the style of the originals.
    Comment: Published as a conference paper at IEEE MMSP 201
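    For context, Information Rate is conventionally defined as the mutual information between the past of a signal and its present sample; in our notation (not necessarily the paper's):

        \mathrm{IR}(x_{1:n-1}, x_n) = I(x_{1:n-1};\, x_n) = H(x_n) - H(x_n \mid x_{1:n-1}),

    so sequences that are both varied (high H(x_n)) and predictable from their past (low conditional entropy) score a high Information Rate.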

    A Latent Variable Model of Quality Determination

    Despite substantial interest in the determination of quality, there has been little empirical work in the area. The problem, of course, is the general lack of data on quality. This paper overcomes the data problem by constructing a Multiple Indicator Multiple Cause (MIMIC) model of quality determination. We present a one-factor MIMIC model of quality which derives natural indicators from the relationship between input demand and output determination. The indicators turn out to be input demands which have been filtered to remove variation due to all factors except quality and random disturbances. These indicators are measures of input investment in each unit of output, or the volume (intensity) of service. The model is identified by defining input demand to be a function of quantity and "total effective output" (quantity times average quality), instead of quantity and average quality. The model is then applied to the determination of nursing home quality. The model appears to perform quite well, as the results generally conform with economic theory, and restrictions implied by the MIMIC structure are accepted in hypothesis tests.
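    The abstract does not write out the equations. For orientation, a standard one-factor MIMIC specification (our notation, not necessarily the paper's) drives the latent quality by observed causes and reflects it in the indicators, here the filtered input demands:

        q = \gamma^{\prime} x + \zeta, \qquad y_i = \lambda_i\, q + \varepsilon_i, \quad i = 1, \dots, m,

    where x collects the observed causes, y_i are the indicators, and \zeta and \varepsilon_i are mutually uncorrelated disturbances.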