
    Learning Dynamic Generator Model by Alternating Back-Propagation Through Time

    This paper studies the dynamic generator model for spatial-temporal processes such as dynamic textures and action sequences in video data. In this model, each time frame of the video sequence is generated by a generator model, which is a non-linear transformation of a latent state vector, where the non-linear transformation is parametrized by a top-down neural network. The sequence of latent state vectors follows a non-linear auto-regressive model, where the state vector of the next frame is a non-linear transformation of the state vector of the current frame as well as an independent noise vector that provides randomness in the transition. The non-linear transformation of this transition model can be parametrized by a feedforward neural network. We show that this model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters of the transition model and the generator model. We show that our training method can learn realistic models for dynamic textures and action patterns. Comment: 10 pages
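    The learning scheme described above alternates between inferring the noise vectors for an observed sequence and updating the network parameters by back-propagation through the unrolled sequence. Below is a minimal sketch of that alternation, assuming toy dimensions, fully connected networks, a single video, and illustrative Langevin step sizes; none of these choices come from the paper.

```python
# Minimal sketch of alternating back-propagation through time (assumptions:
# toy sizes, fully connected nets, illustrative step sizes and iteration counts).
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM, FRAME_DIM, T = 16, 8, 64 * 64, 20   # hypothetical sizes

transition = nn.Sequential(nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.Tanh(),
                           nn.Linear(64, STATE_DIM))       # latent transition model
generator = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(),
                          nn.Linear(256, FRAME_DIM))       # frame generator

def unroll(z, s0):
    """Run the latent auto-regression and emit one frame per time step."""
    frames, s = [], s0
    for t in range(T):
        s = transition(torch.cat([s, z[t]], dim=-1))   # next latent state
        frames.append(generator(s))                    # frame from current state
    return torch.stack(frames)

video = torch.randn(T, FRAME_DIM)                  # stand-in for observed frames
z = torch.zeros(T, NOISE_DIM, requires_grad=True)  # noise vectors to be inferred
s0 = torch.zeros(STATE_DIM)
opt = torch.optim.Adam(list(transition.parameters()) + list(generator.parameters()),
                       lr=1e-3)

for it in range(100):
    # Inference step: a few Langevin updates refine the noise vectors z.
    for _ in range(5):
        loss = 0.5 * ((unroll(z, s0) - video) ** 2).sum() + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z += -0.5 * 0.01 ** 2 * grad + 0.01 * torch.randn_like(z)
    # Learning step: back-propagate the reconstruction error through the
    # unrolled sequence and update both networks.
    opt.zero_grad()
    recon = 0.5 * ((unroll(z, s0) - video) ** 2).sum()
    recon.backward()
    opt.step()
```

    The "through time" part is simply that the gradient of the reconstruction error flows back through every step of the unrolled latent auto-regression, so both the transition model and the generator are updated from the whole sequence.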

    Deep neural learning based distributed predictive control for offshore wind farm using high fidelity LES data

    The paper explores a deep neural learning (DNL) based predictive control approach for an offshore wind farm using high-fidelity large eddy simulation (LES) data. The DNL architecture combines Long Short-Term Memory (LSTM) units with Convolutional Neural Networks (CNN) for feature extraction and prediction of the offshore wind farm dynamics. This hybrid CNN-LSTM model is developed based on the dynamic models of the wind farm and wind turbines as well as high-fidelity LES data. Distributed and decentralized model predictive control (MPC) methods are then developed based on the hybrid model to maximize the wind farm power generation and minimize the use of control commands. Extensive simulations of a two-turbine and a nine-turbine wind farm case demonstrate the high prediction accuracy (97% or more) of the trained CNN-LSTM models. They also show that the distributed MPC can achieve up to a 38% increase in farm-scale power generation compared with the decentralized MPC. The computational time of the distributed MPC is around 0.7 s per time step, which is sufficiently fast for real-time control of wind farm operations.
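    As a rough illustration of the hybrid predictor, the sketch below applies a Conv1d feature extractor across the turbines at each time step and an LSTM over time, followed by a small prediction head. The turbine count, input features, channel sizes, and prediction horizon are placeholder assumptions for illustration, not values taken from the paper.

```python
# Minimal CNN-LSTM sketch (assumptions: input shaped [batch, time, turbines,
# features]; all layer sizes and the one-step horizon are illustrative).
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    def __init__(self, n_turbines=9, n_features=4, hidden=64, horizon=1):
        super().__init__()
        # CNN extracts spatial features across turbines at each time step.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        # LSTM models the temporal dynamics of the extracted features.
        self.lstm = nn.LSTM(32 * n_turbines, hidden, batch_first=True)
        # Head maps the last hidden state to the predicted quantity
        # (e.g. farm power over the horizon).
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                              # x: [B, T, turbines, features]
        B, T, N, C = x.shape
        h = x.reshape(B * T, N, C).permute(0, 2, 1)    # [B*T, features, turbines]
        h = self.cnn(h).reshape(B, T, -1)              # flatten spatial features per step
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                   # predict from the last time step

model = HybridCNNLSTM()
x = torch.randn(2, 50, 9, 4)      # toy batch: 50 time steps, 9 turbines, 4 features
print(model(x).shape)             # -> torch.Size([2, 1])
```

    In an MPC setting, a trained predictor like this would serve as the plant model whose rollouts the (distributed or decentralized) optimizer evaluates when choosing control commands; the control layer itself is not sketched here.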

    Dynamic Variational Autoencoders for Visual Process Modeling

    This work studies the problem of modeling visual processes by leveraging deep generative architectures for learning linear, Gaussian representations from observed sequences. We propose a joint learning framework combining a vector autoregressive model and Variational Autoencoders. This results in an architecture that allows Variational Autoencoders to simultaneously learn a non-linear observation model as well as a linear state model from sequences of frames. We validate our approach on artificial sequences and dynamic textures.
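    A minimal sketch of such a joint objective is given below, assuming a first-order linear (VAR(1)) transition on the latents, fully connected encoder and decoder networks, and an unweighted sum of reconstruction, KL, and transition terms; the paper's actual parameterization and loss weighting may differ.

```python
# Minimal sketch of jointly training a VAE with a linear latent state model
# (assumptions: toy dimensions, VAR(1) transition, equal loss weights).
import torch
import torch.nn as nn
import torch.nn.functional as F

FRAME_DIM, LATENT_DIM = 64 * 64, 16

encoder = nn.Sequential(nn.Linear(FRAME_DIM, 256), nn.ReLU(),
                        nn.Linear(256, 2 * LATENT_DIM))       # mean and log-variance
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                        nn.Linear(256, FRAME_DIM))            # non-linear observation model
A = nn.Linear(LATENT_DIM, LATENT_DIM, bias=False)             # linear VAR(1) state model

def loss_fn(frames):                                          # frames: [T, FRAME_DIM]
    mu, logvar = encoder(frames).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterization trick
    recon = F.mse_loss(decoder(z), frames, reduction='sum')   # frame reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()   # Gaussian prior term
    var_pred = F.mse_loss(A(z[:-1]), z[1:], reduction='sum')  # z_{t+1} ~ A z_t
    return recon + kl + var_pred

params = list(encoder.parameters()) + list(decoder.parameters()) + list(A.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
frames = torch.randn(20, FRAME_DIM)                           # stand-in for a video sequence
for _ in range(100):
    opt.zero_grad()
    loss_fn(frames).backward()
    opt.step()
```

    The point of the joint training is that the encoder/decoder learn a non-linear observation mapping while the matrix A captures linear, Gaussian dynamics in the latent space, so long-range behaviour of the sequence can be analyzed or extrapolated with standard linear-systems tools.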

    OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

    Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN). While previous works mostly attempt to tackle disentanglement learning through VAEs and seek to implicitly minimize the Total Correlation (TC) objective with various approximation methods, we show that GANs have a natural advantage in disentangling, with an alternating latent variable (noise) sampling method that is straightforward and robust. Furthermore, we provide a new perspective on designing the structure of the generator and discriminator, demonstrating that a minor structural change and an orthogonal regularization on model weights yield improved disentanglement. Instead of experimenting on simple toy datasets, we conduct experiments on higher-resolution images and show that OOGAN greatly pushes the boundary of unsupervised disentanglement. Comment: AAAI 2020
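    The sketch below illustrates only the two ingredients named in the abstract: alternating one-hot versus continuous sampling of the disentangled latent code, and an orthogonality penalty on weight matrices. The network shapes, penalty weight, and placeholder loss are assumptions for illustration, not the paper's training setup.

```python
# Minimal sketch of one-hot code sampling and an orthogonal weight penalty
# (assumptions: toy generator, placeholder loss, illustrative penalty weight).
import torch
import torch.nn as nn

def sample_code(batch, c_dim, step):
    """Alternate between one-hot codes and continuous codes across iterations."""
    if step % 2 == 0:
        idx = torch.randint(c_dim, (batch,))
        return torch.nn.functional.one_hot(idx, c_dim).float()   # one-hot sampling
    return torch.rand(batch, c_dim)                               # continuous codes

def orthogonal_penalty(module):
    """Sum of ||W W^T - I||^2 over the module's weight matrices."""
    penalty = 0.0
    for m in module.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            w = m.weight.reshape(m.weight.shape[0], -1)
            gram = w @ w.t()
            penalty = penalty + ((gram - torch.eye(w.shape[0])) ** 2).sum()
    return penalty

# Usage: add the penalty to the generator loss with a small weight.
gen = nn.Sequential(nn.Linear(10 + 8, 128), nn.ReLU(), nn.Linear(128, 64))
c = sample_code(batch=4, c_dim=10, step=0)             # disentangled code part
z = torch.randn(4, 8)                                  # ordinary noise part
fake = gen(torch.cat([c, z], dim=-1))
loss = fake.mean() + 1e-4 * orthogonal_penalty(gen)    # placeholder adversarial loss term
loss.backward()
```

    The one-hot iterations push each dimension of the code to act on its own, while the orthogonality penalty discourages different weight directions from encoding the same factor; both pieces would plug into an otherwise standard GAN training loop.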