
    Deep neural learning based distributed predictive control for offshore wind farm using high fidelity LES data

    This paper explores a deep neural learning (DNL) based predictive control approach for offshore wind farms using high-fidelity large eddy simulation (LES) data. The DNL architecture combines Long Short-Term Memory (LSTM) units with Convolutional Neural Networks (CNNs) for feature extraction and prediction of offshore wind farm behaviour. This hybrid CNN-LSTM model is developed from the dynamic models of the wind farm and wind turbines as well as high-fidelity LES data. Distributed and decentralized model predictive control (MPC) methods are then developed on top of the hybrid model to maximize wind farm power generation while minimizing the usage of control commands. Extensive simulations on two-turbine and nine-turbine wind farm cases demonstrate the high prediction accuracy (97% or more) of the trained CNN-LSTM models. They also show that the distributed MPC achieves up to a 38% increase in farm-scale power generation compared with the decentralized MPC. The computational time of the distributed MPC is around 0.7 s per time step, which is fast enough for real-time control of wind farm operations.
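    A minimal sketch of a CNN-LSTM predictor of the kind described in the abstract is given below. All layer sizes, input shapes, and names are illustrative assumptions; the paper's exact architecture is not specified here. The convolution extracts features across turbine-level signals at each time step and the LSTM models their temporal evolution.

```python
# Illustrative CNN-LSTM predictor (shapes and sizes are assumptions).
import torch
import torch.nn as nn

class CNNLSTMPredictor(nn.Module):
    def __init__(self, n_channels=4, hidden_size=64, n_outputs=1):
        super().__init__()
        # 1-D convolution extracts spatial features across turbine signals
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM models the temporal dynamics of the extracted features
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_outputs)

    def forward(self, x):
        # x: (batch, time, channels, turbines)
        b, t, c, n = x.shape
        feats = self.cnn(x.reshape(b * t, c, n))       # (b*t, 32, turbines)
        feats = feats.mean(dim=-1).reshape(b, t, 32)   # pool over turbines
        out, _ = self.lstm(feats)                      # (b, t, hidden)
        return self.head(out[:, -1])                   # next-step prediction

# Example: predict farm power from the last 10 steps of 4 signals at 9 turbines
model = CNNLSTMPredictor()
y = model(torch.randn(8, 10, 4, 9))   # -> torch.Size([8, 1])
```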

    Dynamic Variational Autoencoders for Visual Process Modeling

    This work studies the problem of modeling visual processes by leveraging deep generative architectures to learn linear, Gaussian representations from observed sequences. We propose a joint learning framework combining a vector autoregressive model with Variational Autoencoders. The resulting architecture allows Variational Autoencoders to simultaneously learn a non-linear observation model as well as a linear state model from sequences of frames. We validate our approach on artificial sequences and dynamic textures.
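    The sketch below illustrates the joint objective suggested by the abstract: a VAE provides the non-linear observation model while a linear (first-order vector autoregressive) matrix is fit to latent transitions. Dimensions, loss weights, and names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative joint VAE + linear (VAR-1) latent dynamics model.
import torch
import torch.nn as nn

class DynamicVAE(nn.Module):
    def __init__(self, x_dim=1024, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))    # mean and log-var
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        self.A = nn.Linear(z_dim, z_dim, bias=False)           # linear state model

    def forward(self, x_seq):
        # x_seq: (batch, time, x_dim) of flattened frames
        mu, logvar = self.enc(x_seq).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        recon = self.dec(z)
        recon_loss = ((recon - x_seq) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        # linear dynamics in latent space: z_{t+1} ~ A z_t
        dyn_loss = ((self.A(z[:, :-1]) - z[:, 1:]) ** 2).mean()
        return recon_loss + kl + dyn_loss
```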

    OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

    Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN). While previous works mostly tackle disentanglement learning through VAEs and seek to implicitly minimize the Total Correlation (TC) objective with various approximation methods, we show that GANs have a natural advantage in disentangling thanks to an alternating latent variable (noise) sampling method that is straightforward and robust. Furthermore, we provide a new perspective on designing the structure of the generator and discriminator, demonstrating that a minor structural change and an orthogonal regularization on model weights yield improved disentanglement. Instead of experimenting on simple toy datasets, we conduct experiments on higher-resolution images and show that OOGAN greatly pushes the boundary of unsupervised disentanglement.
    Comment: AAAI 202
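    Below is a minimal sketch of the two ingredients named in the abstract: one-hot sampling of the control code concatenated to the generator's noise, and a soft orthogonality penalty on a weight matrix. Dimensions and the exact placement of the penalty are assumptions; this is not the authors' code.

```python
# Illustrative one-hot latent sampling and orthogonal weight penalty.
import torch

def sample_latent(batch, noise_dim=128, code_dim=10):
    """Concatenate Gaussian noise with a one-hot code so that, per sample,
    exactly one factor-of-variation channel is active."""
    noise = torch.randn(batch, noise_dim)
    idx = torch.randint(code_dim, (batch,))
    code = torch.nn.functional.one_hot(idx, code_dim).float()
    return torch.cat([noise, code], dim=1)

def orthogonal_penalty(weight):
    """Penalise the deviation of W^T W from the identity (soft orthogonality)."""
    wtw = weight.t() @ weight
    eye = torch.eye(wtw.shape[0], device=weight.device)
    return ((wtw - eye) ** 2).sum()
```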