16,016 research outputs found

    Pengaturan Kecepatan Motor Induksi Tanpa Sensor Kecepatan Dengan Metoda Direct Torque Control Menggunakan Observer Recurrent Neural Network [Induction Motor Speed Control Without a Speed Sensor Using the Direct Torque Control Method with a Recurrent Neural Network Observer]

    Full text link
    This paper describes the development of sensorless speed control for a three-phase induction motor operated by Direct Torque Control (DTC). The induction motor speed is identified by an observer, which requires the supply current and stator voltage to produce the motor speed estimate. The observer for motor speed identification is developed using an Artificial Neural Network (ANN) approach with a Recurrent Neural Network (RNN) learning algorithm. Simulation results in MATLAB/Simulink show that the PI controller with the Recurrent Neural Network (RNN) observer yields an overshoot of 7.0224%, a rise time of 0.0125 s, and a settling time of 0.364 s at a reference speed of 77.9743 rad/s.
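
    As a rough illustration of such an observer, here is a minimal sketch of an Elman-style recurrent network that maps stator voltage and current samples to a rotor speed estimate. The layer sizes, input layout, and weights are illustrative assumptions, not taken from the paper, and training is omitted for brevity.

    import numpy as np

    # Minimal Elman-style recurrent speed observer (illustrative only).
    # Inputs per time step: [v_alpha, v_beta, i_alpha, i_beta].
    rng = np.random.default_rng(0)
    n_in, n_hidden = 4, 16
    W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))       # input weights
    W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # recurrent weights
    W_out = rng.normal(0.0, 0.1, (1, n_hidden))         # linear speed readout

    def estimate_speed(samples):
        """samples: (T, 4) stator measurements; returns (T,) speed estimates."""
        h = np.zeros(n_hidden)
        estimates = []
        for x in samples:
            h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
            estimates.append((W_out @ h).item())
        return np.array(estimates)

    # Example with 100 steps of synthetic stator measurements
    print(estimate_speed(rng.normal(size=(100, 4)))[-1])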

    Recurrent Neural Network Based Narrowband Channel Prediction

    No full text
    In this contribution, the application of fully connected recurrent neural networks (FCRNNs) is investigated in the context of narrowband channel prediction. Three different algorithms, namely real-time recurrent learning (RTRL), the global extended Kalman filter (GEKF) and the decoupled extended Kalman filter (DEKF), are used for training the recurrent neural network (RNN) based channel predictor. Our simulation results show that the GEKF and DEKF training schemes can converge faster than the RTRL training scheme and attain better MSE performance.
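
    For a concrete picture of RNN-based one-step channel prediction, the sketch below trains a small RNN to predict the next complex channel tap (represented as a real/imaginary pair). It is a minimal sketch only: the channel data is synthetic, and it uses ordinary gradient descent through time rather than the RTRL, GEKF, or DEKF training schemes studied in the paper.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class ChannelPredictor(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 2)   # next (real, imag) tap

        def forward(self, x):
            h, _ = self.rnn(x)
            return self.out(h)

    # Synthetic narrowband channel: a slowly rotating complex tap, shape (1, T, 2)
    t = torch.arange(0, 200, dtype=torch.float32)
    taps = torch.stack([torch.cos(0.1 * t), torch.sin(0.1 * t)], dim=-1).unsqueeze(0)

    model = ChannelPredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        pred = model(taps[:, :-1])                       # predict tap[t+1] from tap[<=t]
        loss = nn.functional.mse_loss(pred, taps[:, 1:])
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"final one-step MSE: {loss.item():.5f}")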

    Deep recurrent neural networks for building energy prediction

    Get PDF
    This poster illustrates the development of a deep recurrent neural network (RNN) model using long short-term memory (LSTM) cells to predict energy consumption in buildings at one-hour time resolution over medium-to-long-term time horizons (≥ 1 week).
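
    A minimal sketch of such a model follows, assuming hourly univariate consumption readings and a 168-hour (one-week) forecast head; the layer sizes and horizon are illustrative assumptions, not taken from the poster.

    import torch
    import torch.nn as nn

    class EnergyLSTM(nn.Module):
        """Two-layer LSTM mapping an hourly history to a one-week hourly forecast."""
        def __init__(self, n_features=1, hidden=64, horizon=168):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, horizon)  # last hidden state -> forecast

        def forward(self, x):            # x: (batch, history_hours, n_features)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1])      # (batch, horizon)

    model = EnergyLSTM()
    history = torch.randn(8, 336, 1)     # a batch of two-week hourly histories
    print(model(history).shape)          # torch.Size([8, 168])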

    Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network

    Full text link
    Because of their effectiveness in broad practical applications, LSTM networks have received a wealth of coverage in scientific journals, technical blogs, and implementation guides. However, in most articles, the inference formulas for the LSTM network and its parent, RNN, are stated axiomatically, while the training formulas are omitted altogether. In addition, the technique of "unrolling" an RNN is routinely presented without justification throughout the literature. The goal of this paper is to explain the essential RNN and LSTM fundamentals in a single document. Drawing from concepts in signal processing, we formally derive the canonical RNN formulation from differential equations. We then propose and prove a precise statement, which yields the RNN unrolling technique. We also review the difficulties with training the standard RNN and address them by transforming the RNN into the "Vanilla LSTM" network through a series of logical arguments. We provide all equations pertaining to the LSTM system together with detailed descriptions of its constituent entities. Albeit unconventional, our choice of notation and the method for presenting the LSTM system emphasize ease of understanding. As part of the analysis, we identify new opportunities to enrich the LSTM system and incorporate these extensions into the Vanilla LSTM network, producing the most general LSTM variant to date. The target reader has already been exposed to RNNs and LSTM networks through numerous available resources and is open to an alternative pedagogical approach. A Machine Learning practitioner seeking guidance for implementing our new augmented LSTM model in software for experimentation and research will find the insights and derivations in this tutorial valuable as well.
    Comment: 43 pages, 10 figures, 78 references
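
    For reference, the canonical RNN step and the Vanilla LSTM equations the abstract refers to are, in their standard textbook form (the paper's own notation and its derivation from differential equations may differ):

    % Canonical RNN step
    h_t = \tanh(W_x x_t + W_h h_{t-1} + b_h), \qquad y_t = W_y h_t + b_y

    % Vanilla LSTM step
    \begin{aligned}
    f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f)        & \text{(forget gate)} \\
    i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i)        & \text{(input gate)} \\
    o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o)        & \text{(output gate)} \\
    \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) & \text{(candidate cell)} \\
    c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t  & \text{(cell state)} \\
    h_t &= o_t \odot \tanh(c_t)                       & \text{(hidden state)}
    \end{aligned}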