
    Prediction of Electrical Energy Consumption Using LSTM Algorithm with Teacher Forcing Technique

    Electrical energy is an important foundation of world economic growth, so accurate prediction of future energy consumption is required. Previous research has most often used Time Series and Machine Learning methods, but Deep Learning methods, which can process data quickly for training and testing, have recently been applied to energy-consumption prediction. In this research, the researchers propose a Deep Learning model and algorithm: a Multivariate Time Series model with the LSTM algorithm and the Teacher Forcing technique for predicting future electrical energy consumption. The Multivariate Time Series model and LSTM algorithm can accept input under various conditions or seasons of electrical energy consumption, and the Teacher Forcing technique lightens the computation so that data can be trained and tested quickly. The method used in this study is to compare Teacher Forcing (TF) LSTM with Non-Teacher Forcing (Non-TF) LSTM in a Multivariate Time Series model using several activation functions, which produce significant differences. With Sigmoid activation, the TF model achieves RMSE 0.006 and MAE 0.070, while the Non-TF model has RMSE 0.117 and MAE 0.246. The worst values for both models occur with the Softmax activation function: TF RMSE 0.423, MAE 0.485 and Non-TF RMSE 0.520, MAE 0.519.
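    The teacher-forcing comparison in this abstract can be illustrated with a toy recurrence (a minimal numpy sketch under simplified assumptions, not the paper's LSTM): with teacher forcing, the ground-truth value is fed back as the next input at each step, while the free-running (Non-TF) rollout feeds back its own previous prediction, so its errors compound over time.

    ```python
    import numpy as np

    def rollout(seq, w, teacher_forcing=True):
        """Predict seq[t+1] at each step with a toy one-weight model.
        With teacher forcing, the ground truth seq[t] is fed back as
        input; without it, the model's own previous prediction is."""
        preds = []
        prev = seq[0]
        for t in range(len(seq) - 1):
            inp = seq[t] if teacher_forcing else prev
            prev = w * inp  # toy "model": multiply input by learned weight w
            preds.append(prev)
        return np.array(preds)

    seq = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # doubling series

    # With a slightly wrong weight w=1.9, teacher-forced errors stay
    # proportional to one step, while free-running errors accumulate.
    tf_err = np.abs(rollout(seq, 1.9, teacher_forcing=True) - seq[1:])
    ntf_err = np.abs(rollout(seq, 1.9, teacher_forcing=False) - seq[1:])
    ```

    The same mechanism is why teacher forcing also trains faster: each step's target input is available immediately, so steps need not wait on (or backpropagate through) earlier predictions.
    
    
    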

    Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling

    A sound event detection (SED) method typically takes as input a sequence of audio frames and predicts the activities of sound events in each frame. In real-life recordings, the sound events exhibit some temporal structure: for instance, a "car horn" will likely be followed by a "car passing by". While this temporal structure is widely exploited in sequence prediction tasks (e.g., in machine translation), where language models (LM) are employed, it is not satisfactorily modeled in SED. In this work we propose a method which allows a recurrent neural network (RNN) to learn an LM for the SED task. The method conditions the input of the RNN on the activities of classes at the previous time step. We evaluate our method using F1 score and error rate (ER) over three different, publicly available datasets: the TUT-SED Synthetic 2016 dataset and the TUT Sound Events 2016 and 2017 datasets. The obtained results show an increase of 9% and 2% in F1 (higher is better) and a decrease of 7% and 2% in ER (lower is better) for the TUT Sound Events 2016 and 2017 datasets, respectively, when using our method. In contrast, with our method there is a decrease of 4% in F1 score and an increase of 7% in ER for the TUT-SED Synthetic 2016 dataset.
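    The conditioning described above, feeding the previous step's class activities back as part of the RNN input and using scheduled sampling to wean the model off ground truth, can be sketched as follows. This is a minimal illustration: the inverse-sigmoid decay is one common schedule from the scheduled-sampling literature, not necessarily this paper's exact choice, and the names `teacher_prob`, `next_input`, `frame_feats`, `prev_truth`, and `prev_pred` are hypothetical.

    ```python
    import math
    import random

    def teacher_prob(epoch, k=10.0):
        """Inverse-sigmoid decay of the teacher-forcing probability:
        close to 1 early in training, decaying toward 0 later
        (one plausible schedule, assumed here for illustration)."""
        return k / (k + math.exp(epoch / k))

    def next_input(frame_feats, prev_truth, prev_pred, epoch):
        """Build the RNN input for one frame by concatenating the audio
        features with the previous step's class activities: either the
        ground-truth activities (teacher forcing) or the model's own
        predictions, chosen per step with probability teacher_prob(epoch)."""
        prev = prev_truth if random.random() < teacher_prob(epoch) else prev_pred
        return frame_feats + prev  # list concatenation stands in for np.concatenate
    ```

    At inference time no ground truth exists, so the predicted activities are always fed back; scheduled sampling narrows the gap between that regime and training.
    
    
    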