    Deep Learning with Long Short-Term Memory for Time Series Prediction

    Time series prediction can be generalized as a process that extracts useful information from historical records and then determines future values. Learning the long-range dependencies embedded in time series is often an obstacle for most algorithms, whereas Long Short-Term Memory (LSTM) solutions, as a specific kind of scheme in deep learning, promise to effectively overcome the problem. In this article, we first give a brief introduction to the structure and forward propagation mechanism of the LSTM model. Then, aiming at reducing the considerable computing cost of LSTM, we put forward the Random Connectivity LSTM (RCLSTM) model and test it by predicting traffic and user mobility in telecommunication networks. In contrast to the conventional LSTM, the RCLSTM is formed via stochastic connectivity between neurons, a departure from the standard fully connected architecture of neural networks. The resulting sparsity leads to an appealing decrease in computational complexity and makes the RCLSTM more applicable in latency-stringent application scenarios. In the field of telecommunication networks, the prediction of traffic series and mobility traces can directly benefit from this improvement, and we further demonstrate that the prediction accuracy of the RCLSTM is comparable to that of the conventional LSTM regardless of the number of training samples or the length of the input sequences.
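    The abstract leaves the connectivity mechanism at a high level; below is a minimal sketch, assuming sparsity is imposed by fixed random binary masks over an LSTM cell's input and recurrent weight matrices. The `connectivity` ratio, initialization scale, and all names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class RCLSTMCell(nn.Module):
    """LSTM cell whose weight matrices are multiplied by fixed random
    binary masks, so only a fraction of the connections is ever active."""

    def __init__(self, input_size, hidden_size, connectivity=0.1):
        super().__init__()
        # Standard LSTM parameters for the four gates (i, f, g, o).
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))
        # Fixed random masks: roughly a `connectivity` fraction of weights survive.
        self.register_buffer("m_ih", (torch.rand_like(self.w_ih) < connectivity).float())
        self.register_buffer("m_hh", (torch.rand_like(self.w_hh) < connectivity).float())

    def forward(self, x, state):
        h, c = state
        gates = (x @ (self.w_ih * self.m_ih).T
                 + h @ (self.w_hh * self.m_hh).T + self.bias)
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```

    Because the masks are buffers rather than parameters, the pruned connections stay at zero throughout training, which is where the claimed computational saving would come from.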

    Flood prediction using deep learning models

    Deep learning has recently emerged as one of the most reliable approaches for forecasting time series. Even though there are numerous data-driven models for flood prediction, most studies focus on prediction using a single flood variable, and creating separate data-driven models for multiple flood variables may require infeasible computing resources. Furthermore, the trends of several flood variables can only be revealed by analysing long-term historical observations, which conventional data-driven models do not adequately support. This study proposes time series models with layer normalization and the Leaky ReLU activation function in multivariable long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), and deep recurrent neural network (DRNN) architectures. The proposed models were trained and evaluated using historical sensor data of river water level and rainfall from the east coast state of Malaysia, and were then compared against six other deep learning models. In terms of prediction accuracy, the experimental results demonstrate that the deep recurrent neural network model with layer normalization and the Leaky ReLU activation function performed better than the other models.
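    As a rough illustration of the proposed design, the sketch below places layer normalization and a Leaky ReLU activation between an LSTM and its output head; the hidden size, the normalization placement, and the negative slope are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class NormalizedLSTM(nn.Module):
    """Multivariable LSTM regressor with layer normalization and
    Leaky ReLU between the recurrent and output stages."""

    def __init__(self, n_features, hidden_size=64, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)
        self.act = nn.LeakyReLU(0.01)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):            # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        last = out[:, -1, :]         # hidden state at the final time step
        return self.head(self.act(self.norm(last)))
```

    The same normalization-plus-activation block could wrap a bidirectional LSTM or a deeper recurrent stack to mirror the paper's BiLSTM and DRNN variants.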

    Long-Short Term Memory for an Effective Short-Term Weather Forecasting Model Using Surface Weather Data

    Numerical Weather Prediction (NWP) requires considerable computing power to solve the complex mathematical equations that produce a forecast from current weather conditions. In this article, we propose a lightweight data-driven weather forecasting model by exploring state-of-the-art deep learning techniques based on the Artificial Neural Network (ANN). Weather information is captured by time-series data, so we explore the Long Short-Term Memory (LSTM) layered model, a specialised form of Recurrent Neural Network (RNN), for weather prediction. The aim of this research is to develop and evaluate a short-term weather forecasting model using the LSTM and to compare its accuracy against the well-established Weather Research and Forecasting (WRF) NWP model. The proposed deep model consists of stacked LSTM layers that use surface weather parameters over a given period of time for weather forecasting. The model is experimented with different numbers of LSTM layers, optimisers, and learning rates, and is optimised for effective short-term weather prediction. Our experiments show that the proposed lightweight model produces better results than the well-known and complex WRF model, demonstrating its potential for efficient and accurate short-term weather forecasting.
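    The abstract describes a sweep over depth, optimiser, and learning rate; a hedged sketch of such a sweep follows, with every grid value, the hidden size, and the feature count chosen for illustration only.

```python
import itertools
import torch
import torch.nn as nn

def build_model(n_features, n_layers, hidden=32):
    """Stacked-LSTM regressor; the depth is the swept hyperparameter."""
    class StackedLSTM(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden,
                                num_layers=n_layers, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):        # x: (batch, time, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])
    return StackedLSTM()

# Illustrative grid over depth, optimiser, and learning rate.
grid = itertools.product([1, 2, 3],
                         [torch.optim.Adam, torch.optim.RMSprop],
                         [1e-2, 1e-3])
for n_layers, opt_cls, lr in grid:
    model = build_model(n_features=5, n_layers=n_layers)
    optimiser = opt_cls(model.parameters(), lr=lr)
    # ... train on windows of surface weather parameters and keep
    # the configuration with the best validation RMSE.
```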

    Time series segmentation based on stationarity analysis to improve new samples prediction

    A wide range of applications based on sequential data, called time series, have become increasingly popular in recent years, mainly those based on the Internet of Things (IoT). Several machine learning algorithms exploit the patterns extracted from sequential data to support multiple tasks. However, this data can suffer from unreliable readings that lead to low-accuracy models, owing to the low-quality training sets available. Detecting the change point between highly representative segments is an important ally for finding and treating biased subsequences. By constructing a framework based on the Augmented Dickey-Fuller (ADF) test for data stationarity, two proposals to automatically segment subsequences in a time series were developed. The first, called Change Detector segmentation, relies on change detection methods from data stream mining. The second, called ADF-based segmentation, is built on a new change detector derived from the ADF test alone. Experiments over real-life IoT databases and benchmarks showed the improvement provided by our proposals for prediction tasks with traditional autoregressive integrated moving average (ARIMA) and deep learning (long short-term memory and temporal convolutional network) methods. Results obtained by the long short-term memory predictive model reduced the relative prediction error from 1 to 0.67 compared to time series without segmentation.
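    The paper's exact ADF-based change detector is not reproduced here; the following is a deliberately naive greedy segmenter built on the same idea, cutting a new segment whenever the growing window stops passing the ADF stationarity test. The `min_len` and `alpha` thresholds are assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_segments(series, min_len=50, alpha=0.05):
    """Greedy segmentation: grow a window and cut whenever the ADF test
    no longer rejects the unit-root null (window stops looking stationary)."""
    cuts, start = [], 0
    for end in range(len(series)):
        if end - start < min_len:
            continue
        pvalue = adfuller(series[start:end], autolag="AIC")[1]
        if pvalue > alpha:
            cuts.append(end)
            start = end
    return cuts

rng = np.random.default_rng(0)
# Stationary noise followed by a random walk: a regime change near t = 300.
x = np.concatenate([rng.normal(0, 1, 300), np.cumsum(rng.normal(0, 1, 300))])
print(adf_segments(x))
```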

    Predicting Tourist Visits to Taman Nasional Gunung Merbabu with Time Series Forecasting and LSTM

    Abstract. Prediction of tourist visits to Mount Merbabu National Park (TNGMb) is needed to control the number of visitors and to preserve the national park. The combination of time series forecasting (TSF) and deep learning methods has become a new alternative for prediction. This case study was conducted to implement several combinations of TSF and Long-Short Term Memory (LSTM) methods to predict the visits. In this case study, 18 modelling scenarios were examined to determine the best model, utilizing tourist visit data from 2013 to 2018. The results show that applying the lag-time method improves the model's ability to capture patterns in time series data. The error is measured using the root mean square error (RMSE), with the smallest value of 3.7 achieved by the LSTM architecture using seven lags as features and one lag as the label.
    Keywords: Tourist Visit, Taman Nasional Gunung Merbabu, Prediction, Recurrent Neural Network, Long-Short Term Memory
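    The winning configuration, seven lags as features and one lag as the label, is a standard sliding-window transform; a minimal sketch follows, with `visits` standing in for the actual visitor counts.

```python
import numpy as np

def make_lag_dataset(series, n_lags=7):
    """Each sample uses the previous `n_lags` values to predict the next."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

visits = np.arange(100, dtype=float)   # placeholder for monthly visit counts
X, y = make_lag_dataset(visits, n_lags=7)
print(X.shape, y.shape)                # (93, 7) (93,)
```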

    Attention-based CNN-LSTM and XGBoost hybrid model for stock prediction

    The stock market plays an important role in economic development. Due to the complex volatility of the stock market, research on and prediction of stock price changes can help investors avoid risk. The traditional time series model ARIMA cannot describe nonlinearity and does not achieve satisfactory results in stock prediction. As neural networks have strong nonlinear generalization ability, this paper proposes an attention-based CNN-LSTM and XGBoost hybrid model to predict the stock price. The model integrates the time series model, Convolutional Neural Networks with an attention mechanism, the Long Short-Term Memory network, and an XGBoost regressor in a non-linear relationship, improving prediction accuracy. The model can fully mine the historical information of the stock market over multiple periods. The stock data is first preprocessed through ARIMA. Then, a deep learning architecture built on a pretraining-finetuning framework is adopted: the pre-training model is an attention-based CNN-LSTM based on a sequence-to-sequence framework, which first uses convolution to extract deep features of the original stock data and then uses Long Short-Term Memory networks to mine long-term time series features. Finally, the XGBoost model is adopted for fine-tuning. The results show that the hybrid model is more effective and its prediction accuracy is relatively high, which can help investors or institutions make decisions toward expanding returns and avoiding risk. Source code is available at https://github.com/zshicode/Attention-CLX-stock-prediction.
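    A compact sketch of the pre-training stage as described, convolutional feature extraction followed by an LSTM and attention pooling over time; the layer shapes and the attention form are assumptions (the authors' implementation lives in the linked repository), and the ARIMA preprocessing and XGBoost fine-tuning stages are omitted.

```python
import torch
import torch.nn as nn

class AttnCNNLSTM(nn.Module):
    """Conv1d feature extractor -> LSTM -> attention pooling -> regression.
    XGBoost fine-tuning would then be fit on top of these representations."""

    def __init__(self, n_features, channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_features)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(z)                    # (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1) # attention weights over time
        context = (w * out).sum(dim=1)           # weighted summary of the sequence
        return self.head(context)
```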

    Gas well performance prediction using deep learning jointly driven by decline curve analysis model and production data

    The prediction of gas well performance is crucial for estimating the ultimate recovery rate of natural gas reservoirs. However, physics-based numerical simulation methods require significant effort to build a robust model, while the decline curve analysis methods used in this field rest on certain assumptions, so their applications are limited by strict working conditions. In this work, a deep learning model driven jointly by a decline curve analysis model and production data is proposed for the production performance prediction of gas wells. Because of the time-series character of gas well production data, the long short-term memory (LSTM) neural network is selected as the architecture. An existing decline curve analysis model is implicitly incorporated into the training process of the neural network and drives the network construction along with the actual gas well production history. Applying the proposed model to conventional and tight gas well performance prediction on field data demonstrates that the jointly driven LSTM model effectively improves the interpretability and predictive ability of the traditional LSTM model driven by production data alone. Compared with the purely data-driven model, the jointly driven model reduces the mean absolute error by 42.90% and 13.65% for a tight gas well and a carbonate gas well, respectively.
    Document Type: Original article. Cited as: Xue, L., Wang, J., Han, J., Yang, M., Mwasmwasa, M. S., Nanguka, F. Gas well performance prediction using deep learning jointly driven by decline curve analysis model and production data. Advances in Geo-Energy Research, 2023, 8(3): 159-169. https://doi.org/10.46690/ager.2023.06.0
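    The abstract does not state how the decline curve analysis model enters training. One plausible reading, sketched below, is a composite loss that penalizes deviation from an Arps-style hyperbolic decline alongside the usual data misfit; the Arps form, the weight `lam`, and all parameter names are assumptions, not the paper's formulation.

```python
import torch

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b * di * t)**(1 / b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def joint_loss(pred, observed, t, qi, di, b, lam=0.1):
    """Mean-squared data misfit plus a soft decline-curve consistency term."""
    data_term = torch.mean((pred - observed) ** 2)
    dca_term = torch.mean((pred - arps_hyperbolic(t, qi, di, b)) ** 2)
    return data_term + lam * dca_term
```

    During training the LSTM's production forecasts would be scored with `joint_loss` instead of plain MSE, so the network is pulled toward both the measured history and the decline-curve prior.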