    An Experimental Review on Deep Learning Architectures for Time Series Forecasting

    In recent years, deep learning techniques have outperformed traditional models in many machine learning tasks. Deep neural networks have been applied successfully to time series forecasting, an important problem in data mining, and have proved effective given their capacity to automatically learn the temporal dependencies present in time series. However, selecting the most suitable type of deep neural network and its parameterisation is a complex task that requires considerable expertise. Therefore, there is a need for deeper studies on the suitability of the existing architectures for different forecasting tasks. In this work, we address two main challenges: a comprehensive review of recent work using deep learning for time series forecasting, and an experimental study comparing the performance of the most popular architectures. The comparison involves a thorough analysis of seven types of deep learning models in terms of accuracy and efficiency. We evaluate the rankings and distributions of results obtained with the proposed models under many different architecture configurations and training hyperparameters. The datasets used comprise more than 50,000 time series divided into 12 different forecasting problems. By training more than 38,000 models on these data, we provide the most extensive deep learning study for time series forecasting to date. Among all studied models, the results show that long short-term memory (LSTM) networks and convolutional neural networks (CNNs) are the best alternatives, with LSTMs obtaining the most accurate forecasts. CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
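
    As a rough, hypothetical illustration of the kind of architectures the study compares, the sketch below trains a small LSTM forecaster and a small CNN forecaster on a synthetic series with Keras; the window length, horizon, layer sizes and training budget are arbitrary assumptions, not the configurations evaluated in the paper.

```python
# Minimal sketch (not the paper's code): an LSTM and a CNN multi-step forecaster
# trained on the same synthetic series, then compared on validation MSE.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(series, past=24, horizon=4):
    """Slice a 1-D series into (past -> horizon) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - past - horizon + 1):
        X.append(series[i:i + past])
        y.append(series[i + past:i + past + horizon])
    return np.array(X)[..., None], np.array(y)

series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
X, y = make_windows(series)

lstm = tf.keras.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.LSTM(64),
    layers.Dense(y.shape[1]),
])

cnn = tf.keras.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(y.shape[1]),
])

for name, model in [("LSTM", lstm), ("CNN", cnn)]:
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, epochs=5, batch_size=64,
                     validation_split=0.2, verbose=0)
    print(name, "val MSE:", hist.history["val_loss"][-1])
```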

    A recurrent neural network approach to quantitatively studying solar wind effects on TEC derived from GPS; preliminary results

    This paper describes the search for parameters that represent solar wind effects in Global Positioning System total electron content (GPS TEC) modelling using neural networks (NNs). A study is carried out by including solar wind velocity (Vsw), proton number density (Np) and the Bz component of the interplanetary magnetic field (IMF Bz), obtained from the Advanced Composition Explorer (ACE) satellite, as separate inputs to the NN, each along with the day number of the year (DN), hour (HR), a 4-month running mean of the daily sunspot number (R4) and the running mean of the previous eight 3-hourly magnetic A index values (A8). Hourly GPS TEC values derived from a dual-frequency receiver located at Sutherland (32.38° S, 20.81° E), South Africa, for 8 years (2000–2007) have been used to train an Elman neural network (ENN), and the result has been used to predict TEC variations for a GPS station located at Cape Town (33.95° S, 18.47° E). Quantitative results indicate that each of the parameters considered may have some degree of influence on GPS TEC at certain periods, although a decrease in prediction accuracy is also observed for some parameters on different days and in different seasons. It is also evident that predicting TEC values during disturbed conditions remains difficult. The improvements and degradations in prediction accuracy are both close to the benchmark values, which lends weight to the belief that diurnal, seasonal, solar and magnetic variabilities may be the major determinants of TEC variability.
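
    To make the described setup concrete, here is a minimal sketch, under stated assumptions, of an Elman-style recurrent network (a Keras SimpleRNN standing in for the ENN) mapping the abstract's inputs (DN, HR, R4, A8 plus one solar wind parameter such as Vsw) to hourly TEC; the data below are random placeholders, and the sequence length and layer width are illustrative choices rather than the authors' configuration.

```python
# Minimal sketch (not the authors' model): an Elman-style recurrent network
# trained to map 24 h of input features to a TEC value.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_samples, seq_len, n_features = 1000, 24, 5   # 24 h of (DN, HR, R4, A8, Vsw)
X = np.random.rand(n_samples, seq_len, n_features).astype("float32")  # placeholder inputs
y = np.random.rand(n_samples, 1).astype("float32")                    # placeholder TEC target

model = tf.keras.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.SimpleRNN(32, activation="tanh"),   # Elman-style recurrent layer
    layers.Dense(1),                           # predicted TEC (TECU)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```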

    The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting

    The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully consider how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper focuses on the present and future role of machine learning in space weather. The purpose is twofold. On the one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbit, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as gray-box.
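
    As a small illustration of the probabilistic theme, the sketch below scores a hypothetical binary event forecast (for example, flare / no flare) with the Brier score and a simple reliability table; the probabilities and outcomes are synthetic placeholders, not the output of any space weather model.

```python
# Minimal sketch of probabilistic forecast evaluation: Brier score plus a
# crude reliability table (mean forecast probability vs observed frequency).
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 5000)                        # forecast probabilities
o = (rng.uniform(0, 1, 5000) < p).astype(float)    # synthetic outcomes consistent with p

brier = np.mean((p - o) ** 2)
print("Brier score:", round(brier, 4))

bins = np.linspace(0, 1, 11)
idx = np.digitize(p, bins) - 1
for b in range(10):
    mask = idx == b
    if mask.any():
        print(f"bin {bins[b]:.1f}-{bins[b+1]:.1f}: "
              f"forecast {p[mask].mean():.2f}, observed {o[mask].mean():.2f}")
```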

    Neural network prediction of geomagnetic activity: a method using local Hölder exponents

    Local scaling and singularity properties of solar wind and geomagnetic time series were analysed using Hölder exponents α. It was shown that, in the analysed cases, α changes from point to point due to the multifractality of the fluctuations. We argue that there exists a peculiar interplay between the regularity/irregularity and the amplitude characteristics of the fluctuations which could be exploited to improve predictions of geomagnetic activity. To this end, a layered backpropagation artificial neural network model with feedback connections was used to study solar wind–magnetosphere coupling and to predict the geomagnetic Dst index. The solar wind input was taken from a principal component analysis of the interplanetary magnetic field, proton density and bulk velocity. Superior network performance was achieved when information on the local Hölder exponents was added to the input layer.
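
    As a toy illustration of the idea, and not the paper's actual multifractal analysis, the sketch below estimates pointwise Hölder exponents from local oscillations via a log-log slope; the window radii and the test signal are simplifying assumptions, and such exponents could in principle be appended to a network input alongside the solar wind principal components.

```python
# Minimal sketch: crude pointwise Hölder exponent estimate. For each point,
# alpha(t) is the slope of log(local oscillation over radius r) vs log(r).
import numpy as np

def local_holder(x, radii=(2, 4, 8, 16)):
    """Estimate a local Hölder exponent at each interior point of x."""
    n = len(x)
    alphas = np.full(n, np.nan)
    log_r = np.log(radii)
    for t in range(max(radii), n - max(radii)):
        osc = [np.ptp(x[t - r:t + r + 1]) + 1e-12 for r in radii]  # max - min in window
        alphas[t] = np.polyfit(log_r, np.log(osc), 1)[0]           # slope = alpha
    return alphas

# Example on a random-walk test series (stand-in for a geomagnetic index).
x = np.cumsum(np.random.randn(2048))
alpha = local_holder(x)
print("median local Hölder exponent:", np.nanmedian(alpha))
```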