Wind Power Forecasting Methods Based on Deep Learning: A Survey
Accurate wind power forecasting for wind farms can effectively reduce the enormous impact on grid operation safety when high-penetration intermittent power supplies are connected to the power grid. Aiming to provide reference strategies for relevant researchers as well as practical applications, this paper surveys the literature and analyzes methods of deep learning, reinforcement learning, and transfer learning in wind speed and wind power forecasting. Forecasting wind speed and wind power around a wind farm usually requires predicting the next state from the current state of the atmosphere, which encompasses nearby atmospheric pressure, temperature, roughness, and obstacles. As an effective method of high-dimensional feature extraction, a deep neural network can in theory handle arbitrary nonlinear transformations through proper structural design, such as adding noise to outputs, using evolutionary learning to optimize hidden-layer weights, and optimizing the objective function so as to retain information that improves output accuracy while filtering out information that is irrelevant or has little effect on the forecast. Establishing high-precision wind speed and wind power forecasting models remains a challenge due to the randomness, instantaneity, and seasonal characteristics of wind.
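The survey above frames forecasting as predicting the next state of the atmosphere from recent measurements. As a minimal, illustrative sketch (not taken from the survey), the snippet below turns a sequence of hourly weather records into supervised (features, target) pairs for next-hour wind speed prediction; the field names and the lag window of 3 hours are assumptions for illustration.

```python
# Illustrative framing of next-step wind forecasting as supervised learning.
# Field names (pressure, temperature, wind_speed) and lag=3 are assumptions.

def make_supervised(records, lag=3):
    """Turn a list of hourly weather dicts into (features, target) pairs.

    Each feature vector concatenates the last `lag` hours of pressure,
    temperature, and wind speed; the target is the next hour's wind speed.
    """
    pairs = []
    for t in range(lag, len(records)):
        features = []
        for r in records[t - lag:t]:
            features.extend([r["pressure"], r["temperature"], r["wind_speed"]])
        pairs.append((features, records[t]["wind_speed"]))
    return pairs

# Six hours of synthetic records, purely for demonstration.
records = [
    {"pressure": 1012 + i, "temperature": 10 + 0.5 * i, "wind_speed": 5 + i % 3}
    for i in range(6)
]
pairs = make_supervised(records, lag=3)  # 3 samples, 9 features each
```

Any regressor, from a linear model to the deep networks the survey covers, can then be fit on such pairs.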
Short-Term Forecasting of Passenger Demand under On-Demand Ride Services: A Spatio-Temporal Deep Learning Approach
Short-term passenger demand forecasting is of great importance to on-demand ride service platforms, which can incentivize vacant cars to move from over-supply regions to over-demand regions. The spatial, temporal, and exogenous dependencies need to be considered simultaneously, however, which makes short-term passenger demand forecasting challenging. We propose a novel deep learning (DL) approach, named the fusion convolutional long short-term memory network (FCL-Net), to address these three dependencies within one end-to-end learning architecture. The model stacks and fuses multiple convolutional long short-term memory (LSTM) layers, standard LSTM layers, and convolutional layers. The fusion of convolutional techniques and the LSTM network enables the proposed DL approach to better capture the spatio-temporal characteristics and correlations of explanatory variables. A tailored spatially aggregated random forest is employed to rank the importance of the explanatory variables; the ranking is then used for feature selection. The proposed DL approach is applied to the short-term forecasting of passenger demand on an on-demand ride service platform in Hangzhou, China. Experimental results, validated on real-world data provided by DiDi Chuxing, show that the FCL-Net achieves better predictive performance than traditional approaches, including both classical time-series prediction models and neural-network-based algorithms (e.g., artificial neural networks and LSTM). This paper is one of the first DL studies to forecast the short-term passenger demand of an on-demand ride service platform by examining spatio-temporal correlations.
Comment: 39 pages, 10 figures
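A model like FCL-Net consumes sequences of 2-D demand grids (the spatial and temporal dependencies) alongside per-step exogenous variables. The sketch below shows one plausible way to lay out such inputs; the grid size, window length, and exogenous fields are illustrative assumptions, not FCL-Net's actual configuration.

```python
# Hedged sketch of the input layout a ConvLSTM-style demand model might
# consume: a window of 2-D demand grids plus per-step exogenous scalars.
# All dimensions and field names here are assumptions for illustration.

def build_input(demand_grids, exogenous, window=4):
    """Return (grid_sequence, exo_sequence, target) samples of length `window`.

    demand_grids: list of H x W nested lists (demand per zone per time step)
    exogenous:    list of dicts per step, e.g. {"temp": .., "is_weekend": ..}
    """
    samples = []
    for t in range(window, len(demand_grids)):
        grids = demand_grids[t - window:t]                      # temporal axis
        exo = [[e["temp"], e["is_weekend"]] for e in exogenous[t - window:t]]
        target = demand_grids[t]                                # next-step grid
        samples.append((grids, exo, target))
    return samples

# Synthetic 2x2 demand grids over 6 time steps, purely for demonstration.
H, W = 2, 2
demand = [[[t + r + c for c in range(W)] for r in range(H)] for t in range(6)]
exo = [{"temp": 20 + t, "is_weekend": t % 7 >= 5} for t in range(6)]
samples = build_input(demand, exo, window=4)  # 2 samples
```

In a full pipeline, the grid sequences would feed the convolutional LSTM layers and the exogenous sequences the standard LSTM layers before fusion.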
Short-term forecasting of wind energy: A comparison of deep learning frameworks
Wind energy has been recognized as the most promising and economical renewable energy source, attracting increasing attention in recent years. However, given the variability and uncertainty of wind energy, accurate forecasting is crucial to propel high levels of wind energy penetration within electricity markets. In this paper, a comparative framework is proposed in which a suite of long short-term memory (LSTM) recurrent neural network (RNN) models, comprising standard, bidirectional, stacked, convolutional, and autoencoder architectures, is implemented to address the existing gaps and limitations of reported wind power forecasting methodologies. These networks are implemented through an iterative process of varying hyperparameters to better assess their effect and the overall performance of each architecture when tackling one-hour- to three-hour-ahead wind power forecasting. The corresponding validation is carried out on hourly wind power data from the Spanish electricity market, collected between 2014 and 2020. The proposed comparative error analysis shows that, overall, the models tend to showcase low error variability and better performance when the networks are able to learn in weekly sequences. The model with the best performance in forecasting one-hour-ahead wind power is the stacked LSTM, implemented with weekly learning input sequences, with an average MAPE improvement of roughly 6%, 7%, and 49% when compared to standard, bidirectional, and convolutional LSTM models, respectively. For two- to three-hour-ahead forecasting, the model with the best overall performance is the bidirectional LSTM implemented with weekly learning input sequences, showcasing an average MAPE improvement of 2 to 23% when compared to the other LSTM architectures implemented.
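The comparison above is reported in terms of MAPE. For concreteness, a minimal sketch of that metric follows; the forecast values are made up for illustration.

```python
# Mean absolute percentage error, the metric used to compare LSTM variants.

def mape(actual, forecast):
    """Return MAPE in percent, skipping zero actuals to avoid division by zero."""
    terms = [abs(a - f) / abs(a) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(terms) / len(terms)

actual = [100.0, 200.0, 400.0]    # e.g. hourly wind power (illustrative)
forecast = [110.0, 190.0, 400.0]
err = mape(actual, forecast)      # (0.10 + 0.05 + 0.0) / 3 * 100 = 5.0
```

Note that MAPE is undefined at zero actuals and penalizes over- and under-forecasts asymmetrically, which is worth keeping in mind when wind output drops near zero.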
A review on Day-Ahead Solar Energy Prediction
Accurate day-ahead prediction of solar energy plays a vital role in planning supply and demand in a power grid system. Previous studies base predictions on weather forecasts composed of numerical text data; while such data can reflect temporal factors, the mapping from data to result might not always give the most accurate and precise predictions. Incorporating different methods and techniques that enhance accuracy is therefore an important topic. This paper provides an in-depth review of current deep learning-based forecasting models for renewable energy.
Exploring Interpretable LSTM Neural Networks over Multi-Variable Data
For recurrent neural networks trained on time series with target and
exogenous variables, in addition to accurate prediction, it is also desired to
provide interpretable insights into the data. In this paper, we explore the
structure of LSTM recurrent neural networks to learn variable-wise hidden
states, with the aim to capture different dynamics in multi-variable time
series and distinguish the contribution of variables to the prediction. With
these variable-wise hidden states, a mixture attention mechanism is proposed to
model the generative process of the target. Then we develop associated training methods to jointly learn network parameters and variable and temporal importance w.r.t. the prediction of the target variable. Extensive experiments on real datasets demonstrate enhanced prediction performance from capturing the dynamics of different variables. Meanwhile, we evaluate the interpretation results both qualitatively and quantitatively. The approach shows promise as an end-to-end framework for both forecasting and knowledge extraction over multi-variable data.
Comment: Accepted to International Conference on Machine Learning (ICML), 201
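The mixture attention idea above combines variable-wise predictions with learned weights, so the weights double as variable-importance scores. The snippet below is an illustrative sketch of that mixing step, not the paper's exact formulation; the scores and per-variable predictions are made up.

```python
# Illustrative mixture step: softmax attention scores weight the
# variable-wise predictions; the weights serve as importance scores.
import math

def mixture_attention(variable_preds, scores):
    """Softmax the attention scores and mix the per-variable predictions."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]               # sum to 1
    prediction = sum(w * p for w, p in zip(weights, variable_preds))
    return prediction, weights

preds = [2.0, 4.0]     # one prediction per input variable (illustrative)
scores = [0.0, 0.0]    # equal scores -> equal weights
pred, w = mixture_attention(preds, scores)  # pred = 3.0, w = [0.5, 0.5]
```

In the paper's setting these scores are produced from variable-wise hidden states and learned jointly with the network parameters.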
Temporal Spatial Decomposition and Fusion Network for Time Series Forecasting
Feature engineering is required to obtain better results for time series
forecasting, and decomposition is a crucial part of it. A single decomposition
approach often cannot serve numerous forecasting tasks, since standard time
series decomposition lacks flexibility and robustness. Traditional feature
selection relies heavily on preexisting domain knowledge, has no generic
methodology, and requires a lot of labor. Moreover, most deep-learning-based time series prediction models suffer from interpretability issues, so their "black box" results lead to a lack of confidence. Dealing with these issues motivates this work. In this paper we propose TSDFNet, a neural network with a self-decomposition mechanism and an attentive feature fusion mechanism. It abandons feature engineering as a preprocessing convention and instead integrates it as an internal module of the deep model. The
self-decomposition mechanism empowers TSDFNet with extensible and adaptive
decomposition capabilities for any time series: users can choose their own
basis functions to decompose the sequence into temporal and generalized spatial
dimensions. The attentive feature fusion mechanism can capture the importance of external variables and their causal relationships with target variables. It automatically suppresses unimportant features while enhancing effective ones, so that users do not have to struggle with feature selection.
Moreover, TSDFNet makes it easy to look into the "black box" of the deep neural network through feature visualization and to analyze the prediction results. We demonstrate performance improvements over existing widely accepted models on more than a dozen datasets, and three experiments showcase the interpretability of TSDFNet.
Comment: 10 pages
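As a hedged sketch in the spirit of the self-decomposition idea above, the snippet below splits a series into trend and residual components with a trailing moving average; the window choice is illustrative, and TSDFNet itself lets users plug in their own basis functions rather than fixing one.

```python
# Simple basis-style decomposition: a trailing moving average yields a
# trend component, and what remains is the residual. Window=3 is an
# arbitrary illustrative choice, not TSDFNet's mechanism.

def decompose(series, window=3):
    """Return (trend, residual) so that trend[t] + residual[t] == series[t]."""
    trend, residual = [], []
    for t in range(len(series)):
        lo = max(0, t - window + 1)                   # shrink at the start
        avg = sum(series[lo:t + 1]) / (t + 1 - lo)
        trend.append(avg)
        residual.append(series[t] - avg)
    return trend, residual

series = [1.0, 2.0, 3.0, 4.0]
trend, residual = decompose(series, window=3)
# reconstruction holds exactly: trend[t] + residual[t] == series[t]
```

A learned decomposition module generalizes this by making the basis functions trainable and extensible rather than fixed in advance.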