    A canonical space-time state space model: state and parameter estimation

    Maximum likelihood estimation of a dynamic spatiotemporal model is introduced, centred on the inclusion of an arbitrary, prior-specified spatiotemporal neighborhood description. The neighborhood description defines a specific parameterization of the state transition matrix, chosen on the basis of prior knowledge about the system. The model is inspired by the spatiotemporal ARMA (STARMA) model, but the representation used is the standard state-space model. Including the neighborhood in an expectation-maximization-based joint state and parameter estimation algorithm allows for accurate characterization of the spatiotemporal model. The process of including the neighborhood, and its effect on the maximum likelihood parameter estimates, is described and demonstrated in this paper.
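    As a rough illustration of the idea (not the paper's implementation), the sketch below builds a state transition matrix from a hypothetical neighborhood mask with one coefficient per neighborhood order, and runs a single Kalman filter step of the kind iterated in the E-step of an EM-based joint state and parameter estimation; all names, shapes, and the linear-Gaussian form are assumptions.

    # Illustrative sketch only: neighborhood-parameterized transition matrix
    # for a linear-Gaussian state-space model (STARMA-style), plus one Kalman
    # filter step. An EM E-step would run such steps (and a smoother) over the
    # whole series; the M-step would re-estimate the per-order weights.
    import numpy as np

    def build_transition(neighborhood, weights):
        # neighborhood: (n, n) ints; entry k > 0 marks a k-th order neighbor.
        # weights: one coefficient per neighborhood order (the free parameters).
        F = np.zeros(neighborhood.shape)
        for order, w in enumerate(weights, start=1):
            F[neighborhood == order] = w
        return F

    def kalman_step(x, P, y, F, H, Q, R):
        # Predict.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with the new observation y.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Example: three sites in a line, first-order neighbors only.
    nbh = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    F = build_transition(nbh, weights=[0.4])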

    An Optimal Stacked Ensemble Deep Learning Model for Predicting Time-Series Data Using a Genetic Algorithm—An Application for Aerosol Particle Number Concentrations

    Time-series prediction is an important area that spans numerous research disciplines and applications, including air quality databases. Developing a robust and accurate model for time-series data is a challenging task because it involves training different models and optimization. In this paper, we proposed and tested three machine learning techniques—recurrent neural networks (RNNs), a heuristic algorithm, and ensemble learning—to develop a predictive model for estimating atmospheric particle number concentrations in the form of a time-series database. Here, the RNNs included three variants—Long Short-Term Memory, Gated Recurrent Network, and Bi-directional Recurrent Neural Network—with various configurations. A Genetic Algorithm (GA) was then used to find the optimal time lag in order to enhance the models' performance. The optimized models were used to construct a stacked ensemble model as well as to perform the final prediction. The results demonstrated that the time-lag value can be optimized by the heuristic algorithm, which in turn improved the models' prediction accuracy. Further improvement can be achieved by ensemble learning, which combines several models for better performance and more accurate predictions.
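    A minimal sketch of the time-lag search idea (illustrative only, not the paper's code): a genetic algorithm evolves the integer lag used to build supervised windows from a series, with a cheap autoregressive least-squares fit standing in for the RNN fitness evaluation; the LSTM/GRU training and stacking stages are omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_windows(series, lag):
        # Turn a 1-D series into (samples, lag) inputs and next-step targets.
        X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
        y = series[lag:]
        return X, y

    def fitness(series, lag):
        # Stand-in for "train an RNN with this lag": a linear AR fit on a
        # train split, scored by negative validation MSE (higher is better).
        X, y = make_windows(series, lag)
        split = int(0.8 * len(y))
        coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
        return -np.mean((X[split:] @ coef - y[split:]) ** 2)

    def ga_optimal_lag(series, pop=10, gens=15, lag_range=(2, 48)):
        lags = rng.integers(lag_range[0], lag_range[1] + 1, size=pop)
        for _ in range(gens):
            scores = np.array([fitness(series, int(l)) for l in lags])
            parents = lags[np.argsort(scores)][-pop // 2:]           # keep the best half
            children = rng.choice(parents, size=pop - len(parents))  # reproduce survivors
            children = np.clip(children + rng.integers(-2, 3, size=len(children)),
                               lag_range[0], lag_range[1])           # integer mutation
            lags = np.concatenate([parents, children])
        scores = [fitness(series, int(l)) for l in lags]
        return int(lags[int(np.argmax(scores))])

    # Toy usage: a noisy sinusoid in place of real particle number concentrations.
    series = np.sin(np.linspace(0, 60, 800)) + 0.1 * rng.standard_normal(800)
    print("selected time lag:", ga_optimal_lag(series))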

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events far back in the input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTMs have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches, and against our work applying conventional LSTM to the task at hand.
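    The cascade-with-frozen-weights idea can be sketched roughly as follows (a hedged illustration in PyTorch, assuming a single hidden size and a regression head; it is not the authors' GLSTM implementation and omits the fully connected variant).

    import torch
    import torch.nn as nn

    class GrowingLSTM(nn.Module):
        def __init__(self, input_size=1, hidden=16):
            super().__init__()
            self.hidden = hidden
            self.cells = nn.ModuleList([nn.LSTM(input_size, hidden, batch_first=True)])
            self.head = nn.Linear(hidden, 1)

        def grow(self):
            # Freeze everything trained so far, then append a fresh LSTM layer
            # fed by the (now frozen) stack below it.
            for p in self.cells.parameters():
                p.requires_grad = False
            self.cells.append(nn.LSTM(self.hidden, self.hidden, batch_first=True))
            self.head = nn.Linear(self.hidden, 1)   # new head for the new top layer

        def forward(self, x):
            for cell in self.cells:
                x, _ = cell(x)
            return self.head(x[:, -1])              # predict from the last time step

    def train_stage(model, X, y, epochs=20):
        params = [p for p in model.parameters() if p.requires_grad]
        opt = torch.optim.Adam(params, lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()

    # Toy usage: grow from one to two cascaded LSTM layers on synthetic data.
    X = torch.randn(128, 20, 1)
    y = X.mean(dim=1)
    model = GrowingLSTM()
    train_stage(model, X, y)     # stage 1: single LSTM layer
    model.grow()                 # stage 2: frozen layer 1 + fresh layer 2
    train_stage(model, X, y)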

    Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems

    Big data analytics is a relatively new term in power system terminology. The concept concerns the way a massive volume of data is acquired, processed, and analyzed to extract insight. In particular, big data analytics draws on applications of artificial intelligence, machine learning, data mining, and time-series forecasting methods. Decision-makers in power systems have long been plagued by the weakness of classical methods in dealing with large-scale practical cases, owing to the existence of thousands or millions of variables, long computation times, a high computational burden, divergence of results, unjustifiable errors, and poor model accuracy. Big data analytics is an ongoing topic that pinpoints how to extract insights from these large data sets. This article enumerates the applications of big data analytics in future power systems across several layers, from grid scale to local scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. The employment of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud space, blockchain, etc. This paper comprehensively reviews the applications of big data analytics along with the prevailing challenges and solutions.

    Evolving CNN-LSTM Models for Time Series Prediction Using Enhanced Grey Wolf Optimizer

    In this research, we propose an enhanced Grey Wolf Optimizer (GWO) for designing evolving Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) networks for time series analysis. To overcome the risk of stagnation at local optima and the slow convergence rate of the classical GWO algorithm, the newly proposed variant incorporates four distinctive search mechanisms: a nonlinear exploration scheme for dynamic search territory adjustment, a chaotic leadership dispatching strategy among the dominant wolves, a rectified spiral local exploitation action, and probability distribution-based leader enhancement. The evolving CNN-LSTM models are subsequently devised using the proposed GWO variant, where the network topology and learning hyperparameters are optimized for time series prediction and classification tasks. Evaluated on a number of benchmark problems, the proposed GWO-optimized CNN-LSTM models produce statistically significant improvements over several classical search methods and advanced GWO and Particle Swarm Optimization variants. Compared with the baseline methods, the CNN-LSTM networks devised by the proposed GWO variant offer better representational capacity, capturing the vital feature interactions and encapsulating the sophisticated dependencies in complex temporal contexts for time-series tasks.
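    For orientation, a bare-bones GWO skeleton is sketched below, with a quadratic (nonlinear) decay of the exploration coefficient standing in for the dynamic search territory adjustment; the chaotic leadership, spiral exploitation, and leader-enhancement mechanisms are omitted, and in the paper the decision vector would encode CNN-LSTM topology and learning hyperparameters rather than the toy sphere function used here.

    import numpy as np

    rng = np.random.default_rng(1)

    def gwo(objective, dim, bounds, n_wolves=20, iters=100):
        low, high = bounds
        wolves = rng.uniform(low, high, size=(n_wolves, dim))
        for t in range(iters):
            scores = np.array([objective(w) for w in wolves])
            alpha, beta, delta = wolves[np.argsort(scores)[:3]]   # three leaders
            a = 2.0 * (1.0 - (t / iters) ** 2)    # nonlinear decay of exploration
            for i in range(n_wolves):
                candidate = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    candidate += leader - A * np.abs(C * leader - wolves[i])
                wolves[i] = np.clip(candidate / 3.0, low, high)   # pull toward leaders
        scores = np.array([objective(w) for w in wolves])
        return wolves[np.argmin(scores)], scores.min()

    # Toy usage: minimize the 5-dimensional sphere function.
    best, value = gwo(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-5.0, 5.0))
    print(best, value)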

    Machine Learning for Load Profile Data Analytics and Short-term Load Forecasting

    Short-term load forecasting (STLF) is a key issue for the operation and dispatch of the day-ahead energy market. It is a prerequisite for the economic operation of power systems and the basis for dispatching and making startup-shutdown plans, and it plays a key role in the automatic control of power systems. Accurate load forecasting not only helps users choose a more appropriate electricity consumption scheme and reduces electricity costs, but is also conducive to optimizing the resources of power systems: improving equipment utilization, reducing production cost, improving economic benefit and power supply capability, and ultimately achieving the aim of an efficient demand response program. This thesis outlines machine-learning-based, data-driven models for STLF in the smart grid, presents relevant policies and the current status of the field, and discusses future research directions for developing new STLF models. Three projects for load profile data analytics and machine-learning-based STLF are described.

    The first project is load profile classification and determination of load demand variability, with the aim of estimating the load demand of a customer. Load profile data collected from smart meters are classified using the recently developed extended nearest neighbor (ENN) algorithm. Generalized class-wise statistics are calculated to capture the load demand variability of a customer, and the load demand of a particular customer is then estimated from the generalized class-wise statistics together with the maximum and minimum load demand.

    In the second project, a composite ENN model is proposed for STLF, intended to improve on k-nearest neighbor (kNN)-based STLF models. Three individual models process weather data (temperature), social variables, and load demand data, and the load demand is predicted separately for each set of input variables. The final forecast is the weighted average of the three models, with the weights determined from the change in the generalized class-wise statistics. This project provides a significant improvement in load forecasting accuracy compared with kNN-based models.

    In the third project, an advanced data-driven model is developed: a novel hybrid load forecasting model based on signal decomposition and correlation analysis. The hybrid model consists of improved empirical mode decomposition and T-Copula-based correlation analysis, with a deep belief network employed to produce the load demand forecast. The results are compared with previous studies and show a significant improvement in mean absolute percentage error (MAPE) and root mean square error (RMSE).
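    As a point of reference for the second project, a plain kNN short-term load forecaster of the kind the composite ENN model is meant to improve on can be sketched as follows (illustrative only, with hypothetical features and synthetic data; the ENN class-wise statistics and weighting are not shown).

    import numpy as np

    def knn_stlf(features_hist, loads_hist, features_query, k=5):
        # features_hist: (n_days, n_features) day descriptors (e.g. temperature,
        # day type); loads_hist: (n_days, 24) hourly load profiles.
        # The forecast is the mean profile of the k most similar historical days.
        d = np.linalg.norm(features_hist - features_query, axis=1)
        nearest = np.argsort(d)[:k]
        return loads_hist[nearest].mean(axis=0)

    # Toy usage with synthetic data standing in for smart meter records.
    rng = np.random.default_rng(2)
    feats = np.column_stack([rng.uniform(-5, 30, 365),        # daily mean temperature
                             rng.integers(0, 2, 365)])        # 0 = weekday, 1 = weekend
    loads = 50 + 10 * rng.standard_normal((365, 24))          # hourly load, MW
    forecast = knn_stlf(feats, loads, np.array([12.0, 0.0]))  # a 12 degC weekday
    print(forecast.shape)   # (24,) hourly forecast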