Deep learning driven data analytics for smart grids
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

As advanced metering infrastructure (AMI) and wide-area monitoring systems (WAMSs) are deployed rapidly and widely, the conventional power grid is transitioning to the smart grid at increasing speed. Numerous smart metering devices and real-time monitoring systems generate a huge volume of data on a daily basis. This data can be put to full use to advance the development of the smart grid through big data analytics, and deep learning in particular. This thesis therefore addresses data analysis for smart grids from three aspects.
Firstly, a real-time, data-driven event detection method is presented that remains robust when dealing with corrupted and significantly noisy data from phasor measurement units (PMUs). Specifically, the method is based on a novel combination of random matrix theory (RMT) and Kalman filtering. Furthermore, a dynamic Kalman filtering technique, which adjusts the measurement noise covariance matrix, is proposed as the method's data conditioner for PMU data. Experimental results show that the method is indeed robust in practical situations involving significant levels of noisy or missing PMU data.
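The data-conditioning idea above can be sketched as a scalar Kalman filter whose measurement-noise variance is re-estimated online from recent innovations. This is only a minimal illustration of adaptive-noise Kalman filtering; the thesis's actual adjustment rule for the covariance matrix and its RMT-based detection stage are not reproduced here.

```python
import numpy as np

def kalman_condition(measurements, q=1e-4, r0=0.1, window=10):
    """Condition a noisy 1-D measurement stream (e.g. one PMU channel)
    with a scalar Kalman filter whose measurement-noise variance R is
    re-estimated from a sliding window of innovations.
    Illustrative sketch only, not the thesis's exact method."""
    x, p = float(measurements[0]), 1.0  # state estimate and its variance
    r = r0                              # current measurement-noise variance
    innovations = []
    filtered = [x]
    for z in measurements[1:]:
        p = p + q                       # predict step (random-walk state model)
        innov = z - x
        innovations.append(innov)
        if len(innovations) >= window:  # adapt R from recent innovations
            r = max(np.var(innovations[-window:]) - p, 1e-6)
        k = p / (p + r)                 # Kalman gain
        x = x + k * innov               # update state estimate
        p = (1 - k) * p                 # update estimate variance
        filtered.append(x)
    return np.array(filtered)
```

Feeding the filter a noisy waveform yields a smoothed stream suitable for a downstream detector; the window length and floor on R are arbitrary choices for this sketch.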
Secondly, a short-term residential load forecasting method is proposed on the basis of deep learning and k-means clustering, which is capable of extracting similarity among residential loads effectively and predicting accurately at the individual residential level. Specifically, it uses k-means clustering to extract similarity among residential loads and deep learning to extract their complex patterns. In addition, to improve forecasting accuracy, a comprehensive feature expression strategy describes the load characteristics of each time step in detail. Experimental results suggest that the proposed method achieves high forecasting accuracy in terms of both root mean square error (RMSE) and mean absolute error (MAE).
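The clustering stage can be illustrated with a plain k-means over daily load profiles, grouping households whose consumption shapes are similar before a per-cluster model is trained. This is a generic stand-in for that stage (using a deterministic farthest-point initialisation for reproducibility); the deep forecasting network and feature expression strategy are not shown.

```python
import numpy as np

def cluster_load_profiles(profiles, k=3, iters=20):
    """Group daily residential load profiles by similarity with k-means.
    profiles: (n_households, n_timesteps) array.
    Returns (labels, centroids). Illustrative sketch only."""
    # deterministic farthest-point initialisation of the k centroids
    centers = [profiles[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(profiles - c, axis=1) for c in centers],
                   axis=0)
        centers.append(profiles[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        # assign each profile to its nearest centroid
        d = np.linalg.norm(profiles[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids (keep the old centre if a cluster empties)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels, centers
```

Each cluster's centroid is a representative daily shape; a forecaster trained per cluster then sees far less within-group variability than one trained on all households at once.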
Thirdly, an online individual residential load forecasting method is developed based on a combination of deep learning and dynamic mirror descent (DMD), which predicts residential load in real time and corrects the prediction error over time to improve performance. More specifically, it first trains a long short-term memory (LSTM) prediction model offline, then applies it online with DMD correcting the prediction error. To increase prediction accuracy, a comprehensive feature expression strategy describes load characteristics at each time step in detail. Experimental results indicate that the developed method obtains high prediction accuracy in terms of both RMSE and MAE.
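The offline-model-plus-online-correction pattern can be sketched with a much simpler corrector: an additive bias updated by online gradient descent on the squared error. This is a simplified stand-in for the DMD correction described above, not the thesis's algorithm; it only shows how an online step can shrink a systematic error in a fixed offline model's output.

```python
import numpy as np

def online_corrected_forecast(offline_pred, actual, eta=0.1):
    """Pass an offline model's predictions through a running additive
    correction, updated by online gradient descent on the squared error.
    offline_pred, actual: 1-D arrays of equal length.
    Simplified stand-in for a DMD-style online correction."""
    bias = 0.0                      # running correction term
    corrected = np.empty(len(offline_pred), dtype=float)
    for t, (p, y) in enumerate(zip(offline_pred, actual)):
        corrected[t] = p + bias     # predict before observing y_t
        err = corrected[t] - y
        bias -= eta * err           # gradient step on (p + bias - y)^2 / 2
    return corrected

def rmse(a, b):
    """Root mean square error between two sequences."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
```

If the offline model carries a persistent offset, the correction converges toward its negative, so the corrected stream's RMSE falls below the raw model's.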
To sum up, the proposed real-time event detection method contributes to the monitoring and operation of smart grids, while the proposed residential load forecasting methods contribute to demand-side response in smart grids.
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.

Comment: 232 pages
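The tensor train (TT) representation emphasised above can be illustrated with the standard TT-SVD scheme: sweeping through the modes, each step does a truncated SVD of a matricisation and keeps the left factor as a third-order core. The rank cap and truncation policy below are illustrative choices, not taken from the monograph.

```python
import numpy as np

def tt_svd(tensor, max_rank=8):
    """Decompose a dense tensor into tensor-train (TT) cores via
    sequential truncated SVDs (the standard TT-SVD sweep).
    Returns a list of cores of shape (r_prev, dim, r_next)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = np.asarray(tensor, dtype=float)
    for dim in shape[:-1]:
        mat = mat.reshape(r_prev * dim, -1)          # matricise current remainder
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                    # truncate to the rank cap
        cores.append(u[:, :r].reshape(r_prev, dim, r))
        mat = s[:r, None] * vt[:r]                   # carry the rest forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))  # last core absorbs the tail
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])              # drop the boundary ranks of 1
```

With the rank cap set high enough the sweep is exact and the cores reproduce the tensor; lowering `max_rank` trades accuracy for the super-compressed storage that the text describes.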