
    Regional And Residential Short Term Electric Demand Forecast Using Deep Learning

    For optimal power system operation, electric generation must follow load demand. Generation, transmission, and distribution utilities require load forecasting to plan and operate grid infrastructure efficiently, securely, and economically. This thesis focuses on short-term load forecasting (STLF), which covers horizons of a few hours to a few days. An inaccurate short-term load forecast can increase the cost of generating and delivering power, so accurate forecasting is essential. Traditionally, short-term load forecasting has been performed using linear regression, autoregressive integrated moving average (ARIMA) models, and artificial neural networks (ANNs). These conventional methods scale poorly to large datasets, and their accuracy is often a concern. Recently, deep neural networks (DNNs) have emerged as a powerful tool for machine-learning problems; they are known for real-time data processing, parallel computation, and the ability to work with large datasets at higher accuracy. DNNs have been shown to greatly outperform traditional methods in many disciplines and have revolutionized data analytics. Inspired by this success, this thesis investigates the potential of DNNs for electrical load forecasting. Several DNN types, including the multilayer perceptron (MLP) and recurrent neural networks (RNNs) such as the long short-term memory (LSTM), the gated recurrent unit (GRU), and simple RNNs, were evaluated for accuracy. The thesis uses the following datasets, with varying time intervals and durations, to validate the applicability of DNNs to short-term load forecasting: 1) the Iberian electricity market dataset; 2) the NREL residential home dataset; 3) the AMPds smart-meter dataset; and 4) the UMass Smart Home datasets. Evaluation by mean absolute percentage error (MAPE) indicates that DNNs outperform conventional methods on multiple datasets. In addition, DNN-based smart scheduling of appliances was studied, and the MAPE accuracy of clustering-based forecasts was evaluated against non-clustered forecasts.
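
    As a rough illustration of the kind of DNN forecaster evaluated in the thesis, the sketch below trains a small LSTM on a synthetic hourly load series and scores it with MAPE. The window length, layer sizes, training settings, and synthetic data are illustrative assumptions, not the thesis's actual models or datasets.

```python
# Hedged sketch: an LSTM-based short-term load forecaster scored with MAPE.
import numpy as np
import tensorflow as tf

def make_windows(load, window=24):
    """Turn an hourly load series into (past window -> next hour) pairs."""
    X = np.stack([load[i:i + window] for i in range(len(load) - window)])
    y = load[window:]
    return X[..., np.newaxis], y  # LSTM expects (samples, timesteps, features)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the accuracy measure cited above."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# A synthetic daily-cyclic load stands in for a real dataset.
hours = np.arange(24 * 365)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 2, hours.size)
X, y = make_windows(load)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X[:-1000], y[:-1000], epochs=5, batch_size=64, verbose=0)

y_hat = model.predict(X[-1000:], verbose=0).ravel()
print(f"hold-out MAPE: {mape(y[-1000:], y_hat):.2f}%")
```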

    Defining and applying prediction performance metrics on a recurrent NARX time series model.

    Nonlinear autoregressive moving average with exogenous inputs (NARMAX) models have been successfully used to model the input-output behavior of many complex systems. This paper proposes a scheme for time series prediction based on a recurrent NARX model obtained by a linear combination of a recurrent neural network (RNN) output and the real data output. Prediction metrics are also proposed to assess the quality of the predictions. These metrics enable different prediction schemes to be compared and provide an objective way to measure how changes in training or in the prediction model (the neural network architecture) affect prediction quality. Results show that the proposed NARX approach consistently outperforms the predictions obtained by the RNN alone.
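
    The sketch below is one way to read the combination scheme described above: a recurrent model's one-step output is blended linearly with the latest available output, and a few horizon-wide metrics score the result. The blend weight alpha, the toy_rnn_step cell, and the metric set are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: linear blend of a recurrent model output with the latest
# available output, plus simple horizon-wide prediction metrics.
import numpy as np

def narx_predict(rnn_step, y_last, u_future, horizon, alpha=0.7):
    """Iterate `horizon` steps; blend the model output with the latest
    available output (measured for step 1, predicted afterwards)."""
    y_hat, state, y_prev = [], None, y_last
    for k in range(horizon):
        y_model, state = rnn_step(y_prev, u_future[k], state)
        y_prev = alpha * y_model + (1.0 - alpha) * y_prev
        y_hat.append(y_prev)
    return np.array(y_hat)

def prediction_metrics(y_true, y_pred):
    """A few metrics in the spirit of the paper's proposal."""
    err = y_true - y_pred
    return {"MSE": float(np.mean(err ** 2)),
            "MAE": float(np.mean(np.abs(err))),
            "bias": float(np.mean(err))}

def toy_rnn_step(y_prev, u, state):
    """Stand-in recurrent cell: a leaky internal state driven by the input."""
    state = y_prev if state is None else state
    state = 0.9 * state + 0.1 * u
    return state, state

u = np.sin(np.linspace(0, 3, 30))        # exogenous input
y_true = 0.1 * np.cumsum(u)              # toy target series
y_pred = narx_predict(toy_rnn_step, y_true[0], u, horizon=len(u))
print(prediction_metrics(y_true, y_pred))
```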

    Dynamic non-linear system modelling using wavelet-based soft computing techniques

    The growing number of complex systems creates a need for high-level, cost-efficient modelling structures for operators and system designers. Model-based approaches offer a way to integrate a priori knowledge into the procedure, and soft-computing-based models in particular can be applied successfully to highly nonlinear problems. A further reason for using so-called soft computing model-based techniques is that in real-world cases often only partial, uncertain, and/or inaccurate data is available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches for modelling non-linear dynamical systems in real-world problems, together with novelties aimed at more accurate and less complex modelling structures. Initially, on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme, where the problem of redundant membership functions, and consequently redundant fuzzy rules, is circumvented by applying an adaptive structure. The growth of a particular fungus (Monascus ruber van Tieghem) is modelled and compared against several other approaches to further justify the proposed methodology. Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced, with increased accuracy and reduced computational cost as the primary targets of the proposed novelties. Replacing the synaptic weights with Linear Combination Weights (LCW) and imposing a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS) are the tools used to meet these challenges. The two models differ in structure while sharing the same HLA scheme; the second contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system, the survival curves of Listeria monocytogenes in Ultra-High Temperature (UHT) whole milk, and consolidated through comprehensive comparison with other suggested schemes. Finally, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the culminating structure of this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from Gaussian Mixture Model (GMM) clustering, updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only to extract useful knowledge from data by building accurate regressions, but also to identify complex systems. The FWNN structure is based on fuzzy rules with wavelet functions in their consequent parts. To improve the function approximation accuracy and generalisation capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight, and membership parameters: an Extended Kalman Filter (EKF) adjusts the wavelet parameters, while Weighted Least Squares (WLS) is dedicated to fine-tuning the Linear Combination Weights. The results of a real-world application to Short-Term Load Forecasting (STLF) further reinforce the plausibility of the above techniques.
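
    A minimal sketch of a Morlet wavelet network with input-dependent output weights, in the spirit of the LCW idea above. The network size, parameter values, and the morlet/wnn_forward helpers are hypothetical; the thesis's exact structures and the GD/RLS and EKF/WLS training schemes are not reproduced here.

```python
# Hedged sketch: forward pass of a Morlet wavelet network whose output
# weights are linear combinations of the inputs (LCW-style).
import numpy as np

def morlet(t, omega0=5.0):
    """Real-valued Morlet mother wavelet."""
    return np.cos(omega0 * t) * np.exp(-0.5 * t ** 2)

def wnn_forward(x, translations, dilations, lcw):
    """
    x            : (n_inputs,) input vector
    translations : (n_wavelons, n_inputs) translation parameters
    dilations    : (n_wavelons, n_inputs) dilation parameters
    lcw          : (n_wavelons, n_inputs + 1) linear-combination weights
    """
    z = (x - translations) / dilations      # per-wavelon scaled inputs
    psi = morlet(z).prod(axis=1)            # product over input dimensions
    w = lcw @ np.append(x, 1.0)             # each weight is a linear fn of x
    return float(w @ psi)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
y = wnn_forward(x,
                translations=rng.normal(size=(5, 3)),
                dilations=np.ones((5, 3)),
                lcw=rng.normal(scale=0.1, size=(5, 4)))
print(y)
```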

    Improving the prediction accuracy of recurrent neural network by a PID controller.

    In the maintenance field, prognostics is recognized as a key activity, since predicting the remaining useful life of a system makes it possible to avoid inopportune maintenance spending. Because it can be difficult to build models for this purpose, artificial neural networks appear to be well suited. In this paper, an approach combining a Recurrent Radial Basis Function network (RRBF) and a proportional-integral-derivative (PID) controller is proposed in order to improve the accuracy of predictions. The PID controller attempts to correct the error between the real process variable and the neural network predictions.
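
    A hedged sketch of the correction idea: a PID term computed from the observed prediction error is added to the next prediction. The gains, the PIDCorrector class, and the persistence predictor standing in for the RRBF are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: PID correction of a predictor's output from observed errors.
import numpy as np

class PIDCorrector:
    def __init__(self, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def correct(self, y_pred, y_measured_prev, y_pred_prev):
        """Adjust the new prediction using the last observed error."""
        error = y_measured_prev - y_pred_prev
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return y_pred + self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: a naive persistence "network" corrected online by the PID term.
signal = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
pid, y_prev_pred, errs = PIDCorrector(), signal[0], []
for t in range(1, len(signal) - 1):
    raw_pred = signal[t]                   # persistence stands in for the RRBF
    corrected = pid.correct(raw_pred, signal[t], y_prev_pred)
    errs.append(abs(signal[t + 1] - corrected))
    y_prev_pred = raw_pred
print("mean abs error:", np.mean(errs))
```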

    From statistical- to machine learning-based network traffic prediction

    Nowadays, due to the exponential and continuous expansion of new paradigms such as the Internet of Things (IoT), the Internet of Vehicles (IoV), and 6G, the world is witnessing a sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, is a challenge that causes various networking inefficiencies. To overcome these challenges, various techniques have been introduced to monitor network performance, collectively referred to as Network Traffic Monitoring and Analysis (NTMA). Network Traffic Prediction (NTP) is a significant subfield of NTMA focused mainly on predicting future network load and its behavior. NTP techniques can generally be realized in two ways: statistical and Machine Learning (ML) based. In this paper, we provide a study of existing NTP techniques by reviewing, investigating, and classifying the recent relevant works in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to address them.
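
    To make the statistical-versus-ML dichotomy concrete, the toy sketch below fits an autoregressive model and a lag-feature regression to a synthetic traffic trace. Both models, the lag orders, and the data are assumptions for illustration and do not come from the surveyed works.

```python
# Hedged sketch: a statistical AR fit vs. an ML-style lag regression for NTP.
import numpy as np

rng = np.random.default_rng(1)
traffic = 50 + 10 * np.sin(np.arange(500) / 10) + rng.normal(0, 2, 500)

# Statistical flavour: AR(2) coefficients by least squares on the raw series.
X_ar = np.column_stack([traffic[1:-1], traffic[:-2], np.ones(len(traffic) - 2)])
ar_coef, *_ = np.linalg.lstsq(X_ar, traffic[2:], rcond=None)

# ML flavour: regression on a wider window of lagged features.
lags = np.column_stack([traffic[i:-(8 - i)] for i in range(8)])
w, *_ = np.linalg.lstsq(lags, traffic[8:], rcond=None)

print("AR(2) next-step forecast:", np.array([traffic[-1], traffic[-2], 1.0]) @ ar_coef)
print("lag-regression forecast :", traffic[-8:] @ w)
```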

    Review of Low Voltage Load Forecasting: Methods, Applications, and Recommendations

    The increased digitalisation and monitoring of the energy system opens up numerous opportunities to decarbonise the energy system. Applications on low voltage, local networks, such as community energy markets and smart storage, will facilitate decarbonisation, but they will require advanced control and management. Reliable forecasting will be a necessary component of many of these systems to anticipate key features and uncertainties. Despite this urgent need, there has not yet been an extensive investigation into the current state-of-the-art of low voltage level forecasts, other than at the smart meter level. This paper aims to provide a comprehensive overview of the landscape, current approaches, core applications, challenges, and recommendations. Another aim of this paper is to facilitate the continued improvement and advancement in this area; to this end, the paper also surveys some of the most relevant and promising trends and establishes an open, community-driven list of known low voltage level open datasets to encourage further research and development. Comment: 37 pages, 6 figures, 2 tables, review paper.

    Evolving neuro-fuzzy tools for system classification and prediction

    "Classification and prediction algorithims have recently become very powerful tools to a wide array of real-world applications. Some real world applications include system condition monitoring, bioinformatics, robotics, predictive control, earthquake prediction, weather forecasting, stock market and traffic pattern prediction, just to name a few. Within this work, several novel approaches, as well as modifications to some existing approaches, are introduced in order to improve the performance of current classification and prediction paradigms. In the first section of this work, a novel weighted recurrent neuro-fuzzy inference system is introduced alongside two existing neural networks. It is found that the novel design outperforms both the existing neural networks in terms of equal-step and sequential-step inputs for time-series forecasting. The second contribution of this work is the development of a novel evolving clustering algorithim for classification and prediction. This particular algorithim starts without any priori knowledge of the distribution of the data set. The novel design is capable of revealing the true cluster configuration in a single pass of the data, estimating the location and variance of each cluster. After a rigorous performance evaluation, it is found that the novel design outperforms many existing clustering approaches including the well-known potential-based evolving Takagi-Sugeno (eTS) clustering scheme. The third and fourth contributions of this work are the development of a second novel clustering technique and a novel hybrid training technique. The clustering technique is a combination of the aforementioned scheme and the potential-based technique. The new training algorithm is a combination of the decoupled-extended Kalman filter (for the backward pass) and the recursive least-sequares estimate (for the forward pass). It is found that the novel clustering technique outperforms many available clustering techniques. Also, the novel training algorithm is proven to outperform most existing training techniques."--Abstrac