
    Demand forecasting for a Mixed-Use Building using an Agent-schedule information Data-Driven Model

    There is great interest in data-driven modelling for forecasting building energy consumption using machine learning (ML). However, little research considers classification-based ML models. This paper compares regression and classification ML models for daily electricity and thermal load modelling in a large, mixed-use university building. The independent feature variables include outdoor temperature, historical energy consumption data, and several types of 'agent schedules' that provide proxy information based on broad classes of activity undertaken by the building's inhabitants. The case study compares four ML models across three feature sets, with a genetic algorithm (GA) used to optimize the feature sets for those ML models without an embedded feature selection process. The results show that the regression models perform significantly better than the classification models for predicting electricity demand and slightly better for predicting heat demand. GA feature selection improves the performance of all models and shows that historical heat demand, temperature, and the 'agent schedules' derived from large occupancy fluctuations in the building are the main factors influencing heat demand prediction. For electricity demand prediction, feature selection picks almost all available 'agent schedule' features and the historical electricity demand. Historical heat demand is not picked as a feature for electricity demand prediction by the GA feature selection, and vice versa. However, excluding historical heat/electricity demand from the selected features significantly reduces prediction performance.
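The GA feature-selection step described above can be sketched as follows. This is not the paper's code: the synthetic data set, the linear model used as the fitness evaluator, and all GA parameters (population size, generations, mutation rate) are illustrative assumptions.

```python
# Minimal sketch of genetic-algorithm feature selection for a regression
# model. Data, model, and GA parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the building data set: 3 informative features
# (e.g. temperature, lagged demand, an occupancy schedule) plus noise columns.
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=n)

def fitness(mask):
    """Mean cross-validated R^2 of a model trained on the selected subset."""
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask.astype(bool)], y,
                           cv=3, scoring="r2").mean()

pop = rng.integers(0, 2, size=(20, p))          # random initial population
for _ in range(15):                             # generations
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-10:]]       # keep the best half
    # Crossover: mix pairs of elite parents, then flip bits with prob. 0.1.
    parents = elite[rng.integers(0, 10, size=(10, 2))]
    cut = rng.integers(1, p, size=10)
    children = np.array([np.concatenate([a[:c], b[c:]])
                         for (a, b), c in zip(parents, cut)])
    children ^= (rng.random(children.shape) < 0.1)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

Because the fitness function wraps any estimator, the same loop can wrap models without built-in feature selection, which is the role the GA plays in the paper.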

    Forecasting of residential units' heat demands: a comparison of machine learning techniques in a real-world case study

    A large proportion of the energy consumed by private households is used for space heating and domestic hot water. In the context of the energy transition, the predominant aim is to reduce this consumption. In addition to implementing better energy standards in new buildings and refurbishing old buildings, intelligent energy management concepts can also contribute by operating heat generators according to demand, based on an expected heat requirement. This requires forecasting models for heat demand that are as accurate and reliable as possible. In this paper, we present a case study of a newly built, medium-sized living quarter in central Europe comprising 66 residential units, from which we gathered consumption data for almost two years. Based on this data, we investigate the possibility of forecasting heat demand using a variety of time series models and offline and online machine learning (ML) techniques in a standard data science approach. We chose to analyze different modeling techniques because they suit different settings: time series models require no additional data, offline ML needs a lot of data gathered up front, and online ML could be deployed from day one. A special focus lies on peak demand and outlier forecasting, as well as investigations into seasonal expert models. We also highlight the computational expense and explainability characteristics of the models used. We compare the methods with naive models as well as with each other, finding that time series models, as well as online ML, do not yield promising results. Accordingly, we will deploy one of the offline ML models in our real-world energy management system in the near future.
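The core comparison above, an offline ML model against a naive baseline, can be sketched in a few lines. The synthetic daily heat-demand series, the persistence baseline, and the choice of gradient boosting with lag features are assumptions for illustration, not the study's data or models.

```python
# Sketch of comparing an offline ML forecaster against a naive persistence
# baseline ("tomorrow = today") on a synthetic daily heat-demand series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
t = np.arange(730)                              # ~two years of daily data
demand = 50 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 3, len(t))

# Build lag features: demand on the previous 1, 2, and 7 days.
lags = [1, 2, 7]
X = np.column_stack([demand[7 - l: -l] for l in lags])
y = demand[7:]

split = 600                                     # offline ML: train up front
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])

rmse = lambda p, a: float(np.sqrt(np.mean((p - a) ** 2)))
naive_rmse = rmse(X[split:, 0], y[split:])      # persistence = lag-1 value
ml_rmse = rmse(model.predict(X[split:]), y[split:])
print(f"naive RMSE: {naive_rmse:.2f}, ML RMSE: {ml_rmse:.2f}")
```

On noisy seasonal data the lag-averaging model beats persistence because day-to-day noise is independent, which is the kind of margin over naive baselines the study uses to rank its candidates.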

    Accommodating maintenance in prognostics

    Error on title page: year of award is 2021.
    Steam turbines are an important asset of nuclear power plants and are required to operate reliably and efficiently. Unplanned outages have a significant impact on the plant's ability to generate electricity. Condition-based maintenance (CBM) can therefore be used for predictive and proactive maintenance to avoid unplanned outages while reducing operating costs and increasing the reliability and availability of the plant. In CBM, the information gathered can be interpreted for prognostics (the prediction of failure time or remaining useful life (RUL)). The aim of this project was to address two areas of challenge in prognostics, the selection of a predictive technique and the accommodation of post-maintenance effects, to improve the efficacy of prognostics. The selection of an appropriate predictive algorithm is a key activity in the effective development of prognostics. In this research, a formal approach for the evaluation and selection of predictive techniques is developed to facilitate a methodical selection process by engineering experts. This approach is then implemented for a case study provided by the engineering experts. As a result of the formal evaluation, a probabilistic technique, Bayesian Linear Regression (BLR), and a non-probabilistic technique, Support Vector Regression (SVR), were selected for prognostics implementation. This project extends the knowledge of prognostics implementation by including post-maintenance effects in prognostics. Maintenance aims to restore a machine to a state in which it is safe and reliable to operate while recovering the health of the machine. However, such activities introduce uncertainties into predictions due to deviations in the degradation model, affecting the accuracy and efficacy of the predictions. Such vulnerabilities must therefore be addressed by incorporating information from maintenance events to produce accurate and reliable predictions. This thesis presents two frameworks, adapted for probabilistic and non-probabilistic prognostic techniques, to accommodate maintenance. Two case studies, a real-world case study from a nuclear power plant in the UK and a synthetic case study generated based on the characteristics of the real-world case study, are used for the implementation and validation of the frameworks. The results hold promise for predicting remaining useful life while accommodating maintenance repairs, ensuring increased asset availability with higher reliability, maintenance cost-effectiveness, and operational safety.
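The probabilistic route mentioned above, BLR-based RUL prediction, can be sketched as follows. This is not the thesis's implementation: the linear degradation signal, the failure threshold, and the use of scikit-learn's `BayesianRidge` as the Bayesian linear regression are all illustrative assumptions.

```python
# Sketch of Bayesian linear regression for remaining-useful-life (RUL)
# estimation: fit a degradation signal, extrapolate with uncertainty, and
# find where the predicted mean crosses a failure threshold.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
t = np.arange(0, 100).reshape(-1, 1)            # operating hours observed so far
degradation = 0.05 * t.ravel() + rng.normal(0, 0.2, len(t))
threshold = 10.0                                # failure when signal reaches 10

blr = BayesianRidge().fit(t, degradation)

# Extrapolate with predictive uncertainty and find the threshold crossing.
future = np.arange(100, 400).reshape(-1, 1)
mean, std = blr.predict(future, return_std=True)
first = int(np.argmax(mean >= threshold))       # index of first crossing
crossing = int(future[first, 0])
rul = crossing - int(t[-1, 0])
print(f"estimated RUL: {rul} h (predictive std at crossing: {std[first]:.2f})")
```

The predictive standard deviation is what distinguishes this probabilistic technique from a point-estimate regressor such as SVR: it yields a distribution over the crossing time rather than a single value.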

    Rainfall Analysis and Forecasting Using Deep Learning Technique

    Rainfall forecasting is very challenging due to its uncertain nature and dynamic climate change, and it has always been a difficult task for meteorologists. Various papers on rainfall prediction have used different Data Mining and Machine Learning (ML) techniques, which show good predictive accuracy. In this study, a deep learning approach has been used to analyze rainfall data for the Karnataka subdivision. Three deep learning methods have been used for prediction: an Artificial Neural Network (ANN), specifically a feed-forward neural network; a simple Recurrent Neural Network (RNN); and the Long Short-Term Memory (LSTM) optimized RNN technique. This paper presents a comparative study of these three techniques for monthly rainfall prediction and evaluates their prediction performance using the Mean Absolute Percentage Error (MAPE) and the Root Mean Squared Error (RMSE). The results show that the LSTM model performs better than the ANN and RNN models, achieving the minimum MAPE and RMSE.
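The two evaluation metrics named above can be written down in a few lines; the sample arrays below are made-up rainfall values, not the Karnataka data.

```python
# The two error metrics used to compare the models: MAPE (scale-free,
# in percent) and RMSE (in the units of the data).
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def rmse(actual, predicted):
    """Root Mean Squared Error, in the units of the data."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

actual = [100.0, 80.0, 120.0]                  # e.g. monthly rainfall in mm
predicted = [110.0, 76.0, 120.0]
print(mape(actual, predicted), rmse(actual, predicted))   # 5.0  6.21...
```

Note that MAPE is undefined when an actual value is zero, which matters for dry months; RMSE has no such restriction but is sensitive to large peaks.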

    Energy Consumption Prediction with Big Data: Balancing Prediction Accuracy and Computational Resources

    In recent years, advances in sensor technologies and the expansion of smart meters have resulted in massive growth of energy data sets. These Big Data have created new opportunities for energy prediction, but at the same time they impose new challenges for traditional technologies. On the other hand, new approaches for handling and processing these Big Data have emerged, such as MapReduce, Spark, Storm, and Oxdata H2O. This paper explores how findings from machine learning with Big Data can benefit energy consumption prediction. An approach based on local learning with support vector regression (SVR) is presented. Although local learning itself is not a novel concept, it has great potential in the Big Data domain because it reduces computational complexity. The local SVR approach presented here is compared to traditional SVR and to deep neural networks on the H2O machine learning platform for Big Data. Local SVR outperformed both SVR and H2O deep learning in terms of prediction accuracy and computation time. Especially significant was the reduction in training time: local SVR training was an order of magnitude faster than SVR or H2O deep learning.
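The local-learning idea above can be sketched as: instead of one SVR trained on all data, train a small SVR per query point on that point's nearest neighbours. The data, neighbourhood size, and SVR hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of local learning with SVR: each query point gets its own small
# SVR trained only on its k nearest neighbours, which keeps per-model
# training cost low even when the full data set is large.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.1, len(X))

def local_svr_predict(X_train, y_train, X_query, k=50):
    """Train a small SVR per query point on its k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_query)
    preds = []
    for q, neighbours in zip(X_query, idx):
        model = SVR(C=1.0, epsilon=0.05).fit(X_train[neighbours],
                                             y_train[neighbours])
        preds.append(model.predict(q.reshape(1, -1))[0])
    return np.array(preds)

X_test = np.linspace(-2.5, 2.5, 20).reshape(-1, 1)
preds = local_svr_predict(X, y, X_test)
err = float(np.sqrt(np.mean((preds - np.sin(X_test.ravel())) ** 2)))
print(f"local SVR RMSE vs. true function: {err:.3f}")
```

Because SVR training is superlinear in the number of samples, many SVRs on k points each can be far cheaper than one SVR on all points, which is the source of the training-time advantage the paper reports.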

    A hybrid ensemble method with negative correlation learning for regression

    The hybrid ensemble, an essential branch of ensembles, has flourished in numerous machine learning problems, especially regression. Several studies have confirmed the importance of diversity; however, previous ensembles only consider diversity in the sub-model training stage, with limited improvement over single models. In contrast, this study selects and weights sub-models from a heterogeneous model pool automatically. It solves an optimization problem using an interior-point filter line-search algorithm. This optimization problem innovatively incorporates negative correlation learning (NCL) as a penalty term, with which a diverse model subset can be selected. The experimental results show several meaningful findings. Model pool construction requires different classes of models, with all possible parameter sets for each class as sub-models. The best sub-models from each class are selected to construct an NCL-based ensemble, which is far better than the average of the sub-models. Furthermore, compared with classical constant and non-constant weighting methods, the NCL-based ensemble has a significant advantage on several prediction metrics. In practice, it is difficult to determine the optimal sub-model for a dataset a priori due to model uncertainty; however, our method achieves accuracy comparable to the potentially optimal sub-models on the RMSE metric. In conclusion, the value of this study lies in its ease of use and effectiveness, allowing the hybrid ensemble to embrace both diversity and accuracy. Comment: 37 pages, 14 figures, 11 tables.
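The weighting step described above can be sketched as a penalized optimization. This is a loose illustration, not the paper's formulation: the data, the three-model pool, the exact NCL-style penalty, the value of lambda, and the use of a generic SLSQP solver (rather than an interior-point filter line-search algorithm) are all assumptions.

```python
# Sketch of NCL-flavoured ensemble weighting: minimise ensemble squared
# error minus a diversity reward, over simplex-constrained weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)

# A small heterogeneous "model pool": predictions from three fitted sub-models.
preds = np.vstack([
    np.polyval(np.polyfit(x, y, 1), x),         # linear fit (underfits)
    np.polyval(np.polyfit(x, y, 3), x),         # cubic fit
    y + rng.normal(0, 0.3, len(x)),             # noisy near-oracle model
])

lam = 0.1                                       # diversity penalty weight
def objective(w):
    ens = w @ preds
    mse = np.mean((ens - y) ** 2)
    # NCL-style term: each sub-model's squared deviation from the ensemble;
    # subtracting it rewards weight on mutually diverse sub-models.
    diversity = np.sum(w * np.mean((preds - ens) ** 2, axis=1))
    return mse - lam * diversity

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(objective, np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print("weights:", np.round(res.x, 3))
```

With lambda = 0 this reduces to ordinary constrained least-squares stacking; the penalty is what lets the optimizer trade a little ensemble error for a more diverse weighting, which is the paper's central idea.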