
    Use of Optimised LSTM Neural Networks Pre-Trained With Synthetic Data to Estimate PV Generation

    Optimising the use of photovoltaic (PV) energy is essential to reduce fossil fuel emissions by increasing the share of solar power generation. In recent years, research has focused on physical simulations or artificial intelligence models attempting to increase the accuracy of PV generation predictions. The use of simulated data for pre-training deep learning models has increased in different fields, both because it makes the subsequent training with real data more efficient and because it allows training when real data are not available. This work presents a methodology, based on a deep learning model optimised with specific techniques and pre-trained with synthetic data, to estimate the generation of a PV system. A case study of a photovoltaic installation with 296 PV panels located in northwest Spain is presented. The results show that the model with proper pre-training trains six to seven times faster than a model without pre-training and three to four times faster than a model pre-trained with inaccurate simulated data. In terms of accuracy, and considering a homogeneous training process, all models obtained average relative errors of around 12%, except the model with incorrect pre-training, which performs worse.
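    As a rough illustration of the pre-train-then-fine-tune workflow described above, the sketch below pre-trains an LSTM on synthetic data and then fine-tunes it on real measurements. The array shapes, feature counts, and learning rates are illustrative assumptions, not the authors' configuration.

```python
# Sketch: pre-train an LSTM on synthetic PV data, then fine-tune on real data.
# All arrays are random stand-ins for simulated and measured generation data.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X_syn, y_syn = rng.normal(size=(1000, 24, 8)), rng.normal(size=1000)    # simulated
X_real, y_real = rng.normal(size=(200, 24, 8)), rng.normal(size=200)    # measured

model = keras.Sequential([
    keras.layers.Input(shape=(24, 8)),   # e.g. 24 time steps, 8 weather features
    keras.layers.LSTM(64),
    keras.layers.Dense(1),               # estimated PV generation
])

# Stage 1: pre-training on synthetic (simulated) data.
model.compile(optimizer="adam", loss="mse")
model.fit(X_syn, y_syn, epochs=50, batch_size=64, verbose=0)

# Stage 2: fine-tuning on the smaller real dataset with a lower learning rate,
# so the pre-trained weights are adjusted rather than overwritten.
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
model.fit(X_real, y_real, epochs=20, batch_size=32, verbose=0)
```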

    Deep learning-based hybrid short-term solar forecast using sky images and meteorological data

    The global growth of solar power generation is rapid, yet the complex nature of cloud movement introduces significant uncertainty into short-term solar irradiance, posing challenges for intelligent power systems. Accurate short-term solar irradiance and photovoltaic power generation predictions under cloudy skies are critical for sub-hourly electricity markets. Ground-based sky image (GSI) analysis using convolutional neural network (CNN) algorithms has emerged as a promising method due to advancements in machine vision models based on deep learning networks. In this work, a novel deep network, "ViT-E", based on an attention-mechanism Transformer architecture, is proposed for short-term solar irradiance forecasting. This model enables cross-modality data parsing by establishing mapping relationships within GSI and between GSI, meteorological data, historical irradiation, clear-sky irradiation, and solar angles. The feasibility of the ViT-E network was assessed on the Folsom dataset from California, USA. Quantitative analysis showed that the ViT-E network achieved RMSE values of 81.45 W/m², 98.68 W/m², and 104.91 W/m² for 2, 6, and 10-minute forecasts, respectively, outperforming the persistence model by 4.87%, 16.06%, and 19.09% and displaying performance comparable to CNN-based models. Qualitative analysis revealed that the ViT-E network successfully predicted 20.21%, 33.26%, and 36.87% of solar slope events 2, 6, and 10 minutes in advance, respectively, surpassing the persistence model and the currently prevalent CNN-based model by 9.43%, 3.91%, and -0.55% for the three horizons. Transfer learning experiments were conducted to test the ViT-E model's generalisation under different climatic conditions and its performance on smaller datasets. The weights learned from the three-year Folsom dataset in the United States could be transferred to a half-year local dataset in Nottingham, UK. Training with a dataset one-fifth the size of the original achieved baseline accuracy and reduced training time by 80.2%, and a dataset of only 4.5% of the original size yielded a model less than 2% below baseline accuracy. These findings validate the generalisation and robustness of the trained weights. Finally, the ViT-E architecture and hyperparameters were searched and optimised. The investigation revealed that directly applying migrated deep vision models leads to redundancy in solar forecasting; the best hyperparameters for ViT-E were identified through manual exploration of the hyperparameter space. As a result, the model's computational efficiency improved by 60% and prediction performance increased by 2.7%.
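    For orientation, the reported gains over persistence correspond to the usual RMSE skill score, skill = 1 - RMSE_model / RMSE_persistence. The short sketch below back-computes the implied persistence RMSE from the figures quoted in the abstract; the choice of formula is our assumption.

```python
# Forecast skill vs persistence: skill = 1 - RMSE_model / RMSE_persistence.
# Back-computes the persistence RMSE implied by the abstract's figures.
rmse_vit = {2: 81.45, 6: 98.68, 10: 104.91}        # W/m², from the abstract
gain_vs_persistence = {2: 0.0487, 6: 0.1606, 10: 0.1909}

for horizon, rmse in rmse_vit.items():
    rmse_persist = rmse / (1.0 - gain_vs_persistence[horizon])
    skill = 1.0 - rmse / rmse_persist              # recovers the quoted gain
    print(f"{horizon}-min: persistence RMSE ≈ {rmse_persist:.2f} W/m², skill = {skill:.2%}")
```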

    Smart models to improve agrometeorological estimations and predictions

    The world population, which is constantly growing, is estimated to reach 9.7 billion people by 2050. This increase, combined with the rise in living standards and the climate emergency (increase in temperature, intensification of the water cycle, etc.), presents us with the enormous challenge of managing increasingly scarce resources in a sustainable way. The agricultural sector must face important challenges such as improving natural resource management, reducing environmental degradation, and ensuring food and nutritional security. All of this is conditioned by water scarcity and aridity, limiting factors in crop production. To guarantee sustainable agricultural production under these conditions, it is necessary to base all decisions on knowledge, innovation, and the digitization of agriculture to ensure the resilience of agroecosystems, especially in arid, semi-arid, and dry sub-humid environments where the water deficit is structural. Therefore, this work focuses on improving the precision of current agrometeorological models by applying artificial intelligence techniques. These models can provide accurate estimates and predictions of key variables such as precipitation, solar radiation, and reference evapotranspiration. In this way, it is possible to promote more sustainable agricultural strategies, for example by reducing water and energy consumption. In addition, the number of measurements required as input parameters for these models has been reduced, making them more accessible and applicable in rural areas and developing countries that cannot afford the high cost of installing, calibrating, and maintaining complete automatic weather stations. This approach can help provide valuable information to technicians, farmers, managers, and policy makers in key water and agricultural planning areas. This doctoral thesis has developed and validated new methodologies based on artificial intelligence that improve the precision of crucial variables in the agrometeorological field: precipitation, solar radiation, and reference evapotranspiration. Specifically, prediction systems and gap-filling models for precipitation at different scales have been developed using neural networks. Models for estimating solar radiation using only thermal parameters have also been developed and validated in areas with climatic characteristics similar to those of the training location, without the need to be geographically in the same region or country. Similarly, models for estimating and predicting reference evapotranspiration at the local and regional level have been developed using only temperature data for the entire process: regionalization, training, and validation. Finally, an open-source Python library (AgroML) has been released internationally to facilitate the development and application of artificial intelligence models, not only in the agrometeorological sector but also for any supervised model that improves decision-making in other areas of interest.
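    A minimal sketch of a temperature-only solar radiation estimator in the spirit of the models described above; the feature set, the Hargreaves-style synthetic data, and the network size are illustrative assumptions, not AgroML code.

```python
# Sketch: estimate daily solar radiation from thermal parameters only.
# Synthetic target follows a Hargreaves-style relation, Rs ∝ sqrt(Tmax - Tmin),
# where the factor 30 stands in for extraterrestrial radiation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
tmax = rng.uniform(15, 40, 2000)                  # daily maximum temperature (°C)
tmin = tmax - rng.uniform(5, 20, 2000)            # daily minimum temperature (°C)
rs = 0.16 * np.sqrt(tmax - tmin) * 30 + rng.normal(0, 0.5, 2000)  # MJ/m²/day

X = np.column_stack([tmax, tmin, tmax - tmin])    # thermal inputs only
X_train, X_test, y_train, y_test = train_test_split(X, rs, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R² on held-out days: {model.score(X_test, y_test):.3f}")
```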

    Deep Learning-Powered Computational Intelligence for Cyber-Attack Detection and Mitigation in 5G-Enabled Electric Vehicle Charging Station

    An electric vehicle charging station (EVCS) infrastructure is the backbone of transportation electrification. However, the EVCS has various cyber-attack vulnerabilities in software, hardware, the supply chain, and incumbent legacy technologies such as network, communication, and control. Proactively monitoring, detecting, and defending against these attacks is therefore very important. State-of-the-art approaches are not agile and intelligent enough to detect, mitigate, and defend against various cyber-physical attacks on the EVCS system. To overcome these limitations, this dissertation designs, develops, implements, and tests data-driven, deep learning-powered computational intelligence to detect and mitigate cyber-physical attacks at the network and physical layers of 5G-enabled EVCS infrastructure. The application of 5G slicing to ensure security and service level agreements (SLAs) in the EVCS ecosystem has also been studied. Various cyber-attacks on the network in a standalone 5G-enabled EVCS environment have been considered, such as distributed denial of service (DDoS), false data injection (FDI), advanced persistent threats (APT), and ransomware attacks. Mathematical models for these cyber-attacks have been developed, and their impact on EVCS operation has been analyzed. Various deep learning-powered intrusion detection systems have been proposed to detect attacks using local electrical and network fingerprints. Furthermore, a novel detection framework has been designed and developed to deal with ransomware threats in high-speed, high-dimensional, multimodal data and assets from eccentric stakeholders of the connected automated vehicle (CAV) ecosystem. To mitigate the adverse effects of cyber-attacks on EVCS controllers, novel data-driven digital clones based on Twin Delayed Deep Deterministic Policy Gradient (TD3) Deep Reinforcement Learning (DRL) have been developed. In addition, various brute-force and controller-clone-based methods have been devised and tested to help defend against and mitigate the impact of attacks on EVCS operation. The performance of the proposed mitigation method has been compared with that of a benchmark Deep Deterministic Policy Gradient (DDPG)-based digital clone approach. Simulation results obtained from Python, Matlab/Simulink, and NetSim demonstrate that the cyber-attacks are disruptive and detrimental to the operation of the EVCS, and that the proposed detection and mitigation methods are effective and perform better than conventional and benchmark techniques for the 5G-enabled EVCS.
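    A minimal sketch of the kind of deep learning-powered intrusion detector the dissertation describes: a feed-forward classifier over network/electrical fingerprint features. The feature layout, labels, and architecture are placeholders, not the dissertation's models or datasets.

```python
# Sketch: feed-forward intrusion detector over flow/fingerprint features.
# Features and attack labels are synthetic placeholders (e.g. a DDoS flag).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))                   # 20 network/electrical features
y = (X[:, 0] + X[:, 3] > 1.5).astype("float32")   # stand-in attack labels

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(attack)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=128, verbose=0)
```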

    An Adaptive Approach for Probabilistic Wind Power Forecasting Based on Meta-Learning

    This paper studies an adaptive approach for probabilistic wind power forecasting (WPF) comprising offline and online learning procedures. In the offline learning stage, a base forecast model is trained via the inner and outer loop updates of meta-learning, which endows it with excellent adaptability to different forecast tasks, i.e., probabilistic WPF with different lead times or at different locations. In the online learning stage, the base forecast model is applied to online forecasting combined with incremental learning techniques. On this basis, the online forecast takes full advantage of both recent information and the adaptability of the base forecast model. Two applications are developed based on the proposed approach, concerning forecasting with different lead times (temporal adaptation) and forecasting for newly established wind farms (spatial adaptation), respectively. Numerical tests were conducted on real-world wind power datasets. Simulation results validate the adaptability advantages of the proposed methods compared with existing alternatives.
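    A minimal sketch of the offline inner/outer-loop idea, in a first-order (FOMAML-style) form on a linear toy model; the paper's actual losses, task sampling, and probabilistic outputs are not reproduced here.

```python
# Sketch: first-order meta-learning (FOMAML-style) on a linear toy model.
# Each "task" stands in for probabilistic WPF at one lead time or wind farm.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Synthetic forecast task: y = X @ w_true + noise."""
    w_true = rng.normal(size=5)
    X = rng.normal(size=(32, 5))
    return X, X @ w_true + 0.1 * rng.normal(size=32)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)         # gradient of the MSE loss

w_meta = np.zeros(5)                              # base (meta) model parameters
inner_lr, outer_lr = 0.05, 0.01

for _ in range(500):                              # offline stage
    X, y = sample_task()
    w_task = w_meta - inner_lr * grad(w_meta, X, y)   # inner-loop adaptation
    w_meta -= outer_lr * grad(w_task, X, y)           # outer-loop meta-update

# Online stage: adapt the base model to a new task with a few incremental steps.
X_new, y_new = sample_task()
w_online = w_meta - inner_lr * grad(w_meta, X_new, y_new)
```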

    Machine learning for the sustainable energy transition: a data-driven perspective along the value chain from manufacturing to energy conversion

    According to the IPCC special report Global Warming of 1.5 °C, climate action is not only necessary but more urgent than ever. The world is witnessing rising sea levels, heat waves, flooding events, droughts, and desertification resulting in the loss of lives and damage to livelihoods, especially in countries of the Global South. To mitigate climate change and honour the Paris Agreement, it is of the utmost importance to reduce greenhouse gas emissions from the most emitting sector, namely the energy sector. To this end, large-scale penetration of renewable energy systems into the energy market is crucial for the energy transition toward a sustainable future, replacing fossil fuels and improving access to energy with socio-economic benefits. With the advent of Industry 4.0, Internet of Things technologies have been increasingly applied to the energy sector, introducing the concepts of the smart grid or, more generally, the Internet of Energy. These paradigms are steering the energy sector towards more efficient, reliable, flexible, resilient, safe, and sustainable solutions with huge potential environmental and social benefits. To realize these concepts, new information technologies are required, and among the most promising possibilities are Artificial Intelligence and Machine Learning, which in many countries have already revolutionized the energy industry. This thesis presents different Machine Learning algorithms and methods for the implementation of new strategies to make renewable energy systems more efficient and reliable. It presents various learning algorithms, highlighting their advantages and limits, and evaluating their application to different tasks in the energy context. In addition, different techniques are presented for the preprocessing and cleaning of time series, nowadays collected by sensor networks mounted on every renewable energy system. With the possibility to install large numbers of sensors that collect vast amounts of time series, it is vital to detect and remove irrelevant, redundant, or noisy features and alleviate the curse of dimensionality, thus improving the interpretability of predictive models, speeding up their learning process, and enhancing their generalization properties. This thesis therefore discusses the importance of dimensionality reduction in sensor networks mounted on renewable energy systems and, to this end, presents two novel unsupervised algorithms. The first approach maps time series into the network domain through visibility graphs and uses a community detection algorithm to identify clusters of similar time series and select representative parameters. This method can group both homogeneous and heterogeneous physical parameters, even when related to different functional areas of a system. The second approach proposes the Combined Predictive Power Score, a feature selection method with a multivariate formulation that explores multiple expanding subsets of variables and identifies the combination of features with the highest predictive power over specified target variables. The method includes a selection algorithm that converges to the smallest set of predictors with the highest predictive power. Once the combination of variables is identified, the most relevant parameters in a sensor network can be selected to perform dimensionality reduction.
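    A minimal sketch of the visibility-graph idea behind the first approach: a time series is mapped to its natural visibility graph and communities are detected on it. Grouping multiple sensor series would compare such graphs; the construction below and the toy series are illustrative, not the thesis implementation.

```python
# Sketch: natural visibility graph of a time series plus community detection.
# Two samples i < j are linked if no intermediate sample blocks the line of sight.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def visibility_graph(series):
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # Height of the sight line from sample i to sample j at position k.
            sight = lambda k: series[j] + (series[i] - series[j]) * (j - k) / (j - i)
            if all(series[k] < sight(k) for k in range(i + 1, j)):
                g.add_edge(i, j)
    return g

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.3 * rng.normal(size=120)
g = visibility_graph(ts)
communities = greedy_modularity_communities(g)
print(f"{g.number_of_edges()} edges, {len(communities)} communities")
```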
    Data-driven methods open the possibility to support strategic decision-making, resulting in a reduction of Operation & Maintenance costs, machine faults, repair stops, and spare parts inventory size. This thesis therefore presents two approaches in the context of predictive maintenance, based on anomaly detection algorithms, to improve the lifetime and efficiency of the equipment. The first approach proposes an anomaly detection model based on Principal Component Analysis that is robust to false alarms, can isolate anomalous conditions, and can anticipate equipment failures. The second approach has at its core a neural architecture, namely a Graph Convolutional Autoencoder, which models the sensor network as a dynamical functional graph by simultaneously considering the information content of individual sensor measurements (graph node features) and the nonlinear correlations existing between all pairs of sensors (graph edges). The proposed neural architecture can capture hidden anomalies even when the turbine continues to deliver the power requested by the grid, and can anticipate equipment failures. Since the model is unsupervised and completely data-driven, this approach can be applied to any wind turbine equipped with a SCADA system. When it comes to renewable energies, the unschedulable uncertainty due to their intermittent nature represents an obstacle to the reliability and stability of energy grids, especially with large-scale integration. These challenges can be alleviated if the natural sources or the power output of renewable energy systems can be forecast accurately, allowing power system operators to plan optimal power management strategies to balance dispatch between intermittent power generation and load demand. To this end, this thesis proposes a multi-modal spatio-temporal neural network for multi-horizon wind power forecasting. In particular, the model combines high-resolution Numerical Weather Prediction forecast maps with turbine-level SCADA data and explores how meteorological variables on different spatial scales, together with the turbines' internal operating conditions, impact wind power forecasts. The world is undergoing a third energy transition with the main goal of tackling global climate change through decarbonization of energy supply and consumption patterns. This is possible thanks to global cooperation and agreements between parties, advances in power generation systems, and Internet of Things and Artificial Intelligence technologies, and it is necessary to prevent the severe and irreversible consequences of climate change that are threatening life on the planet as we know it. This thesis is intended as a reference for researchers who want to contribute to the sustainable energy transition and are approaching the field of Artificial Intelligence in the context of renewable energy systems.
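    A minimal sketch of the first, PCA-based detector in its standard form (reconstruction error, i.e., the Q/SPE statistic, with an empirical control limit); the data and the 99th-percentile threshold are illustrative assumptions, not the thesis implementation.

```python
# Sketch: PCA anomaly detection via the squared prediction error (Q/SPE).
# "healthy" stands in for SCADA data from normal operation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
healthy = rng.normal(size=(1000, 12))             # 12 SCADA channels, normal operation
test = rng.normal(size=(100, 12))
test[-5:] += 4.0                                  # inject anomalies in the last rows

pca = PCA(n_components=4).fit(healthy)

def spe(X):
    """Squared prediction error: distance to the principal subspace."""
    recon = pca.inverse_transform(pca.transform(X))
    return ((X - recon) ** 2).sum(axis=1)

threshold = np.percentile(spe(healthy), 99)       # empirical control limit
alarms = spe(test) > threshold
print(f"{alarms.sum()} alarms; last five rows flagged: {alarms[-5:]}")
```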

    Predictive and Prescriptive Monitoring of Business Process Outcomes

    Recent years have witnessed a growing adoption of machine learning techniques for business improvement across various fields. Among other emerging applications, organizations are exploiting opportunities to improve the performance of their business processes by using predictive models for runtime monitoring. Such predictive process monitoring techniques take an event log (a set of completed business process execution traces) as input and use machine learning techniques to train predictive models. At runtime, these techniques predict either the next event, the remaining time, or the final outcome of an ongoing case, given its incomplete execution trace consisting of the events performed up to the present moment. In particular, a family of techniques called outcome-oriented predictive process monitoring focuses on predicting whether a case will end with a desired or an undesired outcome. The user of the system can use the predictions to decide whether or not to intervene, with the purpose of preventing an undesired outcome or mitigating its negative effects. Prescriptive process monitoring systems go beyond purely predictive ones by not only generating predictions but also advising the user if and how to intervene in a running case in order to optimize a given utility function. This thesis addresses the question of how to train, evaluate, and use predictive models for predictive and prescriptive monitoring of business process outcomes. The thesis proposes a taxonomy and performs a comparative experimental evaluation of existing techniques in the field. Moreover, we propose a framework for incorporating textual data into predictive monitoring systems. We introduce the notion of temporal stability to evaluate these systems and propose a prescriptive process monitoring framework for advising users if and how to act upon the predictions. The results suggest that the proposed solutions complement the existing techniques and can be useful for practitioners implementing predictive process monitoring systems in real life.
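    A minimal sketch of outcome-oriented predictive process monitoring: every prefix of each completed trace is frequency-encoded and a classifier predicts the case outcome at runtime. The toy event log and the encoding/classifier choices are illustrative assumptions, not the thesis pipeline.

```python
# Sketch: outcome-oriented predictive process monitoring on trace prefixes.
# Toy event log: each completed trace is an activity sequence with a 0/1 outcome.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

log = [
    (["register", "check", "approve"], 1),
    (["register", "check", "reject"], 0),
    (["register", "check", "escalate", "reject"], 0),
    (["register", "approve"], 1),
] * 50

# Frequency-encode every prefix of every completed trace.
X_raw, y = [], []
for trace, outcome in log:
    for k in range(1, len(trace) + 1):
        X_raw.append(Counter(trace[:k]))
        y.append(outcome)

vec = DictVectorizer()
clf = RandomForestClassifier(random_state=0).fit(vec.fit_transform(X_raw), y)

# Runtime: predict the outcome of an ongoing (incomplete) case.
running = Counter(["register", "check"])
print("P(desired outcome) =", clf.predict_proba(vec.transform([running]))[0, 1])
```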

    Modelling blood flow in patients with heart valve disease using deep learning: A computationally efficient method to expand diagnostic capabilities in clinical routine

    Introduction: The computational modelling of blood flow is known to provide vital hemodynamic parameters for diagnosis and treatment support in patients with valvular heart disease. However, most proposed diagnosis/treatment-support solutions based on flow modelling utilize time- and resource-intensive computational fluid dynamics (CFD) and are therefore difficult to implement in clinical practice. In contrast, deep learning (DL) algorithms provide results quickly with little need for computational power. Thus, modelling blood flow with DL instead of CFD may substantially enhance the usability of flow-modelling-based diagnosis/treatment support in clinical routine. In this study, we propose a DL-based approach to compute pressure and wall shear stress (WSS) in the aorta and aortic valve of patients with aortic stenosis (AS). Methods: A total of 103 individual surface models of the aorta and aortic valve were constructed from computed tomography data of AS patients. Based on these surface models, a total of 267 patient-specific, steady-state CFD simulations of aortic flow under various flow rates were performed. Using this simulation data, an artificial neural network (ANN) was trained to compute spatially resolved pressure and WSS using a centerline-based representation. An unseen test subset of 23 cases was used to compare both methods. Results: ANN- and CFD-based computations agreed well, with a median relative difference between the two methods of 6.0% for pressure and 4.9% for wall shear stress. Demonstrating the ability of DL to compute clinically relevant hemodynamic parameters for AS patients, this work presents a possible solution to facilitate the introduction of modelling-based treatment support into clinical practice.
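    A minimal sketch of the kind of surrogate the study describes: a neural network trained on CFD results to map a centerline-based representation to pressure and WSS. The input/output dimensions and the random data are placeholders, not the study's model.

```python
# Sketch: surrogate network mapping a centerline-based geometry/flow encoding
# to spatially resolved pressure and WSS; trained against CFD outputs.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
n_pts = 64                                        # points along the centerline
X = rng.normal(size=(267, n_pts * 3 + 1))         # per-point geometry + flow rate
Y = rng.normal(size=(267, n_pts * 2))             # pressure and WSS per point (CFD)

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(Y.shape[1]),               # spatially resolved pressure + WSS
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=5, batch_size=16, verbose=0)
```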