120 research outputs found

    On power system automation: a Digital Twin-centric framework for the next generation of energy management systems

    The ubiquitous digital transformation also influences power system operation. Emerging real-time applications in information technology (IT) and operational technology (OT) provide new opportunities to address the increasingly demanding operating conditions imposed by the progressing energy transition. This IT/OT convergence is epitomised by the novel Digital Twin (DT) concept. By integrating sensor data into analytical models and aligning the model states with the observed system, a power system DT can be created. The result is a validated high-fidelity model that can be applied within the next generation of energy management systems (EMS) to support power system operation. By providing a consistent and maintainable data model, the modular DT-centric EMS proposed in this work addresses several key requirements of modern EMS architectures. It increases situational awareness in the control room, enables the implementation of model maintenance routines, and facilitates automation approaches, while raising the confidence in operational decisions deduced from the validated model. This gain in trust contributes to the digital transformation and enables a higher degree of power system automation. By considering operational planning and power system operation processes, a direct link to practice is ensured. The feasibility of the concept is examined by numerical case studies.

    The electrical power system is in the process of an extensive transformation. Driven by the energy transition towards renewable energy resources, many conventional power plants in Germany have already been decommissioned or will be decommissioned within the next decade. Among other things, these changes lead to an increased utilisation of power transmission equipment and an increasing number of complex dynamic phenomena. The resulting system operation closer to physical boundaries leads to an increased susceptibility to disturbances and a reduced time span to react to critical contingencies and perturbations. In consequence, the task of operating the power system will become increasingly demanding. As some reactions to disturbances may be required within timeframes that exceed human capabilities, these developments are intrinsic drivers towards a higher degree of automation in power system operation. This thesis proposes a framework to create a modular Digital Twin-centric energy management system. It enables the provision of validated and trustworthy models built from knowledge about the power system derived from physical laws and process data. As the interaction of information and operational technologies is combined in the concept of the Digital Twin, it can serve as a framework for future energy management systems, including novel applications for power system monitoring and control that consider power system dynamics. To provide a validated high-fidelity dynamic power system model, high-resolution time-synchronised phasor measurements are applied for validation and parameter estimation. This increases the trust in the underlying power system model as well as the confidence in operational decisions derived from advanced analytic applications such as online dynamic security assessment. By providing an appropriate, consistent, and maintainable data model, the framework addresses several key requirements of modern energy management system architectures, while enabling the implementation of advanced automation routines and control approaches.
    Future energy management systems based on the proposed architecture can provide increased observability, whereby the situational awareness of human operators in the control room can be improved. In further development stages, cognitive systems that are able to learn from the data provided can be applied, e.g., machine-learning-based analytical functions. Thus, the framework enables a higher degree of power system automation as well as the deployment of assistance and decision-support functions for power system operation. The framework represents a contribution to the digital transformation of power system operation and facilitates a successful energy transition. The feasibility of the concept is examined by case studies in the form of numerical simulations to provide a proof of concept.

    The electrical power system is undergoing an extensive transformation process. Driven by the progressing energy transition and the increasing use of renewable energy sources, many conventional power plants in Germany have already been decommissioned or will be decommissioned in the coming years. Among other things, these changes lead to higher equipment utilisation and reduced system inertia, and thus to a growing number of complex dynamic phenomena in the electrical power system. Operating the system closer to its physical limits furthermore results in an increased susceptibility to disturbances and a shorter time span in which to react to critical events and faults. As a consequence, the task of operating the power grid becomes more demanding. Especially where reaction times beyond human capabilities are required, these changes are intrinsic drivers towards a higher degree of automation in grid and system operation. Emerging real-time applications in information and operational technologies and a growing volume of high-resolution sensor data enable new approaches for the design and operation of cyber-physical systems. A promising approach recently discussed in this context is the concept of the so-called Digital Twin. Since the interplay of information and operational technologies is united in the concept of the Digital Twin, it can serve as the basis for a future control system architecture and for novel control applications. This thesis develops a framework that makes a Digital Twin usable within a novel modular control system architecture for the monitoring and control of future energy systems. Complementing the functions already available in modern grid control systems, the concept supports the representation of grid dynamics on the basis of a dynamic network model. To enable a faithful representation of the grid dynamics, time-synchronised phasor measurements are used for model validation and model parameter estimation. This increases the significance of security analyses as well as the trust in the models from which operational decisions are derived.

    By providing a validated, consistent, and maintainable data model based on physical laws and process data acquired during operation, the presented architecture addresses several key requirements of modern grid control systems. The framework thus enables a higher degree of automation of power grid operation as well as the deployment of decision-support functions up to trustworthy assistance systems based on cognitive systems. These functions can increase operational security and represent an important contribution to the digital transformation of power grid operation and to the successful realisation of the energy transition. The presented concept is examined on the basis of numerical simulations, and its fundamental feasibility is demonstrated through case studies.
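
    The validation-and-parameter-estimation step at the heart of such a Digital Twin can be pictured as a small regression problem. The sketch below is a minimal assumption, not the thesis' implementation: it re-estimates the inertia and damping of a single-machine swing model from time-synchronised phasor samples by ordinary least squares and flags the twin as out of sync when the prediction error grows too large.

```python
import numpy as np

# Hypothetical swing-equation model: 2H * d(omega)/dt = Pm - Pe - D*omega.
# With PMU samples of rotor speed omega and power imbalance (Pm - Pe),
# both H and D enter linearly, so ordinary least squares re-estimates them.

def estimate_swing_parameters(t, omega, p_imbalance):
    """Fit inertia H and damping D from time-synchronised phasor data."""
    domega_dt = np.gradient(omega, t)                 # numerical derivative
    regressors = np.column_stack([2.0 * domega_dt, omega])
    (h, d), *_ = np.linalg.lstsq(regressors, p_imbalance, rcond=None)
    return {"H": h, "D": d}

def twin_in_sync(params, t, omega, p_imbalance, tol=0.05):
    """Validation step: report False when the relative prediction error
    of the aligned model exceeds `tol`, i.e. the twin has drifted."""
    pred = 2.0 * params["H"] * np.gradient(omega, t) + params["D"] * omega
    err = np.linalg.norm(pred - p_imbalance) / np.linalg.norm(p_imbalance)
    return err <= tol

# Synthetic demonstration: a damped oscillation sampled at 50 Hz,
# generated with H = 4.0 and D = 1.5, which the fit should recover.
t = np.linspace(0.0, 10.0, 501)
omega = 0.02 * np.exp(-0.1 * t) * np.cos(2.0 * np.pi * 0.8 * t)
p_imbalance = 2.0 * 4.0 * np.gradient(omega, t) + 1.5 * omega

params = estimate_swing_parameters(t, omega, p_imbalance)
print(params, twin_in_sync(params, t, omega, p_imbalance))
```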

    Multi-layered model of individual HIV infection progression and mechanisms of phenotypical expression

    Cite as: Perrin, Dimitri (2008) Multi-layered model of individual HIV infection progression and mechanisms of phenotypical expression. PhD thesis, Dublin City University.

    Big data analytics towards predictive maintenance at the INFN-CNAF computing centre

    High Energy Physics (HEP) has long been among the forerunners in managing and processing enormous scientific datasets and in operating some of the largest data centres for scientific applications. HEP developed a computing grid (Grid) for the computations of the Large Hadron Collider (LHC) at CERN in Geneva, which currently coordinates daily computing operations on over 800k processors in 170 computing centres and manages half an Exabyte of data on disk distributed across 5 continents. In the next phases of the LHC, especially in view of Run 4, the amount of data handled by the computing centres will increase considerably. In this context, the HEP Software Foundation has produced a Community White Paper (CWP) that charts the path for the evolution of modern software and computing models in preparation for the so-called High Luminosity phase of the LHC. This work identified enormous potential in Big Data Analytics techniques for addressing the future challenges of HEP. One of these developments concerns so-called Operation Intelligence, i.e. the pursuit of a higher level of automation within the workflows. Approaches of this kind could enable the transition from a reactive maintenance regime to a more advanced predictive, or even prescriptive, one. This thesis presents the work carried out in collaboration with the INFN-CNAF computing centre to introduce a system for ingesting, organising, and processing the centre's logs on a unified Big Data Analytics platform, with the aim of prototyping a predictive maintenance model for the centre. The thesis contributes to this project with the development of a log-message clustering algorithm based on similarity measures between textual fields, designed to overcome the limits posed by the verbosity and heterogeneity of the logs collected from the various services operating 24/7 at the centre.
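
    As an illustration of the clustering idea, the following minimal sketch (a plain-Python assumption, not the thesis' actual algorithm) groups log messages greedily by textual similarity, so that many verbose, heterogeneous messages collapse into a handful of representative templates.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two log messages' textual fields."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_logs(messages, threshold=0.75):
    """Greedy clustering: each message joins the first cluster whose
    representative it resembles closely enough, else it founds a new one."""
    clusters = []  # list of (representative, members) pairs
    for msg in messages:
        for rep, members in clusters:
            if similarity(rep, msg) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((msg, [msg]))
    return clusters

# Toy 24/7 service logs; real input would be the centre's ingested log stream.
logs = [
    "disk /dev/sda1 at 91% capacity",
    "disk /dev/sdb2 at 87% capacity",
    "authentication failure for user grid042",
    "authentication failure for user grid117",
]
for rep, members in cluster_logs(logs):
    print(f"{len(members):2d} x  {rep}")
```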

    Scalable allocation of safety integrity levels in automotive systems

    The allocation of safety integrity requirements is an important problem in modern safety engineering. It is necessary to find an allocation that meets system-level safety integrity targets and that is simultaneously cost-effective. As safety-critical systems grow in size and complexity, the problem becomes too difficult to solve in a manual process. Although this thesis addresses the generic problem of safety integrity requirements allocation, the automotive industry is taken as an application example.

    Recently, the problem has been partially addressed with the use of model-based safety analysis techniques and exact optimisation methods. However, allocation cost impacts are usually either not directly taken into account or only simple, linear cost models are considered; furthermore, given the combinatorial nature of the problem, applicability of the exact techniques to large problems is not a given. This thesis argues that it is possible to solve the allocation problem effectively and relatively efficiently using a mixture of model-based safety analysis and metaheuristic optimisation techniques. Since suitable model-based safety analysis techniques were already known at the start of this project (e.g. HiP-HOPS), the research focuses on the optimisation task.

    The thesis reviews the process of safety integrity requirements allocation and presents relevant related work. Then, the state of the art of metaheuristic optimisation is analysed, and a series of techniques based on Genetic Algorithms, the Particle Swarm Optimiser, and Tabu Search are developed. These techniques are applied to a set of problems based on complex engineering systems, considering the use of different cost functions. The most promising method is selected for investigation of performance improvements and usability enhancements. Overall, the results show the feasibility of the approach and suggest good scalability, whilst also pointing towards areas for improvement.
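
    To make the optimisation task concrete, the sketch below shows one of the metaheuristics named above, a plain genetic algorithm, allocating integrity levels 0-4 to components so that every minimal cut set meets a system-level target at minimum cost. The cut sets, the nonlinear cost table, and all GA settings are illustrative assumptions, not values from the thesis.

```python
import random

CUT_SETS = [(0, 1), (1, 2, 3), (0, 3, 4)]  # component indices per cut set
TARGET = 4                                  # system-level integrity target
COST = [0, 10, 20, 40, 80]                  # assumed nonlinear cost per SIL 0..4
N_COMP = 5

def feasible(alloc):
    # Decomposition rule: SILs along each cut set must add up to the target.
    return all(sum(alloc[i] for i in cs) >= TARGET for cs in CUT_SETS)

def fitness(alloc):
    """Total cost plus a heavy penalty for each unmet cut-set target."""
    cost = sum(COST[s] for s in alloc)
    shortfall = sum(max(0, TARGET - sum(alloc[i] for i in cs)) for cs in CUT_SETS)
    return cost + 1000 * shortfall

def evolve(pop_size=60, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 4) for _ in range(N_COMP)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the cheaper half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_COMP)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.3:          # point mutation
                child[random.randrange(N_COMP)] = random.randint(0, 4)
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, sum(COST[s] for s in best), feasible(best)

print(evolve())  # e.g. an allocation, its cost, and whether it is feasible
```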

    Efficient random set uncertainty quantification by means of advanced sampling techniques

    In this dissertation, Random Sets and advanced sampling techniques are combined for general and efficient uncertainty quantification. Random Sets extend the traditional probabilistic framework, as they also comprise imprecision to account for scarce data, lack of knowledge, vagueness, subjectivity, etc. This generality of Random Sets in capturing different kinds of uncertainty comes at a very high computational price: Random Sets require a min-max convolution for each sample picked by the Monte Carlo method. The min-max convolution can be sped up considerably when the system response relationship is known in analytical form. However, in a general multidisciplinary design context, the system response is very often treated as a “black box”; thus, the convolution requires the adoption of evolutionary or stochastic algorithms, which need to be deployed for each Monte Carlo sample. Therefore, the availability of very efficient sampling techniques is paramount for Random Sets to be applicable to engineering problems.

    In this dissertation, advanced Line Sampling methods have been generalised and extended to include Random Sets. Advanced sampling techniques make the estimation of quantiles at the relevant probability levels extremely efficient, requiring significantly fewer samples than standard Monte Carlo methods. In particular, the Line Sampling method has been enhanced to link well to the Random Set representation. These developments comprise line search, line selection, direction adaptation, and data buffering. The enhanced efficiency of Line Sampling is demonstrated by means of numerical and large-scale finite element examples. With the enhanced algorithm, the connection between Line Sampling and the generalised uncertainty model has been made possible, both in a Double Loop and in a Random Set approach. The presented computational strategies have been implemented in OpenCossan, the open-source general-purpose software for uncertainty quantification. The general reach of the proposed strategy is demonstrated by means of applications to the structural reliability of a finite element model, to preventive maintenance, and to the NASA Langley multidisciplinary uncertainty quantification challenge.
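
    The efficiency argument is easy to see on a toy problem. In the sketch below (a minimal assumption with a linear limit state, not the enhanced algorithm from the dissertation), each random line contributes a conditional tail probability along an important direction, so a rare failure probability of about 1.35e-3 is recovered with only 50 lines, where crude Monte Carlo would need tens of thousands of samples.

```python
import math, random

def g(x):
    """Limit-state function in standard normal space; failure when g <= 0.
    For this linear toy case the exact Pf is Phi(-3) ~ 1.35e-3."""
    return 3.0 - (x[0] + x[1]) / math.sqrt(2.0)

ALPHA = (1 / math.sqrt(2), 1 / math.sqrt(2))  # assumed important direction

def phi_neg(c):
    """Standard normal tail probability Phi(-c)."""
    return 0.5 * math.erfc(c / math.sqrt(2.0))

def root_on_line(z_perp, lo=0.0, hi=10.0, iters=60):
    """Bisection for the distance c where g(z_perp + c*ALPHA) crosses zero."""
    f = lambda c: g((z_perp[0] + c * ALPHA[0], z_perp[1] + c * ALPHA[1]))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def line_sampling(n_lines=50, seed=1):
    random.seed(seed)
    estimates = []
    for _ in range(n_lines):
        z = (random.gauss(0, 1), random.gauss(0, 1))
        dot = z[0] * ALPHA[0] + z[1] * ALPHA[1]
        z_perp = (z[0] - dot * ALPHA[0], z[1] - dot * ALPHA[1])  # remove ALPHA part
        estimates.append(phi_neg(root_on_line(z_perp)))          # per-line tail prob
    return sum(estimates) / n_lines

print(f"Pf ~ {line_sampling():.2e}  (exact: {phi_neg(3.0):.2e})")
```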

    Comparative Analysis of Machine Learning Models for Predictive Maintenance of Ball Bearing Systems

    In the era of Industry 4.0 and beyond, ball bearings remain an important part of industrial systems. The failure of ball bearings can lead to plant downtime, inefficient operations, and significant maintenance expenses. Although conventional preventive maintenance mechanisms like time-based maintenance, routine inspections, and manual data analysis provide a certain level of fault prevention, they are often reactive, time-consuming, and imprecise. Machine learning algorithms, on the other hand, can detect anomalies early, process vast amounts of data, continuously improve in almost real time, and, in turn, significantly enhance the efficiency of modern industrial systems. In this work, we compare different machine learning and deep learning techniques to optimise the predictive maintenance of ball bearing systems, which will reduce downtime and improve the efficiency of current and future industrial systems. For this purpose, we evaluate and compare classification algorithms like Logistic Regression and Support Vector Machine, as well as ensemble algorithms like Random Forest and Extreme Gradient Boost. We also explore and evaluate long short-term memory, a type of recurrent neural network. We assess and compare these models in terms of accuracy, precision, recall, F1 score, and computational requirements. Our comparison results indicate that Extreme Gradient Boost offers the best trade-off between overall performance and computation time. For a dataset of 2155 vibration signals, Extreme Gradient Boost achieves an accuracy of 96.61% while requiring a training time of only 0.76 s. Moreover, among the techniques that give an accuracy greater than 80%, Extreme Gradient Boost also gives the best accuracy-to-computation-time ratio.
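
    A comparison of this kind can be set up in a few lines. The sketch below is a hedged stand-in, not the paper's pipeline: synthetic vibration windows replace the bearing dataset, and scikit-learn's GradientBoostingClassifier stands in for Extreme Gradient Boost (xgboost.XGBClassifier in practice). Only the pattern of feature extraction, training, and timed accuracy comparison mirrors the evaluation described above.

```python
import time
import numpy as np
from scipy.stats import kurtosis
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_windows(n, faulty):
    """Synthetic vibration windows; faulty bearings show periodic impacts."""
    x = rng.normal(0, 1, size=(n, 256))
    if faulty:
        x[:, ::32] += rng.normal(6, 1, size=(n, 8))  # impulsive spikes
    return x

signals = np.vstack([make_windows(400, False), make_windows(400, True)])
labels = np.repeat([0, 1], 400)

# Feature extraction: RMS, kurtosis, and peak value per window.
feats = np.column_stack([
    np.sqrt((signals ** 2).mean(axis=1)),
    kurtosis(signals, axis=1),
    np.abs(signals).max(axis=1),
])
Xtr, Xte, ytr, yte = train_test_split(feats, labels, random_state=0)

models = {
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "GradBoost": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(Xtr, ytr)
    print(f"{name:13s} acc={model.score(Xte, yte):.3f} "
          f"train={time.perf_counter() - t0:.2f}s")
```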

    Data-driven prognostics and logistics optimisation: A deep learning journey

