1,159 research outputs found

    Integration of Novel Sensors and Machine Learning for Predictive Maintenance in Medium Voltage Switchgear to Enable the Energy and Mobility Revolutions

    The development of renewable energies and smart mobility has profoundly impacted the future of the distribution grid. An increasing bidirectional energy flow stresses the assets of the distribution grid, especially medium voltage switchgear. This calls for improved maintenance strategies to prevent critical failures. Predictive maintenance, a maintenance strategy relying on current condition data of assets, serves as a guideline. Novel sensors covering thermal, mechanical, and partial discharge aspects of switchgear enable continuous condition monitoring of some of the most critical assets of the distribution grid. Combined with machine learning algorithms, the demands placed on the distribution grid by the energy and mobility revolutions can be handled. In this paper, we review the current state of the art of all aspects of condition monitoring for medium voltage switchgear. Furthermore, we present an approach to developing a predictive maintenance system based on novel sensors and machine learning. We show how the existing medium voltage grid infrastructure can adapt to these new needs on an economic scale.
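
    As a rough, hypothetical illustration of the kind of pipeline the paper describes, the sketch below scores new switchgear sensor readings against a baseline of healthy operation using scikit-learn's IsolationForest. The monitoring channels, their units, and all numbers are invented; the paper does not prescribe this particular model.

        # Hypothetical sketch: unsupervised anomaly scoring of switchgear
        # condition-monitoring features. All channels and values are invented.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        # Columns: busbar temperature [degC], contact vibration RMS [m/s^2],
        # partial-discharge magnitude [pC] -- assumed monitoring channels.
        healthy = rng.normal([55.0, 0.8, 20.0], [3.0, 0.1, 5.0], size=(500, 3))

        model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

        # A new reading with elevated temperature and discharge magnitude.
        reading = np.array([[78.0, 1.1, 140.0]])
        print(model.predict(reading))          # -1 flags an anomaly
        print(model.score_samples(reading))    # lower score = more anomalous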

    Predictive maintenance of electrical grid assets: internship at EDP Distribuição - Energia S.A

    Internship report presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. This report describes the activities developed during an internship at EDP Distribuição, focusing on a predictive maintenance analytics project directed at high voltage electrical grid assets, including Overhead Lines, Power Transformers, and Circuit Breakers. The project's main goal is to support EDP's asset management processes by improving maintenance and investment planning. The project's main deliverables are the Probability of Failure metric, which forecasts asset failures 15 days ahead of time and is estimated through supervised machine learning models; the Health Index metric, which indicates an asset's current state and condition, implemented through the Ofgem methodology; and two asset management dashboards. The project was implemented by an external service provider, a consulting company, and during the internship it was possible to join the team and participate in the development activities.
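
    The Probability of Failure deliverable is described as a supervised model that forecasts failures 15 days ahead. A minimal sketch of that idea follows, with entirely invented features, labels, and model choice; the report's actual feature set and algorithm are not reproduced here.

        # Hypothetical sketch of a 15-day-ahead Probability of Failure model:
        # a supervised classifier on asset condition features.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 60, n),    # asset age [years] (invented feature)
            rng.uniform(0, 1, n),     # normalized load factor (invented)
            rng.poisson(2, n),        # defects at last inspection (invented)
        ])
        # Synthetic label: failed within the next 15 days (1) or not (0).
        y = (rng.random(n) < 0.05 + 0.002 * X[:, 0] * X[:, 1]).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

        # Probability of Failure for each held-out asset.
        pof = clf.predict_proba(X_te)[:, 1]
        print(pof[:5])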

    Developing Efficient Strategies For Global Sensitivity Analysis Of Complex Environmental Systems Models

    Complex Environmental Systems Models (CESMs) have been developed and applied as vital tools to tackle the ecological, water, food, and energy crises that humanity faces, and have been used widely to support decision-making about management of the quality and quantity of Earth's resources. CESMs are often controlled by many interacting and uncertain parameters, and typically integrate data from multiple sources at different spatio-temporal scales, which makes them highly complex. Global Sensitivity Analysis (GSA) techniques have proven to be promising for deepening our understanding of model complexity and interactions between various parameters, and for providing helpful recommendations for further model development and data acquisition. Aside from the complexity issue, the computationally expensive nature of CESMs precludes effective application of existing GSA techniques in quantifying the global influence of each parameter on the variability of CESM outputs. This is because a comprehensive sensitivity analysis often requires performing a very large number of model runs. Therefore, there is a need to break down this barrier through the development of more efficient strategies for sensitivity analysis. The research undertaken in this dissertation is mainly focused on alleviating the computational burden associated with GSA of computationally expensive CESMs through developing efficiency-increasing strategies for robust sensitivity analysis. This is accomplished by: (1) proposing an efficient sequential sampling strategy for robust sampling-based analysis of CESMs; (2) developing an automated parameter grouping strategy for high-dimensional CESMs; (3) introducing a new robustness measure for convergence assessment of GSA methods; and (4) investigating time-saving strategies for handling simulation failures/crashes during the sensitivity analysis of computationally expensive CESMs. This dissertation provides a set of innovative numerical techniques that can be used in conjunction with any GSA algorithm and be integrated in model building and systems analysis procedures in any field where models are used. A range of analytical test functions and environmental models with varying complexity and dimensionality are utilized across this research to test the performance of the proposed methods. These methods, which are embedded in the VARS-TOOL software package, can also provide information useful for diagnostic testing, parameter identifiability analysis, model simplification, model calibration, and experimental design. They can be further applied to address a range of decision-making-related problems, such as characterizing the main causes of risk in the context of probabilistic risk assessment and exploring CESMs' sensitivity to a wide range of plausible future changes (e.g., hydrometeorological conditions) in the context of scenario analysis.
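
    To make the quantity at stake concrete, here is a minimal, self-contained sketch of a first-order variance-based sensitivity estimate on the Ishigami benchmark, a common GSA test function. It is a naive conditional-variance binning estimator, not the dissertation's VARS-TOOL machinery or its efficiency-increasing strategies.

        # First-order Sobol'-style index: Var(E[Y|X_i]) / Var(Y),
        # approximated by binning each input and averaging Y per bin.
        import numpy as np

        def first_order_indices(func, n_params, n_samples=100_000, n_bins=50, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.random((n_samples, n_params))   # uniform inputs on [0, 1)
            Y = func(X)
            var_y = Y.var()
            indices = []
            for i in range(n_params):
                bins = np.floor(X[:, i] * n_bins).astype(int)
                # E[Y | X_i in bin], then the variance of those means.
                cond_means = np.array([Y[bins == b].mean() for b in range(n_bins)])
                indices.append(cond_means.var() / var_y)
            return indices

        def ishigami(X):
            x = (X - 0.5) * 2 * np.pi   # rescale inputs to [-pi, pi)
            return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        print(first_order_indices(ishigami, 3))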

    The immune microenvironment in mantle cell lymphoma: Targeted liquid and spatial proteomic analyses

    The complex interplay of tumour and immune cells affects tumour growth, progression, and response to treatment. Restoration of an effective immune response forms the basis of onco-immunology, which has further enabled the development of immunotherapy. In the era of precision medicine, pinpointing patient biological heterogeneity, especially in relation to the patient-specific immune microenvironment, is a necessity for the discovery of novel biomarkers and for the development of patient stratification tools for targeted therapeutics. Mantle cell lymphoma (MCL) is a rare and aggressive subtype of B-cell lymphoma with poor survival and high relapse rates. Previous investigations of MCL have largely focused on the tumour itself, and explorations of the immune microenvironment have been limited. This thesis and the five included papers investigate multiple aspects of the immune microenvironment through proteomic analysis performed on tissue and liquid biopsies of diagnostic and relapsed/refractory (R/R) MCL cohorts. Analyses based on liquid biopsies (serum) are particularly relevant for aggressive cases such as relapse, where invasive procedures for extracting tissue are not recommended. Thus, papers I-II probe the possibility of using serum for treatment- and outcome-associated biomarker discovery in R/R MCL, using a targeted affinity-based protein microarray platform quantifying immune-regulatory and tumour-secretory proteins in sera. Analysis performed in paper I using pre-treatment samples identifies an 11-plex biomarker signature (RIS, relapsed immune signature) associated with overall survival. Further integration of the RIS with the Mantle Cell Lymphoma International Prognostic Index (MIPI) led to the development of the MIPIris index for the stratification of R/R MCL into three risk groups. Moreover, longitudinal analysis can be important in understanding how patients respond to treatment, which can further guide therapeutic interventions. Thus, paper II is a follow-up study wherein longitudinal analyses were performed on paired samples collected at pre-treatment (baseline) and after three months of chemo-immunotherapy (on-treatment). We show how genetic aberrations can influence systemic profiles, and thus integrating genetic information can be crucial for treatment selection. Furthermore, we observe that the inter-patient heterogeneity associated with absolute values can be circumvented by using the velocity of change to capture general changes over time in groups of patients. Using the velocity of change in serum proteins between pre- and on-treatment samples, we identified response biomarkers associated with minimal residual disease and progression. While exploratory analysis using high-dimensional omics-based data can be important for accelerating discovery, translating such information into clinical utility is a necessity. Thus, in paper III, we show how serum quantification can be used to complement tissue-identified prognostic biomarkers, which can enable faster clinical implementation. The presence of CD163+ M2-like macrophages has been shown to be associated with poor outcome in MCL tissues. We show that higher sCD163 levels in sera, quantified using ELISA, are also associated with poor outcome in diagnostic and relapsed MCL. Furthermore, we suggest a cut-off for sCD163 levels that can be used for clinical utility. Further exploration of the dynamic interplay of the tumour-immune microenvironment is now possible using spatially resolved omics for tissue-based analysis.
Thus, in papers IV and V, we analyse cell-type-specific proteomic data collected from tumour and immune cells using the GeoMxℱ digital spatial profiler. In paper IV, we show that the presence as well as the spatial localization of CD163+ macrophages with respect to tumour regions impacts macrophage phenotypic profiles. Further modulation in the profile of surrounding tumour and T-cells is observed when macrophages are present in the vicinity. Based on this analysis, we suggest the MAPK pathway as a potential therapeutic target in tumours with CD163+ macrophages. Immune composition can be defined not just by the type of cells but also by their frequency and spatial localization, and this is explored in paper V with respect to T-cell subtypes. In paper V, we optimized a workflow for multiplexed immunofluorescence image segmentation that allowed us to extract cell metrics for four subtypes of CD3+ T-cells. Using these data, we show that higher infiltration of T-cells is associated with a positive outcome in MCL. Moreover, by combining image-derived metrics with cell-specific spatial omics data, we were able to identify an immunosuppressive microenvironment associated with highly infiltrated tumours, and we suggest new potential targets of immunotherapy with respect to IDO1, GITR and STING. In conclusion, this thesis explores the systemic and tumour-associated immune microenvironment in MCL for defining patient heterogeneity, developing methods of patient stratification, and identifying novel and actionable biomarkers.
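
    As a small illustration of the velocity-of-change idea from paper II, the sketch below computes the per-day rate of change of a serum protein between baseline and on-treatment samples. Patients, sampling days, and protein levels are invented placeholders.

        # Velocity of change: (on-treatment level - baseline level) / days elapsed.
        import pandas as pd

        samples = pd.DataFrame({
            "patient":   ["P1", "P1", "P2", "P2"],
            "protein":   ["sCD163"] * 4,
            "timepoint": ["baseline", "on_treatment"] * 2,
            "day":       [0, 90, 0, 92],
            "level":     [310.0, 180.0, 250.0, 340.0],   # arbitrary units
        })

        wide = samples.pivot(index=["patient", "protein"],
                             columns="timepoint", values=["day", "level"])
        velocity = ((wide[("level", "on_treatment")] - wide[("level", "baseline")])
                    / (wide[("day", "on_treatment")] - wide[("day", "baseline")]))
        print(velocity)   # units per day; sign indicates direction of change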

    A Computational Framework for Efficient Reliability Analysis of Complex Networks

    With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies, as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks that far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena such as dependencies, common causes of failure, and imprecise probabilities without reevaluating the network structure. This cumulative dissertation presents several key improvements to the survival signature ecosystem, focused on the structural evaluation of the system as well as the modelling of component failures. A new method is presented in which (inter-)dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families, it is possible to account for varying dependence effects. The graph-based design of vine copulas synergizes well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing the ability to represent complicated scenarios with a range of dependent failure modes. The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach, system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing computational demand. Several simple test systems, as well as two real-world situations, are used to show the accuracy and performance. However, with increasing network size and complexity this technique also reaches its limits. A second method is presented in which the numerical demand is further reduced. Here, instead of approximating the whole survival signature, only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore on the network reliability. Because only a few data points are sufficient to build the interval predictor model, it allows even larger systems to be analysed. With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach.
A study is presented where a previously developed framework for resilience decision-making is adapted to multidimensional scenarios in which the subsystems are represented as survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis due to the inherent separation of structural information. This enables efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure. In addition to the advancements in the field of the survival signature, this work also presents a new framework for uncertainty quantification, developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is under constant development; its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
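
    The Monte Carlo approximation of survival signature entries mentioned above can be illustrated on a toy network: fix how many components of each type work, repeatedly sample which ones, and record how often the network still connects source to terminal. The five-edge bridge network below is invented for illustration and is far smaller than the systems the dissertation targets.

        # Monte Carlo estimate of one survival signature entry Phi(l1, l2):
        # the probability the network works given exactly l_k working
        # components of each type k, components chosen uniformly at random.
        import random
        import networkx as nx

        # Edges are the components; each is labelled with a component type.
        edges = {("s", "a"): 1, ("s", "b"): 1, ("a", "b"): 2,
                 ("a", "t"): 2, ("b", "t"): 2}

        def works(working_edges):
            g = nx.Graph(working_edges)
            return g.has_node("s") and g.has_node("t") and nx.has_path(g, "s", "t")

        def signature_entry(n_work_by_type, n_samples=20_000, seed=0):
            rng = random.Random(seed)
            by_type = {}
            for e, t in edges.items():
                by_type.setdefault(t, []).append(e)
            hits = 0
            for _ in range(n_samples):
                chosen = []
                for t, n_work in n_work_by_type.items():
                    chosen += rng.sample(by_type[t], n_work)
                hits += works(chosen)
            return hits / n_samples

        # Phi(1, 1): exactly one working edge of each type; true value is 1/3.
        print(signature_entry({1: 1, 2: 1}))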

    Machine Learning in Robotic Navigation: Deep Visual Localization and Adaptive Control

    The work conducted in this thesis contributes to the robotic navigation field by focusing on different machine learning solutions: supervised learning with (deep) neural networks, unsupervised learning, and reinforcement learning. First, we propose a semi-supervised machine learning approach that can dynamically update the robot controller's parameters using situational analysis through feature extraction and unsupervised clustering. The results show that the robot can adapt to the changes in its surroundings, resulting in a thirty percent improvement in navigation speed and stability. Then, we train multiple deep neural networks for estimating the robot's position in the environment using ground truth information provided by a classical localization and mapping approach. We prepare two image-based localization datasets in 3D simulation and compare the results of a traditional multilayer perceptron, a stacked denoising autoencoder, and a convolutional neural network (CNN). The experiment results show that our proposed inception-based CNNs without pooling layers perform very well in all the environments. Finally, we propose a two-stage learning framework for visual navigation in which the experience of the agent during exploration of one goal is shared to learn to navigate to other goals. The multi-goal Q-function learns to traverse the environment by using the provided discretized map. Transfer learning is applied to the multi-goal Q-function from a maze structure to a 2D simulator, and it is finally deployed in a 3D simulator where the robot uses the estimated locations from the position-estimator deep CNNs. The results show a significant improvement when multi-goal reinforcement learning is used.
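
    A toy sketch of the multi-goal Q-function idea follows: a single tabular Q[state, goal, action] on a small grid world, where every transition updates the value for all goals, so experience gathered while pursuing one goal transfers to the others. The grid, rewards, and hyperparameters are invented; the thesis works with far richer 2D/3D simulators and deep networks.

        # Tabular multi-goal Q-learning on a 4x4 grid with corner goals.
        import numpy as np

        n_states, n_goals, n_actions = 16, 4, 4
        goal_states = [0, 3, 12, 15]              # corner cells as goals
        Q = np.zeros((n_states, n_goals, n_actions))
        alpha, gamma, eps = 0.1, 0.95, 0.1
        rng = np.random.default_rng(0)

        def step(s, a):
            r, c = divmod(s, 4)
            dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
            return min(max(r + dr, 0), 3) * 4 + min(max(c + dc, 0), 3)

        for episode in range(5000):
            g = rng.integers(n_goals)             # explore toward one goal...
            s = rng.integers(n_states)
            for _ in range(50):
                a = rng.integers(n_actions) if rng.random() < eps else int(Q[s, g].argmax())
                s2 = step(s, a)
                for gi, gs in enumerate(goal_states):   # ...but update every goal
                    r = 1.0 if s2 == gs else 0.0
                    Q[s, gi, a] += alpha * (r + gamma * Q[s2, gi].max() - Q[s, gi, a])
                s = s2
                if s == goal_states[g]:
                    break

        print(int(Q[5, 0].argmax()))   # greedy action from cell 5 toward goal 0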

    Adaptive detection and tracking using multimodal information

    This thesis describes work on fusing data from multiple sources of information and focuses on two main areas: adaptive detection and adaptive object tracking in automated vision scenarios. The work on adaptive object detection explores a new paradigm in dynamic parameter selection, by selecting thresholds for object detection to maximise agreement between pairs of sources. Object tracking, a complementary technique to object detection, is also explored in a multi-source context, and an efficient framework for robust tracking, termed the Spatiogram Bank tracker, is proposed as a means to overcome the difficulties of traditional histogram tracking. As well as performing theoretical analysis of the proposed methods, specific example applications are given for both the detection and the tracking aspects, using thermal infrared and visible spectrum video data, as well as other multi-modal information sources.
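
    The dynamic threshold-selection paradigm can be sketched as follows: pick the per-modality detection thresholds that maximise agreement between the resulting binary masks. The sketch below uses intersection-over-union as the agreement score on synthetic registered frames; both the data and the choice of IoU are illustrative assumptions, not the thesis's exact criterion.

        # Grid-search two detection thresholds to maximise mask agreement.
        import numpy as np

        rng = np.random.default_rng(0)
        scene = (rng.random((64, 64)) < 0.1).astype(float)     # shared objects
        thermal = scene * 0.8 + rng.normal(0, 0.05, scene.shape)
        visible = scene * 0.6 + rng.normal(0, 0.05, scene.shape)

        def iou(a, b):
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union if union else 0.0

        best = max(
            ((t1, t2, iou(thermal > t1, visible > t2))
             for t1 in np.linspace(0.1, 0.9, 17)
             for t2 in np.linspace(0.1, 0.9, 17)),
            key=lambda x: x[2],
        )
        print(best)   # (threshold_thermal, threshold_visible, agreement)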

    Measurement uncertainty in machine learning - uncertainty propagation and influence on performance

    Industry 4.0 is based on the intelligent networking of machines and processes in industry and makes a decisive contribution to increasing competitiveness. For this, reliable measurements from the sensors and sensor systems used are essential. Metrology deals with the definition of internationally accepted measurement units and standards. To make measurement results internationally comparable, the Guide to the Expression of Uncertainty in Measurement (GUM) provides the basis for evaluating and interpreting measurement uncertainty. At the same time, measurement uncertainty also provides information about data quality, which is important when machine learning is applied in the digitalized factory. However, measurement uncertainty in line with the GUM has been mostly neglected in machine learning or only estimated by cross-validation. Therefore, this dissertation aims to combine measurement uncertainty based on the principles of the GUM with machine learning. For performing machine learning, a data pipeline is presented that fuses raw data from different measurement systems and determines measurement uncertainties from dynamic calibration information. Furthermore, a previously published automated toolbox for machine learning is extended to include uncertainty propagation based on the GUM and its supplements. Using this uncertainty-aware toolbox, the influence of measurement uncertainty on machine learning results is investigated, and approaches to improve these results are discussed.
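
    As a minimal sketch of GUM Supplement 1-style Monte Carlo uncertainty propagation, the general technique underlying such a toolbox extension, the example below pushes samples of uncertain inputs through a simple measurement model and summarises the output. The model P = V^2 / R and all numbers are invented, not taken from the dissertation.

        # GUM Supplement 1 idea: propagate input distributions through the
        # measurement model by sampling, then summarise the output.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000

        # Inputs as (best estimate, standard uncertainty), assumed Gaussian.
        V = rng.normal(230.0, 0.5, n)     # voltage [V]
        R = rng.normal(52.0, 0.2, n)      # resistance [ohm]

        P = V**2 / R                      # measurement model [W]

        estimate = P.mean()
        std_uncertainty = P.std(ddof=1)
        lo, hi = np.percentile(P, [2.5, 97.5])   # 95 % coverage interval
        print(f"P = {estimate:.1f} W, u(P) = {std_uncertainty:.1f} W, "
              f"95 % interval [{lo:.1f}, {hi:.1f}] W")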

    A novel method of detecting galling and other forms of catastrophic adhesion in tribotests

    Tribotests are used to evaluate the performance of lubricants and surface treatments intended for use in industrial applications. They are invaluable tools for lubricant development, since many lubricant parameters can be screened in the laboratory with only the best going on to production trials. Friction force or coefficient of friction is often used as an indicator of lubricant performance, with sudden increases in friction coefficient indicating failure through catastrophic adhesion. Under some conditions the identification of the point of failure can be a subjective process. This raises the question: are there better methods for identifying lubricant failure due to catastrophic adhesion that would be beneficial in the evaluation of lubricants? The hypothesis of this research is that a combination of data from various sensors measuring the real-time response of a tribotest provides better detection of adhesive wear than the coefficient of friction alone. In this investigation, an industrial tribotester (the Twist Compression Test) was instrumented with a variety of sensors to record vibrations along two axes, acoustic emissions, electrical resistance, and transmitted torsional force and normal force. The signals were collected at 10 kHz for the duration of the tests. In the main study, D2 tool steel annular specimens were tested on cold-rolled sheet steel at 100 MPa contact pressure in flat sliding at 0.01 m/s. The effects of lubricant viscosity and lubricant chemistry on the adhesive properties of the surface were examined. Test results were analyzed to establish the apparent point of failure based on the traditional friction criteria. Extended tests of one condition were run to various points up to and beyond this point, and the results were analyzed to correlate sensor data with the test specimen surfaces. Sensor data features were used to identify adhesive wear as a continuous process. In particular, an increase in "friction amplitude" related to a form of stick-slip was used as a key indicator of the occurrence of galling. The findings of this research form a knowledge base for the development of a decision support system (DSS) to identify lubricant failure based on industrial application requirements.
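
    A minimal sketch of a "friction amplitude" style indicator: the peak-to-peak spread of the friction signal in short sliding windows, which grows when stick-slip sets in. The synthetic signal, the faked galling onset at t = 6 s, the window length, and the threshold are all invented stand-ins for the real 10 kHz sensor data.

        # Rolling peak-to-peak amplitude of a synthetic friction signal.
        import numpy as np

        fs = 10_000                                   # 10 kHz, as in the tests
        t = np.arange(0, 10, 1 / fs)
        cof = 0.12 + 0.005 * np.random.default_rng(0).standard_normal(t.size)
        cof[t > 6] += 0.04 * np.sin(2 * np.pi * 80 * t[t > 6])   # stick-slip

        win = fs // 10                                # 0.1 s windows
        frames = cof[: t.size - t.size % win].reshape(-1, win)
        amplitude = frames.max(axis=1) - frames.min(axis=1)

        onset = np.argmax(amplitude > 0.05) * win / fs
        print(f"galling indicator first exceeds threshold at ~{onset:.1f} s")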

    Real-time implementation of a sensor validation scheme for a heavy-duty diesel engine

    With ultra-low exhaust emissions standards, heavy-duty diesel engines (HDDEs) are dependent upon a myriad of sensors to optimize power output and exhaust emissions. Apart from acquiring and processing sensor signals, engine control modules should also have capabilities to report and compensate for sensors that have failed. The global objective of this research was to develop strategies to enable HDDEs to maintain nominal in-use performance during periods of sensor failures. Specifically, the work explored the creation of a sensor validation scheme to detect, isolate, and accommodate sensor failures in HDDEs. The scheme not only offers onboard diagnostic (OBD) capabilities, but also control of engine performance in the event of sensor failures. The scheme, known as Sensor Failure Detection, Isolation and Accommodation (SFDIA), depends on mathematical models for its functionality. Neural approximators served as the modeling tool, featuring online adaptive capabilities. The significance of the SFDIA is that it can enhance an engine management system (EMS) capability to control performance under any operating conditions when sensors fail. The SFDIA scheme updates models during the lifetime of an engine under real-world, in-use conditions. The central hypothesis for the work was that the SFDIA scheme would allow continuous normal operation of HDDEs under conditions of sensor failures. The SFDIA was tested using the boost pressure, coolant temperature, and fuel pressure sensors to evaluate its performance. The test engine was a 2004 Mack MP7-355E (11 L, 355 hp). Experimental work was conducted at the Engine and Emissions Research Laboratory (EERL) at West Virginia University (WVU). The failure modes modeled were abrupt, long-term drift, and intermittent failures. During the accommodation phase, the SFDIA restored engine power to within 0.64% of nominal. In addition, oxides of nitrogen (NOx) emissions were maintained to within 1.41% of nominal.
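
    Conceptually, the detect/isolate/accommodate loop can be sketched as below: a model predicts what a sensor should read, a large residual flags a failure, and the model estimate then substitutes for the failed sensor. A trivial analytic stand-in replaces the thesis's online neural approximators, and all signals and thresholds are invented.

        # Residual-based sensor failure detection and accommodation.
        import numpy as np

        rng = np.random.default_rng(0)
        true_boost = 1.8 + 0.2 * np.sin(np.linspace(0, 20, 400))   # bar
        measured = true_boost + rng.normal(0, 0.01, 400)
        measured[250:] = 0.0          # abrupt sensor failure at sample 250

        def model_estimate(k):
            # Stand-in for a neural approximator driven by other signals.
            return 1.8 + 0.2 * np.sin(np.linspace(0, 20, 400))[k] + 0.02

        threshold = 0.15
        accommodated = measured.copy()
        for k in range(400):
            residual = abs(measured[k] - model_estimate(k))
            if residual > threshold:                  # detection + isolation
                accommodated[k] = model_estimate(k)   # accommodation
        print(accommodated[260], true_boost[260])     # close despite failure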
    • 
