Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets
Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many quality-control approaches use specialized image-processing software based on manually engineered features, designed by domain experts to detect objects and analyze images. However, these models are laborious and costly to develop and hard to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach that builds on recent advances in computer vision and uses convolutional neural networks to learn representative features directly from the data. Whereas conventional methods use handcrafted features to detect individual objects, deep-learning approaches learn generalizable features directly from the training samples in order to detect a variety of objects.
In this dissertation, models and techniques are developed for the automated detection of defects in light-microscopy images of materialographically prepared sections. We develop defect-detection models that can broadly be divided into supervised and unsupervised deep-learning techniques. In particular, several supervised deep-learning models for detecting defects in the microstructure of lithium-ion batteries are developed, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors. Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time.
Successfully training supervised deep-learning models, however, usually requires a sufficiently large amount of labeled training samples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. The models are able to detect defects by learning indicative features from training data containing only "normal" microstructure patterns. We present experimental results of the proposed defect-detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
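The core idea of learning only from "normal" samples can be illustrated with a minimal sketch. The thesis uses deep unsupervised models; here a PCA reconstruction-error detector stands in for them, and all data, dimensions and thresholds are invented for illustration.

```python
# Minimal sketch of anomaly detection trained only on "normal" samples:
# fit a low-dimensional model of normal data, then score new samples by
# reconstruction error. PCA stands in for the thesis's deep models.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy "normal" microstructure patches: low-rank structure plus small noise.
normal = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 64)) \
         + 0.05 * rng.normal(size=(500, 64))
pca = PCA(n_components=8).fit(normal)

def anomaly_score(x):
    """Reconstruction error: high when x deviates from learned normal patterns."""
    recon = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - recon, axis=1)

# Threshold estimated from the normal training data alone (99th percentile).
threshold = np.quantile(anomaly_score(normal), 0.99)

# A defective sample falls outside the learned subspace and scores higher.
defective = normal[:1] + rng.normal(0.0, 1.0, size=(1, 64))
print(anomaly_score(defective)[0] > threshold)
```

No labels are needed at any point: both the model and the threshold come from normal data only, which is what makes the approach attractive when defective samples are rare or expensive to annotate.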
Condition Monitoring Methods for Large, Low-speed Bearings
In all industrial production plants, well-functioning machines and systems are required for sustained and safe operation. However, asset performance degrades over time and may lead to reduced efficiency, poor product quality, secondary damage to other assets, or even complete failure and unplanned downtime of critical systems. Besides the potential safety hazards of machine failure, the economic consequences are large, particularly in offshore applications where repairs are difficult. This thesis focuses on large, low-speed rolling element bearings, concretized by the main swivel bearing of an offshore drilling machine. Surveys have shown that bearing failure in drilling machines is a major cause of rig downtime. Bearings have a finite lifetime, which can be estimated using formulas supplied by the bearing manufacturer. Premature failure may still occur as a result of irregularities in operating conditions and use, lubrication, mounting, contamination, or external environmental factors. Conversely, a bearing may also exceed its expected lifetime. Compared to smaller bearings, historical failure data from large, low-speed machinery is rare. Due to the high cost of maintenance and repairs, the preferred maintenance arrangement is often condition based. Vibration measurement with accelerometers is the most common data acquisition technique. However, vibration-based condition monitoring of large, low-speed bearings is challenging due to non-stationary operating conditions, low kinetic energy, and the increased distance from fault to transducer. On the sensor side, this project has also investigated the use of acoustic emission sensors for condition monitoring purposes.
Roller end damage is identified as a failure mode of interest in tapered axial bearings. Early-stage abrasive wear has been observed on bearings in drilling machines. The failure mode is currently only detectable upon visual inspection, and potentially through wear debris in the bearing lubricant. In this thesis, multiple machine learning algorithms are developed and applied to handle the challenges of fault detection in large, low-speed bearings with little or no historical data and unknown fault signatures. The feasibility of transfer learning is demonstrated as an approach to speed up implementation of automated fault detection systems when historical failure data is available. Variational autoencoders are proposed as a method for unsupervised dimensionality reduction and feature extraction, useful for obtaining a health indicator with a statistical anomaly detection threshold. Data is collected from numerous experiments throughout the project. Most notably, a test was performed on a real offshore drilling machine with roller end wear in the bearing. To replicate this failure mode and aid the development of condition monitoring methods, an axial bearing test rig has been designed and built as part of the project. An overview of all experiments, methods and results is given in the thesis, with details covered in the appended papers.
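The health-indicator-with-statistical-threshold idea can be sketched in a few lines. In the thesis the indicator comes from a variational autoencoder; here a simple RMS feature of each vibration window plays that role, and the signal, fault model and 3-sigma rule are illustrative assumptions only.

```python
# Sketch of a health indicator with a statistical anomaly threshold:
# a scalar indicator per vibration window (here simply the RMS, standing
# in for a learned VAE feature), with a 3-sigma alarm limit fitted on
# healthy data alone. All signals here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
fs, win = 1000, 1000  # 1 s windows sampled at 1 kHz

def rms(x):
    return np.sqrt(np.mean(x**2, axis=-1))

# Healthy baseline: broadband noise only.
healthy = rng.normal(0.0, 1.0, size=(200, win))
hi_healthy = rms(healthy)

# Statistical threshold estimated from healthy data alone.
threshold = hi_healthy.mean() + 3.0 * hi_healthy.std()

# Degrading bearing: a growing fault tone raises the indicator over time.
t = np.arange(win) / fs
faulty = np.stack([
    rng.normal(0.0, 1.0, win) + amp * np.sin(2 * np.pi * 120 * t)
    for amp in np.linspace(0.0, 3.0, 100)
])
alarms = rms(faulty) > threshold
print("first alarm at window", int(np.argmax(alarms)))
```

The point of the statistical threshold is that no failure data is required to set it, which matches the situation described above, where historical failure data from large, low-speed machinery is rare.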
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
Building transformative framework for isolation and mitigation of quality defects in multi-station assembly systems using deep learning
The manufacturing industry is undergoing a significant transformation towards electrification (e-mobility). This transformation has intensified the development of new lightweight materials, structures and assembly processes supporting high-volume and high-variety production of Battery Electric Vehicles (BEVs). As new materials and processes are developed, it is crucial to address quality defect detection, prediction, and prevention, especially given that e-mobility products interlink quality and safety, for example, in the assembly of ‘live’ battery systems. These requirements necessitate the development of methodologies that ensure the quality requirements of products are satisfied from Job 1. This means ensuring a high right-first-time ratio during process design by reducing manual and ineffective trial-and-error process adjustments, and then maintaining near zero-defect manufacturing during production by reducing Mean-Time-to-Detection and Mean-Time-to-Resolution for critical quality defects. Current technologies for isolating and mitigating quality issues provide limited performance within complex manufacturing systems due to: (i) limited modelling abilities and a lack of capabilities to leverage the point cloud quality monitoring data provided by recent measurement technologies, such as 3D scanners, to isolate defects; (ii) extensive dependence on manual expertise to mitigate the isolated defects; and (iii) a lack of integration between data-driven and physics-based models, resulting in limited industrial applicability, scalability and interpretability. Together these constitute a significant barrier towards ensuring quality requirements throughout the product lifecycle.
The study develops a transformative framework that goes beyond improving the accuracy and performance of current approaches and overcomes fundamental barriers for isolation and mitigation of product shape error quality defects in multi-station assembly systems (MAS). The proposed framework is based on three methodologies which explore MAS: (i) response to quality defects by isolating process parameters (root causes (RCs)) causing unaccepted shape error defects; (ii) correction of the isolated RCs by determining corrective actions (CA) policy to mitigate unaccepted shape error defects; and, (iii) training, scalability and interpretability of (i) and (ii) by establishing closed-loop in-process (CLIP) capability that integrates in-line point cloud data, deep learning approaches of (i) and (ii) and physics-based models to provide comprehensive data-driven defect identification and RC isolation (causality analysis). The developed methodologies include:
(i) Object Shape Error Response (OSER) to isolate RCs within single-station and multi-station assembly systems (OSER-MAS), by developing Bayesian 3D-convolutional neural network architectures that process point cloud data, are trained using physics-based models, and can relate complex product shape-error patterns to RCs. The approach quantifies uncertainties and is applicable during the design phase, when no quality monitoring data is available.
(ii) Object Shape Error Correction (OSEC) to generate CAs that mitigate RCs and simultaneously account for cost and quality key performance indicators (KPIs), MAS reconfigurability, and stochasticity by developing a deep reinforcement learning framework that estimates effective and feasible CAs without manual expertise.
(iii) Closed-Loop In-Process (CLIP) to enable industrial adoption of approaches (i) and (ii) by, first, enhancing scalability using (a) closed-loop training and (b) continual/transfer learning; this is important because training deep learning models for a MAS is time-intensive and requires large amounts of labelled data; and, second, providing interpretability and transparency for the estimated RCs that drive costly CAs using (c) 3D gradient-based class activation maps.
The methods are implemented as independent kernels and then integrated within a transformative framework, which is further verified, validated, and benchmarked using industrial-scale automotive sheet metal assembly case studies such as a car door and a cross-member. They demonstrate 29% better performance for RC isolation and 40% greater effectiveness for CAs than current statistical and engineering-based approaches.
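The corrective-action policy learning in methodology (ii) can be caricatured, under heavy simplification, by a tabular Q-learning agent. The states, actions, dynamics and reward below are entirely invented for illustration: a discretized shape error must be cancelled by picking the matching fixture adjustment, with the reward penalizing residual error, a stand-in for the quality KPI.

```python
# Toy sketch of learning a corrective-action (CA) policy by reinforcement
# learning. A tabular Q-learning agent picks one of a few candidate CAs to
# cancel a discretized shape error; environment and reward are invented.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 5, 5   # discretized shape error / candidate CAs
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.2, 0.9, 0.1

def step(state, action):
    """Applying CA `action` shifts the error; reward penalizes residual error."""
    next_state = abs(state - action)   # toy correction dynamics
    reward = -float(next_state)        # quality KPI: small residual is good
    return next_state, reward

for _ in range(5000):
    s = int(rng.integers(n_states))
    for _ in range(10):
        # Epsilon-greedy exploration over candidate corrective actions.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

# The greedy policy should pick the action that cancels each error state.
policy = np.argmax(Q, axis=1)
print(policy)
```

The actual OSEC framework uses deep reinforcement learning over continuous, stochastic MAS states and accounts for cost as well as quality KPIs; the sketch only conveys the policy-learning loop that removes the need for manual expertise.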
Improving System Reliability for Cyber-Physical Systems
Cyber-physical systems (CPS) are systems featuring a tight combination of, and coordination between, the system's computational and physical elements. Cyber-physical systems range from critical infrastructure, such as power grids and transportation systems, to health and biomedical devices. System reliability, i.e., the ability of a system to perform its intended function under a given set of environmental and operational conditions for a given period of time, is a fundamental requirement of cyber-physical systems. An unreliable system often leads to disruption of service, financial cost and even loss of human life. An important and prevalent type of cyber-physical system meets the following criteria: processing large amounts of data; employing software as a system component; running online continuously; and having an operator in the loop because of human judgment and accountability requirements for safety-critical systems. This thesis aims to improve system reliability for this type of cyber-physical system. To that end, I present a system evaluation approach entitled automated online evaluation (AOE), a data-centric runtime monitoring and reliability evaluation approach that works in parallel with the cyber-physical system, continuously conducting automated evaluation along the workflow of the system using computational intelligence and self-tuning techniques, and providing operator-in-the-loop feedback on reliability improvement. For example, abnormal input and output data at or between the multiple stages of the system can be detected and flagged through data quality analysis. As a result, alerts can be sent to the operator-in-the-loop. The operator can then take actions and make changes to the system based on the alerts in order to achieve minimal system downtime and increased system reliability.
One technique used by the approach is data quality analysis using computational intelligence, which evaluates data quality in an automated and efficient way in order to ensure that the running system performs reliably as expected. Another technique is self-tuning, which automatically self-manages and self-configures the evaluation system so that it adapts to changes in the system and feedback from the operator. To implement the proposed approach, I further present a system architecture called the autonomic reliability improvement system (ARIS). This thesis investigates three hypotheses. First, I claim that automated online evaluation, empowered by data quality analysis using computational intelligence, can effectively improve system reliability for cyber-physical systems in the domain of interest indicated above. To test this hypothesis, a prototype system needs to be developed and deployed in various cyber-physical systems, with appropriate reliability metrics used to measure the improvement in system reliability quantitatively. Second, I claim that self-tuning can effectively self-manage and self-configure the evaluation system based on changes in the system and feedback from the operator-in-the-loop to improve system reliability. Third, I claim that the approach is efficient: it should not have a large impact on overall system performance, introducing only minimal extra overhead to the cyber-physical system, and performance metrics should be used to measure the efficiency and added overhead quantitatively. Additionally, in order to conduct efficient and cost-effective automated online evaluation for data-intensive CPS, which require large volumes of data and devote much of their processing time to I/O and data manipulation, this thesis presents COBRA, a cloud-based reliability assurance framework.
COBRA provides automated multi-stage runtime reliability evaluation along the CPS workflow using data relocation services, a cloud data store, data quality analysis and process scheduling with self-tuning to achieve scalability, elasticity and efficiency. Finally, in order to provide a generic way to compare and benchmark system reliability for CPS and to extend the approach described above, this thesis presents FARE, a reliability benchmark framework that employs a CPS reliability model and a set of methods and metrics for evaluation environment selection, failure analysis, and reliability estimation. The main contributions of this thesis include validation of the above hypotheses and empirical studies of the ARIS automated online evaluation system, the COBRA cloud-based reliability assurance framework for data-intensive CPS, and the FARE framework for benchmarking the reliability of cyber-physical systems. This work has advanced the state of the art in CPS reliability research, expanded the body of knowledge in the field, and provided useful studies for further research.
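The data quality analysis that AOE performs between pipeline stages can be illustrated with a minimal sketch. The robust z-score rule, the cutoff of 3.5, and the sample readings below are all invented for illustration; the thesis's actual analysis uses computational intelligence techniques rather than a fixed statistical rule.

```python
# Illustrative sketch of flagging abnormal stage data for an
# operator-in-the-loop: score each reading with a robust z-score
# (median/MAD) and alert on outliers. Thresholds and data are invented.
import numpy as np

def data_quality_alerts(values, z_max=3.5):
    """Return indices of readings whose robust z-score exceeds z_max."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    z = 0.6745 * (values - med) / (mad if mad else 1.0)
    return np.flatnonzero(np.abs(z) > z_max)

readings = [10.1, 9.8, 10.0, 10.2, 55.0, 9.9, 10.1]  # one corrupted reading
for i in data_quality_alerts(readings):
    print(f"ALERT: stage output {i} = {readings[i]} looks anomalous")
```

Using the median and MAD rather than mean and standard deviation keeps the check itself from being distorted by the very outliers it is trying to flag, which matters when abnormal data appears at or between pipeline stages.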
Maintenance Management of Wind Turbines
“Maintenance Management of Wind Turbines” considers the main concepts and the state of the art, as well as advances and case studies, on this topic. Maintenance is a critical variable for industrial competitiveness; together with operations, it is the most important variable in the wind energy industry. Therefore, the correct management of corrective, predictive and preventive policies for any wind turbine is required. The content also considers original research works focusing on material complementary to other sub-disciplines, such as economics, finance, marketing, decision and risk analysis, engineering, etc., in the maintenance management of wind turbines. This book focuses on real case studies, concerning topics such as failure detection and diagnosis, fault trees and related sub-disciplines (e.g., FMECA, FMEA, etc.). Most of them link these topics with financial, scheduling, resource and downtime considerations in order to increase productivity, profitability, maintainability, reliability, safety and availability, and to reduce costs and downtime in a wind turbine. Advances in mathematics, models, computational techniques and dynamic analysis are employed in maintenance management analytics in this book. Finally, the book considers computational techniques, dynamic analysis, probabilistic methods, and mathematical optimization techniques, expertly blended to support the analysis of multi-criteria decision-making problems with defined constraints and requirements.
Development of maintenance framework for modern manufacturing systems
Modern manufacturing organizations are designing, building and operating large, complex and often ‘one of a kind’ assets, which incorporate the integration of various systems under modern control systems. Because of this complexity, machine failures have become more difficult to interpret and rectify, and existing maintenance strategies have become obsolete without further development and enhancement. As a result, the need has arisen for more advanced strategies that ensure effective maintenance and high operational efficiency. The current research investigates existing maintenance strategies and the levels of machine complexity and automation within manufacturing companies of different sectors and sizes, including oil and gas, food and beverages, automotive, aerospace, and Original Equipment Manufacturers. The analysis of the results supports the development of a modern maintenance framework that overcomes the highlighted shortcomings and suits modern manufacturing assets, using systematic approaches and pillars drawn from Total Productive Maintenance (TPM), Reliability Centred Maintenance (RCM) and Industry 4.0.
Sensors Fault Diagnosis Trends and Applications
Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the features required for the classification or identification of faults. Fault diagnosis of sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.
Unsupervised Methods for Condition-Based Maintenance in Non-Stationary Operating Conditions
Maintenance and operation of modern dynamic engineering systems requires the use of robust maintenance strategies that are reliable under uncertainty. One such strategy is condition-based maintenance (CBM), in which maintenance actions are determined based on the current health of the system. The CBM framework integrates fault detection and forecasting in the form of degradation modeling to provide real-time reliability, as well as valuable insight towards the future health of the system. Coupled with a modern information platform such as Internet-of-Things (IoT), CBM can deliver these critical functionalities at scale.
The increasingly complex design and operation of engineering systems has introduced novel problems to CBM. Characteristics of these systems - such as the unavailability of historical data or highly dynamic operating behaviour - have rendered many existing solutions infeasible. These problems have motivated the development of new and self-sufficient - in other words, unsupervised - CBM solutions. The issue, however, is that many of the methods required by such frameworks have yet to be proposed in the literature. Key gaps pertaining to the lack of suitable unsupervised approaches for the pre-processing of non-stationary vibration signals, parameter estimation for fault detection, and degradation threshold estimation need to be addressed in order to achieve an effective implementation.
The main objective of this thesis is to propose a set of three novel approaches, each addressing one of the aforementioned knowledge gaps. A non-parametric pre-processing and spectral analysis approach, termed spectral mean shift clustering (S-MSC) - which applies mean shift clustering (MSC) to the short-time Fourier transform (STFT) power spectrum for simultaneous de-noising and extraction of time-varying harmonic components - is proposed for the autonomous analysis of non-stationary vibration signals. A second pre-processing approach, termed Gaussian mixture model operating state decomposition (GMM-OSD) - which uses GMMs to cluster multi-modal vibration signals by their respective, unknown operating states - is proposed to address multi-modal non-stationarity. Applied in conjunction with S-MSC, these two approaches form a robust and unsupervised pre-processing framework tailored to the types of signals found in modern engineering systems. The final approach proposed in this thesis is a degradation detection and fault prediction framework, termed the Bayesian one-class support vector machine (B-OCSVM), which tackles the key knowledge gaps pertaining to unsupervised parameter and degradation threshold estimation by re-framing the traditional fault detection and degradation modeling problem as a degradation detection and fault prediction problem.
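The combination of an STFT power spectrum with mean shift clustering can be sketched on a toy signal. This is not the thesis's S-MSC algorithm, only its core mechanism: cluster per-frame spectral peaks with mean shift, which needs no preset number of components. The signal, bandwidth and peak threshold are illustrative choices.

```python
# Toy sketch of clustering STFT spectral peaks with mean shift to recover
# harmonic components from a noisy vibration-like signal. Parameters are
# illustrative, not those of the thesis.
import numpy as np
from scipy.signal import stft
from sklearn.cluster import MeanShift

rng = np.random.default_rng(2)
fs = 1000.0
t = np.arange(0, 5.0, 1 / fs)
x = (np.sin(2 * np.pi * 50 * t)            # harmonic component 1
     + 0.8 * np.sin(2 * np.pi * 120 * t)   # harmonic component 2
     + 0.3 * rng.normal(size=t.size))      # broadband noise

f, frames, Zxx = stft(x, fs=fs, nperseg=256)
power = np.abs(Zxx) ** 2

# For every time frame, keep the frequencies of strong spectral peaks.
peaks = []
for col in power.T:
    strong = np.flatnonzero(col > 0.3 * col.max())
    peaks.extend(f[strong])
peaks = np.array(peaks).reshape(-1, 1)

# Mean shift groups the peaks into harmonic tracks without presetting k.
centers = MeanShift(bandwidth=10.0).fit(peaks).cluster_centers_.ravel()
print(sorted(np.round(centers, 1)))  # clusters near 50 Hz and 120 Hz
```

Because mean shift is non-parametric in the number of clusters, the same procedure follows harmonics that appear, vanish or drift over time, which is what makes it attractive for non-stationary signals.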
Validation of the three aforementioned approaches is performed across a wide range of machinery vibration data sets and applications, including data obtained from two full-scale field pilots at Toronto Pearson International Airport: the first on the gearbox of the LINK Automated People Mover (APM) train, and the second on a subset of passenger boarding tunnel pre-conditioned air (PCA) units in Terminal 1. Validation results show that the proposed pre-processing approaches and the combined pre-processing framework provide a robust and computationally efficient methodology for the analysis of non-stationary vibration signals in unsupervised CBM. Validation of the B-OCSVM framework shows that the proposed parameter estimation approaches enable earlier detection of the degradation process than existing approaches, and that the proposed degradation threshold provides a reasonable estimate of the fault manifestation point. Holistically, the approaches proposed in this thesis provide a crucial step towards the effective implementation of unsupervised CBM in complex, modern engineering systems.
Situation Awareness for Smart Distribution Systems
In recent years, the global climate has become more variable due to intensification of the greenhouse effect, and natural disasters occur frequently, which poses challenges to the situation awareness of intelligent distribution networks. In addition, the continuous grid connection of distributed generation, energy storage and new energy generation not only relieves the power supply pressure on the distribution network to a certain extent but also brings new consumption pressure and load impacts. Situation awareness is a technology based on overall dynamic insight into the environment, covering perception, understanding, and prediction. Such means have been widely used in security, intelligence, justice, intelligent transportation, and other fields, and are gradually becoming a research direction of future digitization and informatization. We hope this Special Issue represents a useful contribution. We present 10 interesting papers that cover a wide range of topics, all focused on problems and solutions related to situation awareness for smart distribution systems. We sincerely hope the papers included in this Special Issue will inspire more researchers to further develop situation awareness for smart distribution systems. We strongly believe that there is a need for more work to be carried out, and we hope this issue provides a useful open-access platform for the dissemination of new ideas.