
    Real-Time Fault Diagnosis of Permanent Magnet Synchronous Motor and Drive System

    Permanent Magnet Synchronous Motors (PMSMs) have gained massive popularity in industrial applications such as electric vehicles, robotic systems, and offshore industries due to their efficiency, power density, and controllability. PMSMs working in such applications are constantly exposed to electrical, thermal, and mechanical stresses, resulting in electrical, mechanical, and magnetic faults. If not diagnosed in time, these faults may lead to reduced efficiency, excessive heat, and even catastrophic system breakdown. Therefore, developing methods for real-time condition monitoring and detection of faults at early stages can substantially lower maintenance costs, system downtime, and productivity loss. In this dissertation, condition monitoring and detection of the three most common faults in PMSMs and drive systems, namely inter-turn short circuit, demagnetization, and sensor faults, are studied. First, modeling and detection of the inter-turn short circuit fault are investigated by proposing one FEM-based model and one analytical model. In these two models, efforts are made to extract fault indicators or adjustments for use in combination with more complex detection methods. Subsequently, a systematic fault diagnosis of the PMSM and drive system containing multiple faults, based on structural analysis, is presented. After implementing structural analysis and obtaining the redundant part of the PMSM and drive system, several sequential residuals are designed and implemented, based on the fault terms that appear in each of the redundant sets, to detect and isolate the studied faults, which are applied at different time intervals. Finally, real-time detection of faults in PMSMs and drive systems using a powerful statistical signal-processing detector, the generalized likelihood ratio test (GLRT), is investigated. With the GLRT, a threshold is obtained for each detector from a chosen probability of false alarm and probability of detection, based on which a decision is made to indicate the presence of the studied faults. To improve the detection and recovery delay time, a recursive cumulative GLRT with an adaptive threshold algorithm is implemented. As a result, the recursive algorithm produces a refined fault indicator that is compared to the threshold, and a decision is made in real time. The experimental results show that the statistical detector is able to efficiently detect all the unexpected faults in the presence of unknown noise and without raising any false alarms, proving the effectiveness of this diagnostic approach.
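
    A minimal sketch of the thresholding idea behind a GLRT detector, assuming the fault appears as an unknown mean shift in a Gaussian residual; the window length, noise variance, and false-alarm probability below are illustrative choices, not values from the dissertation:

```python
import numpy as np
from scipy.stats import chi2

def glrt_mean_shift(window, sigma2, p_fa=1e-3):
    """GLRT for an unknown mean shift in white Gaussian noise.

    Under H0 (no fault) the statistic N * xbar^2 / sigma2 follows a
    chi-square distribution with 1 degree of freedom, so the threshold
    can be set directly from the desired false-alarm probability.
    """
    n = len(window)
    xbar = np.mean(window)
    stat = n * xbar**2 / sigma2          # generalized likelihood ratio statistic
    thr = chi2.ppf(1.0 - p_fa, df=1)     # threshold for false-alarm probability p_fa
    return stat > thr, stat, thr

# Illustrative use on a synthetic residual with a fault injected halfway.
rng = np.random.default_rng(0)
residual = rng.normal(0.0, 1.0, 200)
residual[100:] += 1.5                    # simulated fault signature
alarm, stat, thr = glrt_mean_shift(residual[100:150], sigma2=1.0)
print(alarm, round(stat, 1), round(thr, 1))
```

    A recursive cumulative variant would accumulate this statistic over successive windows and adapt the threshold online, which is what shortens the detection and recovery delays described above.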

    A methodology for data-driven adjustment of variation propagation models in multistage manufacturing processes

    In the current paradigm of Zero Defect Manufacturing, it is essential to obtain mathematical models that express the propagation of manufacturing deviations along Multistage Manufacturing Processes (MMPs). Linear physics-based models such as the Stream of Variation (SoV) model are commonly used, but their accuracy may be limited when applied to MMPs with a large number of stages, mainly because the modeling errors at each stage accumulate downstream. In this paper, we propose a methodology to calibrate the SoV model using data from the inspection stations and prior engineering-based knowledge. The data used for calibration do not contain information about the sources of variation, which must therefore be estimated as part of the model adjustment procedure. The proposed methodology consists of a recursive algorithm that minimizes the difference between the sample covariance of the measured Key Product Characteristic (KPC) deviations and its estimation, which is a function of a variation propagation matrix and the covariance of the deviations of the variation sources. To solve the problem with standard convex optimization tools, Schur complements and Taylor series linearizations are applied. The output of the algorithm is an adjusted model, consisting of a variation propagation matrix and an estimate of the aforementioned variation source covariance. To validate the performance of the algorithm, a simulated case study is analyzed. The results, based on Monte Carlo simulations, show that the estimation errors of the KPC deviation covariances are proportional to the measurement noise variance and inversely proportional to the number of processed parts used to train the algorithm, similarly to other process estimators in the literature. Funding for open access charge: CRUE-Universitat Jaume I.
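
    As a much-simplified sketch of the covariance-matching step, the snippet below fits only the variation-source covariance while holding the propagation matrix fixed, which keeps the problem convex without the Schur-complement and Taylor-linearization machinery used in the paper; the matrix sizes and the cvxpy formulation are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

def fit_source_covariance(Phi, S, noise_var):
    """Least-squares fit of the variation-source covariance Q given a
    fixed propagation matrix Phi and the sample covariance S of measured
    KPC deviations, assuming y = Phi*u + w with measurement noise w."""
    m = Phi.shape[1]
    Q = cp.Variable((m, m), PSD=True)            # source covariance must be PSD
    model_cov = Phi @ Q @ Phi.T + noise_var * np.eye(Phi.shape[0])
    prob = cp.Problem(cp.Minimize(cp.norm(S - model_cov, 'fro')))
    prob.solve()
    return Q.value

# Illustrative run with a random two-stage propagation matrix.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(6, 3))
Q_true = np.diag([0.5, 1.0, 0.2])
S = Phi @ Q_true @ Phi.T + 0.01 * np.eye(6)      # idealized "measured" covariance
print(np.round(fit_source_covariance(Phi, S, 0.01), 2))
```

    The paper's recursive algorithm additionally adjusts the propagation matrix itself, which is what makes the linearization and Schur-complement reformulation necessary.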

    Adonis: Practical and Efficient Control Flow Recovery through OS-Level Traces

    Control flow recovery is critical to assuring software quality, especially for large-scale software in production environments. However, the practicality of most current control flow recovery techniques is compromised by their runtime overheads along with their deployment and development costs. To tackle this problem, we propose a novel solution, Adonis, which harnesses OS-level traces, such as dynamic library call and system call traces, to efficiently and safely recover control flows in practice. Adonis operates in two steps: it first identifies the call-sites of trace entries, then performs pair-wise symbolic execution to recover valid execution paths. This technique has several advantages. First, Adonis does not require the insertion of any probes into existing applications, thereby minimizing runtime cost. Second, given that OS-level traces are hardware-independent, Adonis can be implemented across various hardware configurations without hardware-specific engineering effort, thus reducing deployment cost. Third, as Adonis is fully automated and does not depend on manually created logs, it avoids additional development cost. We conducted an evaluation of Adonis on representative desktop applications and real-world IoT applications. Adonis can faithfully recover the control flow with 86.8% recall and 81.7% precision. Compared to the state-of-the-art log-based approach, Adonis not only covers all the execution paths that approach recovers, but also recovers 74.9% of the statements it cannot cover. In addition, the runtime cost of Adonis is 18.3× lower than that of the instrumentation-based approach; the analysis time and storage cost (indicative of the deployment cost) of Adonis are 50× and 443× smaller, respectively, than those of the hardware-based approach. To facilitate future replication and extension of this work, we have made the code and data publicly available.
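
    To make the two-step idea concrete, here is a toy sketch that matches trace entries to call-sites in a hand-written control-flow graph and enumerates candidate paths between consecutive entries; plain graph search stands in for Adonis's pair-wise symbolic execution, which would additionally prune infeasible branches, and all names are invented for illustration:

```python
from collections import deque

# Toy control-flow graph: node -> successors; some nodes are call-sites
# that emit an OS-level trace entry (e.g. a libc or system call name).
CFG = {
    "entry": ["a"], "a": ["b", "c"], "b": ["open"], "c": ["open"],
    "open": ["d"], "d": ["read", "e"], "e": ["read"], "read": ["exit"],
    "exit": [],
}
CALLSITE = {"open": "open", "read": "read"}   # node -> trace entry it produces

def paths_between(src, dst, cfg):
    """All loop-free CFG paths from src to dst (search over simple paths).
    Stand-in for the pair-wise symbolic execution step, which would
    additionally discard paths whose branch conditions are infeasible."""
    out, q = [], deque([[src]])
    while q:
        path = q.popleft()
        for nxt in cfg[path[-1]]:
            if nxt in path:
                continue
            if nxt == dst:
                out.append(path + [nxt])
            else:
                q.append(path + [nxt])
    return out

def recover(trace, cfg, callsite, entry="entry"):
    """Stitch candidate execution paths through consecutive trace entries."""
    sites = [n for e in trace for n, c in callsite.items() if c == e]
    segments, cur = [], entry
    for s in sites:
        segments.append(paths_between(cur, s, cfg))
        cur = s
    return segments

for seg in recover(["open", "read"], CFG, CALLSITE):
    print(seg)
```

    The two candidate prefixes printed for the `open` entry show exactly the ambiguity that the symbolic-execution step exists to resolve.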

    Machine learning based anomaly detection for Industry 4.0 systems

    This thesis studies anomaly detection in industrial systems using technologies from the Fourth Industrial Revolution (4IR), such as the Internet of Things, Artificial Intelligence, 3D Printing, and Augmented Reality. The goal is to provide tools that can be used in real-world scenarios to detect system anomalies, with the intention of improving production and maintenance processes. The thesis investigates the applicability and implementation of 4IR technology architectures, AI-driven machine learning systems, and advanced visualization tools to support decision-making based on the detection of anomalies. The work covers a range of topics, including the conception of a 4IR system based on a generic architecture, the design of a data acquisition system for analysis and modelling, the creation of ensemble supervised and semi-supervised models for anomaly detection, the detection of anomalies through frequency analysis, and the visualization of associated data using Visual Analytics. The results show that the proposed methodology for integrating anomaly detection systems in new or existing industries is valid, and that combining 4IR architectures, ensemble machine learning models, and Visual Analytics tools significantly enhances the anomaly detection process for industrial systems. Furthermore, the thesis presents a guiding framework for data engineers and end-users.
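
    A minimal sketch of an ensemble, semi-supervised set-up of the kind described above, with three off-the-shelf scikit-learn detectors fitted on healthy data only and combined by majority vote; the synthetic data, detector choices, and voting rule are illustrative assumptions rather than the thesis's actual models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
X_train = rng.normal(0, 1, (500, 4))            # healthy sensor readings
X_test = np.vstack([rng.normal(0, 1, (20, 4)),
                    rng.normal(5, 1, (5, 4))])  # last 5 rows are anomalous

# Semi-supervised setting: every detector is fit on healthy data only.
detectors = [
    IsolationForest(random_state=0),
    OneClassSVM(nu=0.05),
    LocalOutlierFactor(novelty=True),
]
for d in detectors:
    d.fit(X_train)

# Majority vote over the detectors: -1 means anomaly in scikit-learn.
votes = np.stack([d.predict(X_test) for d in detectors])
is_anomaly = (votes == -1).sum(axis=0) >= 2
print(is_anomaly.astype(int))
```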

    Understanding the role of sensor optimisation in complex systems

    Complex systems involve monitoring, assessing, and predicting the health of various subsystems within an integrated vehicle health management (IVHM) system or a larger system. Health management applications rely on sensors that generate useful information about the health condition of the assets; thus, optimising the quality of the sensor network while considering specific constraints is the first step in assessing the condition of assets. The optimisation problem in sensor networks involves trade-offs between different performance metrics. This review paper provides a comprehensive guideline for practitioners in the field of sensor optimisation for complex systems. It introduces versatile multi-perspective cost functions for different aspects of sensor optimisation, including selection, placement, data processing, and operation. A taxonomy and a concept map of the field are defined as valuable navigation tools in this vast field. Optimisation techniques and quantification approaches for the cost functions are discussed, emphasising how they can be tailored to specific application requirements. As a pioneering contribution, all the relevant literature is gathered and classified here to further improve the understanding of optimal sensor networks from an information-gain perspective.
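
    As one illustration of the information-gain perspective, the sketch below greedily selects sensors to maximise a log-determinant information measure under a budget; the linear observation model, noise level, and budget are invented for the example and do not come from the review:

```python
import numpy as np

def greedy_select(H, sigma2, budget):
    """Greedy sensor selection maximizing the log-determinant of the
    Fisher information H_S^T H_S / sigma2, a common information-gain
    surrogate; greedy is near-optimal because the gain is submodular."""
    chosen, rest = [], list(range(H.shape[0]))
    while len(chosen) < budget:
        def gain(i):
            Hs = H[chosen + [i]]
            _, logdet = np.linalg.slogdet(
                Hs.T @ Hs / sigma2 + 1e-6 * np.eye(H.shape[1]))
            return logdet
        best = max(rest, key=gain)
        chosen.append(best)
        rest.remove(best)
    return chosen

# Ten candidate sensors observing a 3-parameter health state.
rng = np.random.default_rng(3)
H = rng.normal(size=(10, 3))          # row i = observation model of sensor i
print(greedy_select(H, sigma2=0.1, budget=3))
```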

    A framework for fault detection and diagnostics of articulated collaborative robots based on hybrid series modelling of Artificial Intelligence algorithms

    Smart factories build on cyber-physical systems as one of the most promising technological concepts. Within smart factories, condition-based and predictive maintenance are key solutions for improving competitiveness by reducing downtime and increasing overall equipment effectiveness. In addition, growing interest in operational flexibility has pushed companies to introduce novel solutions on the shop floor, leading to the installation of cobots for advanced human-machine collaboration. Despite their reliability, cobots too are subject to degradation, and functional failures may affect their operation, leading to anomalous trajectories. In this context, the literature shows gaps concerning the systematic adoption of condition-based and predictive maintenance to monitor and predict the health state of cobots and thus assure their expected performance. This work proposes an approach that leverages a framework for fault detection and diagnostics of cobots inspired by the Prognostics and Health Management process. The goal is to enable first-level maintenance, which aims at informing the operator about anomalous trajectories. The framework is enabled by a modular structure consisting of hybrid series modelling of unsupervised Artificial Intelligence algorithms, and it is assessed by inducing three functional failures in a 7-axis collaborative robot used for pick-and-place operations. The framework demonstrates the capability to accommodate and handle different trajectories while flagging the unhealthy state of the cobot. Thanks to its structure, the framework is open to testing and comparing further algorithms in future research to identify the best-in-class at each of the proposed steps, given the operational context on the shop floor.
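
    A generic stand-in for a series of unsupervised blocks of this kind, where each stage feeds the next (scaling, then projection, then a residual threshold learned from healthy trajectories); the feature layout and thresholding rule are assumptions for illustration, not the paper's actual modules:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Series pipeline on joint-trajectory features: each unsupervised block
# feeds the next, and the final reconstruction error flags anomalies.
rng = np.random.default_rng(4)
healthy = rng.normal(0, 1, (300, 7))            # e.g. 7 joint positions per sample

scaler = StandardScaler().fit(healthy)
pca = PCA(n_components=3).fit(scaler.transform(healthy))

def anomaly_score(X):
    Z = scaler.transform(X)
    recon = pca.inverse_transform(pca.transform(Z))
    return np.linalg.norm(Z - recon, axis=1)    # residual of the series model

# Threshold taken from the healthy distribution (99th percentile here).
thr = np.quantile(anomaly_score(healthy), 0.99)
faulty = healthy[:5] + 4.0                      # simulated anomalous trajectories
print(anomaly_score(faulty) > thr)
```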

    Lessons learned: Symbiotic autonomous robot ecosystem for nuclear environments

    Nuclear facilities are under a regulatory requirement to measure radiation levels within Post Operational Cleanout (POCO) each year, resulting in a trend towards robotic deployments to gain an improved understanding during nuclear decommissioning phases. The UK Nuclear Decommissioning Authority supports the view that human-in-the-loop robotic deployments are a solution to improve procedures and reduce risks within radiation characterisation of nuclear sites. We present a novel implementation of a Cyber-Physical System (CPS) deployed in an analogue nuclear environment, comprising a multi-robot team coordinated by a human-in-the-loop operator through a digital twin interface. The development of the CPS created efficient partnerships across systems, including robots, digital systems, and humans. This was presented as a multi-stage mission within an inspection scenario for the heterogeneous Symbiotic Multi-Robot Fleet (SMuRF). Symbiotic interactions were achieved across the SMuRF, where robots utilised automated collaborative governance to work together in situations where a single robot would struggle to fully characterise the radiation. Key contributions include the demonstration of symbiotic autonomy and query-based learning in an autonomous mission, supporting scalable autonomy and autonomy as a service. The coordination of the CPS was a success and revealed further challenges and improvements relevant to future multi-robot fleets.

    Improving data preparation for the application of process mining

    Immersed in what is already known as the fourth industrial revolution, automation and data exchange are taking on a particularly relevant role in complex environments such as industrial manufacturing or logistics. This digitisation and transition to the Industry 4.0 paradigm is prompting experts to analyse business processes from new perspectives. Consequently, where management and business intelligence used to dominate, process mining appears as a link, building a bridge between both disciplines to unite and improve them. This new perspective on process analysis helps to improve strategic decision-making and competitive capabilities. Process mining brings together data and process perspectives in a single discipline that covers the entire spectrum of process management. Through process mining, and based on observations of their actual operations, organisations can understand the state of their operations, detect deviations, and improve their performance based on what they observe. In this way, process mining is an ally, occupying a large part of current academic and industrial research. However, although this discipline is receiving more and more attention, it presents severe application problems when implemented in real environments. The variety of input data in terms of form, content, semantics, and levels of abstraction makes the execution of process mining tasks in industry an iterative, tedious, and manual process, requiring multidisciplinary experts with extensive knowledge of the domain, process management, and data processing. Currently, although there are numerous academic proposals, there are no industrial solutions capable of automating these tasks.

    For this reason, in this thesis by compendium we address the problem of improving business processes in complex environments through a study of the state of the art and a set of proposals that improve relevant aspects of the process life cycle, from log creation and log preparation to process quality assessment and business process improvement. First, a systematic study of the literature was carried out to gain in-depth knowledge of the state of the art in this field and of the different challenges the discipline faces. This analysis allowed us to detect a number of challenges that have not been addressed or have received insufficient attention, of which three were selected and presented as the objectives of this thesis. The first challenge relates to assessing the quality of the input data, known as event logs, since the decision to apply techniques for improving an event log must be based on the quality of the initial data. This thesis therefore presents a methodology and a set of metrics that support the expert in selecting which technique to apply according to the quality estimate at each moment, itself another challenge identified in our analysis of the literature. Likewise, a set of metrics to evaluate the quality of the resulting process models is also proposed, with the aim of assessing whether improving the quality of the input data has a direct impact on the final results.

    The second challenge is the need to improve the input data used in the analysis of business processes. As in any data-driven discipline, the quality of the results strongly depends on the quality of the input data, so this challenge concerns improving the preparation of event logs. The contribution in this area is the application of natural language processing techniques to relabel activities from textual descriptions of process activities, as well as the application of clustering techniques to simplify the results, generating models that are more understandable from a human point of view. Finally, the third challenge relates to process optimisation, so we contribute an approach for optimising the resources associated with business processes which, through the inclusion of decision-making in the creation of flexible processes, enables significant cost reductions. Furthermore, all the proposals made in this thesis were designed and validated in collaboration with experts from different fields of industry and have been evaluated through real case studies in public and private projects in collaboration with the aeronautical industry and the logistics sector.
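
    A small sketch of the relabelling idea behind the second challenge: vectorise free-text activity descriptions and cluster them so that each cluster can be mapped to one canonical activity label; the example log, TF-IDF features, and cluster count are illustrative assumptions, not the thesis's pipeline:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Free-text activity descriptions as they might appear in a raw event log.
descriptions = [
    "create purchase order", "purchase order created",
    "approve invoice", "invoice approval by manager",
    "ship goods to customer", "goods shipped",
]

# Vectorise the text and group semantically similar labels, so each
# cluster can be relabelled with a single canonical activity name.
tfidf = TfidfVectorizer().fit_transform(descriptions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tfidf)
for desc, lab in zip(descriptions, labels):
    print(lab, desc)
```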

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as vital to the circular economy (CE) because it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product’s life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision-making for operations planning. The model’s architecture was derived using a bottom-up approach, with requirements extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset’s through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed using Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.