
    Machine learning based anomaly detection for industry 4.0 systems.

    223 p. This thesis studies anomaly detection in industrial systems using technologies from the Fourth Industrial Revolution (4IR), such as the Internet of Things, Artificial Intelligence, 3D Printing, and Augmented Reality. The goal is to provide tools that can be used in real-world scenarios to detect system anomalies, with the intention of improving production and maintenance processes. The thesis investigates the applicability and implementation of 4IR technology architectures, AI-driven machine learning systems, and advanced visualization tools to support decision-making based on the detection of anomalies. The work covers a range of topics, including the conception of a 4IR system based on a generic architecture, the design of a data acquisition system for analysis and modelling, the creation of ensemble supervised and semi-supervised models for anomaly detection, the detection of anomalies through frequency analysis, and the visualization of associated data using Visual Analytics. The results show that the proposed methodology for integrating anomaly detection systems in new or existing industries is valid, and that combining 4IR architectures, ensemble machine learning models, and Visual Analytics tools significantly enhances the anomaly detection process for industrial systems. Furthermore, the thesis presents a guiding framework for data engineers and end-users.
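
    The abstract does not spell out the thesis's actual models, so the following is only a minimal sketch of the ensemble idea it describes: several off-the-shelf detectors are fitted to (mostly) normal operating data, and a point is flagged when a majority of them agree it is anomalous. The detectors, features, and voting threshold below are illustrative assumptions, not the thesis's design.

```python
# Minimal sketch of a majority-vote anomaly detection ensemble.
# All estimator choices here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

def ensemble_anomaly_flags(X_train, X_test):
    """Flag test points that at least 2 of 3 detectors call anomalous."""
    detectors = [
        IsolationForest(n_estimators=100, random_state=0),
        OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
        LocalOutlierFactor(n_neighbors=20, novelty=True),
    ]
    votes = np.zeros(len(X_test), dtype=int)
    for det in detectors:
        det.fit(X_train)                      # fit on normal operating data
        votes += (det.predict(X_test) == -1)  # -1 means "anomaly" in sklearn
    return votes >= 2

# Example: simulated sensor readings, the last two drifting out of range.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 3))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(8, 3)),
                    np.full((2, 3), 6.0)])
print(ensemble_anomaly_flags(X_train, X_test))
```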

    Sequence-Oriented Diagnosis of Discrete-Event Systems

    Model-based diagnosis has always been conceived as set-oriented, meaning that a candidate is a set of faults, or faulty components, that explains a collection of observations. This perspective applies equally to both static and dynamical systems. Diagnosis of discrete-event systems (DESs) is no exception: a candidate is traditionally a set of faults, or faulty events, occurring in a trajectory of the DES that conforms with a given sequence of observations. As such, a candidate does not embed any temporal relationship among faults, nor does it account for multiple occurrences of the same fault. To improve diagnostic explanation and support decision making, a sequence-oriented perspective on diagnosis of DESs is presented, where a candidate is a sequence of faults occurring in a trajectory of the DES, called a fault sequence. Since a fault sequence is possibly unbounded, as the same fault may occur an unlimited number of times in the trajectory, the set of (output) candidates may also be unbounded, which contrasts with set-oriented diagnosis, where the set of candidates is bounded by the powerset of the domain of faults. Still, a possibly unbounded set of fault sequences is shown to be a regular language, which can be defined by a regular expression over the domain of faults, a property that makes sequence-oriented diagnosis feasible in practice. The task of monitoring-based diagnosis is considered, where a new candidate set is generated at the occurrence of each observation. The approach is based on three different techniques: (1) blind diagnosis, with no compiled knowledge; (2) greedy diagnosis, with total knowledge compilation; and (3) lazy diagnosis, with partial knowledge compilation. By knowledge we mean a data structure somewhat similar to a classical DES diagnoser, which can be generated (compiled) either entirely offline (greedy diagnosis) or incrementally online (lazy diagnosis). Experimental evidence suggests that, among these techniques, only lazy diagnosis may be viable in non-trivial application domains.
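
    As an illustration of the regular-language property the abstract relies on (not the paper's actual construction), the sketch below encodes a hypothetical unbounded candidate set over the fault alphabet {f1, f2} as a regular expression and tests a few fault sequences for membership.

```python
# Illustrative sketch only: an unbounded set of fault sequences can
# still be a regular language, so it has a finite description (a regex
# over the fault alphabet) and decidable membership.
import re

# Hypothetical candidate set: "f1 occurs once, then f2 any number of times".
candidates = re.compile(r"f1(f2)*")

for seq in ["f1", "f1f2", "f1f2f2f2", "f2f1", ""]:
    status = "candidate" if candidates.fullmatch(seq) else "not a candidate"
    print(f"{seq!r}: {status}")
```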

    A More General Theory of Diagnosis from First Principles

    Model-based diagnosis has been an active research topic in different communities, including artificial intelligence, formal methods, and control. This has led to a set of disparate approaches addressing different classes of systems and seeking different forms of diagnoses. In this paper, we resolve such disparities by generalising Reiter's theory to be agnostic to the types of systems and diagnoses considered. This more general theory of diagnosis from first principles defines the minimal diagnosis as the set of preferred diagnosis candidates in a search space of hypotheses. Computing the minimal diagnosis is achieved by exploring the space of diagnosis hypotheses, testing sets of hypotheses for consistency with the system's model and the observation, and generating conflicts that rule out successors and other portions of the search space. Under relatively mild assumptions, our algorithms correctly compute the set of preferred diagnosis candidates. The main difficulty here is that the search space is no longer a powerset as in Reiter's theory, and that, as a consequence, many of the implicit properties (such as finiteness of the search space) no longer hold. The notion of conflict also needs to be generalised, and we present such a generalisation. We present two implementations of these algorithms, using test solvers based on satisfiability and heuristic search, respectively, which we evaluate on instances from two real-world discrete-event problems. Despite the greater generality of our theory, these implementations surpass the special-purpose algorithms designed for discrete-event systems, and enable solving instances that were out of reach of existing diagnosis approaches.
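
    For intuition about the set-oriented baseline the paper generalises, the sketch below brute-forces Reiter-style minimal diagnoses as minimal hitting sets of conflict sets. The conflicts are hypothetical, and the paper's own algorithms replace this enumeration with conflict-driven exploration of a more general hypothesis space.

```python
# Sketch of the classic set-oriented case: minimal diagnoses are the
# minimal hitting sets of the conflict sets. Brute force, for intuition.
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Enumerate minimal hitting sets of the conflicts, smallest first."""
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            hits_all = all(set(cand) & c for c in conflicts)
            is_minimal = not any(set(d) <= set(cand) for d in diagnoses)
            if hits_all and is_minimal:
                diagnoses.append(cand)
    return diagnoses

# Hypothetical conflicts from consistency tests against model + observation.
conflicts = [{"c1", "c2"}, {"c2", "c3"}]
print(minimal_diagnoses({"c1", "c2", "c3"}, conflicts))
# -> [('c2',), ('c1', 'c3')]
```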

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Lessons learned: Symbiotic autonomous robot ecosystem for nuclear environments

    Nuclear facilities have a regulatory requirement to measure radiation levels during Post Operational Cleanout (POCO) each year, resulting in a trend towards robotic deployments to gain an improved understanding during nuclear decommissioning phases. The UK Nuclear Decommissioning Authority supports the view that human-in-the-loop robotic deployments are a solution to improve procedures and reduce risks within radiation characterisation of nuclear sites. We present a novel implementation of a Cyber-Physical System (CPS) deployed in an analogue nuclear environment, comprising a multi-robot team coordinated by a human-in-the-loop operator through a digital twin interface. The development of the CPS created efficient partnerships across systems, including robots, digital systems, and humans. This was presented as a multi-staged mission within an inspection scenario for the heterogeneous Symbiotic Multi-Robot Fleet (SMuRF). Symbiotic interactions were achieved across the SMuRF, where robots utilised automated collaborative governance to work together in situations where a single robot would face challenges in fully characterising radiation. Key contributions include the demonstration of symbiotic autonomy and query-based learning in an autonomous mission, supporting scalable autonomy and autonomy as a service. The coordination of the CPS was a success and revealed further challenges and improvements relevant to future multi-robot fleets.

    Industry 4.0: product digital twins for remanufacturing decision-making

    Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product’s life cycle. The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model’s architecture was derived using a bottom-up approach, where requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset’s through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed using Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary “smart” capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored.
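
    The abstract does not disclose the DT's actual network architecture or data, so the following is only a toy stand-in for the remaining-life estimation component: a small neural regressor trained on synthetic through-life features, with feature names and value ranges invented for illustration.

```python
# Toy sketch of neural-network remaining-life estimation on synthetic
# data; features, ranges, and degradation law are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical through-life features a product DT might log:
# [operating hours, mean load fraction, thermal cycles]
X = rng.uniform([0, 0.2, 0], [5000, 1.0, 400], size=(300, 3))
# Synthetic ground truth: remaining life shrinks with use and stress.
y = 10000 - X[:, 0] - 3000 * X[:, 1] - 8 * X[:, 2] + rng.normal(0, 100, 300)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
).fit(X, y)

# Estimated remaining life for one returned core before remanufacturing.
print(model.predict([[3500, 0.8, 250]]))
```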

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Data Science Techniques for Modelling Execution Tracing Quality

    This research presents how to handle a research problem when the research variables are still unknown and no quantitative study is yet possible: how to identify the research variables so that quantitative research becomes feasible, how to collect data by means of the identified research variables, and how to carry out modelling with consideration of the specificities of the problem domain. In addition, validation is also encompassed within the scope of modelling in the current study. Thus, the work presented in this thesis comprises the typical stages a complex data science problem requires, including qualitative and quantitative research, data collection, modelling of vagueness and uncertainty, and the leverage of artificial intelligence to gain insights that are impossible to obtain with traditional methods. The problem domain of the research encompasses software product quality modelling and assessment, with a particular focus on execution tracing quality. The terms execution tracing and logging are used interchangeably throughout the thesis. The research methods and mathematical tools used make it possible to consider the uncertainty and vagueness inherently associated with the quality measurement and assessment process, through which reality can be approximated more appropriately than with plain statistical modelling techniques. Furthermore, the modelling approach offers direct insights into the problem domain through the application of linguistic rules, which is an additional advantage. The thesis reports (1) an in-depth investigation of all the identified software product quality models; (2) a unified summary of the identified software product quality models with their terminologies and concepts; (3) the identification of the variables influencing execution tracing quality; (4) the quality model constructed to describe execution tracing quality; and (5) the link of the constructed quality model to the quality model of the ISO/IEC 25010 standard, with the possibility of tailoring to specific project needs. Further work, outside the frame of this PhD thesis, would also be useful, as presented in the study: (1) to define application-project profiles to assist in tailoring the quality model for execution tracing to specific application and project domains, and (2) to approximate the present quality model for execution tracing, within defined bounds, by simpler mathematical approaches. In conclusion, the research contributes to (1) supporting the daily work of software professionals who need to analyse execution traces; (2) raising awareness that execution tracing quality has a huge impact on software development, software maintenance, and the professionals involved in the different stages of the software development life cycle; (3) providing a framework in which present endeavours for log improvements can be placed; and (4) suggesting an extension of the ISO/IEC 25010 standard by linking the constructed quality model to it. In addition, within the scope of the qualitative research methodology, the current PhD thesis contributes to the knowledge of research methods by determining a saturation point in the course of the data collection process.
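
    The thesis's actual variables, membership functions, and rule base are not given in the abstract; the sketch below only illustrates, with invented stand-ins, how linguistic rules over fuzzy memberships can turn measured tracing properties into a quality score while tolerating vagueness.

```python
# Minimal sketch of linguistic-rule scoring; variables, membership
# functions, and rules are illustrative stand-ins, not the thesis's model.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tracing_quality(coverage, noise):
    """Two toy rules, combined Mamdani-style:
    R1: IF coverage is high AND noise is low  THEN quality is good
    R2: IF coverage is low  OR  noise is high THEN quality is poor
    """
    cov_high = tri(coverage, 0.4, 1.0, 1.6)   # memberships in [0, 1]
    cov_low = tri(coverage, -0.6, 0.0, 0.6)
    noise_low = tri(noise, -0.6, 0.0, 0.6)
    noise_high = tri(noise, 0.4, 1.0, 1.6)
    good = min(cov_high, noise_low)           # fuzzy AND = min
    poor = max(cov_low, noise_high)           # fuzzy OR = max
    # Defuzzify as a weighted average of consequents (good=1, poor=0).
    return good / (good + poor) if good + poor else 0.5

print(tracing_quality(coverage=0.8, noise=0.2))  # fairly good tracing
print(tracing_quality(coverage=0.3, noise=0.7))  # poor tracing
```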

    Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus

    This is an open access book. It gathers the first volume of the proceedings of the 31st edition of the International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2022, held on June 19-23, 2022, in Detroit, Michigan, USA. Covering four thematic areas, namely Manufacturing Processes, Machine Tools, Manufacturing Systems, and Enabling Technologies, it reports on advanced manufacturing processes and innovative materials for 3D printing, applications of machine learning, artificial intelligence, and mixed reality in various production sectors, as well as important issues in human-robot collaboration, including methods for improving safety. Contributions also cover strategies to improve quality control, supply chain management, and training in the manufacturing industry, and methods supporting circular supply chains and sustainable manufacturing. All in all, this book provides academicians, engineers, and professionals with extensive information on both scientific and industrial advances in the converging fields of manufacturing, production, and automation.

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the features required for the classification or identification of faults. Fault diagnosis of the sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behaviour before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.
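
    As a generic illustration of the pipeline the blurb describes (acquire sensor data, extract features, classify faults), the sketch below simulates healthy, stuck, and drifting sensor windows and trains a classifier on simple time- and frequency-domain features. The fault types, features, and classifier are illustrative choices, not taken from the book.

```python
# Sketch of a generic sensor fault diagnosis pipeline on synthetic
# signals: acquire window -> extract features -> classify fault type.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def features(window):
    """Simple time/frequency features from one window of sensor samples."""
    spectrum = np.abs(np.fft.rfft(window))
    return [window.mean(), window.std(), spectrum[1:].argmax()]

def window(fault):
    """Simulate one window of a sinusoidal sensor signal with a fault."""
    t = np.linspace(0, 1, 256, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.1, t.size)
    if fault == "stuck":            # sensor frozen at a constant value
        x[:] = x[0]
    elif fault == "drift":          # slow additive bias over the window
        x += 2.0 * t
    return x

labels = ["healthy", "stuck", "drift"]
X = [features(window(f)) for f in labels for _ in range(50)]
y = [f for f in labels for _ in range(50)]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([features(window("drift"))]))  # expect ['drift']
```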