3,244 research outputs found

    Malfunction diagnosis in industrial process systems using data mining for knowledge discovery

    The detection of abnormal behaviour in the process industries is attracting increasing interest as strict regulations and highly competitive operating conditions are routinely imposed on process systems. A synergetic approach to exploring the behaviour of industrial processes is proposed, targeting the discovery of patterns and the implementation of fault (malfunction) diagnosis. The patterns are based on highly correlated time series. The underlying idea is that if independent time series are combined according to rules, scenarios of functional and non-functional situations can be extracted, so that hazardous procedures occurring in the workplace can be monitored. The selected methods are combined and applied to historically stored experimental data from a chemical pilot plant located at CERTH/CPERI. The implementation of the clustering and classification methods showed promising results, identifying potential abnormal situations with high accuracy (97%).
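    The abstract does not include the pipeline itself; the sketch below only illustrates the general idea of combining clustering and classification over features of correlated time series. All data, feature choices and labels are invented for illustration and are not taken from the CERTH/CPERI study.

```python
# Illustrative sketch only: cluster windows of correlated sensor series and
# classify them as normal/abnormal. Data, features and labels are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_window(abnormal: bool, length: int = 100) -> np.ndarray:
    """Build two correlated sensor signals for one window and return summary features."""
    base = np.cumsum(rng.normal(0, 0.1, length))
    s1 = base + rng.normal(0, 0.05, length)
    s2 = 0.9 * base + rng.normal(0, 0.05, length)
    if abnormal:
        s2 += np.linspace(0, 2.0, length)          # slow drift breaks the correlation
    return np.array([s1.mean(), s1.std(), s2.mean(), s2.std(), np.corrcoef(s1, s2)[0, 1]])

X = np.array([make_window(abnormal=(i % 5 == 0)) for i in range(500)])
y = np.array([1 if i % 5 == 0 else 0 for i in range(500)])    # 1 = abnormal window

# Unsupervised step: group windows into operating-condition clusters.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])                         # cluster id as an extra feature

# Supervised step: classify windows as normal vs. abnormal.
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```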

    Improving data preparation for the application of process mining

    Immersed in what is already known as the fourth industrial revolution, automation and data exchange are taking on a particularly relevant role in complex environments such as industrial manufacturing and logistics. This digitisation and transition to the Industry 4.0 paradigm is leading experts to analyse business processes from other perspectives. Consequently, where management and business intelligence used to dominate, process mining appears as a link, building a bridge between both disciplines to unite and improve them. This new perspective on process analysis helps to improve strategic decision making and competitive capabilities. Process mining brings together the data and process perspectives in a single discipline that covers the entire spectrum of process management. Through process mining, and based on observations of their actual operations, organisations can understand the state of their operations, detect deviations, and improve their performance based on what they observe. In this way, process mining has become an ally, occupying a large part of current academic and industrial research. However, although this discipline is receiving more and more attention, it presents severe application problems when implemented in real environments. The variety of input data in terms of form, content, semantics, and levels of abstraction makes the execution of process mining tasks in industry an iterative, tedious, and manual process, requiring multidisciplinary experts with extensive knowledge of the domain, process management, and data processing. Currently, although there are numerous academic proposals, there are no industrial solutions capable of automating these tasks. For this reason, this thesis by compendium addresses the problem of improving business processes in complex environments through a study of the state of the art and a set of proposals that improve relevant aspects of the process life cycle, from log creation and log preparation to process quality assessment and the improvement of business processes.
    Firstly, a systematic literature review was carried out to gain in-depth knowledge of the state of the art in this field and of the challenges the discipline faces. This analysis revealed a number of challenges that have not been addressed or have received insufficient attention, three of which were selected as the objectives of this thesis. The first challenge concerns the assessment of the quality of the input data, known as event logs: the choice of techniques for improving an event log must be based on the quality of the initial data. This thesis therefore presents a methodology and a set of metrics that support the expert in selecting which technique to apply according to the quality estimated at each moment, addressing another challenge identified in our analysis of the literature. Likewise, a set of metrics for evaluating the quality of the resulting process models is proposed, with the aim of assessing whether improving the quality of the input data has a direct impact on the final results. The second challenge identified is the need to improve the input data used in the analysis of business processes.
    As in any data-driven discipline, the quality of the results strongly depends on the quality of the input data, so the second challenge addressed is improving the preparation of event logs. The contribution in this area is the application of natural language processing techniques to relabel activities from textual descriptions of process activities, together with clustering techniques that simplify the results, generating models that are more understandable from a human point of view. Finally, the third challenge concerns process optimisation, and the contribution here is an approach for optimising the resources associated with business processes which, by incorporating decision making into the creation of flexible processes, enables significant cost reductions. Furthermore, all the proposals made in this thesis were designed and validated in collaboration with experts from different fields of industry and have been evaluated through real case studies in public and private projects with the aeronautical industry and the logistics sector.
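    As an illustration of the event-log preparation idea described above (relabelling activities from textual descriptions and clustering to simplify the result), the sketch below groups free-text activity labels with TF-IDF and k-means. The descriptions, cluster count and relabelling rule are assumptions made for the example, not the thesis's actual method.

```python
# Illustrative sketch only: group free-text activity descriptions from an event
# log and relabel each group with a representative term (all inputs invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

descriptions = [
    "create purchase order", "purchase order created",
    "approve invoice", "invoice approval by manager",
    "ship goods to customer", "goods shipped",
]

vec = TfidfVectorizer()
X = vec.fit_transform(descriptions)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
terms = np.array(vec.get_feature_names_out())

for c in range(3):
    members = [d for d, lab in zip(descriptions, km.labels_) if lab == c]
    top_terms = terms[np.argsort(km.cluster_centers_[c])[::-1][:2]]   # 2 strongest terms
    print(f"cluster {c}: relabel as '{' '.join(top_terms)}' -> {members}")
```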

    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a relevant activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace back the errors, and to determine which requirements are affected by errors and which will be affected by eventual changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that can be related to the current failure is supported, allowing the discovery of available alternatives and solutions for a better and faster investigation of the problem. Comment: EDCC-2014, BIG4CIP-2014; embedded systems, testing, semantic discovery, ontology, big data.
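    A minimal sketch of the full-text retrieval idea: rank past log entries by textual similarity to a new failure message. The log lines are invented, and plain TF-IDF with cosine similarity stands in for the paper's semantic techniques.

```python
# Illustrative sketch only: full-text retrieval of past test-log entries that are
# most similar to a new failure message (the log lines here are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_logs = [
    "timeout waiting for interlock signal on track circuit controller",
    "watchdog reset after CAN bus error frame burst",
    "checksum mismatch in configuration telegram from SCADA gateway",
]

query = "interlock signal timeout reported by track circuit test"

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(past_logs)
sims = cosine_similarity(vec.transform([query]), doc_matrix).ravel()

# Rank past failures by similarity so the analyst can inspect related cases first.
for score, line in sorted(zip(sims, past_logs), reverse=True):
    print(f"{score:.2f}  {line}")
```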

    Intelligent Anomaly Detection of Machine Tools based on Mean Shift Clustering

    For fault detection in machine tools, fixed intervention thresholds are usually necessary. In order to provide autonomous anomaly detection without the need for fixed limits, recurring patterns must be detected in the signal data. This paper presents an approach for online pattern recognition on NC code, based on mean shift clustering, that is matched with drive signals. The intelligent fault detection system learns individual intervention thresholds based on the prevailing machining patterns. Using a self-organizing map, data captured during the machine's operation are assigned to a normal or malfunction state.
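    A minimal sketch of the clustering step, assuming synthetic drive-signal features: mean shift groups feature windows into recurring machining patterns, and a per-pattern intervention threshold is derived from the spread of healthy data. The feature names, values and the 3-sigma rule are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch only: mean shift clustering of drive-signal feature windows
# to find recurring machining patterns; per-pattern thresholds are then derived
# from the spread of "healthy" data (all values synthetic).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(1)

# Feature windows: [mean spindle current, std spindle current, mean feed rate]
normal_ops = np.vstack([
    rng.normal([5.0, 0.2, 100.0], [0.2, 0.02, 2.0], size=(100, 3)),   # roughing
    rng.normal([2.0, 0.1, 300.0], [0.1, 0.01, 5.0], size=(100, 3)),   # finishing
])

bandwidth = estimate_bandwidth(normal_ops, quantile=0.2, random_state=1)
ms = MeanShift(bandwidth=bandwidth).fit(normal_ops)

# Learn a per-cluster intervention threshold: mean + 3*std of spindle current.
for label in np.unique(ms.labels_):
    current = normal_ops[ms.labels_ == label, 0]
    print(f"pattern {label}: current threshold = {current.mean() + 3 * current.std():.2f} A")
```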

    Life Cycle Modelling and Design Knowledge Development in 3D Virtual Environments

    Experience plays an important role in building management. "How often will this asset need repair?" or "How much time is this repair going to take?" are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information with which to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such "micro-scale" research is important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader, more "macro-scale" approach to research. Recent trends show that the Architecture, Engineering and Construction (AEC) industry is experiencing explosive growth in its capability to generate and collect data. A great deal of valuable knowledge can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases.
    The project involves the development of a prototype tool to support facility managers, building owners and designers. This industry-focused report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to improving maintenance management and the building life cycle includes: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments applying visual analysis (using stacked histograms), classification and clustering techniques, and association rule mining algorithms such as Apriori, and (iv) filtering and refining the data mining results, including the potential implications of these results for improving maintenance management. Maintenance data for a variety of asset types were selected for demonstration, with the aim of discovering meaningful patterns to assist facility managers in strategic planning and of providing a knowledge base to help shape future requirements and design briefing. Utilising the prototype system developed here, positive and interesting results regarding patterns and structures in the data have been obtained.
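    As an illustration of the association rule mining step mentioned above, the following is a tiny Apriori-style pass over invented maintenance records. The item encoding and the support and confidence thresholds are assumptions for the example, not values from the AIMM™ study.

```python
# Illustrative sketch only: an Apriori-style pass over invented maintenance
# records, mining frequent item pairs and simple rules of the form A -> B.
from itertools import combinations
from collections import Counter

records = [
    {"asset:HVAC", "fault:bearing", "season:summer"},
    {"asset:HVAC", "fault:bearing", "season:summer"},
    {"asset:pump", "fault:seal", "season:winter"},
    {"asset:HVAC", "fault:filter", "season:summer"},
    {"asset:pump", "fault:seal", "season:summer"},
]
min_support, min_confidence = 0.4, 0.7

item_counts = Counter(i for r in records for i in r)
pair_counts = Counter(frozenset(p) for r in records for p in combinations(sorted(r), 2))
n = len(records)

for pair, count in pair_counts.items():
    if count / n < min_support:
        continue                                  # prune infrequent itemsets
    a, b = tuple(pair)
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / item_counts[lhs]
        if confidence >= min_confidence:
            print(f"{lhs} -> {rhs}  support={count / n:.2f}  confidence={confidence:.2f}")
```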

    Diagnosing faults in autonomous robot plan execution

    A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Much diagnostic research concentrates only on hardware failures within an autonomous robot. Taking a different approach, this work describes the implementation of a Telerobot Diagnostic System that addresses, in addition to hardware failures, failures caused by unexpected event changes in the environment and failures due to plan errors. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and on the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate the relationships between hardware faults and plan errors, which were not addressed in the past. The results of this research will provide a clearer understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.
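    A minimal sketch of the expected-versus-observed deduction described above, with invented plan steps, sensor names and rules standing in for the Telerobot Diagnostic System's model-based inference.

```python
# Illustrative sketch only: compare expected outcomes of plan steps against
# observed events and deduce candidate fault sources (plan error, hardware
# failure, or environment change). Steps, events and rules are invented.
EXPECTED = {
    "move_arm": {"sensor": "joint_encoder", "expected": "at_target"},
    "grasp":    {"sensor": "gripper_force", "expected": "object_held"},
}

def diagnose(step: str, observed: str, sensor_ok: bool) -> str:
    """Very small rule base standing in for a model-based inference engine."""
    expected = EXPECTED[step]["expected"]
    if observed == expected:
        return "nominal"
    if not sensor_ok:
        return f"hardware fault hypothesis: {EXPECTED[step]['sensor']} failed"
    if observed == "object_missing":
        return "environment change hypothesis: object moved before grasp"
    return f"plan error hypothesis: step '{step}' preconditions not satisfied"

print(diagnose("move_arm", "at_target", sensor_ok=True))
print(diagnose("grasp", "object_missing", sensor_ok=True))
print(diagnose("grasp", "no_force_reading", sensor_ok=False))
```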

    Bayesian Network Analysis for Diagnostics and Prognostics of Engineering Systems

    Bayesian networks have been applied in many different domains to perform prognostics, reduce risk and ultimately improve decision making. However, these methods have not been applied to military field and human performance data sets in an industrial environment. Such methods frequently rely on a clear understanding of the causal connections leading to an undesirable event and a detailed understanding of the system behavior. They may also require large teams of analysts and domain experts, coupled with manual data cleansing and classification. The research performed utilized machine learning algorithms (such as Bayesian networks) and two existing data sets. The primary objective was to develop a diagnostic and prognostic tool based on Bayesian networks that does not require a detailed causal understanding of the underlying system. The research yielded a predictive method with substantial benefits over reactive methods. It indicated that Bayesian networks can be trained and used to predict failures of several important components, including potential malfunction codes and downtime, on a real-world Navy data set. The research also considered potential error within the training data set; the results lend credence to the use of Bayesian networks on real field data, which will always contain error that is not easily quantified. The research should be replicated with additional field data sets from other aircraft. Future research should solicit and incorporate domain expertise into subsequent models, and should also consider the incorporation of text-based analytics for text fields, which was considered out of scope for this research project.
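    A minimal sketch of the underlying idea: learn a conditional probability of high downtime given a malfunction code from field-style records. Plain pandas counting stands in for a full Bayesian network library, and all field names, codes and values are invented assumptions.

```python
# Illustrative sketch only: learn a conditional probability table from (invented)
# maintenance records, the two-node core of a Bayesian-network style predictor
# P(high downtime | malfunction code), then query it for a new work order.
import pandas as pd

records = pd.DataFrame({
    "malfunction_code": ["HYD-12", "HYD-12", "ELEC-03", "HYD-12", "ELEC-03", "AVI-07"],
    "downtime_high":    [1,         1,         0,         0,         0,         1],
})

# Maximum-likelihood CPT with additive (Laplace) smoothing for robustness.
alpha = 1.0
counts = pd.crosstab(records["malfunction_code"], records["downtime_high"])
cpt = (counts + alpha).div((counts + alpha).sum(axis=1), axis=0)

def p_high_downtime(code: str) -> float:
    if code not in cpt.index:
        return 0.5                      # uninformative prior for an unseen code
    return float(cpt.loc[code].get(1, 0.0))

print(cpt)
print("P(high downtime | HYD-12) =", round(p_high_downtime("HYD-12"), 2))
```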

    Proactive Buildings: A Prescriptive Maintenance Approach

    Prescriptive maintenance has recently attracted a lot of scientific attention. It integrates the advantages of descriptive and predictive analytics to automate the process of detecting non-nominal device functionality. Implementing such proactive measures in home or industrial settings may improve equipment dependability and minimize operational expenses. There are several techniques for prescriptive maintenance in diverse use cases, but none elaborates a general methodology that permits successful prescriptive analysis for small-scale industrial or residential settings. This study reports on prescriptive analytics while assessing recent research efforts on multi-domain prescriptive maintenance. Given the existing state of the art, the main contribution of this work is to propose a broad framework for prescriptive maintenance that may be interpreted as a high-level approach for enabling proactive buildings.
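    A minimal sketch of the prescriptive step, assuming a predictive failure probability is already available: each candidate action is scored by expected cost and the cheapest is recommended. The actions, cost figures and probabilities are invented for illustration and are not part of the proposed framework.

```python
# Illustrative sketch only: the prescriptive step on top of a predictive score.
# Given a predicted failure probability for a device, choose the action with the
# lowest expected cost (all probabilities and costs are invented).
ACTIONS = {
    "do_nothing":          lambda p: p * 5000,          # expected cost of an unplanned failure
    "schedule_inspection": lambda p: 200 + p * 1500,    # inspection catches most issues
    "replace_part":        lambda p: 900,               # fixed cost, removes the risk
}

def prescribe(failure_probability: float) -> str:
    costs = {name: rule(failure_probability) for name, rule in ACTIONS.items()}
    return min(costs, key=costs.get)

for p in (0.02, 0.25, 0.8):
    print(f"P(failure)={p:.2f} -> recommended action: {prescribe(p)}")
```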

    Establishment of a novel predictive reliability assessment strategy for ship machinery

    There is no doubt that in recent years the maritime industry has been moving towards novel and sophisticated inspection and maintenance practices. Nowadays maintenance is regarded as an operational method that can be employed both as a profit-generating process and as a cost-reduction budget centre through an enhanced Operation and Maintenance (O&M) strategy. In the first place, a flexible framework applicable at the complex system level of machinery can be introduced for ship maintenance scheduling of systems, subsystems and components. This holistic inspection and maintenance notion should be implemented by integrating different strategies, methodologies, technologies and tools, suitably selected to fulfil the requirements of the selected ship systems. In this thesis, an innovative maintenance strategy for ship machinery is proposed, namely the Probabilistic Machinery Reliability Assessment (PMRA) strategy, focusing on the reliability and safety enhancement of main systems, subsystems and maintainable units and components. In this respect, a combination of a data mining method (k-means), manufacturer safety aspects, dynamic state modelling (Markov Chains), probabilistic predictive reliability assessment (Bayesian Belief Networks) and qualitative decision making (Failure Modes and Effects Analysis) is employed, encompassing the benefits of qualitative and quantitative reliability assessment. PMRA has been clearly demonstrated in two case studies, applied to an offshore oil and gas platform and to selected ship machinery. The results are used to identify the most unreliable systems, subsystems and components, while advising suitable practical inspection and maintenance activities. The proposed PMRA strategy is also tested in a flexible sensitivity analysis scheme.
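    A minimal sketch of two PMRA-style building blocks, assuming synthetic condition data: k-means discretises sensor readings into health states, and a Markov transition matrix is estimated from the resulting state sequence. The sensor channels, cluster count and values are illustrative assumptions rather than the thesis's actual models.

```python
# Illustrative sketch only: k-means to discretise synthetic condition readings
# into health states, followed by a Markov transition matrix estimated from the
# time-ordered state sequence.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic vibration/temperature readings drifting from healthy to degraded.
healthy = rng.normal([1.0, 60.0], [0.1, 2.0], size=(300, 2))
degraded = rng.normal([2.5, 75.0], [0.3, 3.0], size=(100, 2))
readings = np.vstack([healthy, degraded])

states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(readings)

# Count state-to-state transitions along the time-ordered sequence.
n_states = 2
transitions = np.zeros((n_states, n_states))
for current, nxt in zip(states[:-1], states[1:]):
    transitions[current, nxt] += 1
transition_matrix = transitions / transitions.sum(axis=1, keepdims=True)
print("estimated Markov transition matrix:\n", np.round(transition_matrix, 3))
```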