8 research outputs found

    Multivariable Based Decision-making for the Maintenance Strategy of Process Equipment

    Nowadays, several pieces of equipment are running beyond their expected lifetime. Equipment revamping could resolve the situation, but it is often not feasible for economic reasons, regulatory constraints, etc. The aging of equipment can also cause safety problems: the Health and Safety Executive estimated that around 28% of the major incidents that occurred between 1980 and 2006, corresponding to 96 accidents, could be traced back to plant aging. These accidents cost more than 17,000,000 € (Horrocks et al., 2010). Correct maintenance of the equipment can extend the plant life, increase plant efficiency, and maintain an adequate level of safety. Plant management can choose among different maintenance strategies. The choice can be influenced by parameters such as the maintenance cost, the equipment condition before maintenance, the cost of lost production, and the safety of the operator during maintenance and during normal operations. In this paper, a multivariable fuzzy approach is proposed to support the decision between different maintenance strategies through the analysis of their peculiarities, helping management to weigh the pros and cons of the alternatives. Applied to a case study on the maintenance of process equipment, the approach highlighted that the full refurbishment of a turbine blade system is as valid a maintenance approach as the current maintenance procedure, while the adoption of new technologies turned out not to be convenient
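The weighing of pros and cons across criteria can be illustrated with a toy fuzzy scoring routine. Everything below — the criteria, weights, ratings, and membership shape — is made up for illustration; the paper's actual fuzzy model and case-study data are not reproduced here.

```python
# Illustrative multivariable fuzzy scoring of maintenance strategies.
# Criteria, weights, and ratings are hypothetical, not from the paper.

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_score(ratings, weights):
    """Weighted average of per-criterion 'suitability' memberships."""
    total_w = sum(weights.values())
    return sum(weights[c] * triangular(ratings[c], 0, 7, 10)
               for c in ratings) / total_w

strategies = {
    "current procedure":  {"cost": 6, "safety": 8, "downtime": 7},
    "full refurbishment": {"cost": 5, "safety": 9, "downtime": 6},
    "new technology":     {"cost": 3, "safety": 7, "downtime": 4},
}
weights = {"cost": 0.4, "safety": 0.4, "downtime": 0.2}

ranked = sorted(strategies, key=lambda s: fuzzy_score(strategies[s], weights),
                reverse=True)
print(ranked)
```

Each criterion rating (0-10) is fuzzified, then the memberships are aggregated with the criterion weights into one comparable score per strategy — the kind of single number that lets management rank alternatives.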

    Predictive long-term asset maintenance strategy: development of a fuzzy logic condition-based control system

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Technology has accelerated the growth of the Facility Management (FM) industry, and its roles are broadening to encompass more responsibilities and skill sets. FM budgets and teams are becoming larger and more impactful as new technological trends are incorporated into data-driven strategies. This new scenario has motivated institutions such as the European Central Bank (ECB) to initiate projects aimed at optimising the use of data to improve the monitoring, control, and preservation of the assets that enable the continuity of the Bank's activities. Such projects make it possible to reduce costs; to plan, manage, and allocate resources; and to reinforce the control and efficiency of safety and operational systems. To support the long-term maintenance strategy being developed by the Technical Facility Management section of the ECB, this thesis proposes a model to calculate the left wear margin of the equipment. This is accomplished through the development of an algorithm based on a fuzzy logic system implemented in Python; the thesis presents the system's structure, reliability, feasibility, potential, and limitations. For Facility Management, this project constitutes a cornerstone of the ongoing digital transformation program
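As a rough illustration of a fuzzy condition-based estimate of a remaining wear margin — the rule base, membership shapes, and crisp outputs below are assumptions for this sketch, not the thesis's actual model:

```python
# Hypothetical fuzzy estimate of an asset's remaining wear margin (%) from
# a single normalised condition indicator (1.0 = as new, 0.0 = worn out).

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def wear_margin(condition):
    """condition in [0, 1]; returns an estimated wear margin in [0, 100] %."""
    good = trapezoid(condition, 0.5, 0.7, 1.0, 1.01)
    fair = trapezoid(condition, 0.2, 0.4, 0.5, 0.7)
    poor = trapezoid(condition, -0.01, 0.0, 0.2, 0.4)
    # Weighted-average (Sugeno-style) defuzzification against crisp outputs.
    num = good * 90 + fair * 50 + poor * 10
    den = good + fair + poor
    return num / den if den else 0.0

print(round(wear_margin(0.85), 1))  # → 90.0 (healthy asset, high margin)
```

A real condition-based system would fuse several indicators (vibration, temperature, run hours) through a larger rule base, but the fuzzify–infer–defuzzify shape of the computation is the same.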

    Anomaly Detection Using Autoencoder Reconstruction upon Industrial Motors

    Rotary machine breakdown detection systems are outdated and dependent upon routine testing to discover faults. This is costly and often reactive in nature. Real-time monitoring offers a solution for detecting faults without the need for manual observation. However, manual interpretation for threshold anomaly detection is often subjective and varies between industrial experts. This approach is rigid and prone to a large number of false positives. To address this issue, we propose a machine learning (ML) approach to model normal working operations and detect anomalies. The approach extracts key features from signals representing a known normal operation to model machine behaviour and automatically identify anomalies. The ML model learns generalisations and generates thresholds based on fault severity. This provides engineers with a traffic light system where green is normal behaviour, amber is worrying, and red signifies a machine fault. This scale allows engineers to undertake early intervention measures at the appropriate time. The approach is evaluated on windowed real machine sensor data to observe normal and abnormal behaviour. The results demonstrate that it is possible to detect anomalies within the amber range and raise alarms before machine failure
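The traffic-light idea can be sketched as thresholding a window's reconstruction error against statistics of errors measured on known-normal data. The mean-plus-k-sigma thresholds below are a common convention assumed for illustration, not the paper's exact method, and the error values are made up.

```python
# Hedged sketch: classify autoencoder reconstruction errors into a
# green/amber/red traffic-light scale. Thresholds are fitted as
# mean + k*std of errors on known-normal windows (an assumed convention).
import statistics

def fit_thresholds(normal_errors, amber_k=2.0, red_k=4.0):
    mu = statistics.mean(normal_errors)
    sd = statistics.stdev(normal_errors)
    return mu + amber_k * sd, mu + red_k * sd

def traffic_light(error, amber, red):
    if error >= red:
        return "red"    # machine fault
    if error >= amber:
        return "amber"  # worrying, early intervention window
    return "green"      # normal behaviour

# Hypothetical reconstruction errors from windows of normal operation.
normal = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.11]
amber, red = fit_thresholds(normal)
print([traffic_light(e, amber, red) for e in (0.11, 0.15, 0.60)])
# → ['green', 'amber', 'red']
```

In the paper's setting the error would come from an autoencoder's reconstruction of each sensor window; the amber band is what buys engineers time to intervene before the red (failure) threshold is crossed.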

    Towards multi-model approaches to predictive maintenance: A systematic literature survey on diagnostics and prognostics

    The use of a modern technological system requires a good engineering approach, optimized operations, and proper maintenance in order to keep the system in an optimal state. Predictive maintenance focuses on organizing maintenance actions according to the actual health state of the system, aiming to give a precise indication of when a maintenance intervention will be necessary. Predictive maintenance is normally implemented by means of specialized computational systems that incorporate one of several models to fulfil diagnostics and prognostics tasks. As the complexity of technological systems increases over time, single-model approaches can hardly fulfil all the functions and objectives of predictive maintenance systems. It is increasingly common to find research studies that combine different models in multi-model approaches to overcome the complexity of predictive maintenance tasks, considering the advantages and disadvantages of each single model and trying to combine the best of them. These multi-model approaches have not been extensively addressed by previous review studies on predictive maintenance. Besides, many of the possible combinations for multi-model approaches remain unexplored in predictive maintenance applications; this offers a vast field of opportunities when architecting new predictive maintenance systems. This systematic survey aims at presenting the current trends in diagnostics and prognostics, giving special attention to multi-model approaches and summarizing the current challenges and research opportunities

    Data Normalization in Decision Making Processes

    With the fast growth of data-rich systems, dealing with complex decision problems is unavoidable. Normalization is a crucial step in most multi-criteria decision making (MCDM) models: it produces comparable, dimensionless data from heterogeneous data. Further, MCDM requires data to be numerical and comparable so it can be aggregated into a single score per alternative, thus providing their ranking. Several normalization techniques are available, but their performance depends on a number of characteristics of the problem at hand; i.e., different normalization techniques may produce different rankings of the alternatives. It is therefore a challenge to select a suitable normalization technique that represents an appropriate mapping from source data to a common scale. There are some attempts in the literature to address the subject of normalization in MCDM, but there is still a lack of assessment frameworks for evaluating normalization techniques. Hence, the main contribution and objective of this study is to develop an assessment framework for analysing the effects of normalization techniques on the ranking of alternatives in MCDM methods and recommending the most appropriate technique for specific decision problems. The proposed assessment framework consists of four steps: (i) determining data types; (ii) choosing potential candidate normalization techniques; (iii) analysing and evaluating the techniques; and (iv) selecting the best normalization technique. To validate the efficiency and robustness of the proposed framework, six normalization techniques (Max, Max-Min, Sum, Vector, Logarithmic, and Fuzzification) are selected from the linear, semi-linear, and non-linear categories and tested with four well-known MCDM methods (TOPSIS, SAW, AHP, and ELECTRE) drawn from the scoring, comparative, and ranking families.
    Designing the proposed assessment framework led to a conceptual model allowing an automatic decision-making process, besides recommending the most appropriate normalization technique for MCDM problems. Furthermore, the role of normalization techniques for dynamic multi-criteria decision making (DMCDM) in collaborative networks is explored, specifically related to problems of selection of suppliers, business partners, resources, etc. To validate and test the utility and applicability of the assessment framework, a number of case studies are discussed, and benchmarking and testimonies from experts are used. Also, an evaluation by the research community of the work developed is presented. The validation process demonstrated that the proposed assessment framework increases the accuracy of results in MCDM decision problems
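Four of the linear techniques named in the abstract can be sketched on a single benefit-type criterion column; the score vector is hypothetical, and the formulas are the standard textbook forms rather than anything specific to this thesis.

```python
# Hedged sketch of four normalization techniques (Max, Max-Min, Sum, Vector)
# applied to one benefit-type criterion; the scores are made up.
import math

def norm_max(xs):
    return [x / max(xs) for x in xs]

def norm_max_min(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def norm_sum(xs):
    s = sum(xs)
    return [x / s for x in xs]

def norm_vector(xs):
    length = math.sqrt(sum(x * x for x in xs))
    return [x / length for x in xs]

scores = [3.0, 6.0, 9.0]
for f in (norm_max, norm_max_min, norm_sum, norm_vector):
    print(f.__name__, [round(v, 3) for v in f(scores)])
```

Even on this tiny vector the four techniques map the same data onto different ranges (e.g. Max-Min sends the minimum to 0 while Sum keeps it positive), which is precisely why the choice of technique can change the aggregated ranking once an MCDM method combines several criteria.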