Power transformer dissolved gas analysis through Bayesian networks and hypothesis testing
Accurate diagnosis of power transformers is critical for the reliable and cost-effective operation of the power grid. Presently there is a range of methods and analytical models for transformer fault diagnosis based on dissolved gas analysis. However, these methods give conflicting results and are not able to generate the uncertainty information associated with the diagnostic outcome. In this situation it is not always clear which model is the most accurate. This paper presents a novel multiclass probabilistic diagnosis framework for dissolved gas analysis based on Bayesian networks and hypothesis testing. Bayesian network models embed expert knowledge, learn patterns from data and infer the uncertainty associated with the diagnostic outcome, while hypothesis testing aids in the data selection process. The effectiveness of the proposed framework is validated using the IEC TC 10 dataset, achieving a maximum diagnosis accuracy of 88.9%.
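As a hedged illustration of the kind of probabilistic diagnosis this abstract describes, the sketch below scores transformer fault hypotheses against discretized gas evidence with Bayes' rule. The network structure, gas discretization and every probability are invented for the example; they are not the paper's learned model.

```python
# Naive-Bayes-style diagnostic network over discretized dissolved-gas levels:
# P(fault | evidence) is proportional to P(fault) * product of P(level | fault).
priors = {"thermal": 0.4, "partial_discharge": 0.3, "arcing": 0.3}

# Hypothetical conditional probability tables: P(level | fault) per gas.
cpt = {
    "C2H2": {"thermal": {"low": 0.8, "high": 0.2},
             "partial_discharge": {"low": 0.7, "high": 0.3},
             "arcing": {"low": 0.1, "high": 0.9}},
    "H2":   {"thermal": {"low": 0.6, "high": 0.4},
             "partial_discharge": {"low": 0.2, "high": 0.8},
             "arcing": {"low": 0.3, "high": 0.7}},
}

def diagnose(evidence):
    """Return posterior P(fault | observed gas levels), normalized."""
    scores = dict(priors)
    for gas, level in evidence.items():
        for fault in scores:
            scores[fault] *= cpt[gas][fault][level]
    z = sum(scores.values())
    return {fault: score / z for fault, score in scores.items()}

# High acetylene and hydrogen shift the posterior strongly towards arcing,
# and the full posterior is the "uncertainty information" the paper targets.
print(diagnose({"C2H2": "high", "H2": "high"}))
```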
Reliability assessment of manufacturing systems: A comprehensive overview, challenges and opportunities
Reliability assessment refers to the process of evaluating the reliability of components or systems during their lifespan or prior to their implementation. In the manufacturing industry, the reliability of systems is directly linked to production efficiency, product quality, energy consumption, and other crucial performance indicators. Reliability therefore plays a critical role in every aspect of manufacturing. In this review, we provide a comprehensive overview of the most significant advancements and trends in the assessment of manufacturing system reliability. For this, we also consider the three main facets of reliability analysis of cyber–physical systems, i.e., hardware, software, and human-related reliability. Beyond the overview of the literature, we derive challenges and opportunities for reliability assessment of manufacturing systems based on the reviewed literature. Identified challenges encompass aspects like failure data availability and quality, fast-paced technological advancements, and the increasing complexity of manufacturing systems. In turn, the opportunities include the potential for integrating various assessment methods, and leveraging data to automate the assessment process and to increase the accuracy of derived reliability models.
Data mining in manufacturing: a review based on the kind of knowledge
In modern manufacturing environments, vast amounts of data are collected in database management systems and data warehouses from all involved areas, including product and process design, assembly, materials planning, quality control, scheduling, maintenance, fault detection, etc. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases. This paper reviews the literature dealing with knowledge discovery and data mining applications in the broad domain of manufacturing, with a special emphasis on the type of functions to be performed on the data. The major data mining functions to be performed include characterization and description, association, classification, prediction, clustering and evolution analysis, and the papers reviewed have been categorized accordingly. The review shows rapid growth in the application of data mining to manufacturing processes and enterprises over the last three years, and reveals the progressive applications and existing gaps in the context of data mining in manufacturing. A novel text mining approach has also been applied to the abstracts and keywords of 150 papers to identify research gaps and find the linkages between knowledge area, knowledge type and the applied data mining tools and techniques.
Prescriptive System for Reconfigurable Manufacturing Systems considering Variable Demand and Production Rates
The current market is dynamic and, consequently, industries need to be able to meet unpredictable market changes in order to remain competitive. To address the change in paradigm from mass production to mass customization, manufacturing flexibility is key. Moreover, the current digitalization of industry opens opportunities for real-time decision support systems, allowing companies to make strategic decisions and gain competitive advantage and business value.
The aim of this dissertation is to implement a Prescriptive System that suggests sequences of throughputs, taking into consideration weekly production targets and machine failures, in the context of Reconfigurable Manufacturing Systems.
The Prescriptive System is composed of two modules: a simulation of the manufacturing environment and an optimizer. The simulation module is modeled on graph theory and the optimizer on Genetic Algorithms. Its output is a sequence of throughputs that best balances maintenance actions and productivity. To evaluate the individuals generated by the genetic algorithm, candidate solutions are fed to the simulation module and their impact on the production system is assessed.
The proposed Prescriptive System shows large improvements in mitigating the effects of machine downtime on productivity when compared with a baseline without optimization. The metrics used to measure the performance of the system are the variation of pieces produced relative to the target, referred to in this dissertation as the differential, and the availability of the production system. In all tests performed, the differential improved considerably and, in some instances, the availability increased slightly.
Despite the robust results obtained in the tested configurations, further research is needed before the results of this dissertation can be generalized to untested configurations.
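A minimal, hedged sketch of the two-module idea described above: a toy stand-in for the graph-based simulation scores a week-long throughput sequence against a production target under machine downtime, and a genetic algorithm searches for sequences that minimize the differential. The downtime schedule, throughput levels and fitness function are all illustrative assumptions, not the dissertation's models.

```python
import random

random.seed(1)
TARGET = 500                      # weekly production target (pieces)
DOWN = {2: 0.5, 5: 0.0}           # hypothetical capacity loss: day -> factor
LEVELS = [60, 80, 100]            # allowed daily throughput settings

def simulate(seq):
    """Toy stand-in for the simulation module: returns the differential
    (pieces produced minus target) for a 7-day throughput sequence."""
    produced = sum(rate * DOWN.get(day, 1.0) for day, rate in enumerate(seq))
    return produced - TARGET

def fitness(seq):
    return -abs(simulate(seq))    # closer to the target is better

# Genetic algorithm: elitist selection, one-point crossover, point mutation.
population = [[random.choice(LEVELS) for _ in range(7)] for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 6)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:
            child[random.randrange(7)] = random.choice(LEVELS)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(best, "differential:", simulate(best))
```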
Bayesian inference and failure analysis for risk assessment in quality engineering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

Failure is the state of not achieving a desired or intended goal. Failure analysis planning in the context of risk assessment is an approach that helps to reduce total cost, increase production capacity, and produce higher-quality products. One of the most common issues that businesses confront is defective products. This issue results not only in monetary loss, but also in a loss of reputation. Companies must improve their production quality and reduce the quantity of faulty products in order to continue operating in a healthy and profitable manner in today's very competitive environment. In addition, the ongoing COVID-19 pandemic has thrown the world's natural order into disarray and has been designated a Public Health Emergency of International Concern by the World Health Organization. The demand for quality control is rapidly increasing. Failure analysis is thus a useful tool for identifying common failures, their likely causes, and their impact on the health system, as well as for plotting strategies to limit COVID-19 transmission. It is now more vital than ever to enhance failure analysis methods.
Traditional FMEA (failure mode and effects analysis) is one of the most widely used approaches for identifying and classifying failure modes (FMs) and failure causes (FCs). It is a risk analysis tool for coping with possible failures and is widely used in reliability engineering, safety engineering and quality engineering. To prioritize the risks of different failure modes, FMEA uses the risk priority number (RPN), which is the product of three risk measures: severity (S), occurrence (O) and detection (D). Traditional FMEA, however, has drawbacks: it cannot cope with uncertain failure data such as subjective expert evaluations or the conditionality of failure events; the RPN has a high degree of subjectivity; comparing different RPNs is challenging; and potential errors may be ignored in the conventional FMEA process. To overcome these limitations, I present an integrated Bayesian approach to FMEA in this thesis.
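As a point of reference for the limitations discussed above, here is a minimal sketch of the conventional RPN calculation; the failure modes and 1-10 ratings are invented for illustration.

```python
# Conventional FMEA risk priority number: RPN = S * O * D,
# with each measure rated on a 1-10 ordinal scale by analysts.
failure_modes = {
    # hypothetical ratings: (severity, occurrence, detection)
    "probe failure": (7, 4, 3),
    "mechanical failure": (8, 2, 5),
    "environmental failure": (5, 6, 4),
}

# Rank failure modes from highest to lowest RPN.
for name, (s, o, d) in sorted(
    failure_modes.items(), key=lambda kv: -(kv[1][0] * kv[1][1] * kv[1][2])
):
    print(f"{name}: RPN = {s * o * d}")
```

Note how two very different risk profiles can yield similar RPNs here, which is exactly the comparability problem the thesis raises.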
In this proposed approach, I worked with experts in quality engineering and used Bayesian inference to estimate the FMEA risk parameters S, O and D. The proposed approach is intended to become more practical and less subjective as more data are added. Bayesian statistics is a statistical theory based on the Bayesian interpretation of probability, which states that probability expresses a degree of belief or information (knowledge) about an event. Bayesian statistics addresses the issues with uncertainty found in frequentist statistics, such as the distribution of contributing factors and the implications of using specific distributions, and it specifies a prior probability. A prior can be derived from previous information, such as earlier experiments, but it can also be derived from a trained subject-matter expert's purely subjective assessment. Frequentist (classical) statistics has several limitations, including a lack of uncertainty information in predictions, no built-in regularisation, and no consideration of prior knowledge. Due to the availability of powerful computers and new algorithms, Bayesian methods have seen increased use within statistics in the twenty-first century, and this thesis highlights the effective use of Bayesian analyses to address the shortcomings of current FMEA with the revamped Bayesian FMEA. As a demonstration of the approach, three case studies are presented.
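As a hedged illustration of how a Bayesian update can make an FMEA risk parameter less subjective as data accumulate, the sketch below updates an occurrence probability with a conjugate Beta prior; the prior parameters and inspection counts are invented, and the thesis's actual models may differ.

```python
from scipy import stats

# Beta-Binomial conjugate update for an occurrence probability:
# a Beta(a, b) prior encodes expert belief, and observed failure
# counts sharpen it into a posterior.
a_prior, b_prior = 2.0, 18.0      # expert prior: failures are rare (~10%)
failures, trials = 3, 40          # hypothetical inspection records

posterior = stats.beta(a_prior + failures, b_prior + trials - failures)
print(f"posterior mean occurrence: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

The credible interval is the kind of uncertainty information that a single subjective 1-10 occurrence rating cannot convey.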
The first case study is a Bayesian risk assessment approach based on a modified SEIR (susceptible-exposed-infectious-recovered) model for the transmission dynamics of COVID-19 with exponentially distributed compartment transitions. The effective reproduction number is estimated from laboratory-confirmed cases and death data using Bayesian inference, and the impact of the community spread of COVID-19 across the United Kingdom is analysed. The effective reproduction number models the average number of infections caused by a case of an infectious disease in a population that includes not only susceptible people. FMEA is then applied to evaluate the effectiveness of the action measures taken to manage the COVID-19 pandemic; the focus is on COVID-19 infections, and the failure mode is therefore taken as positive cases. The model is applied to COVID-19 data, showing the effectiveness of the interventions adopted to control the epidemic by reducing the effective reproduction number of COVID-19. The risk measures were estimated from the case fatality rate (S), the posterior median of the effective reproduction number (O) and the current corrective measures used in government policies (D).
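For concreteness, here is a minimal sketch of Bayesian estimation of the effective reproduction number from case counts, using a standard renewal-equation formulation with a conjugate Gamma prior rather than the thesis's exact SEIR model; the incidence series and serial-interval weights are synthetic.

```python
import numpy as np

# Renewal-equation estimate of R_t in the spirit of Cori et al. (2013):
# I_t ~ Poisson(R_t * Lambda_t), Lambda_t = sum_s w_s * I_{t-s}.
incidence = np.array([4, 6, 9, 15, 22, 30, 41, 52, 60, 63], dtype=float)
w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])  # discretized serial interval

a_prior, b_prior = 1.0, 0.2  # Gamma(shape, rate) prior on R_t

for t in range(len(w), len(incidence)):
    # Total infectiousness: serial-interval-weighted sum of recent cases.
    lam = float(np.dot(incidence[t - len(w):t][::-1], w))
    # Poisson likelihood + Gamma prior -> Gamma posterior by conjugacy.
    shape = a_prior + incidence[t]
    rate = b_prior + lam
    print(f"t={t}: posterior mean R_t = {shape / rate:.2f}")
```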
The second case study is a Bayesian risk assessment of a coordinate measuring machine (CMM) process using failure mode, effects and criticality analysis (FMECA) and an augmented form error model. The form error is defined as the deviation of a manufactured part from its design or ideal shape, and it is a key characteristic to evaluate in quality engineering and manufacturing. The form error is represented as a probabilistic model using symmetric unimodal distributions. Bayesian inference is then used to identify influence factors associated with the measurement process due to form error, environmental, human and random effects. A risk assessment is then performed by combining Bayesian inference, FMECA and conformity testing to quantify and minimise the risk of wrong decisions. The FMECA focuses on the CMM measurement process, for which I identified four major FMs that can occur: probe, mechanical, environmental and measurement performance failure. Eleven FCs were also observed, each of which was linked to one of the four FMs. The risk measures were estimated from the posterior probability of failure causes associated with the CMM measurement process (O), the severity of a specific consumer's risk (S) and the detectability of failures from the posterior standard deviation of the form error model (D).
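A hedged sketch of the conformity-testing idea in this case study: given posterior draws of a part's form error, estimate the probability of accepting a non-conforming part, i.e. the consumer's risk that feeds the severity measure. The tolerance and posterior parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior for a part's form error (mm): the true error is
# uncertain because of probe, environmental, human and random effects.
posterior_form_error = rng.normal(loc=0.018, scale=0.004, size=100_000)

tolerance = 0.020   # mm, illustrative specification limit
measured = 0.017    # mm, a measurement that would lead to acceptance

if measured <= tolerance:
    # Consumer's risk: the part is accepted although its true form error
    # exceeds the tolerance.
    consumers_risk = np.mean(posterior_form_error > tolerance)
    print(f"P(non-conforming | accepted) = {consumers_risk:.3f}")
```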
The third case study is a Bayesian risk assessment of a CMM measurement process using an autoregressive (AR) form error model and a combined fault tree analysis (FTA) and FMEA approach to predict significant failure modes and causes. The main idea is to estimate and predict the form error based on CMM data using Gibbs sampling and to analyse the impact of the CMM measurement process on product conformity testing. FTA is used to compare the actual and predicted form error data from the Bayesian AR plot to determine the likelihood of the CMM measurement process failing, using binary data. The acquired binary data are then classified into four states (true positive, true negative, false positive, and false negative) using a confusion matrix, which is subsequently utilized to calculate key classification measures (i.e., error rate, prediction rate, prevalence rate, sensitivity rate, etc.). The classification measures were then used to assess the FMEA risk measures S, O and D, which were critical for determining the RPN and making decisions.

Analytical and numerical methods are used in all case studies to highlight the practical implications of our findings and are meant to be practical without complex computing. The proposed methodologies can find applications in numerous disciplines and widely in quality engineering.
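To make the confusion-matrix step of the third case study concrete, here is a minimal sketch of the classification measures it mentions; the counts are invented, and the exact set of measures used in the thesis may differ.

```python
# Confusion-matrix counts for "CMM process failing" predictions (hypothetical).
tp, tn, fp, fn = 18, 160, 7, 5
total = tp + tn + fp + fn

error_rate = (fp + fn) / total    # misclassified fraction
accuracy = (tp + tn) / total      # correctly classified fraction
prevalence = (tp + fn) / total    # how often failure actually occurs
sensitivity = tp / (tp + fn)      # detected failures among real failures

print(f"error rate: {error_rate:.3f}, accuracy: {accuracy:.3f}")
print(f"prevalence: {prevalence:.3f}, sensitivity: {sensitivity:.3f}")
```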
Adaptive and Online Health Monitoring System for Autonomous Aircraft
Good situation awareness is one of the key attributes required to maintain safe flight, especially for an Unmanned Aerial System (UAS). It can be achieved by incorporating an Adaptive Health Monitoring System (AHMS) into the aircraft. The AHMS monitors the flight outcome or flight behaviours of the aircraft based on its external environmental conditions and the behaviour of its internal systems. It does this by associating a health value with the aircraft's behaviour based on the progression of the sensory values produced by the aircraft's modules, components and/or subsystems, and it indicates erroneous flight behaviour when a deviation from this health information occurs. This is useful for a UAS because the pilot is taken out of the control loop and is unaware of how the environment and/or faults are affecting the behaviour of the aircraft. The autonomous pilot can use this health information to produce safer, more secure flight behaviour or fault tolerance, allowing the aircraft to fly safely whatever the environmental conditions. The health information can also be used to help increase the endurance of the aircraft. This paper describes how the AHMS performs these capabilities.
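The abstract describes a health value derived from the progression of sensory values and deviations from expected behaviour. Below is a minimal, hedged sketch of one way such a deviation-based health value could be computed; the EWMA baseline, the threshold and the [0, 1] scaling are my assumptions, not the AHMS algorithm.

```python
import math

class HealthMonitor:
    """Tracks a running baseline of one sensor and scores deviations."""

    def __init__(self, alpha: float = 0.05, tolerance: float = 3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.tolerance = tolerance  # deviation (in std-devs) scored as health 0
        self.mean = None
        self.var = 1.0              # initial variance guess

    def update(self, value: float) -> float:
        """Return a health value in [0, 1]; 1 means nominal behaviour."""
        if self.mean is None:
            self.mean = value
            return 1.0
        # Exponentially weighted mean/variance of the sensor stream.
        diff = value - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        z = abs(diff) / math.sqrt(self.var)
        return max(0.0, 1.0 - z / self.tolerance)

monitor = HealthMonitor()
for reading in [10.0, 10.1, 9.9, 10.2, 14.5]:  # last value is anomalous
    print(f"{reading}: health = {monitor.update(reading):.2f}")
```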
Integrated Frameworks for Effective Multi-criteria Decision Making in Reliability Centred Maintenance of Industrial Machines
No abstract available.
A survey of AI in operations management from 2005 to 2009
Purpose: the use of AI for operations management, with its ability to evolve solutions, handle uncertainty and perform optimisation continues to be a major field of research. The growing body of publications over the last two decades means that it can be difficult to keep track of what has been done previously, what has worked, and what really needs to be addressed. Hence this paper presents a survey of the use of AI in operations management aimed at presenting the key research themes, trends and directions of research.
Design/methodology/approach: the paper builds upon our previous survey of this field, which was carried out for the ten-year period 1995-2004. Like the previous survey, it uses Elsevier’s Science Direct database as a source. The framework and methodology adopted for the survey are kept as similar as possible to enable continuity and comparison of trends. Thus, the application categories adopted are: design; scheduling; process planning and control; and quality, maintenance and fault diagnosis. Research on utilising neural networks, case-based reasoning (CBR), fuzzy logic (FL), knowledge-based systems (KBS), data mining, and hybrid AI in the four application areas is identified.
Findings: the survey categorises over 1,400 papers, identifying the uses of AI in the four categories of operations management and concludes with an analysis of the trends, gaps and directions for future research. The findings include: the trends for design and scheduling show a dramatic increase in the use of genetic algorithms since 2003 that reflect recognition of their success in these areas; there is a significant decline in research on use of KBS, reflecting their transition into practice; there is an increasing trend in the use of FL in quality, maintenance and fault diagnosis; and there are surprising gaps in the use of CBR and hybrid methods in operations management that offer opportunities for future research.
Originality/value: this is the largest and most comprehensive study to classify research on the use of AI in operations management to date. The survey and trends identified provide a useful reference point and directions for future research.
RISK PRIORITY EVALUATION OF POWER TRANSFORMER PARTS BASED ON HYBRID FMEA FRAMEWORK UNDER HESITANT FUZZY ENVIRONMENT
The power transformer is one of the most critical facilities in the power system, and its running status directly impacts the power system's security. It is therefore essential to research the risk priority evaluation of power transformer parts. Failure mode and effects analysis (FMEA) is a methodology for analyzing the potential failure modes (FMs) within a system, used across various industrial devices. This study puts forward a hybrid FMEA framework integrating novel hesitant fuzzy aggregation tools and the CRITIC (Criteria Importance Through Inter-criteria Correlation) method. In this framework, hesitant fuzzy sets (HFSs) are used to depict the uncertainty in risk evaluation. Then, an improved HFWA (hesitant fuzzy weighted averaging) operator is adopted to fuse the risk evaluations of the FMEA experts. This aggregation scheme can accommodate HFSs of different lengths and the support degrees among the FMEA experts. Next, a novel HFWGA (hesitant fuzzy weighted geometric averaging) operator with CRITIC weights is developed to determine the risk priority of each FM. This method satisfies the multiplicative characteristic of the RPN (risk priority number) method of the conventional FMEA model and reflects the correlations between risk indicators. Finally, a real example of the risk priority evaluation of power transformer parts is given to show the applicability and feasibility of the proposed hybrid FMEA framework. Comparison and sensitivity studies are also offered to verify the effectiveness of the improved risk assessment approach.
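For readers unfamiliar with the operators named above, here is a minimal sketch of the standard HFWA and HFWGA aggregations over hesitant fuzzy elements; the study's improved operators additionally handle support degrees and CRITIC-derived weights, which this sketch does not reproduce, and the ratings and weights are invented.

```python
from itertools import product
from math import prod

def hfwa(hfes, weights):
    """Hesitant fuzzy weighted averaging: 1 - prod((1 - g_i)^w_i) over all
    combinations of membership values drawn from each hesitant element."""
    return sorted({round(1 - prod((1 - g) ** w for g, w in zip(c, weights)), 4)
                   for c in product(*hfes)})

def hfwga(hfes, weights):
    """Hesitant fuzzy weighted geometric averaging: prod(g_i^w_i)."""
    return sorted({round(prod(g ** w for g, w in zip(c, weights)), 4)
                   for c in product(*hfes)})

# Three experts' hesitant ratings of one failure mode (illustrative values);
# hesitancy means an expert may give several plausible membership degrees.
ratings = [[0.3, 0.4], [0.5], [0.6, 0.7]]
weights = [0.4, 0.35, 0.25]  # expert weights summing to 1

print("HFWA: ", hfwa(ratings, weights))
print("HFWGA:", hfwga(ratings, weights))
```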