407 research outputs found
Dynamic structure identification of Bayesian network model for fault diagnosis of FMS
This paper proposes an approach to accurately localize the origin of product quality drifts in a flexible manufacturing system (FMS). The logical diagnosis model is used to reduce the search space of suspected equipment in the production flow; however, it does not help in accurately localizing the faulty equipment. In the proposed approach, we model this reduced search space as a Bayesian network that uses historical data to compute conditional probabilities for each suspected equipment item. This approach helps in making accurate decisions that localize the cause of product quality drifts as either one of the equipment items in the production flow or the product itself.
Diagnosis in complex system with multiple failure sources
This paper proposes an approach to accurately localize the origin of product quality drifts in a flexible manufacturing system (FMS). A failure propagation mechanism for the production process is proposed, based on the relationships between failure sources, to explain how failures propagate along the production flow. The logical diagnosis model is used to reduce the search space of suspected equipment in the production flow; however, it does not help in accurately localizing the faulty equipment. In the proposed approach, we model this reduced search space as a Bayesian network that uses historical data to compute conditional probabilities for each suspected equipment item. This approach helps in making accurate decisions that localize the cause of product quality drifts as either one of the equipment items in the production flow or the product itself.
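A minimal sketch of the idea in this abstract, under illustrative assumptions: once logical diagnosis has reduced the suspect set, the remaining candidates are ranked by posterior probability computed from historical data. The equipment names, priors and likelihoods below are invented, and the paper's actual model is a full Bayesian network rather than this single application of Bayes' rule.

```python
def rank_suspects(priors, likelihoods):
    """Posterior P(suspect is the origin | drift observed) via Bayes' rule.

    priors      -- P(suspect faulty), e.g. historical failure rates
    likelihoods -- P(observed drift | suspect faulty), from historical records
    """
    joint = {s: priors[s] * likelihoods[s] for s in priors}
    z = sum(joint.values())                      # normalising constant
    return {s: p / z for s, p in joint.items()}

# Reduced search space from the logical diagnosis model (hypothetical values).
priors = {"etcher_A": 0.02, "deposition_B": 0.05, "product_itself": 0.01}
likelihoods = {"etcher_A": 0.60, "deposition_B": 0.30, "product_itself": 0.90}

for suspect, p in sorted(rank_suspects(priors, likelihoods).items(),
                         key=lambda kv: -kv[1]):
    print(f"{suspect}: {p:.2f}")
```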
Reliability assessment of manufacturing systems: A comprehensive overview, challenges and opportunities
Reliability assessment refers to the process of evaluating the reliability of components or systems during their lifespan or prior to their implementation. In the manufacturing industry, the reliability of systems is directly linked to production efficiency, product quality, energy consumption, and other crucial performance indicators. Therefore, reliability plays a critical role in every aspect of manufacturing. In this review, we provide a comprehensive overview of the most significant advancements and trends in the assessment of manufacturing system reliability. To this end, we also consider the three main facets of reliability analysis of cyber-physical systems, i.e., hardware, software, and human-related reliability. Beyond the overview of the literature, we derive challenges and opportunities for the reliability assessment of manufacturing systems based on the reviewed literature. Identified challenges encompass aspects like failure data availability and quality, fast-paced technological advancements, and the increasing complexity of manufacturing systems. In turn, the opportunities include the potential for integrating various assessment methods and for leveraging data to automate the assessment process and to increase the accuracy of derived reliability models.
A Bayesian approach to robust identification: application to fault detection
In the Control Engineering field, the so-called Robust Identification techniques deal with the problem of obtaining not only a nominal model of the plant, but also an estimate of the uncertainty associated with the nominal model. Such a model of uncertainty is typically characterized as a region in the parameter space or as an uncertainty band around the frequency response of the nominal model.
Uncertainty models have been widely used in the design of robust controllers and, recently, their use in model-based fault detection procedures has been increasing. In this latter case, consistency between new measurements and the uncertainty region is checked; when an inconsistency is found, the existence of a fault is declared.
There are two main approaches to modeling model uncertainty: deterministic/worst-case methods and stochastic/probabilistic methods. At present, a number of different methods exist, e.g., model error modeling, set-membership identification and non-stationary stochastic embedding. In this dissertation we summarize the main procedures and illustrate their results by means of several examples from the literature.
As a contribution, we propose a Bayesian methodology to solve the robust identification problem. The approach is highly unifying, since many robust identification techniques can be interpreted as particular cases of the Bayesian framework. Moreover, the methodology can deal with non-linear structures, such as those derived from the use of observers. The obtained Bayesian uncertainty models are used to detect faults in a quadruple-tank process and in a three-bladed wind turbine.
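As a rough illustration of the consistency test described above, the sketch below builds a Gaussian posterior for a scalar linear model from nominal-operation data and flags a fault when a new measurement falls outside the predictive band. The noise level, prior and 3-sigma threshold are assumptions; the thesis itself covers much richer structures, such as observer-based non-linear models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal-operation data: y = theta * u + noise (theta = 2.0 assumed here).
u = rng.uniform(0, 1, 50)
y = 2.0 * u + rng.normal(0, 0.1, 50)
sigma2 = 0.1**2                      # known measurement-noise variance

# Gaussian posterior over theta, with prior N(0, 10).
prior_var = 10.0
post_var = 1.0 / (1.0 / prior_var + (u @ u) / sigma2)
post_mean = post_var * (u @ y) / sigma2

def is_faulty(u_new, y_new, k=3.0):
    """Flag a fault when y_new falls outside the +-k-sigma predictive band."""
    pred_mean = post_mean * u_new
    pred_std = np.sqrt(u_new**2 * post_var + sigma2)
    return abs(y_new - pred_mean) > k * pred_std

print(is_faulty(0.5, 1.0))   # consistent with the nominal model -> False
print(is_faulty(0.5, 2.0))   # inconsistent -> likely fault -> True
```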
Effective Maintenance by Reducing Failure-Cause Misdiagnosis in Semiconductor Industry (SI)
Increasing demand diversity and volume in the semiconductor industry (SI) have resulted in shorter product life cycles. This competitive environment, with high-mix low-volume production, requires sustainable production capacities, which can be achieved by reducing unscheduled equipment breakdowns. Fault detection and classification (FDC) is a well-known approach, used in the SI, to improve and stabilize production capacities. This approach models equipment as a single unit and uses sensor data to identify the equipment failures behind product and process drifts. Despite its successful deployment for years, a recent increase in unscheduled equipment breakdowns calls for an improved methodology to ensure sustainable capacities. An analysis of equipment utilization, using data collected from a renowned semiconductor manufacturer, shows that failure durations as well as the number of repair actions per failure have increased significantly. This is evidence of misdiagnosis in the identification of failures and the prediction of their likely causes. In this paper, we propose two lines of defense against unstable and shrinking production capacities. First, equipment should be stopped only if it is suspected as a source of product and process drifts; the second line of defense focuses on more accurate identification of failures and detection of their associated causes. The objective is to support maintenance engineers in making more accurate decisions about failures and repair actions upon an equipment stoppage. In the proposed methodology, these two lines of defense are modeled as a Bayesian network (BN) whose structure is learned, unsupervised, from data collected on variables (classified as symptoms) across production, process, equipment and maintenance databases. The proofs of concept demonstrate that contextual and statistical information beyond FDC sensor signals, used as symptoms, provides reliable information (posterior probabilities) for finding the source of product/process quality drifts, a.k.a. failure modes (FM), as well as potential failures and causes. The reliability and learning curves show that modeling equipment at the module level, rather than as a single unit, offers 45% more accurate diagnosis. The proposed approach contributes to reducing not only failure durations but also the number of repair actions, whose growth underlies the recent instability in production capacities and rise in unscheduled equipment breakdowns.
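The following is an illustrative sketch, not the paper's model: it reads the symptom-based diagnosis idea as a naive-Bayes computation of posterior probabilities over failure modes, whereas the paper learns a full Bayesian network structure from production, process, equipment and maintenance data. All names and numbers here are hypothetical.

```python
# Hypothetical historical estimates.
fm_prior = {"FM_vacuum": 0.2, "FM_rf_power": 0.5, "FM_gas_flow": 0.3}
# P(symptom present | FM), one entry per binary symptom.
p_sym_given_fm = {
    "FM_vacuum":   [0.9, 0.2, 0.1],
    "FM_rf_power": [0.3, 0.8, 0.2],
    "FM_gas_flow": [0.1, 0.3, 0.9],
}

def fm_posterior(symptoms):
    """P(FM | observed binary symptom vector), assuming symptom independence."""
    scores = {}
    for fm, prior in fm_prior.items():
        lik = 1.0
        for p, s in zip(p_sym_given_fm[fm], symptoms):
            lik *= p if s else (1.0 - p)
        scores[fm] = prior * lik
    z = sum(scores.values())                 # normalising constant
    return {fm: v / z for fm, v in scores.items()}

# Symptoms 1 and 2 observed, symptom 3 absent -> FM_rf_power is most likely.
print(fm_posterior([1, 1, 0]))
```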
Fault diagnosis for IP-based network with real-time conditions
BACKGROUND:
Fault diagnosis techniques have been based on many paradigms, which derive from diverse areas and have different purposes: obtaining a representation model of the network for fault localization, selecting optimal probe sets for monitoring network devices, reducing fault detection time, and detecting faulty components in the network. Although there are several solutions for diagnosing network faults, challenges remain: a fault diagnosis solution needs to be always available and capable of processing data in a timely manner, because stale results degrade the quality and speed of informed decision-making. Also, there is no non-invasive technique for continuously diagnosing network symptoms without leaving the system vulnerable to failures, nor a technique resilient to the network's dynamic changes, which can cause new failures with different symptoms.
AIMS:
This thesis aims to propose a model for the continuous and timely diagnosis of IP-based network faults, independent of the network structure and based on data analytics techniques.
METHOD(S):
This research's point of departure was the hypothesis of a fault propagation phenomenon that allows failure symptoms to be observed at a higher network level than the fault's origin. Thus, for the model's construction, monitoring data were collected from an extensive campus network in which impact link failures were induced at different instants of time and with different durations. These data correspond to parameters widely used in the actual management of a network. The collected data allowed us to understand the faults' behavior and how they manifest at a peripheral level.

Based on this understanding and a data analytics process, the first three modules of our model, named PALADIN, were proposed (Identify, Collection and Structuring). They define the peripheral data collection and the pre-processing necessary to obtain a description of the network's state at a given moment. These modules give the model the ability to structure the data while accounting for the delays of the multiple responses that the network delivers to a single monitoring probe and for the multiple network interfaces that a peripheral device may have.

Thus, a structured data stream is obtained, ready to be analyzed. For this analysis, it was necessary to implement an incremental learning framework that respects the dynamic nature of networks. It comprises three elements: an incremental learning algorithm, a data rebalancing strategy, and a concept drift detector. This framework is the fourth module of the PALADIN model, named Diagnosis.

To evaluate the PALADIN model, the Diagnosis module was implemented with 25 different incremental algorithms, ADWIN as the concept-drift detector and SMOTE (adapted to the streaming scenario) as the rebalancing strategy. In addition, a dataset was built through the first modules of the PALADIN model (the SOFI dataset); these data form the incoming data stream of the Diagnosis module and were used to evaluate its performance.

The PALADIN Diagnosis module performs online classification of network failures, so it is a learning model that must be evaluated in a stream context. Prequential evaluation is the most widely used method for this task, so we adopt it to evaluate the model's performance over time through several stream evaluation metrics.
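The prequential protocol mentioned above is simple to state in code. The sketch below shows the generic test-then-train loop for a running accuracy metric; the `predict_one`/`learn_one` interface is an assumption in the spirit of stream-learning libraries (e.g., river), not PALADIN's own API.

```python
def prequential_accuracy(model, stream):
    """Running accuracy over a stream of (features, label) pairs."""
    correct = total = 0
    for x, y in stream:
        y_pred = model.predict_one(x)   # 1) test on the new instance first
        correct += int(y_pred == y)
        total += 1
        model.learn_one(x, y)           # 2) then train on that same instance
    return correct / total if total else 0.0
```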
RESULTS:
This research first evidences the phenomenon of impact fault propagation, making it possible to detect fault symptoms at the monitored network's peripheral level; this translates into non-invasive monitoring of the network. Second, the PALADIN model is the major contribution in the fault detection context because it covers two aspects: an online learning model that continuously processes the network symptoms and detects internal failures, and the concept-drift detection and data-stream rebalancing components, which make resilience to dynamic network changes possible. Third, it is well known that the number of real-world datasets available for imbalanced stream classification is still too small, and that number is further reduced in the networking context. The SOFI dataset obtained with the first modules of the PALADIN model adds to that number and encourages work on imbalanced data streams and on network fault diagnosis.
CONCLUSIONS:
The proposed model contains the necessary elements for the continuous and timely diagnosis of IP-based network faults; it introduces the idea of periodic monitoring of peripheral network elements and uses data analytics techniques to process the collected data. Based on the analysis, processing, and classification of peripherally collected data, it can be concluded that PALADIN achieves its objective. The results indicate that peripheral monitoring allows faults in the internal network to be diagnosed; moreover, the diagnosis process needs an incremental learning process, concept-drift detection elements, and a rebalancing strategy.

The experiments showed that PALADIN makes it possible to learn from the network's manifestations and diagnose internal network failures. The latter was verified with 25 different incremental algorithms, ADWIN as the concept-drift detector and SMOTE (adapted to the streaming scenario) as the rebalancing strategy.

This research clearly illustrates that it is unnecessary to monitor all the internal network elements to detect a network's failures; instead, it is enough to monitor selected peripheral elements. Furthermore, with proper processing of the collected status and traffic descriptors, it is possible to learn from the arriving data using incremental learning in cooperation with data rebalancing and concept drift approaches. This proposal continuously diagnoses the network's symptoms without leaving the system vulnerable to failures while remaining resilient to the network's dynamic changes.
Data mining in manufacturing: a review based on the kind of knowledge
In modern manufacturing environments, vast amounts of data are collected in database management systems and data warehouses from all involved areas, including product and process design, assembly, materials planning, quality control, scheduling, maintenance, fault detection, etc. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases. This paper reviews the literature dealing with knowledge discovery and data mining applications in the broad domain of manufacturing, with special emphasis on the type of functions to be performed on the data. The major data mining functions include characterization and description, association, classification, prediction, clustering and evolution analysis, and the reviewed papers have been categorized accordingly. It is shown that the application of data mining to manufacturing processes and enterprises has grown rapidly over the last three years. This review reveals the progressive applications and the gaps remaining in the context of data mining in manufacturing. A novel text mining approach has also been applied to the abstracts and keywords of 150 papers to identify research gaps and to find the linkages between knowledge area, knowledge type and the applied data mining tools and techniques.
Bayesian inference and failure analysis for risk assessment in quality engineering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

Failure is the state of not achieving a desired or intended goal. Failure analysis planning in the context of risk assessment is an approach that helps to reduce total cost, increase production capacity, and produce higher-quality products. One of the most common issues that businesses confront is defective products. This issue results not only in monetary loss but also in a loss of reputation. Companies must improve their production quality and reduce the quantity of faulty products in order to continue operating in a healthy and profitable manner in today's very competitive environment. On the other hand, there is the ongoing COVID-19 pandemic, which has thrown the world's natural order into disarray and has been designated a Public Health Emergency of International Concern by the World Health Organization. The demand for quality control is rapidly increasing. Failure analysis is thus a useful tool for identifying common failures, their likely causes, and their impact on the health system, as well as for plotting strategies to limit COVID-19 transmission. It is now more vital than ever to enhance failure analysis methods.
Traditional FMEA (failure mode and effects analysis) is one of the most widely used approaches for identifying and classifying failure modes (FMs) and failure causes (FCs). It is a risk analysis tool for coping with possible failures and is widely used in reliability engineering, safety engineering and quality engineering. To prioritize the risks of different failure modes, FMEA uses the risk priority number (RPN), which is the product of three risk measures: severity (S), occurrence (O) and detection (D). Traditional FMEA, however, has drawbacks: it cannot cope with uncertain failure data, such as subjective expert evaluations and the conditionality of failure events; the RPN has a high degree of subjectivity; comparing different RPNs is challenging; and potential errors may be overlooked in the conventional FMEA process. To overcome these limitations, I present an integrated Bayesian approach to FMEA in this thesis.
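For reference, the conventional RPN computation that the thesis argues against is a plain product of three 1-10 ratings. The failure modes and ratings below are invented for illustration.

```python
# Conventional RPN computation, shown for contrast with the Bayesian
# approach proposed in the thesis. Ratings on 1-10 scales are illustrative.
failure_modes = {
    #                 (S, O, D)
    "seal leakage":   (8, 5, 6),   # RPN = 240
    "sensor drift":   (4, 7, 8),   # RPN = 224
    "motor overheat": (9, 2, 3),   # RPN = 54
}

for fm, (s, o, d) in sorted(failure_modes.items(),
                            key=lambda kv: -(kv[1][0] * kv[1][1] * kv[1][2])):
    print(f"{fm}: RPN = {s * o * d}")
```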
In this proposed approach, I worked with experts in quality engineering and used Bayesian inference to estimate the FMEA risk parameters S, O and D. The proposed approach is intended to become more practical and less subjective as more data are added. Bayesian statistics is a statistical theory based on the Bayesian interpretation of probability, which states that probability expresses a degree of belief or information (knowledge) about an event. Bayesian statistics addresses the issues with uncertainty found in frequentist statistics, such as the distribution of contributing factors and the implications of using specific distributions, and it specifies that there is some prior probability. A prior can be derived from previous information, such as earlier experiments, but it can also be derived from the purely subjective assessment of a trained subject-matter expert. Frequentist statistics (also known as classical statistics) has several limitations, including a lack of uncertainty information in predictions, no built-in regularisation, and no consideration of prior knowledge. Owing to the availability of powerful computers and new algorithms, Bayesian methods have seen increased use within statistics in the twenty-first century, and this thesis highlights the effective use of Bayesian analyses to address the shortcomings of current FMEA with the revamped Bayesian FMEA. As a demonstration of the approach, three case studies are presented.
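A minimal sketch of the kind of Bayesian update this approach builds on (not the thesis' exact models): a conjugate Beta-Binomial estimate of an occurrence probability that sharpens as failure records accumulate. The prior and counts are assumptions.

```python
import numpy as np

alpha, beta_ = 1.0, 1.0          # uniform Beta(1, 1) prior belief
failures, trials = 3, 100        # hypothetical inspection records

# Conjugate update: Beta(alpha + failures, beta + non-failures).
alpha_post = alpha + failures
beta_post = beta_ + (trials - failures)

post_mean = alpha_post / (alpha_post + beta_post)
draws = np.random.default_rng(1).beta(alpha_post, beta_post, 10_000)
lo, hi = np.percentile(draws, [2.5, 97.5])

print(f"posterior mean occurrence: {post_mean:.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```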
The first case study is a Bayesian risk assessment, using the modified SEIR (susceptible-exposed-infectious-recovered) model with an exponentially distributed infectious period, of the transmission dynamics of COVID-19. The effective reproduction number is estimated from laboratory-confirmed cases and death data using Bayesian inference, and the impact of the community spread of COVID-19 across the United Kingdom is analysed. The effective reproduction number models the average number of infections caused by a case of an infectious disease in a population that includes not only susceptible people. FMEA is then applied to evaluate the effectiveness of the action measures taken to manage the COVID-19 pandemic. In the FMEA, the focus was on COVID-19 infections, and the failure mode is therefore taken as positive cases. The model is applied to COVID-19 data, showing the effectiveness of the interventions adopted to control the epidemic by reducing the effective reproduction number of COVID-19. The risk measures were estimated from the case fatality rate (S), the posterior median of the effective reproduction number (O) and the current corrective measures used in government policies (D).
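As context for the first case study, the toy simulation below shows how the effective reproduction number drives a discrete-time SEIR curve. It is a didactic sketch with invented parameters, not the Bayesian estimation performed in the thesis.

```python
def seir_infectious_curve(r_eff, days=200, n=67e6,
                          incubation=5.0, infectious=7.0):
    """Daily infectious counts from a simple Euler-stepped SEIR model."""
    beta = r_eff / infectious          # transmission rate implied by r_eff
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0
    curve = []
    for _ in range(days):
        new_e = beta * s * i / n       # S -> E flow
        new_i = e / incubation         # E -> I flow
        new_r = i / infectious         # I -> R flow
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        curve.append(i)
    return curve

# Interventions that push the effective reproduction number below 1
# turn exponential growth into decay.
print(f"peak infectious at R=2.5: {max(seir_infectious_curve(2.5)):,.0f}")
print(f"peak infectious at R=0.9: {max(seir_infectious_curve(0.9)):,.0f}")
```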
The second case study is a Bayesian risk assessment of a coordinate measuring machine (CMM) process using failure mode, effects and criticality analysis (FMECA) and an augmented form error model. The form error is defined as the deviation of a manufactured part from its design or ideal shape, and it is a key characteristic to evaluate in quality engineering and manufacturing. The form error is represented as a probabilistic model using symmetric unimodal distributions. Bayesian inference is then used to identify influence factors associated with the measurement process due to form error, environmental, human and random effects. A risk assessment is then performed by combining Bayesian inference, FMECA and conformity testing to quantify and minimise the risk of wrong decisions. In the FMECA, the focus was on the CMM measurement process, and I identified four major FMs that can occur: probe, mechanical, environmental and measurement performance failures. Eleven FCs were also observed, each of which was linked to one of the four FMs. The risk measures were estimated from the posterior probability of failure causes associated with the CMM measurement process (O), the severity of a specific consumer's risk (S) and the detectability of failures from the posterior standard deviation of the form error model (D).
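The conformity-testing idea in the second case study can be sketched with a simple Monte Carlo computation of consumer's and producer's risks under measurement uncertainty. The tolerance, measurement uncertainty and production spread below are invented for illustration, not the case study's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
tol = 0.05                 # form-error tolerance (mm), assumed
u_meas = 0.01              # measurement std dev, e.g. a posterior estimate (mm)

true_err = rng.uniform(0.0, 0.08, 100_000)        # hypothetical production spread
measured = true_err + rng.normal(0.0, u_meas, true_err.size)

accept = measured <= tol
consumer_risk = np.mean(accept & (true_err > tol))    # accepted, non-conforming
producer_risk = np.mean(~accept & (true_err <= tol))  # rejected, conforming

print(f"consumer's risk: {consumer_risk:.3%}")
print(f"producer's risk: {producer_risk:.3%}")
```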
The third case study is a Bayesian risk assessment of a CMM measurement process using an autoregressive (AR) form error model and a combined fault tree analysis (FTA) and FMEA approach to predict significant failure modes and causes. The main idea is to estimate and predict the form error from CMM data using Gibbs sampling and to analyse the impact of the CMM measurement process on product conformity testing. The FTA is used to compare the actual and predicted form error data from the Bayesian AR plot to determine, using binary data, the likelihood of the CMM measurement process failing. The acquired binary data are then classified into four states (true positive, true negative, false positive, and false negative) using a confusion matrix, which is subsequently used to calculate key classification measures (e.g., error rate, prediction rate, prevalence rate, sensitivity rate). The classification measures were then used to assess the FMEA risk measures S, O and D, which are critical for determining the RPN and making decisions.
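The confusion-matrix step of the third case study reduces to a few ratios once the binary outcomes are counted. The counts below are hypothetical, and the mapping of each measure onto S, O or D is only indicative.

```python
# Hypothetical counts of "process failed / did not fail" outcomes versus
# predictions from a form-error model.
tp, fp, fn, tn = 18, 4, 3, 75

total = tp + fp + fn + tn
error_rate = (fp + fn) / total     # share of wrong predictions
sensitivity = tp / (tp + fn)       # true-positive rate
prevalence = (tp + fn) / total     # share of actual failures

print(f"error rate: {error_rate:.3f}, sensitivity: {sensitivity:.3f}, "
      f"prevalence: {prevalence:.3f}")
```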
Analytical and numerical methods are used in all case studies to highlight the practical implications of our findings, and they are meant to be practical without requiring complex computation. The proposed methodologies can find applications in numerous disciplines and in quality engineering at large.
Cross layer reliability estimation for digital systems
Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost.
One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern.
Attention should be paid to tailoring techniques that improve the reliability of a system to its requirements, ending up with cost-effective solutions that favor the product's success on the market. Cross-layer reliability is one of the most promising approaches to achieving this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are implemented at different layers, from the technology level up to the software layer, to optimize the system by exploiting each layer's inherent capability to mask lower-level faults.
For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools, able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost and reliability.
This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.