Readiness of Quantum Optimization Machines for Industrial Applications
There have been multiple attempts to demonstrate that quantum annealing and,
in particular, quantum annealing on quantum annealing machines, has the
potential to outperform current classical optimization algorithms implemented
on CMOS technologies. The benchmarking of these devices has been controversial.
Initially, random spin-glass problems were used, however, these were quickly
shown to be not well suited to detect any quantum speedup. Subsequently,
benchmarking shifted to carefully crafted synthetic problems designed to
highlight the quantum nature of the hardware while (often) ensuring that
classical optimization techniques do not perform well on them. Even worse, to
date a true sign of improved scaling with the number of problem variables
remains elusive when compared to classical optimization techniques. Here, we
analyze the readiness of quantum annealing machines for real-world application
problems. These are typically not random and have an underlying structure that
is hard to capture in synthetic benchmarks, thus posing unexpected challenges
for optimization techniques, both classical and quantum alike. We present a
comprehensive computational scaling analysis of fault diagnosis in digital
circuits, considering architectures beyond D-Wave quantum annealers. We find
that the instances generated from real data in multiplier circuits are harder
than other representative random spin-glass benchmarks with a comparable number
of variables. Although our results show that transverse-field quantum annealing
is outperformed by state-of-the-art classical optimization algorithms, these
benchmark instances are hard and small in the size of the input, therefore
representing the first industrial application ideally suited for testing
near-term quantum annealers and other quantum algorithmic strategies for
optimization problems.
Comment: 22 pages, 12 figures. Content updated according to Phys. Rev. Applied version.
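The spin-glass formulation underlying these benchmarks can be made concrete with a toy sketch: a tiny Ising instance (the fields `h` and couplings `J` below are invented for illustration) whose ground state is found by brute force, the kind of baseline any annealer, quantum or classical, competes against.

```python
import itertools

def ising_energy(spins, h, J):
    """Energy of an Ising configuration: E = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def ground_state(n, h, J):
    """Exhaustive search -- only feasible for tiny toy instances."""
    return min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, h, J))

# Toy 4-spin instance with invented fields and couplings.
h = [0.5, -0.2, 0.0, 0.1]
J = {(0, 1): -1.0, (1, 2): 0.5, (2, 3): -0.7}
best = ground_state(4, h, J)
```

Real benchmark instances have hundreds to thousands of variables, where exhaustive search is hopeless and the scaling question becomes meaningful.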
Approximate Model-Based Diagnosis Using Greedy Stochastic Search
We propose a StochAstic Fault diagnosis AlgoRIthm, called SAFARI, which
trades off guarantees of computing minimal diagnoses for computational
efficiency. We empirically demonstrate, using the 74XXX and ISCAS-85 suites of
benchmark combinatorial circuits, that SAFARI achieves several
orders-of-magnitude speedup over two well-known deterministic algorithms, CDA*
and HA*, for multiple-fault diagnoses; further, SAFARI can compute a range of
multiple-fault diagnoses that CDA* and HA* cannot. We also prove that SAFARI is
optimal for a range of propositional fault models, such as the widely-used
weak-fault models (models with ignorance of abnormal behavior). We discuss the
optimality of SAFARI in a class of strong-fault circuit models with stuck-at
failure modes. By modeling the algorithm itself as a Markov chain, we provide
exact bounds on the minimality of the diagnosis computed. SAFARI also displays
strong anytime behavior, and will return a diagnosis after any non-trivial
inference time.
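The greedy stochastic idea can be sketched in its hitting-set form (a toy illustration of the flavor of the search, not the published SAFARI algorithm; the conflict sets below are invented): start from a trivially consistent diagnosis and randomly try to shrink it while consistency holds.

```python
import random

def consistent(diag, conflicts):
    """Weak-fault-model check: a candidate diagnosis must hit every conflict set."""
    return all(diag & c for c in conflicts)

def greedy_stochastic_diagnosis(components, conflicts, tries=200, seed=0):
    """Repeatedly shrink the all-abnormal diagnosis in a random order,
    keeping the smallest consistent diagnosis found."""
    rng = random.Random(seed)
    best = set(components)                 # all-abnormal: always consistent
    for _ in range(tries):
        diag = set(components)
        for comp in rng.sample(components, len(components)):
            if consistent(diag - {comp}, conflicts):
                diag.discard(comp)         # greedy: drop if still consistent
        if len(diag) < len(best):
            best = diag
    return best

# Invented conflict sets for a 5-component toy circuit.
comps = ["g1", "g2", "g3", "g4", "g5"]
conflicts = [{"g1", "g2"}, {"g2", "g3"}, {"g4", "g5"}]
diag = greedy_stochastic_diagnosis(comps, conflicts)
```

Each random pass ends in an irreducible (subset-minimal) diagnosis, which is why the approach trades guaranteed cardinality-minimality for speed.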
Diagnosability and detectability of multiple faults in nonlinear models
This paper presents a novel method for assessing multiple fault
diagnosability and detectability of nonlinear parametrized dynamical models.
This method is based on computer algebra algorithms which return precomputed
values of algebraic expressions characterizing the presence of some constant
multiple fault(s). Estimates of these expressions, obtained from input and
output measurements, then permit the detection and isolation of multiple
faults acting on the system. Applied to a coupled water-tank model, the
method attests to the relevance of the suggested approach.
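A minimal numeric sketch of residual-based detection for a single tank (the model `dh/dt = -a*sqrt(h) + b*u`, its parameters, and the threshold are all invented for illustration; the paper derives such expressions symbolically, for multiple faults, on coupled tanks):

```python
import math

def residual(h, dh_dt, u, a=0.1, b=0.05):
    """Analytical redundancy relation for a single tank obeying
    dh/dt = -a*sqrt(h) + b*u; the residual is ~0 in the fault-free case.
    Model structure and parameters are invented for illustration."""
    return dh_dt + a * math.sqrt(h) - b * u

def fault_detected(h, dh_dt, u, threshold=1e-3):
    """Declare a fault when the measured data violate the model relation."""
    return abs(residual(h, dh_dt, u)) > threshold

# Fault-free measurement: dh/dt consistent with the model.
h, u = 4.0, 2.0
dh_dt_ok = -0.1 * math.sqrt(h) + 0.05 * u
```

Isolation of multiple faults then comes from evaluating a bank of such expressions, each sensitive to a different fault subset.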
Optimal discrimination between transient and permanent faults
An important practical problem in fault diagnosis is discriminating between permanent faults and transient faults. In many computer systems, the majority of errors are due to transient faults. Many heuristic methods have been used for discriminating between transient and permanent faults; however, we have found no previous work stating this decision problem in clear probabilistic terms. We present an optimal procedure for discriminating between transient and permanent faults, based on applying Bayesian inference to the observed events (correct and erroneous results). We describe how the assessed probability that a module is permanently faulty must vary with observed symptoms. We describe and demonstrate our proposed method on a simple application problem, building the appropriate equations and showing numerical examples. The method can be implemented as a run-time diagnosis algorithm at little computational cost; it can also be used to evaluate any heuristic diagnostic procedure by comparison.
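The core Bayesian update can be sketched as follows (the per-execution error rates for the permanent-fault and transient-fault hypotheses are invented numbers, not values from the paper):

```python
def update(p_perm, erroneous, p_err_perm=0.9, p_err_trans=0.05):
    """One Bayesian update of P(permanently faulty) after observing one result.
    erroneous=True means the module produced an erroneous result.
    The two conditional error rates are invented for illustration."""
    like_perm = p_err_perm if erroneous else 1.0 - p_err_perm
    like_ok = p_err_trans if erroneous else 1.0 - p_err_trans
    num = like_perm * p_perm
    return num / (num + like_ok * (1.0 - p_perm))

# Posterior after a run of observations (True = erroneous result).
p = 0.01
for obs in [True, True, False, True]:
    p = update(p, obs)
```

Each erroneous result raises the assessed probability of a permanent fault and each correct result lowers it, which is exactly the "probability must vary with observed symptoms" behavior the abstract describes.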
Any Time Probabilistic Reasoning for Sensor Validation
For many real time applications, it is important to validate the information
received from the sensors before entering higher levels of reasoning. This
paper presents an any time probabilistic algorithm for validating the
information provided by sensors. The system consists of two Bayesian network
models. The first one is a model of the dependencies between sensors and it is
used to validate each sensor. It provides a list of potentially faulty sensors.
To isolate the real faults, a second Bayesian network is used, which relates
the potential faults with the real faults. This second model is also used to
make the validation algorithm any time, by validating first the sensors that
provide more information. To select the next sensor to validate, and measure
the quality of the results at each stage, an entropy function is used. This
function captures in a single quantity both the certainty and specificity
measures of any time algorithms. Together, both models constitute a mechanism
for validating sensors in an any time fashion, providing at each step the
probability of correct/faulty for each sensor, and the total quality of the
results. The algorithm has been tested in the validation of temperature sensors
of a power plant.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998).
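The entropy-driven sensor-selection step can be sketched as follows (the posterior fault probabilities below are invented; in the paper they come from the Bayesian network models):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def next_sensor(posteriors):
    """Pick the sensor whose correct/faulty posterior is most uncertain
    (maximum entropy): validating it yields the most information."""
    return max(posteriors,
               key=lambda s: entropy([posteriors[s], 1.0 - posteriors[s]]))

# Invented per-sensor posteriors P(faulty).
posteriors = {"T1": 0.02, "T2": 0.48, "T3": 0.90}
```

A sensor with posterior near 0.5 has entropy near 1 bit and is validated first; sensors already near 0 or 1 contribute little and can be deferred, which is what makes the algorithm anytime.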
Parallel Equivalence Class Sorting: Algorithms, Lower Bounds, and Distribution-Based Analysis
We study parallel comparison-based algorithms for finding all equivalence
classes of a set of n elements, where sorting according to some total order
is not possible. Such scenarios arise, for example, in applications such as
distributed computer security, where each of n agents is working to identify
the private group to which they belong, with the only operation available to
them being a zero-knowledge pairwise-comparison (which is sometimes called a
"secret handshake") that reveals only whether two agents are in the same group
or in different groups. We provide new parallel algorithms for this problem, as
well as new lower bounds and distribution-based analysis.
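The comparison model can be sketched as follows (a sequential baseline, not one of the paper's parallel algorithms; the hidden group labels are invented for illustration):

```python
def same_group(a, b, labels):
    """Stand-in for the zero-knowledge 'secret handshake': reveals only
    whether a and b belong to the same (hidden) group."""
    return labels[a] == labels[b]

def find_classes(items, labels):
    """Group items using only same/different pairwise queries, by comparing
    each item against one representative per known class."""
    classes = []
    for x in items:
        for cls in classes:
            if same_group(x, cls[0], labels):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

labels = {"a": 1, "b": 2, "c": 1, "d": 3, "e": 2}   # hidden group ids
groups = find_classes(list(labels), labels)
```

Because no total order exists, comparison-based sorting cannot be used; the interesting question, which the paper addresses, is how many of these pairwise queries can be done in parallel rounds.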
Plant-wide fault and disturbance screening using combined transfer entropy and eigenvector centrality analysis
Finding the source of a disturbance or fault in complex systems such as
industrial chemical processing plants can be a difficult task and consume a
significant number of engineering hours. In many cases, a systematic
elimination procedure is considered to be the only feasible approach but can
cause undesired process upsets. Practitioners desire robust alternative
approaches.
This paper presents an unsupervised, data-driven method for ranking process
elements according to the magnitude and novelty of their influence. Partial
bivariate transfer entropy estimation is used to infer a weighted directed
graph of process elements. Eigenvector centrality is applied to rank network
nodes according to their overall effect. As the ranking of process elements
relies on emergent properties that depend on the aggregate of many connections,
the results are robust to errors in the estimation of individual edge
properties and the inclusion of indirect connections that do not represent the
true causal structure of the process.
A monitoring chart of continuously calculated process element importance
scores over multiple overlapping time regions can assist with incipient fault
detection. Ranking results combined with visual inspection of information
transfer networks is also useful for root cause analysis of known faults and
disturbances. A software implementation of the proposed method is available.Comment: 21 pages, 9 figure
Machine learning for cognitive networks: technology assessment and research challenges
The field of machine learning has made major strides over the last 20 years. This document summarizes the major problem formulations that the discipline has studied, then reviews three tasks in cognitive networking and briefly discusses how aspects of those tasks fit these formulations. After this, it discusses challenges for machine learning research raised by Knowledge Plane applications and closes with proposals for the evaluation of learning systems developed for these problems.
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still some improvements that can be made through the applications of Machine Learning. By implanting the 'learning element', the MORA will become a learning MORA (La MORA) system: a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude our thesis, we propose an architecture of La MORA.