Contextual normalization applied to aircraft gas turbine engine diagnosis
Diagnosing faults in aircraft gas turbine engines is a complex problem. It involves several tasks,
including rapid and accurate interpretation of patterns in engine sensor data. We have investigated
contextual normalization for the development of a software tool to help engine repair technicians
with interpretation of sensor data. Contextual normalization is a new strategy for employing
machine learning. It handles variation in data that is due to contextual factors, rather than the
health of the engine. It does this by normalizing the data in a context-sensitive manner. This
learning strategy was developed and tested using 242 observations of an aircraft gas turbine
engine in a test cell, where each observation consists of roughly 12,000 numbers, gathered over a
12 second interval. There were eight classes of observations: seven deliberately implanted classes
of faults and a healthy class. We compared two approaches to implementing our learning strategy:
linear regression and instance-based learning. We have three main results. (1) For the given
problem, instance-based learning works better than linear regression. (2) For this problem,
contextual normalization works better than other common forms of normalization. (3) The
algorithms described here can be the basis for a useful software tool for assisting technicians with
the interpretation of sensor data.
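The core idea of contextual normalization described above can be sketched as follows: instead of normalizing each sensor channel with global statistics, use a mean and standard deviation estimated from healthy observations recorded under the same context (e.g. ambient conditions). This is a minimal illustrative sketch, not the paper's implementation; all names and numbers are assumptions.

```python
import numpy as np

def contextual_normalize(x, context, baselines):
    """Normalize a sensor vector with statistics for its context.

    `baselines` maps a context label to (mean, std) arrays estimated
    from healthy observations gathered under that context.
    """
    mean, std = baselines[context]
    return (x - mean) / std

# Illustrative: two ambient-temperature contexts with different
# healthy baselines for the same two sensor channels.
baselines = {
    "cold": (np.array([100.0, 50.0]), np.array([5.0, 2.0])),
    "hot":  (np.array([110.0, 55.0]), np.array([6.0, 2.5])),
}
reading = np.array([112.0, 54.0])
z = contextual_normalize(reading, "hot", baselines)
```

The same raw reading would yield a different normalized vector in the "cold" context, which is exactly the variation due to context (rather than engine health) that the strategy factors out.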
Integrating Learning from Examples into the Search for Diagnostic Policies
This paper studies the problem of learning diagnostic policies from training
examples. A diagnostic policy is a complete description of the decision-making
actions of a diagnostician (i.e., tests followed by a diagnostic decision) for
all possible combinations of test results. An optimal diagnostic policy is one
that minimizes the expected total cost, which is the sum of measurement costs
and misdiagnosis costs. In most diagnostic settings, there is a tradeoff
between these two kinds of costs. This paper formalizes diagnostic decision
making as a Markov Decision Process (MDP). The paper introduces a new family of
systematic search algorithms based on the AO* algorithm to solve this MDP. To
make AO* efficient, the paper describes an admissible heuristic that enables
AO* to prune large parts of the search space. The paper also introduces several
greedy algorithms including some improvements over previously-published
methods. The paper then addresses the question of learning diagnostic policies
from examples. When the probabilities of diseases and test results are computed
from training data, there is a great danger of overfitting. To reduce
overfitting, regularizers are integrated into the search algorithms. Finally,
the paper compares the proposed methods on five benchmark diagnostic data sets.
The studies show that in most cases the systematic search methods produce
better diagnostic policies than the greedy methods. In addition, the studies
show that for training sets of realistic size, the systematic search algorithms
are practical on today's desktop computers.
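The cost structure that the policies above minimize can be illustrated with a small sketch: the terminal step of any policy declares a diagnosis, and its expected misdiagnosis cost is the probability-weighted penalty over possible true diseases; a full policy's expected total cost adds the measurement costs of the tests performed along each path. The example below is illustrative only (hypothetical diseases and costs), not the paper's algorithm.

```python
def expected_misdiagnosis_cost(decision, p, cost):
    """Expected cost of declaring `decision` when the true disease is
    distributed according to p; cost[(truth, decision)] is the penalty
    for that (truth, declaration) pair."""
    return sum(p[d] * cost[(d, decision)] for d in p)

def best_diagnosis(p, cost, decisions):
    """Diagnostic decision minimizing expected misdiagnosis cost."""
    return min(decisions,
               key=lambda dec: expected_misdiagnosis_cost(dec, p, cost))

# Illustrative two-disease case: missing "flu" is far costlier than a
# false alarm, so "flu" is declared despite residual uncertainty.
p = {"flu": 0.7, "cold": 0.3}
cost = {("flu", "flu"): 0.0, ("flu", "cold"): 100.0,
        ("cold", "flu"): 10.0, ("cold", "cold"): 0.0}
decision = best_diagnosis(p, cost, ["flu", "cold"])
```

When the probabilities `p` are estimated from limited training data, this minimization is exactly where the overfitting danger mentioned in the abstract arises, motivating the regularizers integrated into the search.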
Multi-Agent Cooperation for Particle Accelerator Control
We present practical investigations in a real industrial controls environment
that test theoretical DAI (Distributed Artificial Intelligence) results,
and we discuss the theoretical aspects of those investigations for
accelerator control and operation. A generalized hypothesis is introduced,
based on a unified view of control, monitoring, diagnosis, maintenance and
repair tasks leading to a general method of cooperation for expert systems
by exchanging hypotheses. This has been tested for task and result sharing
cooperation scenarios. Generalized hypotheses also allow us to treat the
repetitive diagnosis-recovery cycle as task sharing cooperation. Problems
with such a loop or even recursive calls between the different agents are
discussed.
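The cooperation-by-hypothesis-exchange scheme might be sketched as agents posting hypotheses to a shared board that broadcasts them to the other agents, so a diagnosis agent's result becomes a repair agent's task. This is a minimal sketch under assumed names and structure, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    topic: str        # e.g. a suspected fault (hypothetical label)
    confidence: float
    source: str       # name of the agent that produced it

class Agent:
    """An expert-system agent that receives shared hypotheses."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

class Blackboard:
    """Result sharing: broadcasts each posted hypothesis to every
    agent other than its source."""
    def __init__(self, agents):
        self.agents = agents

    def post(self, hyp):
        for agent in self.agents:
            if agent.name != hyp.source:
                agent.inbox.append(hyp)

# Illustrative diagnosis-recovery cycle: the diagnosis agent posts a
# fault hypothesis; the repair agent picks it up as a shared task.
diagnosis = Agent("diagnosis")
repair = Agent("repair")
board = Blackboard([diagnosis, repair])
board.post(Hypothesis("magnet_power_fault", 0.8, "diagnosis"))
```

In this picture the repetitive diagnosis-recovery cycle mentioned in the abstract is just repeated task-sharing rounds over the same board; care is needed when agents' postings trigger each other recursively.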
Process: program for research on operator control in an experimental simulated setting
An experimental tool for the investigation of human control behavior of slowly responding dynamic systems is described. Process (Program for Research on Operator Control in an Experimental Simulated Setting) is a simulation of a dynamic water-alcohol distillation system that is especially useful in research on operator training. In particular, Process was developed to conduct research on fault management skills.
- …