On the construction of hierarchic models
One of the main problems in the field of model-based diagnosis of technical systems today is finding the most useful model or models of the system being diagnosed. Often, a model showing the physical components and the connections between them is all that is available. As systems grow larger, the run-time performance of diagnostic algorithms decreases considerably when using these detailed models. A solution to this problem is using a hierarchic model. This allows us to first diagnose the system using an abstract model, and then use this solution to guide the diagnostic process using a more detailed model. The main problem with this approach is acquiring the hierarchic model. We give a generic hierarchic diagnostic algorithm and show how the use of certain classes of hierarchic models can increase the performance of this algorithm. We then present linear-time algorithms for the automatic construction of these hierarchic models, using the detailed model and extra information about the cost of probing points and the invertibility of components.
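The top-down strategy the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: all names are assumptions, and a trivial prediction-vs-observation check stands in for a real model-based diagnosis engine.

```python
# Illustrative sketch of top-down hierarchic diagnosis (names are
# assumptions, not the paper's code): diagnose on the abstract model
# first, then expand only the implicated abstract components into
# their detailed sub-models.

def diagnose(predictions, observed):
    """Naive stand-in for an MBD engine: flag every component whose
    predicted value disagrees with an available observation."""
    return {c for c, v in predictions.items() if observed.get(c, v) != v}

def hierarchic_diagnose(abstract_pred, refinement, abstract_obs,
                        detailed_pred, detailed_obs):
    """Two-level diagnosis: the abstract-level solution restricts
    which detailed sub-models need to be examined at all."""
    suspects = diagnose(abstract_pred, abstract_obs)
    focus = {sub: detailed_pred[sub]            # refine only the suspects
             for comp in suspects for sub in refinement[comp]}
    return diagnose(focus, detailed_obs)
```

The performance benefit comes from the second call to `diagnose`: it only ever sees the sub-components of abstract suspects, never the full detailed model.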
Redesign of technical systems
The paper describes a systematic approach to support the redesign process. Redesign is the adaptation of a technical system in order to meet new specifications. The approach presented is based on techniques developed in model-based diagnosis research. The essence of the approach is to find the part of the system which causes the discrepancy between a formal specification of the system to be designed and the description of the existing technical system. Furthermore, new specifications are generated, describing the new behaviour for the 'faulty' part. These specifications guide the actual design of this part. Both the specification and design description are based on YMIR, an ontology for structuring engineering design knowledge.
Explanation-based learning for diagnosis
Diagnostic expert systems constructed using traditional knowledge-engineering techniques identify malfunctioning components using rules that associate symptoms with diagnoses. Model-based diagnosis (MBD) systems use models of devices to find faults given observations of abnormal behavior. These approaches to diagnosis are complementary. We consider hybrid diagnosis systems that include both associational and model-based diagnostic components. We present results on explanation-based learning (EBL) methods aimed at improving the performance of hybrid diagnostic problem solvers. We describe two architectures called EBL_IA and EBL(p). EBL_IA is a form of "learning in advance" that pre-compiles models into associations. At run-time the diagnostic system is purely associational. In EBL(p), the run-time diagnosis system contains associational, MBD, and EBL components. Learned associational rules are preferred, but when they are incomplete they may produce too many incorrect diagnoses. When errors cause performance to dip below a given threshold p, EBL(p) activates MBD and explanation-based "learning while doing". We present results of empirical studies comparing MBD without learning versus EBL_IA and EBL(p). The main conclusions are as follows. EBL_IA is superior when it is feasible, but it is not feasible for large devices. EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required.
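The EBL(p) control policy described above can be sketched in a few lines. This is a hedged illustration of the idea, not the authors' implementation: the class, its methods, and the simple accuracy bookkeeping are all assumptions.

```python
class HybridDiagnoser:
    """Illustrative sketch of an EBL(p)-style hybrid (names are
    assumptions, not the paper's code). Learned symptom->diagnosis
    rules are preferred; when their observed accuracy drops below the
    threshold p, the slower model-based diagnoser is consulted and its
    answer is compiled into a new rule ("learning while doing")."""

    def __init__(self, model_based_diagnose, p=0.9):
        self.mbd = model_based_diagnose   # expensive but complete fallback
        self.rules = {}                   # associational: symptoms -> diagnosis
        self.p = p
        self.hits = self.trials = 0

    def accuracy(self):
        return self.hits / self.trials if self.trials else 1.0

    def diagnose(self, symptoms, true_fault=None):
        key = frozenset(symptoms)
        if key in self.rules and self.accuracy() >= self.p:
            guess = self.rules[key]       # fast associational answer
        else:
            guess = self.mbd(symptoms)    # fall back to model-based diagnosis
            self.rules[key] = guess       # and learn a rule from it
        if true_fault is not None:        # feedback updates the accuracy estimate
            self.trials += 1
            self.hits += guess == true_fault
        return guess
```

The key design point is that MBD is invoked only when the learned rule base is missing a case or its measured accuracy has fallen below p, which is what lets the hybrid scale to devices where pure pre-compilation (EBL_IA) is infeasible.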
Improving performance through concept formation and conceptual clustering
Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning for the purpose of improving the efficiency of problem solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, as well as publications in conferences, workshops, several book chapters, and journals; a complete bibliography of NASA Ames-supported publications is included. The following topics are studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes.
Do We Really Sample Right In Model-Based Diagnosis?
Statistical samples, in order to be representative, have to be drawn from a population in a random and unbiased way. Nevertheless, it is common practice in the field of model-based diagnosis to make estimations from (biased) best-first samples. One example is the computation of a few most probable fault explanations for a defective system and the use of these to assess which aspect of the system, if measured, would bring the highest information gain. In this work, we scrutinize whether these statistically unfounded conventions, which both diagnosis researchers and practitioners have adhered to for decades, are indeed reasonable. To this end, we empirically analyze various sampling methods that generate fault explanations. We study the representativeness of the produced samples in terms of their estimations about fault explanations and how well they guide diagnostic decisions, and we investigate the impact of sample size, the optimal trade-off between sampling efficiency and effectiveness, and how approximate sampling techniques compare to exact ones.
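The practice this abstract questions, using a sample of fault explanations to pick the most informative measurement, can be sketched as an expected-entropy computation. This is an illustrative sketch only (function names and the simple outcome interface are assumptions): each explanation is weighted by its probability, and each candidate probe partitions the sample by the measurement outcome it would produce.

```python
import math

def entropy(weights):
    """Shannon entropy (bits) of a distribution given as unnormalized weights."""
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w)

def expected_entropy(sample, probe, outcome):
    """sample: list of (explanation, probability) pairs, e.g. drawn
    best-first or at random; outcome(probe, explanation) -> the
    measurement value predicted under that explanation."""
    buckets = {}
    for expl, p in sample:                 # partition the sample by outcome
        buckets.setdefault(outcome(probe, expl), []).append(p)
    total = sum(p for _, p in sample)
    return sum(sum(ws) / total * entropy(ws) for ws in buckets.values())

def best_probe(sample, probes, outcome):
    # Lowest expected posterior entropy == highest expected information gain.
    return min(probes, key=lambda pr: expected_entropy(sample, pr, outcome))
```

The paper's point is that the quality of `best_probe`'s answer depends on how representative `sample` is; a biased best-first sample may skew the entropy estimates and hence the probe choice.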
Real-time value-driven diagnosis
Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.