
    Shootout-89: A Comparative Evaluation of Knowledge-based Systems that Forecast Severe Weather

    During the summer of 1989, the Forecast Systems Laboratory of the National Oceanic and Atmospheric Administration sponsored an evaluation of artificial intelligence-based systems that forecast severe convective storms. The evaluation experiment, called Shootout-89, took place in Boulder and focused on storms over the northeastern Colorado foothills and plains (Moninger et al., 1990). Six systems participated in Shootout-89: traditional expert systems, an analogy-based system, and a system developed using methods from the cognitive science/judgment analysis tradition. Each day of the exercise, the systems generated 2- to 9-hour forecasts of the probabilities of occurrence of non-significant weather, significant weather, and severe weather in each of four regions in northeastern Colorado. A verification coordinator working at the Denver Weather Service Forecast Office gathered ground-truth data from a network of observers. Systems were evaluated on several measures of forecast skill and on other criteria such as timeliness, ease of learning, and ease of use. The systems were generally easy to operate; however, they required substantially different levels of meteorological expertise on the part of their users, reflecting the different operational environments for which they had been designed. The systems varied in their statistical behavior, but on this difficult forecast problem they generally showed skill approximately equal to that of persistence forecasts and climatological (historical frequency) forecasts. The two systems that appeared best able to discriminate significant from non-significant weather events were traditional expert systems. Both required the operator to make relatively sophisticated meteorological judgments. We are unable, based on only one summer's worth of data, to determine the extent to which the greater skill of these two systems was due to the content of their knowledge bases or to the subjective judgments of the operator. A follow-on experiment, Shootout-91, is currently being planned; interested potential participants are encouraged to contact the author at the address above.
    Comment: Appears in Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence (UAI 1989).
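    Since each system's skill was compared with persistence and climatological forecasts, a probability skill score is the natural yardstick for such comparisons. The sketch below computes a Brier skill score relative to a climatological reference; it is purely illustrative (the probabilities, outcomes, and function names are invented, and this is not the scoring procedure used in Shootout-89).

        # Brier skill score of probability forecasts relative to a climatological
        # reference. Illustrative only: the numbers below are invented.

        def brier_score(forecast_probs, outcomes):
            """Mean squared difference between forecast probabilities and 0/1 outcomes."""
            return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

        def brier_skill_score(forecast_probs, reference_probs, outcomes):
            """1 is a perfect forecast, 0 matches the reference, negative is worse."""
            return 1.0 - brier_score(forecast_probs, outcomes) / brier_score(reference_probs, outcomes)

        # Hypothetical daily severe-weather probabilities for one region.
        system_probs = [0.2, 0.7, 0.1, 0.5, 0.3]
        climatology = [0.3] * 5         # historical frequency as the reference forecast
        observed = [0, 1, 0, 0, 1]      # ground truth from the observer network

        print(brier_skill_score(system_probs, climatology, observed))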

    Diagnosis of Multiple Faults: A Sensitivity Analysis

    We compare the diagnostic accuracy of three diagnostic inference models: the simple Bayes model; the multimembership Bayes model, which is isomorphic to the parallel combination function in the certainty-factor model; and a model that incorporates the noisy OR-gate interaction. The comparison is done on 20 clinicopathological conference (CPC) cases from the American Journal of Medicine, challenging cases describing actual patients, often with multiple disorders. We find that the distributions produced by the noisy OR model agree most closely with the gold-standard diagnoses, although substantial differences remain between the distributions and the diagnoses. In addition, we find that the multimembership Bayes model tends to significantly overestimate the posterior probabilities of diseases, whereas the simple Bayes model tends to significantly underestimate them. Our results suggest that additional work to refine the noisy OR model for internal medicine will be worthwhile.
    Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
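    For reference, the noisy OR-gate assumes that each present disease can produce a finding independently, so the finding is absent only if a background "leak" cause and every present disease all fail to produce it. A minimal sketch of that combination rule (the link probabilities here are invented, not taken from the CPC evaluation):

        # Noisy OR-gate: the finding is absent only when the leak and every
        # present cause independently fail to produce it. Illustrative numbers.

        def noisy_or(present_links, leak=0.01):
            """P(finding present | set of present diseases with the given link probabilities)."""
            p_absent = 1.0 - leak
            for p in present_links:
                p_absent *= 1.0 - p
            return 1.0 - p_absent

        # Two diseases present, with link probabilities 0.8 and 0.3:
        print(noisy_or([0.8, 0.3]))   # 1 - 0.99 * 0.2 * 0.7 = 0.8614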

    Multiplicative Factorization of Noisy-Max

    The noisy-or and its generalization noisy-max have been utilized to reduce the complexity of knowledge acquisition. In this paper, we present a new representation of noisy-max that allows for efficient inference in general Bayesian networks. Empirical studies show that our method is capable of computing queries in well-known large medical networks, QMR-DT and CPCS, for which no previous exact inference method has been shown to perform well.
    Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999).
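    The property such a representation exploits is that a noisy-max effect is the maximum of independently degraded per-cause effects, so its cumulative distribution is a product over the parents. A minimal sketch of the standard noisy-max combination rule (invented numbers; this is not the factorization construction from the paper):

        # Noisy-max: each cause X_i independently produces a degraded effect Y_i,
        # and the observed effect is Y = max_i Y_i, so P(Y <= y) factors as a
        # product over the causes. Illustrative numbers only.

        def noisy_max_column(cumulatives):
            """Given each present cause's cumulative distribution F_i(y) = P(Y_i <= y)
            over ordered effect states y = 0..m-1, return the distribution P(Y = y)."""
            probs, prev = [], 0.0
            for y in range(len(cumulatives[0])):
                cum = 1.0
                for F in cumulatives:
                    cum *= F[y]
                probs.append(cum - prev)
                prev = cum
            return probs

        # Two causes, effect states {none, mild, severe}:
        F1 = [0.7, 0.9, 1.0]   # P(Y_1 <= y) for cause 1
        F2 = [0.5, 0.8, 1.0]   # P(Y_2 <= y) for cause 2
        print(noisy_max_column([F1, F2]))   # [0.35, 0.37, 0.28]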

    Incremental Probabilistic Inference

    Propositional representation services such as truth maintenance systems offer powerful support for incremental, interleaved problem-model construction and evaluation. Probabilistic inference systems, in contrast, have lagged behind in supporting the incrementality typically demanded by problem solvers. The problem, we argue, is that the basic task of probabilistic inference is typically formulated at too large a grain-size. We show how a system built around a smaller grain-size inference task can have the desired incrementality and serve as the basis for a low-level (propositional) probabilistic representation service.
    Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).

    Extending Term Subsumption Systems for Uncertainty Management

    A major difficulty in developing and maintaining very large knowledge bases originates from the variety of forms in which knowledge is made available to the KB builder. The objective of this research is to bring together two complementary knowledge representation schemes: term subsumption languages, which represent and reason about the defining characteristics of concepts, and approximate reasoning models, which deal with uncertain knowledge and data in expert systems. Previous work in this area has primarily focused on probabilistic inheritance. In this paper, we address two other important issues regarding the integration of term subsumption-based systems and approximate reasoning models. First, we outline a general architecture that specifies the interactions between the deductive reasoner of a term subsumption system and an approximate reasoner. Second, we generalize the semantics of the terminological language so that terminological knowledge can be used to make plausible inferences. The architecture, combined with the generalized semantics, forms the foundation of a synergistic, tight integration of term subsumption systems and approximate reasoning models.
    Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI 1990).

    Incremental Dynamic Construction of Layered Polytree Networks

    Certain classes of problems, including perceptual data understanding, robotics, discovery, and learning, can be represented as incremental, dynamically constructed belief networks. These automatically constructed networks can be dynamically extended and modified as evidence of new individuals becomes available. The main result of this paper is the incremental extension of a singly connected polytree network in such a way that the network retains its singly connected polytree structure after the changes. The algorithm is deterministic, and the cost of a single node addition is guaranteed to be at most proportional to the number of nodes (the size) of the network. Additional speed-up can be achieved by maintaining path information. Despite its incremental and dynamic nature, the algorithm can also be used for probabilistic inference in belief networks in a fashion similar to other exact inference algorithms.
    Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI 1994).
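    As a rough illustration of why such a check can stay cheap (this is not the algorithm from the paper), a network remains a polytree after attaching a new node exactly when the existing nodes it connects to lie in pairwise distinct connected components of the current undirected skeleton; a union-find structure answers that question in near-constant time per edge:

        # Illustrative polytree-preserving insertion check (not the paper's algorithm):
        # attaching a new node is safe iff its neighbours lie in pairwise distinct
        # connected components; otherwise an undirected cycle would be created.

        class UnionFind:
            def __init__(self):
                self.parent = {}

            def find(self, x):
                self.parent.setdefault(x, x)
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path halving
                    x = self.parent[x]
                return x

            def union(self, a, b):
                self.parent[self.find(a)] = self.find(b)

        def try_add_node(uf, node, neighbours):
            """Attach node to existing neighbours; refuse if that breaks single-connectedness."""
            roots = [uf.find(n) for n in neighbours]
            if len(set(roots)) != len(roots):
                return False        # two neighbours are already connected by some path
            for n in neighbours:
                uf.union(node, n)
            return True

        uf = UnionFind()
        print(try_add_node(uf, "C", ["A", "B"]))   # True: A and B were in separate components
        print(try_add_node(uf, "D", ["A", "B"]))   # False: A and B are now connected through C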

    A Graph-Theoretic Analysis of Information Value

    We derive qualitative relationships about the informational relevance of variables in graphical decision models based on a consideration of the topology of the models. Specifically, we identify dominance relations for the expected value of information on chance variables in terms of their position and relationships in influence diagrams. The qualitative relationships can be harnessed to generate nonnumerical procedures for ordering uncertain variables in a decision model by their informational relevance.
    Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
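    The quantity being ordered is the (nonnegative) expected value of information: how much the maximum expected utility rises if a chance variable is observed before the decision is made. In standard notation for a single decision d and chance variable X (symbols generic, not taken from the paper):

        EVI(X) = \sum_x P(x) \max_d \mathbb{E}[U \mid d, x] - \max_d \mathbb{E}[U \mid d] \ge 0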

    Using Potential Influence Diagrams for Probabilistic Inference and Decision Making

    The potential influence diagram is a generalization of the standard "conditional" influence diagram, a directed network representation for probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows efficient inference calculations corresponding exactly to those on undirected graphs. In this paper, we explore the relationship between potential and conditional influence diagrams and provide insight into the properties of the potential influence diagram. In particular, we show how to convert a potential influence diagram into a conditional influence diagram, and how to view the potential influence diagram operations in terms of the conditional influence diagram.
    Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).

    Adaptive Importance Sampling for Estimation in Structured Domains

    Sampling is an important tool for estimating large, complex sums and integrals over high-dimensional spaces. For instance, importance sampling has been used as an alternative to exact methods for inference in belief networks. Ideally, we want a sampling distribution that provides minimum-variance estimators. In this paper, we present methods that improve the sampling distribution by systematically adapting it as information is obtained from the samples. We present a stochastic-gradient-descent method for sequentially updating the sampling distribution based on direct minimization of the variance. We also present other stochastic-gradient-descent methods based on the minimization of typical notions of distance between the current sampling distribution and approximations of the target, optimal distribution. Finally, we validate and compare the different methods empirically by applying them to the problem of action evaluation in influence diagrams.
    Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI 2000).
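    A minimal sketch of the general idea (not the estimators or the influence-diagram application from the paper): estimate a rare-event probability under N(0,1) by drawing from a shifted Gaussian proposal, and adapt the shift by stochastic gradient descent on an unbiased estimate of the gradient of the estimator's second moment, which drives its variance down. All numbers and names are invented.

        import math, random

        # Target: P(X > 3) for X ~ N(0, 1). Proposal: N(theta, 1), adapted online.

        def log_normal_pdf(x, mean):
            return -0.5 * (x - mean) ** 2 - 0.5 * math.log(2 * math.pi)

        def weight(x, theta):
            """Importance weight p(x) / q_theta(x) for target N(0,1), proposal N(theta,1)."""
            return math.exp(log_normal_pdf(x, 0.0) - log_normal_pdf(x, theta))

        theta, lr = 0.0, 0.1
        estimate, n = 0.0, 20000
        for i in range(1, n + 1):
            x = random.gauss(theta, 1.0)
            w = weight(x, theta)
            f = 1.0 if x > 3.0 else 0.0
            estimate += (w * f - estimate) / i   # running mean; each term is unbiased for P(X > 3)
            # d/dtheta E_q[(w f)^2] = -E_q[(w f)^2 * d/dtheta log q(x)], and for a
            # unit-variance Gaussian proposal, d/dtheta log q(x) = x - theta.
            grad = -((w * f) ** 2) * (x - theta)
            theta -= lr * grad                   # descend the variance surrogate (steps not tuned)

        print(estimate)   # should be near 1 - Phi(3), about 0.00135
        print(theta)      # the proposal mean drifts toward the rare-event region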

    Sequential Thresholds: Context Sensitive Default Extensions

    Default logic encounters some conceptual difficulties in representing common-sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context, which continuously evolves during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic that makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms, as well as a way to integrate them that can be beneficial to both.
    Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI 1997).
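    One way to picture a quantitative, context-sensitive default mechanism (purely illustrative; this is not the paper's formal construction) is to adopt a default's conclusion only when its probability given the current context clears a threshold, with each adopted conclusion becoming part of the context for the defaults that follow:

        # Illustrative only: threshold-based default acceptance in an evolving context.
        # The conditional probabilities and threshold below are invented.

        THRESHOLD = 0.8

        # Hypothetical conditional probabilities, keyed by (conclusion, context).
        P = {
            ("flies", frozenset({"bird"})): 0.90,
            ("flies", frozenset({"bird", "penguin"})): 0.01,
        }

        def extend(context, defaults):
            """Apply defaults in sequence; the context evolves as conclusions are adopted."""
            context = set(context)
            for conclusion in defaults:
                if P.get((conclusion, frozenset(context)), 0.0) >= THRESHOLD:
                    context.add(conclusion)
            return context

        print(extend({"bird"}, ["flies"]))              # {'bird', 'flies'}
        print(extend({"bird", "penguin"}, ["flies"]))   # {'bird', 'penguin'}: default blocked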