
    ARTIFICIAL INTELLIGENCE DIALECTS OF THE BAYESIAN BELIEF REVISION LANGUAGE

    Rule-based expert systems must deal with uncertain data, subjective expert opinions, and inaccurate decision rules. Computer scientists and psychologists have proposed and implemented a number of belief languages that are widely used in applied systems, and their normative validity is an important question on both practical and theoretical grounds. Several well-known belief languages are reviewed, and both previous work and new insights into their Bayesian interpretations are presented. In particular, the authors focus on three alternative belief-update models: the certainty-factors calculus, Dempster-Shafer simple support functions, and the descriptive contrast/inertia model. Important "dialects" of these languages are shown to be isomorphic to each other and to a special case of Bayesian inference. Parts of this analysis were carried out by other authors; those results are extended and consolidated here using an analytic technique designed to study the kinship of belief languages in general.
    Information Systems Working Papers Series
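
    As a concrete instance of the kind of isomorphism this abstract refers to, one widely cited mapping (due to Heckerman) identifies the MYCIN certainty factor CF(H, E) with a monotone transform of the Bayesian likelihood ratio. This is a sketch of one such correspondence, not necessarily the exact dialect the authors analyze:

    \[
    CF(H, E) =
    \begin{cases}
    \dfrac{\lambda - 1}{\lambda} & \text{if } \lambda \ge 1,\\
    \lambda - 1 & \text{if } \lambda < 1,
    \end{cases}
    \qquad
    \lambda = \frac{P(E \mid H)}{P(E \mid \neg H)}.
    \]

    Under this mapping, the parallel-combination rule for certainty factors corresponds to multiplying likelihood ratios, i.e., Bayesian updating with conditionally independent pieces of evidence.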

    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for representing and combining evidential information, they make the decision process more complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the high-dimensional data into smaller, more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
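
    The IV probabilities here generalize Dempster-Shafer belief functions. As an illustration of the basic machinery (not the paper's own axiomatization), the sketch below derives belief/plausibility intervals from a mass function over subsets of a frame of discernment and combines two bodies of evidence with Dempster's rule; the sensor names and mass values are invented for the example.

        def belief(mass, A):
            """Bel(A) = sum of mass over all non-empty subsets contained in A."""
            return sum(m for B, m in mass.items() if B and B <= A)

        def plausibility(mass, A):
            """Pl(A) = sum of mass over all subsets intersecting A."""
            return sum(m for B, m in mass.items() if B & A)

        def dempster_combine(m1, m2):
            """Dempster's rule: intersect focal elements, renormalize conflict away."""
            combined = {}
            conflict = 0.0
            for B, mb in m1.items():
                for C, mc in m2.items():
                    inter = B & C
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + mb * mc
                    else:
                        conflict += mb * mc
            if conflict >= 1.0:
                raise ValueError("total conflict: evidence cannot be combined")
            return {A: v / (1.0 - conflict) for A, v in combined.items()}

        # Two sensors giving evidence over ground-cover classes {forest, water, urban}:
        frame = frozenset({"forest", "water", "urban"})
        m_mss = {frozenset({"forest"}): 0.6, frame: 0.4}           # MSS: simple support
        m_sar = {frozenset({"forest", "water"}): 0.7, frame: 0.3}  # SAR: coarser evidence

        m = dempster_combine(m_mss, m_sar)
        A = frozenset({"forest"})
        print(belief(m, A), plausibility(m, A))  # IV probability [Bel, Pl] for "forest"

    The printed pair [0.6, 1.0] is the interval-valued probability for "forest": the evidence commits 0.6 of belief to it, while nothing rules it out.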

    Generalized Probabilistic Reasoning and Empirical Studies on Computational Efficiency and Scalability

    Expert systems can be very useful tools for diagnostic purposes; however, current methods of storing and reasoning with knowledge have significant limitations. One set of limitations concerns how to store and manipulate uncertain knowledge, since much of the knowledge in question carries some degree of uncertainty. These limitations include incomplete information, the inability to model cyclic information, and limits on the size and complexity of the problems that can be solved. If expert systems are ever to tackle significant real-world problems, these deficiencies must be corrected. This paper describes a new method of reasoning with uncertain knowledge that improves computational efficiency and scalability over current methods. The cornerstone of the method is incorporating and exploiting information about the structure of the knowledge representation to reduce problem size and complexity. Additionally, a new knowledge representation is discussed that further increases the capability of expert systems to model a wider variety of real-world problems. Finally, benchmarking studies of the new algorithm against the old have led to insights into the graph structure of very large knowledge bases.
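
    The abstract does not spell out the algorithm, but one generic illustration of exploiting representational structure is decomposing a knowledge base's dependency graph into connected components so that each small piece can be reasoned over independently. The sketch below is an assumption-laden stand-in for that idea, not the paper's method; the toy rule base is invented.

        def connected_components(nodes, edges):
            """Partition an undirected graph into connected components (iterative DFS)."""
            adj = {n: set() for n in nodes}
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            seen, parts = set(), []
            for start in nodes:
                if start in seen:
                    continue
                stack, comp = [start], set()
                while stack:
                    n = stack.pop()
                    if n in comp:
                        continue
                    comp.add(n)
                    stack.extend(adj[n] - comp)
                seen |= comp
                parts.append(comp)
            return parts

        # A toy rule base whose dependency graph splits into two independent pieces:
        nodes = ["fever", "flu", "cough", "oil_leak", "engine_noise"]
        edges = [("fever", "flu"), ("cough", "flu"), ("oil_leak", "engine_noise")]
        for comp in connected_components(nodes, edges):
            print(sorted(comp))  # each component can be reasoned over separately

    Inference cost in probabilistic reasoning typically grows much faster than linearly with problem size, so splitting one large problem into several independent small ones is one way such structural information can pay off.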

    Method of Classification for Multisource Data in Remote Sensing Based on Interval-Valued Probabilities

    This work was supported by NASA Grant No. NAGW-925, "Earth Observation Research - Using Multistage EOS-like Data" (Principal Investigators: David A. Landgrebe and Chris Johannsen). The Anderson River SAR/MSS data set was acquired, preprocessed, and loaned to us by the Canada Centre for Remote Sensing, Department of Energy, Mines, and Resources, Government of Canada.
    The importance of utilizing multisource data in ground-cover classification lies in the fact that improvements in classification accuracy can be achieved at the expense of processing the additional independent features provided by separate sensors. However, it should be recognized that information and knowledge from most available data sources in the real world are neither certain nor complete. We refer to such a body of uncertain, incomplete, and sometimes inconsistent information as "evidential information." The objective of this research is to develop a mathematical framework within which various applications can be made with multisource data in remote sensing and geographic information systems. The methodology described in this report has evolved from "evidential reasoning," where each data source is considered as providing a body of evidence with a certain degree of belief. The degrees of belief based on the body of evidence are represented by "interval-valued (IV) probabilities" rather than by conventional point-valued probabilities, so that uncertainty can be embedded in the measures. There are three fundamental problems in multisource data analysis based on IV probabilities: (1) how to represent bodies of evidence by IV probabilities, (2) how to combine IV probabilities to give an overall assessment of the combined body of evidence, and (3) how to make a decision when the statistical evidence is given by IV probabilities.
    This report first introduces an axiomatic approach to IV probabilities, where an IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On the basis of this approach, the report focuses on the representation of statistical evidence by IV probabilities and the combination of multiple bodies of evidence. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The report therefore also develops decision rules over IV probabilities from the viewpoint of statistical pattern recognition.
    The proposed method, the so-called "evidential reasoning" method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the high-dimensional data into smaller, more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
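
    The abstract does not give the exact grouping procedure, but the divide step can be illustrated by clustering bands on the distance 1 - |r| (r being the pairwise correlation), so that highly correlated bands form one "source" whose evidence is later combined with the others. The function name and threshold below are assumptions for the sketch, not taken from the report.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        def group_bands_by_correlation(X, threshold=0.5):
            """Split a (pixels x bands) matrix into groups of mutually correlated bands."""
            corr = np.corrcoef(X, rowvar=False)   # band-by-band correlation matrix
            dist = 1.0 - np.abs(corr)             # small distance = strong correlation
            np.fill_diagonal(dist, 0.0)
            Z = linkage(squareform(dist, checks=False), method="average")
            labels = fcluster(Z, t=threshold, criterion="distance")
            groups = {}
            for band, lab in enumerate(labels):
                groups.setdefault(lab, []).append(band)
            return list(groups.values())

        # Toy example: six "bands", 0-2 mutually correlated and 3-5 mutually correlated.
        rng = np.random.default_rng(0)
        base1, base2 = rng.normal(size=(2, 1000))
        X = np.column_stack([base1 + 0.1 * rng.normal(size=1000) for _ in range(3)] +
                            [base2 + 0.1 * rng.normal(size=1000) for _ in range(3)])
        print(group_bands_by_correlation(X))  # e.g. [[0, 1, 2], [3, 4, 5]]

    Each returned group can then be classified separately and its output treated as one body of evidence, so the combine step can use, for example, the Dempster-style combination sketched earlier.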

    A blackboard-based system for learning to identify images from feature data

    A blackboard-based system is presented which learns recognition rules for objects from a set of training examples and then identifies and locates those objects in test images. The system is designed to use data from a feature matcher developed at R.S.R.E. Malvern, which finds the best matches for a set of feature patterns in an image. The feature patterns are selected to correspond to typical object parts that occur with relatively consistent spatial relationships and are sufficient to distinguish the objects to be identified from one another. The learning element of the system develops two separate sets of rules, one to identify possible object instances and the other to attach probabilities to them. The search for possible object instances is exhaustive; its scale is not great enough for pruning to be necessary. Separate probabilities are established empirically for all combinations of features which could represent object instances. Since accurate probabilities cannot be obtained from a set of preselected training examples, they are updated by feedback from the recognition process. The incorporation of rule induction and feedback into the blackboard system is achieved by treating the induced rules as data held on a secondary blackboard. The single recognition knowledge source effectively contains empty rules into which this data can be slotted, allowing it to be used to recognise any number of objects; there is no need to develop a separate knowledge source for each object. Additional object-specific background information to aid identification can be added by the user in the form of background checks to be carried out on candidate objects. The system has been tested using synthetic data and successfully identified combinations of geometric shapes (squares, triangles, etc.). Limited tests on photographs of vehicles travelling along a main road were also performed successfully.
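
    The abstract says probabilities are established empirically per feature combination and then refined by feedback from recognition. One common way to realize that is smoothed success counts per combination, sketched below; the class and method names are assumptions, not the paper's design.

        from collections import defaultdict

        class RuleProbabilities:
            """Empirical P(object | feature combination), refined by recognition feedback."""

            def __init__(self):
                # per feature combination: [times confirmed, times proposed]
                self.counts = defaultdict(lambda: [0, 0])

            def probability(self, combo):
                """Laplace-smoothed estimate; 0.5 before any feedback arrives."""
                hits, trials = self.counts[frozenset(combo)]
                return (hits + 1) / (trials + 2)

            def feedback(self, combo, confirmed):
                """Update counts after recognition accepts or rejects a candidate."""
                entry = self.counts[frozenset(combo)]
                entry[1] += 1
                if confirmed:
                    entry[0] += 1

        rules = RuleProbabilities()
        for outcome in (True, True, False, True):
            rules.feedback({"corner_A", "corner_B"}, outcome)
        print(rules.probability({"corner_A", "corner_B"}))  # -> 0.666...

    Storing these counts as data (rather than hard-coding them into rules) mirrors the paper's move of keeping induced rules on a secondary blackboard that a single generic knowledge source can consult.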

    Author index—Volumes 1–89


    Treatment of Imprecision in Expert Systems (Tratamento de imprecisão em sistemas especialistas)

    Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Production Engineering, Florianópolis, 1991. This dissertation presents a survey of the state of the art in the treatment of imprecision in expert systems. It covers human reasoning in problem solving and the main existing techniques for treating imprecision in artificial intelligence: the Bayesian method, certainty factors, the Dempster-Shafer theory of evidence, and fuzzy set theory. For each of the techniques studied, its theoretical foundations and practical examples are presented, together with a discussion of how the techniques compare against the main requirements of an ideal technique for the treatment of imprecision in expert systems.