7 research outputs found
An Axiomatic Framework for Propagating Uncertainty in Directed Acyclic Networks
This paper presents an axiomatic system for propagating uncertainty in Pearl's causal
networks, (Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference,
1988 [7]). The main objective is to study all aspects of knowledge representation
and reasoning in causal networks from an abstract point of view, independent of the
particular theory being used to represent information (probabilities, belief functions or
upper and lower probabilities). This is achieved by expressing concepts and algorithms
in terms of valuations, an abstract mathematical concept representing a piece of
information, introduced by Shenoy and Shafer [1, 2]. Three new axioms are added to
Shenoy and Shafer's axiomatic framework [1, 2], for the propagation of general
valuations in hypertrees. These axioms allow us to address from an abstract point of
view concepts such as conditional information (a generalization of conditional probabilities)
and give rules relating the decomposition of global information with the concept of
independence (a generalization of probability rules allowing the decomposition of a
bidimensional distribution with independent marginals in the product of its two
marginals). Finally, Pearl's propagation algorithms are also developed and expressed in
terms of operations with valuations.
Commission of the European Communities under ESPRIT BRA 3085: DRUMS
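To make the abstraction concrete, here is a minimal sketch (ours, not the paper's) of the two basic valuation operations, combination and marginalization, instantiated for probability potentials, the prototypical example of Shenoy-Shafer valuations. The function names and dictionary representation are illustrative assumptions.

```python
from itertools import product

# A potential maps tuples of variable values to non-negative numbers.
def combine(pot1, vars1, pot2, vars2, domains):
    """Combination: pointwise product on the union of the variable sets."""
    all_vars = list(dict.fromkeys(vars1 + vars2))
    out = {}
    for combo in product(*(domains[v] for v in all_vars)):
        assign = dict(zip(all_vars, combo))
        k1 = tuple(assign[v] for v in vars1)
        k2 = tuple(assign[v] for v in vars2)
        out[combo] = pot1[k1] * pot2[k2]
    return all_vars, out

def marginalize(pot, vars_, keep):
    """Marginalization: sum out every variable not in `keep`."""
    idx = [vars_.index(v) for v in keep]
    out = {}
    for combo, val in pot.items():
        key = tuple(combo[i] for i in idx)
        out[key] = out.get(key, 0.0) + val
    return keep, out

# p(A) combined with p(B|A), then marginalized to p(B), for binary A, B
domains = {"A": [0, 1], "B": [0, 1]}
pA = {(0,): 0.4, (1,): 0.6}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
vars_, joint = combine(pA, ["A"], pBgA, ["A", "B"], domains)
_, pB = marginalize(joint, vars_, ["B"])
```

Replacing products with Dempster's rule, or with interval arithmetic, yields the belief-function and lower/upper-probability instances of the same two operations.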
ARTIFICIAL INTELLIGENCE DIALECTS OF THE BAYESIAN BELIEF REVISION LANGUAGE
Rule-based expert systems must deal with uncertain data,
subjective expert opinions, and inaccurate decision rules. Computer scientists
and psychologists have proposed and implemented a number of belief languages that are widely used in applied systems, and their normative validity is clearly an important question on both practical and theoretical grounds. Several well-known belief languages are reviewed, and both previous work and new insights into their Bayesian interpretations are presented. In
particular, the authors focus on three alternative belief-update models: the
certainty factors calculus, Dempster-Shafer simple support functions, and
the descriptive contrast/inertia model. Important "dialects" of these
languages are shown to be isomorphic to each other and to a special case of
Bayesian inference. Parts of this analysis were carried out by other authors; those results are extended and consolidated here using an analytic technique designed to study the kinship of belief languages in general.
Information Systems Working Papers Series
Modeling qualitative judgements in Bayesian networks
PhD
Although Bayesian Networks (BNs) are increasingly being used to solve real world
problems [47], their use is still constrained by the difficulty of constructing the node
probability tables (NPTs). A key challenge is to construct relevant NPTs using the
minimal amount of expert elicitation, recognising that it is rarely cost-effective to elicit
complete sets of probability values.
This thesis describes an approach to defining NPTs for a large class of commonly
occurring nodes called ranked nodes. This approach is based on the doubly truncated
Normal distribution with a central tendency that is invariably a type of a weighted
function of the parent nodes.
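The following is an illustrative sketch of that idea, not the thesis's exact construction: one NPT column is obtained by discretizing a doubly truncated Normal on [0, 1] whose central tendency is a weighted mean of the parent states. The state midpoints, variance, and function names are our assumptions.

```python
import math

STATES = [0.1, 0.3, 0.5, 0.7, 0.9]   # midpoints of 5 ranked states on [0, 1]

def ranked_node_column(parent_values, weights, variance=0.05):
    """One NPT column: TNormal(weighted mean, variance) discretized over STATES.

    Truncation to [0, 1] is handled implicitly: the density is evaluated only
    at the in-range state midpoints and then renormalized.
    """
    mean = sum(w * v for w, v in zip(weights, parent_values)) / sum(weights)
    dens = [math.exp(-(s - mean) ** 2 / (2 * variance)) for s in STATES]
    total = sum(dens)
    return [d / total for d in dens]

# Two parents both in their highest state pull the child toward "high"
col = ranked_node_column([0.9, 0.9], [1.0, 2.0])
```

The appeal of the scheme is that an expert supplies only the weights and a variance, rather than every cell of a potentially huge table.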
We demonstrate through two examples how to build large probability tables using
the ranked nodes approach. Using this approach we are able to build the large probability
tables needed to capture the complex models that arise in assessing firms' risks in the
safety and finance sectors.
The aim of the first example, with the National Air-Traffic Services (NATS), is to
show that using this approach we can model the impact of the organisational factors
in avoiding mid-air aircraft collisions. The resulting model was validated by NATS and
helped managers to assess how efficiently the company handles risks and thus control
the likelihood of air-traffic incidents. In the second example, we use BN models to capture
the operational risk (OpRisk) in financial institutions. The novelty of this approach is
the use of causal reasoning as a means to reduce the uncertainty surrounding this type of
risk. This model was validated against the Basel framework [160], which is the emerging
international standard regulation governing how financial institutions assess OpRisk.
EPSRC-funded SCORE project (Sensing Changes in Operational Risk Exposure)
Data Fusion for Materials Location Estimation in Construction
Effective automated tracking and locating of the thousands of materials on construction sites improves material distribution and project performance and thus has a significant positive impact on construction productivity. Many locating technologies and data sources have therefore been developed, and the deployment of a cost-effective, scalable, and easy-to-implement materials location sensing system at actual construction sites has very recently become both technically and economically feasible. However, considerable opportunity still exists to improve the accuracy, precision, and robustness of such systems. The quest for fundamental methods that can take advantage of the relative strengths of each individual technology and data source motivated this research, which has led to the development of new data fusion methods for improving materials location estimation.
In this study a data fusion model is used to generate an integrated solution for the automated identification, location estimation, and relocation detection of construction materials. The developed model is a modified functional data fusion model. Particular attention is paid to noisy environments where low-cost RFID tags are attached to all materials, which are sometimes moved repeatedly around the site. A portion of the work focuses on relocation detection, both because it is closely coupled with location estimation and because it can be used to detect the multi-handling of materials, which is a key indicator of inefficiency.
This research has successfully addressed the challenges of fusing data from multiple sources of information in a very noisy and dynamic environment. The results indicate potential for the proposed model to improve location estimation and movement detection as well as to automate the calculation of the incidence of multi-handling.
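As a generic illustration of the kind of location estimation being fused (not the thesis's actual model), a common low-cost baseline for RFID positioning is a weighted centroid of the positions of the readers that detect a tag, with read strength as the weight:

```python
def weighted_centroid(reads):
    """Estimate a tag's position from reads of (reader_x, reader_y, weight).

    Stronger reads (e.g. higher RSSI or read count) pull the estimate
    toward that reader's position.
    """
    total = sum(w for _, _, w in reads)
    x = sum(rx * w for rx, _, w in reads) / total
    y = sum(ry * w for _, ry, w in reads) / total
    return x, y

# A tag read weakly by a reader at (0, 0) and strongly by one at (10, 0)
est = weighted_centroid([(0.0, 0.0, 1.0), (10.0, 0.0, 3.0)])
```

A fusion model improves on such single-source estimates by reconciling them with other data sources, such as GPS-tagged delivery records and site layout constraints.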
Constructing Probability Boxes and Dempster-Shafer Structures
This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
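The correspondence between the two objects is direct: a Dempster-Shafer structure on the reals (interval focal elements with masses summing to 1) induces a probability box, the pair of bounding CDFs given by belief and plausibility. A minimal sketch, with our own names:

```python
# A DS structure: list of ((lo, hi), mass) pairs with masses summing to 1.
def pbox_bounds(ds, x):
    """Bounding CDF values of the DS structure at point x.

    lower = belief of (-inf, x]: mass of intervals entirely at or below x
    upper = plausibility of (-inf, x]: mass of intervals overlapping (-inf, x]
    """
    lower = sum(m for (lo, hi), m in ds if hi <= x)
    upper = sum(m for (lo, hi), m in ds if lo <= x)
    return lower, upper

ds = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.5)]
lo, hi = pbox_bounds(ds, 1.0)
```

Any distribution whose CDF lies between the two bounds at every point is consistent with the evidence encoded by the structure.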