
COMPARING THE VALIDITY OF ALTERNATIVE BELIEF LANGUAGES: AN EXPERIMENTAL APPROACH

Abstract

The problem of modeling uncertainty and inexact reasoning in rule-based expert systems is challenging on normative as well as on cognitive grounds. First, the modular structure of the rule-based architecture does not lend itself to standard Bayesian inference techniques. Second, there is no consensus on how to model human (expert) judgement under uncertainty. These factors have led to a proliferation of quasi-probabilistic belief calculi that are widely used in practice. This paper investigates the descriptive and external validity of three well-known "belief languages": the Bayesian, ad-hoc Bayesian, and certainty-factor languages. These models are implemented in many commercial expert system shells, and their validity is clearly an important issue for users and designers of expert systems. The methodology consists of a controlled, within-subject experiment designed to measure the relative performance of alternative belief languages. The experiment pits the judgement of human experts against the recommendations generated by their simulated expert systems, each using a different belief language. Special emphasis is given to the general issues of validating belief languages and expert systems at large.
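For orientation, the certainty-factor language named above follows the MYCIN tradition. One standard formulation from that literature (given here as background; the paper's own operational definitions govern the experiment) combines the certainty factors $CF_1$ and $CF_2$ of two rules bearing on the same hypothesis as

$$
CF_{\text{combined}} =
\begin{cases}
CF_1 + CF_2\,(1 - CF_1) & \text{if } CF_1, CF_2 > 0,\\[4pt]
CF_1 + CF_2\,(1 + CF_1) & \text{if } CF_1, CF_2 < 0,\\[4pt]
\dfrac{CF_1 + CF_2}{1 - \min(|CF_1|, |CF_2|)} & \text{otherwise.}
\end{cases}
$$

Such combining functions are modular, applying rule by rule, which is precisely the property that separates these calculi from standard Bayesian inference over a full joint distribution.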
