ON THE RATIONAL SCOPE OF PROBABILISTIC RULE-BASED INFERENCE SYSTEMS
Belief updating schemes in artificial intelligence may be viewed as three-dimensional
languages, consisting of a syntax (e.g. probabilities or certainty
factors), a calculus (e.g. Bayesian or CF combination rules), and a semantics
(i.e. cognitive interpretations of competing formalisms). This paper studies
the rational scope of those languages on the syntax and calculus grounds. In
particular, the paper presents an endomorphism theorem which highlights
the limitations imposed by the conditional independence assumptions
implicit in the CF calculus. Implications of the theorem for the relationship
between the CF and the Bayesian languages and the Dempster-Shafer theory
of evidence are presented. The paper concludes with a discussion of some
implications for rule-based knowledge engineering in uncertain domains.
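For concreteness, the following Python sketch shows the standard MYCIN-style parallel combination rule for certainty factors, the kind of CF calculus discussed above; the rule variant and the numbers are illustrative assumptions, not taken from the paper.

```python
def combine_cf(x: float, y: float) -> float:
    """Combine two certainty factors in [-1, 1] bearing on the same
    hypothesis, using the MYCIN-style parallel combination rule."""
    if x >= 0 and y >= 0:
        return x + y - x * y
    if x <= 0 and y <= 0:
        return x + y + x * y
    return (x + y) / (1 - min(abs(x), abs(y)))

# Two confirming rules reinforce each other; conflicting evidence partially cancels.
print(combine_cf(0.6, 0.5))   # ~0.8
print(combine_cf(0.6, -0.5))  # ~0.2
```

The rule pools each contribution as if it came from a conditionally independent source, which is the assumption whose limitations the theorem above addresses.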
RATIO-SCALE ELICITATION OF DEGREES OF SUPPORT
During the last decade, the computational paradigms known as influence diagrams
and belief networks have come to dominate the diagnostic expert systems field.
Using elaborate collections of nodes and arcs, these representations describe how propositions
of interest interact with each other through a variety of causal and predictive links.
The links are parameterized with inexact degrees of support, typically expressed as subjective
conditional probabilities or likelihood ratios. To date, most of the research in this
area has focused on developing efficient belief-revision calculi to support decision making
under uncertainty. Taking a different perspective, this paper focuses on the inputs of these
calculi, i.e. on the human-supplied degrees of support which provide the currency of the
belief revision process. Traditional methods for eliciting subjective probability functions
are of little use in rule-based settings, where propositions of interest represent causally related
and mostly discrete random variables. We describe ratio-scale and graphical methods
for (i) eliciting degrees of support from human experts in a credible manner, and (ii) transforming
them into the conditional probabilities and likelihood-ratios required by standard
belief revision algorithms. As a secondary contribution, the paper offers a new graphical
justification for eigenvector techniques for smoothing subjective answers to pair-wise
elicitation questions.
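As a rough illustration of the eigenvector smoothing mentioned above, in the spirit of principal-eigenvector methods for reciprocal pairwise-comparison matrices, here is a minimal numpy sketch; the 3x3 matrix of ratio judgments is a hypothetical example, not data from the paper.

```python
import numpy as np

def smooth_pairwise_ratios(R: np.ndarray) -> np.ndarray:
    """Given a reciprocal matrix of pairwise ratio judgments (R[i, j] ~ w_i / w_j),
    return the normalized principal eigenvector as a smoothed weight estimate."""
    eigvals, eigvecs = np.linalg.eig(R)
    w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    return w / w.sum()

# Hypothetical judgments: A is twice as likely as B and four times as likely as C.
R = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(smooth_pairwise_ratios(R))  # ~[0.571, 0.286, 0.143]
```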
COMPARING THE VALIDITY OF ALTERNATIVE BELIEF LANGUAGES: AN EXPERIMENTAL APPROACH
The problem of modeling uncertainty and inexact reasoning in
rule-based expert systems is challenging on normative as well as on
cognitive grounds. First, the modular structure of the rule-based
architecture does not lend itself to standard Bayesian
inference techniques. Second, there is no consensus on how to
model human (expert) judgement under uncertainty. These factors
have led to a proliferation of quasi-probabilistic belief calculi
which are widely-used in practice. This paper investigates the
descriptive and external validity of three well-known "belief
languages:" the Bayesian, ad-hoc Bayesian, and the certainty
factors languages. These models are implemented in many
commercial expert system shells, and their validity is clearly an
important issue for users and designers of expert systems. The
methodology consists of a controlled, within-subject experiment
designed to measure the relative performance of alternative
belief languages. The experiment pits the judgement of human
experts against the recommendations generated by their simulated
expert systems, each using a different belief language. Special
emphasis is given to the general issues of validating belief
languages and expert systems at large.
RATIO-SCALE ELICITATION OF DEGREES OF BELIEF
Most research on rule-based inference under uncertainty has
focused on the normative validity and efficiency of various
belief-update algorithms. In this paper we shift the attention
to the inputs of these algorithms, namely, to the degrees of
belief elicited from domain experts. Classical methods for
eliciting continuous probability functions are of little use in a
rule-based model, where propositions of interest are taken to be
causally related and, typically, discrete random variables. We
take the position that the numerical encoding of degrees of
belief in such propositions is somewhat analogous to the
measurement of physical stimuli like brightness, weight, and
distance. With that in mind, we base our elicitation techniques
on statements regarding the relative likelihoods of various clues
and hypotheses. We propose a formal procedure designed to (a)
elicit such inputs in a credible manner, and (b) transform them
into the conditional probabilities and likelihood-ratios required
by Bayesian inference systems.
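To make the intended transformation concrete, here is a small sketch, with assumed numbers, of turning a relative-likelihood judgment about a clue into a likelihood ratio and an odds-form Bayesian update; the paper's actual elicitation procedure is more elaborate.

```python
def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Likelihood ratio of a clue E for hypothesis H: P(E|H) / P(E|~H)."""
    return p_e_given_h / p_e_given_not_h

def update_odds(prior_odds: float, lr: float) -> float:
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * lr

# Hypothetical judgment: the clue is three times as likely under H as under ~H.
lr = likelihood_ratio(0.6, 0.2)            # ~3.0
posterior_odds = update_odds(0.25, lr)     # prior odds 1:4 -> posterior odds ~3:4
print(round(posterior_odds / (1 + posterior_odds), 3))  # posterior probability ~0.429
```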
AN INTUITIVE INTERPRETATION OF THE THEORY OF EVIDENCE IN THE CONTEXT OF BIBLIOGRAPHICAL INDEXING
Models of bibliographical indexing concern the construction of effective keyword
taxonomies and the representation of relevance between documents and
keywords. The theory of evidence concerns the elicitation and manipulation of
degrees of belief rendered by multiple sources of evidence to a common set of
propositions. The paper presents a formal framework in which adaptive taxonomies
and probabilistic indexing are induced dynamically by the relevance
opinions of the library's patrons. Different measures of relevance and mechanisms
for combining them are presented and shown to be isomorphic to the
belief functions and combination rules of the theory of evidence. The paper
thus has two objectives: (i) to give a formal treatment of slippery concepts like probabilistic
indexing and average relevance, and (ii) to provide an intuitive justification
for the Dempster-Shafer theory of evidence, using bibliographical indexing as a
canonical example.
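For readers unfamiliar with the machinery invoked here, the following sketch implements Dempster's rule of combination for two mass functions over a small, hypothetical frame of keywords; the relevance measures actually studied in the paper are defined over bibliographic taxonomies.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (frozenset focal elements -> mass) with
    Dempster's rule, renormalizing after discarding conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical patrons' relevance opinions over the frame {AI, DB, IR}.
m1 = {frozenset({"AI"}): 0.7, frozenset({"AI", "IR"}): 0.3}
m2 = {frozenset({"IR"}): 0.4, frozenset({"AI", "IR"}): 0.6}
print(dempster_combine(m1, m2))  # {AI}: ~0.58, {IR}: ~0.17, {AI, IR}: 0.25
```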
PROLOG META-INTERPRETERS FOR RULE-BASED INFERENCE UNDER UNCERTAINTY
Uncertain facts and inexact rules can be represented and
processed in standard Prolog through meta-interpretation. This
requires the specification of appropriate parsers and belief
calculi. We present a meta-interpreter that takes a rule-based
belief calculus as an external variable. The certainty-factors
calculus and a heuristic Bayesian belief-update model are then
implemented as stand-alone Prolog predicates. These, in turn,
are bound to the meta-interpreter environment through second-order
programming. The resulting system is a powerful
experimental tool which enables inquiry into the impact of
various designs of belief calculi on the external validity of
expert systems. The paper also demonstrates the (well-known)
role of Prolog meta-interpreters in building expert system
shells.
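The paper's implementation is in Prolog; as a language-shifted sketch of the same idea, a rule interpreter that receives its belief calculus as an external, higher-order parameter, consider the following Python fragment. The rule format and the toy CF-style calculus are illustrative assumptions.

```python
def infer(goal, rules, facts, combine, propagate):
    """Back-chain over rules (premises, conclusion, strength) to establish goal,
    delegating all belief arithmetic to the externally supplied calculus."""
    if goal in facts:
        return facts[goal]
    belief = 0.0
    for premises, conclusion, strength in rules:
        if conclusion == goal:
            contribution = propagate(
                [infer(p, rules, facts, combine, propagate) for p in premises],
                strength)
            belief = combine(belief, contribution)
    return belief

# A CF-style calculus bound in as the "external variable".
cf_propagate = lambda beliefs, strength: min(beliefs) * strength
cf_combine = lambda x, y: x + y - x * y if x >= 0 and y >= 0 else x + y

rules = [(("fever", "rash"), "measles", 0.8)]
facts = {"fever": 0.9, "rash": 0.7}
print(infer("measles", rules, facts, cf_combine, cf_propagate))  # ~0.56
```

Swapping in a different pair of functions changes the belief calculus without touching the interpreter, which is the point of treating the calculus as an external variable.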
MULTILAYER FEEDFORWARD NETWORKS WITH NON-POLYNOMIAL ACTIVATION FUNCTIONS CAN APPROXIMATE ANY FUNCTION
Several researchers have characterized the activation functions under which multilayer feedforward
networks can act as universal approximators. We show that all the characterizations
reported thus far in the literature are special cases of the following general result:
a standard multilayer feedforward network can approximate any continuous function
to any degree of accuracy if and only if the network's activation functions are not polynomial.
We also emphasize the important role of the threshold, asserting that without it the
theorem does not hold.
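As an informal numerical illustration of what the theorem permits (not the paper's proof technique), the sketch below fits a one-hidden-layer network with a non-polynomial activation (tanh) and random thresholds to a continuous target, solving only for the output weights; all specific choices here are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
target = np.sin(2 * x).ravel()            # a continuous function to approximate

# Hidden layer: random weights plus thresholds (biases) and a non-polynomial
# activation; only the linear output layer is fitted, by least squares.
W = rng.normal(size=(1, 50))
b = rng.normal(size=50)
H = np.tanh(x @ W + b)
out_weights, *_ = np.linalg.lstsq(H, target, rcond=None)

print(np.max(np.abs(H @ out_weights - target)))   # typically a small error on the grid
```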
A DEMPSTER-SHAFER MODEL OF RELEVANCE
We present a model for representing relevance and classification decisions of
multiple catalogers in the context of a hierarchical bibliographical database.
The model is based on the Dempster-Shafer theory of evidence. Concepts
like ambiguous relevance, inexact classification, and pooled classification are
discussed using the nomenclature of belief functions and Dempster's rule.
The model thus gives a normative framework in which one can describe and
address many problematic phenomena which characterize the way people
classify and retrieve documents.
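To illustrate the nomenclature used above (mass, belief, plausibility), here is a small sketch with a hypothetical cataloger's mass function over a toy frame of subject categories; the paper's hierarchical model is considerably richer.

```python
def belief(mass: dict, query: frozenset) -> float:
    """Bel(A): total mass committed to subsets of A."""
    return sum(v for s, v in mass.items() if s <= query)

def plausibility(mass: dict, query: frozenset) -> float:
    """Pl(A): total mass on focal elements that intersect A."""
    return sum(v for s, v in mass.items() if s & query)

# Hypothetical cataloger: surely about computing, probably about AI specifically.
frame = frozenset({"AI", "DB", "IR"})
mass = {frozenset({"AI"}): 0.6, frame: 0.4}   # 0.4 left uncommitted (ambiguity)

ai = frozenset({"AI"})
print(belief(mass, ai), plausibility(mass, ai))  # 0.6 1.0
```

The gap between belief and plausibility is one way to read the "ambiguous relevance" the abstract refers to.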