A two-phase method for extracting explanatory arguments from Bayesian networks
Errors in reasoning about probabilistic evidence can have severe consequences. In the legal domain, a number of recent miscarriages of justice emphasise how severe these consequences can be. These cases, in which forensic evidence was misinterpreted, have ignited a scientific debate on how and when probabilistic reasoning can be incorporated in (legal) argumentation. One promising approach is to use Bayesian networks (BNs), which are well-known scientific models for probabilistic reasoning. For non-statistical experts, however, Bayesian networks may be hard to interpret; since their inner workings are complicated, they may appear to be black-box models. Argumentation models, on the contrary, can be used to show how certain results are derived in a way that naturally corresponds to everyday reasoning. In this paper we propose to explain the inner workings of a BN in terms of arguments. We formalise a two-phase method for extracting probabilistically supported arguments from a Bayesian network: first, from the Bayesian network we construct a support graph; second, given a set of observations, we build arguments from that support graph. Such arguments can facilitate the correct interpretation and explanation of the relation between hypotheses and evidence that is modelled in the Bayesian network.
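The two-phase pipeline described above can be sketched as follows. This is a minimal illustration on a toy alarm-style network: the unfolding rule is deliberately simplified relative to the paper's actual support-graph construction (which uses stricter admissibility conditions), and every function and variable name here is invented for the sketch.

```python
def build_support_graph(parents, root):
    """Phase 1: unfold the BN around the hypothesis variable.

    Any graph neighbour (parent or child) of a node may support it, but
    no variable already on the path back to the root may be reused,
    which turns the BN into a finite support tree.  (Simplified: the
    paper's method uses more careful forbidden sets.)
    """
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].append(v)
    neigh = {v: list(parents[v]) + children[v] for v in parents}

    def unfold(node, banned):
        return {n: unfold(n, banned | {n}) for n in neigh[node] if n not in banned}

    return {root: unfold(root, {root})}

def extract_arguments(tree, observed, path=()):
    """Phase 2: each branch ending in an observed variable yields an
    argument running from the evidence to the hypothesis."""
    args = []
    for node, sub in tree.items():
        p = path + (node,)
        if node in observed:
            args.append(tuple(reversed(p)))
        args.extend(extract_arguments(sub, observed, p))
    return args

# Toy BN: Burglary and Earthquake cause Alarm; Alarm causes JohnCalls.
parents = {"Burglary": [], "Earthquake": [],
           "Alarm": ["Burglary", "Earthquake"], "JohnCalls": ["Alarm"]}
sg = build_support_graph(parents, "Burglary")
print(extract_arguments(sg, {"JohnCalls"}))
# [('JohnCalls', 'Alarm', 'Burglary')]
```

Observing JohnCalls yields the single argument "JohnCalls, therefore Alarm, therefore Burglary", i.e. an evidential chain through the support graph.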
Qualitative approaches to quantifying probabilistic networks
A probabilistic network consists of a graphical representation (a directed graph) of the important variables in a
domain of application, and the relationships between them, together with a joint probability distribution over the
variables. A probabilistic network allows for computing any probability of interest. The joint probability
distribution factorises into conditional probability distributions such that for each variable represented in the graph
a distribution is specified conditional on all possible combinations of the variable's parents in the graph. Even for
a moderate sized probabilistic network, thousands of probabilities need to be specified. Often the only source of
probabilistic information is the knowledge and experience of experts. People, even experts, are known not to be
very good at assessing probabilities, and often dislike expressing their estimates as numbers. To overcome this
problem, we propose two qualitative approaches to quantifying probabilistic networks. The first approach is
abstracting away from probabilities by using qualitative probabilistic networks. The second approach is to allow
the use of verbal expressions of probability during elicitation. In qualitative probabilistic networks, the arcs of the
directed graph are augmented with signs: `+', `-', `0', and `?', indicating the direction of shift in probability for the
variable at one end of the arc, given a shift in values of the variable at the other end of the arc. For example, a
positive influence of variable A on variable B indicates that higher values for B become more likely given higher
values for A. Qualitative probabilistic networks allow for reasoning with probabilistic networks in a qualitative
way, thereby enabling us to check the robustness of the network's structure before probabilities are assessed. In
addition, the qualitative signs provide constraints on the probabilities to be elicited. Qualitative networks are,
however, not very expressive and therefore easily result in uninformative answers (`?'s) during reasoning. We will
suggest several refinements of the formalism of qualitative probabilistic networks that enhance their
expressiveness and applicability. To make probability elicitation easier on experts, we allow them to state verbal
probability expressions, such as "probable" and "impossible", as well as numbers. To this end, we have
augmented a vertical probability elicitation scale with verbal expressions. These expressions, and their position
on the scale, are the result of several studies we conducted. The scale, together with other ingredients such as
text-fragments describing the probability to be assessed and grouping of the probabilities that should sum to 1, is
used in a newly designed probability elicitation method. The method provides for the elicitation of initial rough
assessments. Assessments to which the outcome of the network is very sensitive can be refined using additional
experts and/or the more conventional elicitation methods. Our method has been used with two experts in
oncology in the construction of a probabilistic network for oesophageal carcinoma and allows us to elicit a large
number of probabilities in little time. The experts felt comfortable with the method and evaluations of the resulting
network have shown that it performs quite well with the rough assessments.
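The qualitative reasoning that the signs support can be made concrete. The sketch below assumes the standard QPN sign-combination tables (sign product for influences along a chain, sign addition for parallel chains); it is a generic illustration of the formalism, not code from the work itself.

```python
# Standard QPN sign algebra: influences combine along a chain with the
# sign product, and across parallel chains with the sign sum.
PROD = {('+', '+'): '+', ('+', '-'): '-', ('-', '+'): '-', ('-', '-'): '+'}

def sign_product(a, b):
    """Net sign of two influences in series: A -> B -> C."""
    if '0' in (a, b):
        return '0'          # a zero influence blocks the chain
    if '?' in (a, b):
        return '?'          # an unknown influence stays unknown
    return PROD[(a, b)]

def sign_sum(a, b):
    """Net sign of two parallel influences on the same variable."""
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a
    return '?'              # '+' and '-' (or any '?') conflict: ambiguous

# A -> B -> C with signs '+' and '-': the chain's net influence is '-'.
along = sign_product('+', '-')
# A direct arc A -> C with sign '+' runs in parallel: the combined
# influence of A on C is ambiguous, the uninformative '?' the abstract
# mentions.
combined = sign_sum(along, '+')
print(along, combined)  # - ?
```

The ease with which `?` arises from one `+`/`-` conflict is exactly the lack of expressiveness that motivates the refinements proposed above.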
Positive dependence in qualitative probabilistic networks
Qualitative probabilistic networks (QPNs) combine the conditional
independence assumptions of Bayesian networks with the qualitative properties
of positive and negative dependence. They formalise various intuitive
properties of positive dependence to allow inferences over a large network of
variables. However, we will demonstrate in this paper that, due to an incorrect
symmetry property, many inferences obtained in non-binary QPNs are not
mathematically true. We will provide examples of such incorrect inferences and
briefly discuss possible resolutions.
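The kind of asymmetry at issue can be reproduced numerically. The sketch below is a constructed toy example, not one taken from the paper: it defines positive qualitative influence in the usual way via first-order stochastic dominance, and exhibits a joint distribution over a three-valued A and B in which A positively influences B while B does not positively influence A, so the symmetry property fails.

```python
def fsd_nondecreasing(cond):
    """cond[x] = distribution of Y given X = x.

    Positive qualitative influence of X on Y requires P(Y >= y | X = x)
    to be nondecreasing in x for every threshold y (first-order
    stochastic dominance between the conditionals)."""
    n = len(cond[0])
    for y in range(1, n):
        surv = [sum(row[y:]) for row in cond]
        if any(surv[i] > surv[i + 1] + 1e-12 for i in range(len(surv) - 1)):
            return False
    return True

# A uniform on {0, 1, 2}; P(B | A) chosen so the influence is one-way.
p_a = [1 / 3, 1 / 3, 1 / 3]
b_given_a = [
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.4, 0.0, 0.6],
]

# Joint distribution and the reversed conditional P(A | B) via Bayes' rule.
joint = [[p_a[a] * b_given_a[a][b] for b in range(3)] for a in range(3)]
p_b = [sum(joint[a][b] for a in range(3)) for b in range(3)]
a_given_b = [[joint[a][b] / p_b[b] for a in range(3)] for b in range(3)]

forward = fsd_nondecreasing(b_given_a)    # A -> B: positive influence
backward = fsd_nondecreasing(a_given_b)   # B -> A: symmetry would demand this
print(forward, backward)  # True False
```

Here P(A = 0 | B = 1) = 1, so observing the middle value of B makes low A certain even though high B values favour high A, breaking the dominance ordering in the reverse direction.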
Modeling time-varying uncertain situations using Dynamic Influence Nets
This paper enhances the Timed Influence Nets (TIN) based formalism to model uncertainty in dynamic situations. The enhancements enable a system modeler to specify persistence and time-varying influences in a dynamic situation that the existing TIN fails to capture. The new class of models is named Dynamic Influence Nets (DIN). Both TIN and DIN provide an alternative easy-to-read and compact representation to several time-based probabilistic reasoning paradigms, including Dynamic Bayesian Networks. The Influence Net (IN) based approach has its origin in Discrete Event Systems modeling. The time delays on arcs and nodes represent the communication and processing delays, respectively, while the changes in the probability of an event at different time instants capture the uncertainty associated with the occurrence of the event over a period of time.
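The time-shift bookkeeping that arc delays introduce can be sketched as follows. This is only an illustration of the delay mechanism: the actual TIN/DIN influence combination (CAST-style logic) is not reproduced here, and a plain damped copy of the parent's probability stands in for it, with all parameters invented.

```python
def propagate(parent_series, delay, strength, baseline, horizon):
    """Toy delayed influence: the child's probability at time t is read
    off from the parent's probability at time t - delay, pulled toward
    a baseline.  (Stand-in for the real CAST-style update.)"""
    out = []
    for t in range(horizon):
        p = parent_series[t - delay] if t >= delay else baseline
        out.append(baseline + strength * (p - baseline))
    return out

# Parent event's probability jumps from 0.2 to 0.9 at t = 3; with an
# arc delay of 2 time units the child only responds from t = 5 onward.
parent = [0.2, 0.2, 0.2, 0.9, 0.9, 0.9, 0.9, 0.9]
child = propagate(parent, delay=2, strength=0.5, baseline=0.2, horizon=8)
print([round(x, 2) for x in child])
```

The printed series stays at the 0.2 baseline through t = 4 and rises only at t = 5, which is the "probability of an event at different time instants" view the abstract describes.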
Surprise: An Alternative Qualitative Uncertainty Model
This dissertation embodies a study of the concept of surprise as a base for constructing qualitative calculi for representing and reasoning about uncertain knowledge. Two functions are presented, kappa++ and z, which construct qualitative ranks for events by obtaining the order-of-magnitude abstraction of the degree of surprise associated with them. The functions use natural numbers to classify events based on their associated surprise and aim at providing a ranking that improves those provided by existing ranking functions. This in turn enables the use of such functions in an a la carte probabilistic system, where one can choose the level of detail required to represent uncertain knowledge depending on the requirements of the application. The proposed ranking functions are defined along with the surprise-update models associated with them. The reasoning mechanisms associated with the functions are developed mathematically and graphically. The advantages and expected limitations of both functions are compared with respect to each other and with existing ranking functions in the context of a bioinformatics application known as ''reverse engineering of genetic regulatory networks'', in which the relations among various genetic components are discovered through the examination of a large amount of collected data. The ranking functions are examined in this context via graphical models which are exclusively developed for this purpose and which utilize the developed functions to represent uncertain knowledge at various levels of detail.
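The generic idea behind such ranking functions, order-of-magnitude abstraction of surprise, can be sketched as follows. Note this shows only the classical Spohn-style ranking that kappa++ and z refine; the dissertation's own functions and update models are not reproduced here, and epsilon is an illustrative choice.

```python
import math

def kappa_rank(p, eps=0.1):
    """Order-of-magnitude abstraction of surprise: with a small base
    epsilon, an event of probability p gets the integer rank k such
    that p is roughly eps**k.  Rank 0 means unsurprising; larger ranks
    mean more surprising; an impossible event is maximally surprising."""
    if p == 0:
        return math.inf
    return round(math.log(p) / math.log(eps))

# Likely, unlikely, and very unlikely events collapse onto a small
# natural-number scale, which is the level-of-detail knob the
# "a la carte" system exploits.
print(kappa_rank(0.9), kappa_rank(0.01), kappa_rank(0.0))  # 0 2 inf
```

Because only the order of magnitude survives, 0.9 and 0.8 receive the same rank: detail is traded away for ranks that are cheap to elicit and combine.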
A quantum probability account of individual differences in causal reasoning
We use quantum probability (QP) theory to investigate individual differences in causal reasoning. By analyzing data sets from Rehder (2014) on comparative judgments, and from Rehder & Waldmann (2016) on absolute judgments, we show that a QP model can both account for individual differences in causal judgments and explain why these judgments sometimes violate the properties of causal Bayes nets. We implement this and previously proposed models of causal reasoning (including classical probability models) within the same hierarchical Bayesian inferential framework to provide a detailed comparison between these models, including computing Bayes factors. Analysis of the inferred parameters of the QP model illustrates how these can be interpreted in terms of putative cognitive mechanisms of causal reasoning. Additionally, we implement a latent classification mechanism that identifies subcategories of reasoners based on properties of the inferred cognitive process, rather than post hoc clustering. The QP model also provides a parsimonious explanation for aggregate behavior, which alternatively can only be explained by a mixture of multiple existing models. Investigating individual differences through the lens of a QP model reveals simple but strong alternatives to existing explanations for the dichotomies often observed in how people make causal inferences. These alternative explanations arise from the cognitive interpretation of the parameters and structure of the quantum probability model.
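The core QP mechanism that lets judgments violate classical constraints can be shown in two dimensions. This is a generic textbook-style demonstration of order effects under sequential projection, not the paper's fitted model; the belief state and projection angles are arbitrary.

```python
import math

# In quantum probability, events are subspaces and judging an event
# projects the belief state onto it; projecting in a different order
# can yield a different probability, which classical probability forbids.

def proj(angle):
    """Rank-1 projector onto the ray at `angle` in the plane."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c * c, c * s], [s * c, s * s]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def norm2(v):
    return v[0] ** 2 + v[1] ** 2

psi = [1.0, 0.0]                          # arbitrary belief state
A, B = proj(math.pi / 4), proj(math.pi / 3)

p_ab = norm2(apply(B, apply(A, psi)))     # judge A first, then B
p_ba = norm2(apply(A, apply(B, psi)))     # judge B first, then A
print(abs(p_ab - p_ba) > 1e-6)            # True: order matters
```

Non-commuting projectors of this kind are what allow a single QP model to reproduce judgment patterns that a causal Bayes net rules out.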
Inference in distributed multiagent reasoning systems in cooperation with artificial neural networks
This research is motivated by the need to support inference in intelligent decision
support systems offered by multi-agent, distributed intelligent systems involving
uncertainty. Probabilistic reasoning with graphical models, known as Bayesian
networks (BN) or belief networks, has become an active field of research and practice
in artificial intelligence, operations research, and statistics in the last two decades.
At present, a BN is used primarily as a stand-alone system. In the case of a large
problem scope, the large network slows down the inference process and is difficult to
review or revise. When the problem itself is distributed, domain knowledge and
evidence have to be centralized and unified before a single BN can be created for the
problem.
Alternatively, separate BNs describing related subdomains or different aspects
of the same domain may be created, but it is difficult to combine them for problem
solving, even if the interdependency relations are available. This issue has been
investigated in several works, including most notably Multiply Sectioned BNs (MSBNs)
by Xiang [Xiang93]. MSBNs provide a highly modular and efficient framework
for uncertain reasoning in multi-agent distributed systems.
Inspired by the success of BNs under the centralized and single-agent paradigm,
an MSBN representation formalism under the distributed and multi-agent paradigm
has been developed. This framework allows the distributed representation of uncertain
knowledge about a large and complex environment to be embedded in multiple
cooperative agents, and supports effective, exact, and distributed probabilistic inference.
What a Bayesian network is, how inference can be done in a Bayesian network
under the single-agent paradigm, how multiple agents’ diverse knowledge on
a complex environment can be structured as a set of coherent probabilistic graphical
models, how these models can be transformed into graphical structures that
support message passing, and how message passing can be performed to accomplish
tasks in model compilation and distributed inference are covered in detail in this
thesis.
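The shape of the distributed message passing the thesis describes can be sketched with two agents. This is a drastic simplification of MSBN's linked-junction-forest machinery: two agents hold BN fragments that overlap only on a shared variable, so cooperation reduces to exchanging a marginal over that variable. All distributions here are invented toy numbers.

```python
# Agent 1 owns the fragment over {A, B}: P(A) and P(B | A).
# Agent 2 owns the fragment over {B, C}: P(C | B).
# They share only B, so a single message (a marginal over B) suffices.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}

# Agent 1 computes its belief over the shared variable B ...
msg_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}

# ... and passes it to agent 2, which can now answer queries about C
# without ever seeing agent 1's variables or local distributions.
p_c = {c: sum(msg_b[b] * p_c_given_b[b][c] for b in (0, 1)) for c in (0, 1)}
print(round(p_c[1], 3))  # 0.355
```

Neither agent's private knowledge crosses the interface: only beliefs over the shared variable do, which is the essence of exact distributed inference in the MSBN framework.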