Positive dependence in qualitative probabilistic networks
Qualitative probabilistic networks (QPNs) combine the conditional
independence assumptions of Bayesian networks with the qualitative properties
of positive and negative dependence. They formalise various intuitive
properties of positive dependence to allow inferences over a large network of
variables. However, we will demonstrate in this paper that, due to an incorrect
symmetry property, many inferences obtained in non-binary QPNs are not
mathematically true. We will provide examples of such incorrect inferences and
briefly discuss possible resolutions.
Comment: 10 pages, 3 figures
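The positive-influence property at issue can be made concrete. The sketch below checks the Wellman-style condition that each higher value of X makes higher values of Y more likely, via first-order stochastic dominance between rows of a conditional probability table, and returns a qualitative sign. The function names and toy CPTs are illustrative, not taken from the paper:

```python
# Sketch: Wellman-style positive influence of X on Y, checked as
# first-order stochastic dominance (FOSD) between CPT rows:
# X positively influences Y if P(Y >= y | x_hi) >= P(Y >= y | x_lo)
# for every y and every pair x_hi > x_lo.

def dominates(p_hi, p_lo, eps=1e-12):
    """True if distribution p_hi first-order stochastically dominates p_lo."""
    tail_hi = tail_lo = 0.0
    # Compare upper-tail probabilities P(Y >= y) from the top value down.
    for q_hi, q_lo in zip(reversed(p_hi), reversed(p_lo)):
        tail_hi += q_hi
        tail_lo += q_lo
        if tail_hi < tail_lo - eps:
            return False
    return True

def influence_sign(cpt):
    """cpt[i] = P(Y | X = x_i), with x-values ordered low to high.
    Returns the qualitative sign '+', '-', '0', or '?' of X's influence on Y."""
    pos = all(dominates(cpt[i + 1], cpt[i]) for i in range(len(cpt) - 1))
    neg = all(dominates(cpt[i], cpt[i + 1]) for i in range(len(cpt) - 1))
    if pos and neg:
        return '0'   # rows coincide in the dominance order
    if pos:
        return '+'
    if neg:
        return '-'
    return '?'
```

Note that with three or more values per variable (the non-binary case the paper targets), rows can be incomparable under dominance, which is exactly where '?' arises and where symmetry arguments require care.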
Explaining inference on a population of independent agents using Bayesian networks
The main goal of this research is to design, implement, and evaluate a novel explanation method, the hierarchical explanation method (HEM), for explaining Bayesian network (BN) inference when the network models a population of conditionally independent agents, each of which is modeled as a subnetwork. For example, consider disease-outbreak detection, in which the agents are patients who are modeled as independent, conditioned on the factors that cause disease spread. Given evidence about these patients, such as their symptoms, suppose that the BN system infers that a respiratory anthrax outbreak is highly likely. A public-health official who received such a report would generally want to know why anthrax is being given a high posterior probability. The HEM explains such inferences.
The explanation approach is applicable in general to inference on BNs that model conditionally independent agents; it complements previous approaches for explaining inference on BNs that model a single agent (e.g., for explaining the diagnostic inference for a single patient using a BN that models just that patient). The hypotheses that were tested are: (1) the proposed explanation method provides information that helps a user to understand how and why the inference results have been obtained, and (2) the proposed explanation method helps to improve the quality of the inferences that users draw from evidence.
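The conditional-independence structure that such a model exploits can be sketched numerically: given agents that are independent conditional on a population-level hypothesis H, the posterior is proportional to the prior times the product of per-agent likelihoods. A minimal Python illustration, with made-up numbers:

```python
# Sketch: posterior over a population-level hypothesis when agents
# (e.g., patients) are conditionally independent given the hypothesis:
# P(H | e_1..e_n) is proportional to P(H) * prod_i P(e_i | H).
# All probabilities below are hypothetical, for illustration only.

def posterior(prior, likelihoods):
    """prior: dict hypothesis -> P(H); likelihoods: one dict per agent,
    mapping each hypothesis to P(evidence_i | H)."""
    unnorm = {}
    for h, p in prior.items():
        for lik in likelihoods:
            p *= lik[h]
        unnorm[h] = p
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Three patients with respiratory symptoms; per-patient likelihoods
# under "anthrax outbreak" vs "no outbreak" (invented values).
prior = {'outbreak': 0.01, 'no_outbreak': 0.99}
patients = [{'outbreak': 0.8, 'no_outbreak': 0.1}] * 3
post = posterior(prior, patients)
```

Even a small prior can be overwhelmed by a few agents whose evidence is far more likely under the outbreak hypothesis, which is why an official would want an explanation of which patients drove the result.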
New Techniques for Learning Parameters in Bayesian Networks.
One of the hardest challenges in building a realistic Bayesian network (BN) model is
to construct the node probability tables (NPTs). Even with a fixed predefined model
structure and very large amounts of relevant data, machine learning methods do not
consistently achieve high accuracy relative to the ground truth when learning the
NPT entries (parameters). Hence, it is widely believed that incorporating expert judgment
or related domain knowledge can improve the parameter learning accuracy. This
is especially true in the sparse data situation. Expert judgments come in many forms.
In this thesis we focus on expert judgment that specifies inequality or equality relationships
among variables. Related domain knowledge is data that comes from a different
but related problem.
By exploiting expert judgment and related knowledge, this thesis makes novel
contributions to improve the BN parameter learning performance, including:
• The multinomial parameter learning model with interior constraints (MPL-C)
and exterior constraints (MPL-EC). This model itself is an auxiliary BN, which
encodes the multinomial parameter learning process and constraints elicited from
the expert judgments.
• The BN parameter transfer learning (BNPTL) algorithm. Given some potentially
related (source) BNs, this algorithm automatically explores the most relevant
source BN and BN fragments, and fuses the selected source and target parameters
in a robust way.
• A generic BN parameter learning framework. This framework uses both expert
judgments and transferred knowledge to improve the learning accuracy. This
framework transfers the mined data statistics from the source network as the parameter
priors of the target network.
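The last idea, transferring mined source statistics as parameter priors, can be sketched as Dirichlet-style smoothing: source probabilities enter as pseudo-counts scaled by an equivalent sample size. The function name and all numbers below are hypothetical, for illustration only:

```python
# Sketch: source-network probabilities used as a Dirichlet prior for a
# target NPT column. equiv_size controls how strongly the transferred
# source estimate is weighted against the (sparse) target data.

def transfer_posterior_mean(target_counts, source_probs, equiv_size=10.0):
    """Posterior-mean estimate: (n_k + s * p_k) / (N + s)."""
    n_total = sum(target_counts)
    return [(n + equiv_size * p) / (n_total + equiv_size)
            for n, p in zip(target_counts, source_probs)]

# Only 3 target observations; the estimate is pulled toward the source
# probabilities, and zero counts no longer yield zero probabilities.
est = transfer_posterior_mean([2, 1, 0], [0.5, 0.3, 0.2])
```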
Experiments based on BNs from a well-known repository, as well as two real-world
case studies using different data sample sizes, demonstrate that the proposed new
approaches achieve much greater learning accuracy than other state-of-the-art
methods with relatively sparse data.
Qualitative approaches to quantifying probabilistic networks
A probabilistic network consists of a graphical representation (a directed graph) of the important variables in a
domain of application, and the relationships between them, together with a joint probability distribution over the
variables. A probabilistic network allows for computing any probability of interest. The joint probability
distribution factorises into conditional probability distributions such that for each variable represented in the graph
a distribution is specified conditional on all possible combinations of the variable's parents in the graph. Even for
a moderate-sized probabilistic network, thousands of probabilities need to be specified. Often the only source of
probabilistic information is the knowledge and experience of experts. People, even experts, are known not to be
very good at assessing probabilities, and often dislike expressing their estimates as numbers. To overcome this
problem, we propose two qualitative approaches to quantifying probabilistic networks. The first approach
abstracts away from probabilities by using qualitative probabilistic networks. The second approach is to allow
the use of verbal expressions of probability during elicitation.
In qualitative probabilistic networks, the arcs of the
directed graph are augmented with signs: `+',`-', `0', and `?', indicating the direction of shift in probability for the
variable at one end of the arc, given a shift in values of the variable at the other end of the arc. For example, a
positive influence of variable A on variable B indicates that higher values for B become more likely given higher
values for A. Qualitative probabilistic networks allow for reasoning with probabilistic networks in a qualitative
way, thereby enabling us to check the robustness of the network's structure before probabilities are assessed. In
addition, the qualitative signs provide constraints on the probabilities to be elicited. Qualitative networks are,
however, not very expressive and therefore easily result in uninformative answers (`?'s) during reasoning. We will
suggest several refinements of the formalism of qualitative probabilistic networks that enhance their
expressiveness and applicability.
To make probability elicitation easier on experts, we allow them to state verbal
probability expressions, such as "probable" and "impossible", as well as numbers. To this end, we have
augmented a vertical probability elicitation scale with verbal expressions. These expressions, and their position
on the scale, are the result of several studies we conducted. The scale, together with other ingredients such as
text-fragments describing the probability to be assessed and grouping of the probabilities that should sum to 1, is
used in a newly designed probability elicitation method. The method provides for the elicitation of initial rough
assessments. Assessments to which the outcome of the network is very sensitive can be refined using additional
experts and/or more conventional elicitation methods. Our method has been used with two experts in
oncology in the construction of a probabilistic network for oesophageal carcinoma, and enabled us to elicit a large
number of probabilities in little time. The experts felt comfortable with the method, and evaluations of the resulting
network have shown that it performs quite well with the rough assessments.
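The verbal scale can be sketched as a lookup from expressions to anchor probabilities, with numeric input also accepted. The anchor values below are placeholders; the actual expressions and their positions on the authors' scale came from their elicitation studies:

```python
# Sketch: a verbal-to-numeric probability scale for elicitation.
# Anchor values are hypothetical placeholders, not the studied scale.

VERBAL_SCALE = {
    'impossible': 0.0,
    'improbable': 0.1,
    'uncertain': 0.25,
    'fifty-fifty': 0.5,
    'expected': 0.75,
    'probable': 0.85,
    'certain': 1.0,
}

def elicit(expression):
    """Accept either a verbal expression or a direct numeric estimate."""
    if isinstance(expression, (int, float)):
        return float(expression)
    return VERBAL_SCALE[expression.lower()]
```

Grouping of probabilities that must sum to 1 would sit on top of this: the rough verbal anchors give a starting point that can later be renormalised or refined numerically.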
Enhancing QPNs for Trade-off Resolution
Qualitative probabilistic networks have been introduced as qualitative abstractions of Bayesian belief networks. One of the major drawbacks of these qualitative networks is their coarse level of detail, which may lead to unresolved trade-offs during inference. We present an enhanced formalism for qualitative networks with a finer level of detail. An enhanced qualitative probabilistic network differs from a regular qualitative network in that it distinguishes between strong and weak influences. Enhanced qualitative probabilistic networks are purely qualitative in nature, as regular qualitative networks are, yet allow for efficiently resolving trade-offs during inference.
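The trade-off problem is visible in the standard sign-combination operators: chaining influences along a path multiplies signs, combining parallel paths adds them, and a '+' meeting a '-' yields the uninformative '?'. A minimal Python sketch of these standard tables (the enhanced formalism refines the sum by comparing the strengths of the competing influences):

```python
# Sketch: standard QPN sign-combination operators. In a regular QPN,
# a '+' path and a '-' path into the same node combine to '?', an
# unresolved trade-off; distinguishing strong from weak influences is
# what lets the enhanced formalism resolve some of these cases.

def sign_product(a, b):
    """Combine signs along a chain of influences."""
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

def sign_sum(a, b):
    """Combine signs of parallel paths into the same node."""
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a
    return '?'   # opposing (or already unknown) signs: trade-off
```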