74 research outputs found
Structure-based categorisation of Bayesian network parameters
Bayesian networks typically require thousands of probability parameters for their specification, many of which are bound to be inaccurate. Knowledge of the direction of change in an output probability of a network occasioned by changes in one or more of its parameters, i.e. the qualitative effect of parameter changes, has been shown to be useful both for parameter tuning and in pre-processing for inference in credal networks. In this paper we identify classes of parameters for which the qualitative effect on a given output of interest can be identified based upon graphical considerations.
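The qualitative effect the abstract refers to can be illustrated with a standard result from sensitivity analysis (not code from the paper itself): as a function of a single parameter x, an output probability of a Bayesian network is a quotient of two linear functions, P(x) = (a·x + b)/(c·x + d), so the direction of change is the sign of the constant a·d − b·c. The coefficients below are made up for illustration.

```python
# Hedged sketch: direction of change of an output probability under a
# single parameter change. P(x) = (a*x + b)/(c*x + d) has derivative
# (a*d - b*c)/(c*x + d)**2, so its sign is constant over the parameter's
# domain. Coefficients are illustrative, not taken from the paper.

def qualitative_effect(a: float, b: float, c: float, d: float) -> str:
    """Return '+', '-' or '0': the direction in which the output
    probability moves as the parameter increases."""
    determinant = a * d - b * c
    if determinant > 0:
        return '+'
    if determinant < 0:
        return '-'
    return '0'

# Example with made-up coefficients:
print(qualitative_effect(0.3, 0.1, 0.5, 0.4))  # '+': output increases with x
```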
Bayesian Networks: Aspects of Approximate Inference
A Bayesian network can be used to concisely model the probabilistic knowledge with respect to a given problem domain. Such a network consists of an acyclic directed graph in which the nodes represent stochastic variables, supplemented with probabilities indicating the strength of the influences between neighbouring variables. A qualitative probabilistic network is an abstraction of a Bayesian network in which the probabilistic influences among the variables are modelled by means of signs. A non-monotonic influence between two variables is associated with the ambiguous sign '?', which indicates that the actual sign of the influence depends on the state of the network. The presence of such ambiguous signs is undesirable as it tends to lead to uninformative results upon inference. In each specific state of the network, however, the influence between two variables is unambiguous. Now to capture the current effect of the influence this thesis introduces the concept of situational sign. It is shown how situational signs can be used upon inference and how they are updated as the state of the network changes. By means of a real-life qualitative network in oncology it is demonstrated that the use of situational signs can effectively forestall uninformative results upon inference. The loopy-propagation algorithm provides for approximate inference with a Bayesian network. Upon loopy propagation, errors may arise in the computed probabilities due to the presence of loops in the graph. This thesis indicates that two different types of error arise in the computed probabilities, which are termed convergence errors and cycling errors. These types of error are investigated in more detail for the nodes with two or more incoming arcs from a loop and for the other loop nodes separately. For nodes with two or more incoming arcs on the loop a general expression is derived for the error that is found in the probabilities computed for these nodes in a network in its prior state.
This expression includes a weighting factor that is captured by the newly defined notion of quantitative parental synergy. For the other loop nodes, the effect of the cycling error on the decisiveness of the computed probabilities is analysed. More specifically, the over- or underconfidence of these approximations is linked to two concepts from qualitative probabilistic networks. The thesis concludes with an analysis of an algorithm for inference with undirected networks which is equivalent to the loopy-propagation algorithm. It is shown how, although in undirected networks all errors arise from the cycling of information, the convergence error is embedded in the algorithm.
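The sign calculus behind qualitative probabilistic networks, including the ambiguous sign '?' the abstract mentions, can be sketched with the standard sign-combination operators (following Wellman's widely used definitions, not code from the thesis): one operator combines signs along a chain of influences, the other combines the signs of parallel chains.

```python
# Hedged sketch of sign combination in qualitative probabilistic networks,
# using the standard operator tables. '?' is the ambiguous sign that tends
# to make inference results uninformative.

def sign_product(s1: str, s2: str) -> str:
    """Combine signs along a single trail of influences."""
    if '0' in (s1, s2):
        return '0'
    if '?' in (s1, s2):
        return '?'
    return '+' if s1 == s2 else '-'

def sign_sum(s1: str, s2: str) -> str:
    """Combine signs of parallel trails between the same variables."""
    if s1 == '0':
        return s2
    if s2 == '0':
        return s1
    if s1 == s2:
        return s1
    return '?'  # conflicting or ambiguous parallel influences

# Two parallel trails with opposite net signs yield the uninformative '?':
print(sign_sum(sign_product('+', '+'), sign_product('+', '-')))  # '?'
```

This shows why ambiguity spreads easily during inference: a single '?' on a trail, or two opposing parallel trails, suffices to make the combined sign ambiguous, which is the problem situational signs are designed to forestall.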
Bayesian networks: a combined tuning heuristic
One of the issues in tuning an output probability of a Bayesian network by changing multiple parameters is the relative amount of the individual parameter changes. In an existing heuristic, parameters are tied such that their changes induce locally a maximal change of the tuned probability. This heuristic, however, may reduce the attainable values of the tuned probability considerably. In another existing heuristic, parameters are tied such that they simultaneously change in the entire interval. The tuning range of this heuristic will in general be larger than the tuning range of the locally optimal heuristic. A disadvantage, however, is that knowledge of the locally optimal change is not exploited. In this paper a heuristic is proposed that is locally optimal, yet covers the larger tuning range of the second heuristic. Preliminary experiments show that this heuristic is a promising alternative.
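A common building block of such tuning schemes, sketched here under the usual assumptions rather than as the paper's combined heuristic, is proportional co-variation: when one parameter of a conditional distribution is set to a new value, the remaining parameters of that distribution are rescaled so it still sums to one.

```python
# Hedged sketch of proportional co-variation, standard practice when tuning
# a single parameter of a conditional probability distribution. This is a
# building block of tuning heuristics in general, not the combined
# heuristic proposed in the paper.

def covary_proportionally(dist: list, index: int, new_value: float) -> list:
    """Set dist[index] to new_value and rescale the other entries so the
    distribution still sums to one."""
    remainder = 1.0 - dist[index]
    scale = (1.0 - new_value) / remainder if remainder > 0 else 0.0
    return [new_value if i == index else p * scale
            for i, p in enumerate(dist)]

# Tuning the first parameter from 0.2 to 0.4 rescales the other two:
tuned = covary_proportionally([0.2, 0.5, 0.3], 0, 0.4)
print(tuned)  # remaining mass 0.6 is divided in the original 5:3 ratio
```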
On Minimum Elementary-Triplet Bases for Independence Relations
A semi-graphoid independence relation is a set of independence statements, called triplets, and is typically exponentially large in the number of variables involved. For concise representation of such a relation, a subset of its triplets is listed in a so-called basis; its other triplets are defined implicitly through a set of axioms. An elementary-triplet basis for this purpose consists of all elementary triplets of a relation. Such a basis, however, may include redundant information. In this paper we provide two lower bounds on the size of an elementary-triplet basis in general and an upper bound on the size of a minimum elementary-triplet basis. We further specify the construction of an elementary-triplet basis of minimum size for restricted relations.
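The implicit definition through axioms can be illustrated with three of the four semi-graphoid axioms (symmetry, decomposition, weak union; contraction needs a pair of triplets and is omitted). The sketch below is illustrative only: a triplet (A, B, C) reads "A is independent of B given C", and the variable names are made up, not taken from the paper.

```python
# Hedged sketch: triplets derivable in one step from a listed triplet via
# the semi-graphoid axioms of symmetry, decomposition, and weak union.
# Sets of variables are modelled as frozensets; the example is illustrative.

def one_step_derivations(triplet):
    """Yield triplets derivable in one step from (A, B, C),
    read as 'A is independent of B given C'."""
    A, B, C = triplet
    yield (B, A, C)  # symmetry
    for b in B:
        rest = B - {b}
        if rest:  # decomposition and weak union need |B| >= 2
            yield (A, rest, C)                   # decomposition
            yield (A, frozenset({b}), C | rest)  # weak union

# Example: one listed triplet already implies five further triplets,
# which a basis therefore need not list explicitly.
A, B, C = frozenset({'x'}), frozenset({'y', 'z'}), frozenset()
derived = set(one_step_derivations((A, B, C)))
print(len(derived))  # 5
```

This is why a basis can be so much smaller than the relation it represents, and also why an all-elementary-triplet basis may still be redundant: many of its entries are derivable from the others.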