New Techniques for Learning Parameters in Bayesian Networks.
One of the hardest challenges in building a realistic Bayesian network (BN) model is
to construct the node probability tables (NPTs). Even with a fixed predefined model
structure and very large amounts of relevant data, machine learning methods do not
consistently achieve great accuracy compared to the ground truth when learning the
NPT entries (parameters). Hence, it is widely believed that incorporating expert judgment
or related domain knowledge can improve the parameter learning accuracy. This
is especially true in the sparse data situation. Expert judgments come in many forms.
In this thesis we focus on expert judgment that specifies inequality or equality relationships
among variables. Related domain knowledge is data that comes from a different
but related problem.
By exploiting expert judgment and related knowledge, this thesis makes novel
contributions to improve the BN parameter learning performance, including:
• The multinomial parameter learning model with interior constraints (MPL-C)
and exterior constraints (MPL-EC). This model itself is an auxiliary BN, which
encodes the multinomial parameter learning process and constraints elicited from
the expert judgments.
• The BN parameter transfer learning (BNPTL) algorithm. Given some potentially
related (source) BNs, this algorithm automatically explores the most relevant
source BN and BN fragments, and fuses the selected source and target parameters
in a robust way.
• A generic BN parameter learning framework. This framework uses both expert
judgments and transferred knowledge to improve the learning accuracy, transferring
the mined data statistics from the source network as the parameter priors of the
target network.
Experiments based on the BNs from a well-known repository as well as two real-world
case studies using different data sample sizes demonstrate that the proposed new
approaches can achieve much greater learning accuracy compared to other state-of-the-art
methods with relatively sparse data.
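The MPL-C/MPL-EC models themselves are specific to the thesis, but the underlying idea of constraining sparse-data parameter estimates with expert judgments can be illustrated independently. The sketch below is a minimal, hypothetical example (not the thesis's auxiliary-BN encoding): it estimates a single NPT column from sparse counts by constrained maximum likelihood, with expert inequality judgments expressed as constraints for SciPy's SLSQP solver; the counts and the assumed ordering of states are invented for illustration.

```python
# A minimal sketch (not the thesis's MPL-C auxiliary-BN model): estimating one
# multinomial NPT column from sparse counts under expert inequality judgments,
# via constrained maximum likelihood with SciPy's SLSQP solver.
# The counts and constraints below are hypothetical.
import numpy as np
from scipy.optimize import minimize

counts = np.array([3.0, 1.0, 0.0])            # sparse observed counts for 3 states

def neg_log_lik(theta):
    # Negative multinomial log-likelihood, with a small floor to avoid log(0).
    return -np.sum(counts * np.log(np.clip(theta, 1e-12, 1.0)))

constraints = [
    {"type": "eq",   "fun": lambda t: t.sum() - 1.0},  # probabilities sum to one
    {"type": "ineq", "fun": lambda t: t[0] - t[1]},    # expert judgment: P(state0) >= P(state1)
    {"type": "ineq", "fun": lambda t: t[1] - t[2]},    # expert judgment: P(state1) >= P(state2)
]

theta0 = np.full(3, 1.0 / 3.0)                         # uniform starting point
result = minimize(neg_log_lik, theta0, bounds=[(0.0, 1.0)] * 3,
                  constraints=constraints, method="SLSQP")
print("constrained estimate:", np.round(result.x, 3))
```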
Learning Bayesian network equivalence classes using ant colony optimisation
Bayesian networks have become an indispensable tool in the modelling of uncertain
knowledge. Conceptually, they consist of two parts: a directed acyclic graph called the
structure, and conditional probability distributions attached to each node known as the
parameters. As a result of their expressiveness, understandability and rigorous mathematical basis, Bayesian networks have become one of the first methods investigated,
when faced with an uncertain problem domain. However, a recurring problem persists
in specifying a Bayesian network. Both the structure and parameters can be difficult for
experts to conceive, especially if their knowledge is tacit. To counteract these problems, research has been ongoing on learning both the structure
and parameters of Bayesian networks from data. Whilst there are simple methods for
learning the parameters, learning the structure has proved harder. Part of this stems from
the NP-hardness of the problem and the super-exponential space of possible structures.
To help solve this task, this thesis seeks to employ a relatively new technique that has
had much success in tackling NP-hard problems. This technique is called ant colony
optimisation. Ant colony optimisation is a metaheuristic based on the behaviour of ants
acting together in a colony. It uses the stochastic activity of artificial ants to find good
solutions to combinatorial optimisation problems. In the current work, this method is
applied to the problem of searching through the space of equivalence classes of Bayesian
networks, in order to find a good match against a set of data. The system uses operators
that evaluate potential modifications to a current state. Each of the modifications is
scored and the results used to inform the search. In order to facilitate these steps, other
techniques are also devised to speed up the learning process. The techniques are tested by sampling data from gold standard networks and learning
structures from this sampled data. These structures are analysed using various goodness-of-fit measures to see how well the algorithms perform. The measures include structural
similarity metrics and Bayesian scoring metrics. The results are compared in depth
against systems that also use ant colony optimisation and other methods, including
evolutionary programming and greedy heuristics. Also, comparisons are made to well
known state-of-the-art algorithms and a study performed on a real-life data set. The
results show favourable performance compared to the other methods and on modelling
the real-life data.
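As a rough illustration of how ant colony optimisation can drive structure search, the sketch below builds candidate DAGs edge by edge with pheromone-guided probabilities, scores them with BIC on synthetic binary data, and reinforces the edges of the best structure found so far. It searches plain DAG space rather than the equivalence-class space used in the thesis, and omits the thesis's speed-up techniques; all settings and data are illustrative.

```python
# A minimal ant-colony-optimisation sketch for Bayesian network structure
# search: ants add edges with pheromone-guided probability, structures are
# scored by BIC, and pheromone is reinforced along the best structure.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 4))      # synthetic binary data, 4 variables
n_samples, n_vars = data.shape

def node_bic(child, parents):
    # Decomposable BIC contribution of one node given its parent set.
    ll = 0.0
    for cfg in product([0, 1], repeat=len(parents)):
        mask = np.ones(n_samples, dtype=bool)
        for p, v in zip(parents, cfg):
            mask &= data[:, p] == v
        n_cfg = mask.sum()
        if n_cfg == 0:
            continue
        for val in (0, 1):
            n_val = np.sum(data[mask, child] == val)
            if n_val > 0:
                ll += n_val * np.log(n_val / n_cfg)
    n_params = (2 - 1) * (2 ** len(parents))
    return ll - 0.5 * np.log(n_samples) * n_params

def bic(dag):
    return sum(node_bic(c, [p for p in range(n_vars) if dag[p, c]]) for c in range(n_vars))

def creates_cycle(dag, u, v):
    # Would adding u -> v create a directed cycle? Check whether v already reaches u.
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(np.flatnonzero(dag[node]))
    return False

pheromone = np.ones((n_vars, n_vars))
best_dag = np.zeros((n_vars, n_vars), dtype=bool)
best_score = -np.inf

for iteration in range(20):
    for ant in range(10):
        dag = np.zeros((n_vars, n_vars), dtype=bool)
        edges = [(u, v) for u in range(n_vars) for v in range(n_vars) if u != v]
        for idx in rng.permutation(len(edges)):
            u, v = edges[idx]
            # Add each candidate edge with probability increasing in its pheromone.
            prob = pheromone[u, v] / (1.0 + pheromone[u, v])
            if rng.random() < prob and not creates_cycle(dag, u, v):
                dag[u, v] = True
        score = bic(dag)
        if score > best_score:
            best_dag, best_score = dag.copy(), score
    # Evaporate pheromone, then reinforce the edges of the best structure so far.
    pheromone *= 0.9
    pheromone[best_dag] += 1.0

print("best BIC:", round(best_score, 1))
print("edges:", [(int(u), int(v)) for u, v in zip(*np.nonzero(best_dag))])
```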
Learning Bayesian networks based on optimization approaches
Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph-based query processing algorithms. This representation is defined by two components: structure learning and parameter learning. The structure of the model is a directed acyclic graph in which the nodes correspond to the feature variables in the domain and the arcs (edges) show the causal relationships between them. A directed edge relates the variables so that the variable corresponding to the terminal node (child) is conditioned on the variable corresponding to the initial node (parent). Parameter learning estimates the probabilities and conditional probabilities based on prior information or past experience; the set of probabilities is represented in the conditional probability table. Once the network structure is constructed, probabilistic inferences are readily calculated and can be used to predict the outcome of some variables based on observations of others. However, structure learning is a complex problem, since the number of candidate structures grows exponentially as the number of feature variables increases.
This thesis is devoted to learning the structures and parameters of Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network. These models also improve the Naive Bayes structure by developing new algorithms to alleviate its independence assumptions. We present various models to learn the parameters of Bayesian Networks; in particular, we propose optimization models for Naive Bayes and Tree Augmented Naive Bayes with different objective functions. To solve the corresponding optimization problems, we develop new optimization algorithms. Local optimization methods are introduced based on a combination of the gradient and Newton methods, and the proposed methods are proved to be globally convergent with superlinear convergence rates. As a global search we use the global optimization method AGOP, implemented in the open software library GANSO, and we apply the proposed local methods in combination with AGOP.
The main contributions of this thesis are therefore (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with given structures; and (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments across a number of real-world problems. Print version is available at: http://library.federation.edu.au/record=b1804607~S4
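As a baseline for the optimization-based parameter models described above, the following sketch shows plain frequency-count Naive Bayes parameter learning with Laplace smoothing on discrete data, i.e. the simple counting estimator that such optimization models aim to improve upon; the data and dimensions are synthetic and purely illustrative.

```python
# A minimal baseline sketch: frequency-count Naive Bayes parameter learning
# with Laplace smoothing on discrete data. Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 5))   # 200 samples, 5 ternary features
y = rng.integers(0, 2, size=200)        # binary class label
n_classes, n_values = 2, 3

# Class priors P(c) and per-feature conditionals P(x_j = v | c).
priors = np.array([(y == c).mean() for c in range(n_classes)])
cond = np.ones((n_classes, X.shape[1], n_values))          # Laplace pseudo-counts
for c in range(n_classes):
    Xc = X[y == c]
    for j in range(X.shape[1]):
        for v in range(n_values):
            cond[c, j, v] += np.sum(Xc[:, j] == v)
cond /= cond.sum(axis=2, keepdims=True)                    # normalise over feature values

def predict(x):
    # Posterior over classes for one sample, computed in log space.
    log_post = np.log(priors).copy()
    for c in range(n_classes):
        log_post[c] += np.sum(np.log(cond[c, np.arange(len(x)), x]))
    return int(np.argmax(log_post))

print("training accuracy:", np.mean([predict(x) == t for x, t in zip(X, y)]))
```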
A practical Bayesian framework for backpropagation networks
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
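The framework's quantities are easiest to see in the Gaussian-linear case, which is the tractable analogue of the quadratic approximation applied to a trained network. The sketch below computes the MAP weights, the effective number of well-determined parameters (item 4 above), and the log evidence for a Bayesian linear model with weight-decay prior precision alpha and noise precision beta; the data, basis, and hyperparameter values are illustrative, not taken from the paper.

```python
# A minimal sketch of the evidence framework on a Bayesian linear model with a
# Gaussian prior (precision alpha) and Gaussian noise (precision beta): the
# tractable analogue of the quadratic approximation used for networks.
import numpy as np

rng = np.random.default_rng(2)
N, M = 50, 6
X = rng.normal(size=(N, M))                      # design matrix (basis outputs)
w_true = rng.normal(size=M)
y = X @ w_true + 0.3 * rng.normal(size=N)

alpha, beta = 1.0, 1.0 / 0.3 ** 2                # prior and noise precisions

A = alpha * np.eye(M) + beta * X.T @ X           # posterior precision (Hessian)
w_map = beta * np.linalg.solve(A, X.T @ y)       # posterior mean / MAP weights

# Effective number of well-determined parameters: gamma = sum_i lambda_i / (lambda_i + alpha).
eigvals = np.linalg.eigvalsh(beta * X.T @ X)
gamma = np.sum(eigvals / (eigvals + alpha))

# Log marginal likelihood (the "evidence") under the Gaussian model.
residual = y - X @ w_map
log_evidence = (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
                - 0.5 * beta * residual @ residual - 0.5 * alpha * w_map @ w_map
                - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))

print(f"gamma (effective parameters): {gamma:.2f} of {M}")
print(f"log evidence: {log_evidence:.2f}")
```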
Probabilistic Methodology and Techniques for Artefact Conception and Development
The purpose of this paper is to present the state of the art in probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies
and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
Learning the structure of Bayesian Networks: A quantitative assessment of the effect of different algorithmic schemes
One of the most challenging tasks when adopting Bayesian Networks (BNs) is
that of learning their structure from data. This task is complicated by the
huge search space of possible solutions, and by the fact that the problem is
NP-hard. Hence, full enumeration of all the possible solutions is not always
feasible and approximations are often required. However, to the best of our
knowledge, a quantitative analysis of the performance and characteristics of
the different heuristics to solve this problem has never been done before.
For this reason, in this work, we provide a detailed comparison of many
different state-of-the-art methods for structural learning on simulated data,
considering BNs with both discrete and continuous variables, and with different
rates of noise in the data. In particular, we investigate the performance of
different widespread scores and algorithmic approaches proposed for the
inference, and the statistical pitfalls within them.
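One concrete example of the structural comparisons such assessments rely on is the Structural Hamming Distance between a learned DAG and the generating (ground-truth) DAG. The sketch below computes a common variant of it, counting skeleton differences and reversed edges; the exact metrics used in this work may differ, and the adjacency matrices are invented for illustration.

```python
# A minimal sketch of one structural-similarity measure used in such studies:
# the Structural Hamming Distance (SHD) between a learned DAG and the
# ground-truth DAG, counting missing, extra, and reversed edges.
import numpy as np

def shd(true_dag, learned_dag):
    """Structural Hamming Distance between two DAG adjacency matrices."""
    true_dag = np.asarray(true_dag, dtype=bool)
    learned_dag = np.asarray(learned_dag, dtype=bool)
    # Skeleton (undirected) differences: edges missing from or added to the skeleton.
    true_skel = true_dag | true_dag.T
    learned_skel = learned_dag | learned_dag.T
    skeleton_diff = np.triu(true_skel ^ learned_skel).sum()
    # Reversed edges: present in both skeletons but pointing the opposite way.
    reversed_edges = np.sum(true_dag & learned_dag.T & ~learned_dag)
    return int(skeleton_diff + reversed_edges)

true_dag = np.array([[0, 1, 0],      # A -> B
                     [0, 0, 1],      # B -> C
                     [0, 0, 0]])
learned_dag = np.array([[0, 0, 1],   # A -> C (extra edge), A -> B missing
                        [0, 0, 1],   # B -> C kept
                        [0, 0, 0]])
print("SHD:", shd(true_dag, learned_dag))   # 2: one missing edge, one extra edge
```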
- …