An Agent Architecture for Knowledge Discovery and Evolution
The abductive theory of method (ATOM) was recently proposed to describe the process that scientists use for knowledge discovery. In this paper we propose an agent architecture for knowledge discovery and evolution (KDE) based on ATOM. The agent incorporates a combination of ontologies, rules and Bayesian networks for representing different aspects of its internal knowledge. The agent uses an external AI service to detect unexpected situations in incoming observations, then uses rules to analyse the current situation and a Bayesian network to find plausible explanations for the unexpected situations. The architecture is evaluated and analysed on a use-case application for monitoring daily household electricity consumption patterns.
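The abductive step this abstract describes — using a Bayesian network to find plausible explanations for an unexpected observation — can be illustrated with a toy network. All variable names and probabilities below are invented for illustration and are not taken from the paper:

```python
from itertools import product

# Toy Bayesian network for abductive explanation of an unexpected observation
# (structure and numbers are illustrative assumptions, not the paper's model):
#   ApplianceFault -> HighUsage <- GuestsPresent
p_fault = 0.05                          # prior P(ApplianceFault = 1)
p_guests = 0.20                         # prior P(GuestsPresent = 1)
p_high = {(0, 0): 0.02, (0, 1): 0.30,   # P(HighUsage = 1 | fault, guests)
          (1, 0): 0.80, (1, 1): 0.95}

# Abduction: given the surprising observation HighUsage = 1, rank the
# hidden causes by posterior probability.
joint = {}
for f, g in product([0, 1], repeat=2):
    prior = (p_fault if f else 1 - p_fault) * (p_guests if g else 1 - p_guests)
    joint[(f, g)] = prior * p_high[(f, g)]

z = sum(joint.values())
posterior = {k: v / z for k, v in joint.items()}
best = max(posterior, key=posterior.get)
print("most plausible explanation (fault, guests):", best)
```

With these numbers the most plausible single explanation of high usage is guests being present rather than an appliance fault, because the fault's low prior outweighs its higher likelihood — exactly the trade-off a Bayesian network resolves when scoring candidate explanations.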
Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning
Bayesian network structure learning algorithms with limited data are being used in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data. Learning reliable networks from limited data is difficult, so transfer learning can improve the robustness of learned networks by leveraging data from related tasks. Existing transfer learning algorithms for Bayesian network structure learning give a single maximum a posteriori estimate of network models. Yet many other models may be equally likely, and so a more informative result is provided by Bayesian structure discovery. Bayesian structure discovery algorithms estimate posterior probabilities of structural features, such as edges. We present transfer learning for Bayesian structure discovery, which allows us to explore the shared and unique structural features among related tasks. Efficient computation requires that our transfer learning objective factor into local calculations, which we prove holds for a broad class of transfer biases. Theoretically, we show the efficiency of our approach. Empirically, we show that, compared to single-task learning, transfer learning is better able to positively identify true edges. We apply the method to whole-brain neuroimaging data.
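The core idea of Bayesian structure discovery — estimating posterior probabilities of individual edges rather than committing to one MAP network — can be sketched by exhaustive enumeration on a toy three-variable problem. The synthetic data, BIC score, and brute-force enumeration below are illustrative assumptions; the paper's method additionally incorporates a transfer bias across related tasks, which is not shown here:

```python
import itertools
import math
import random

def bic_family(data, child, parents):
    # Decomposable local score (BIC) for one child given its parent set,
    # assuming all variables are binary.
    n = len(data)
    counts, n_pa = {}, {}
    for row in data:
        pa = tuple(row[p] for p in parents)
        counts[(pa, row[child])] = counts.get((pa, row[child]), 0) + 1
        n_pa[pa] = n_pa.get(pa, 0) + 1
    ll = sum(c * math.log(c / n_pa[pa]) for (pa, _), c in counts.items())
    return ll - 0.5 * (2 ** len(parents)) * math.log(n)

def all_dags(n_vars):
    # Enumerate every DAG on n_vars nodes (feasible only for tiny n_vars).
    arcs = [(i, j) for i in range(n_vars) for j in range(n_vars) if i != j]
    for mask in itertools.product([0, 1], repeat=len(arcs)):
        g = [a for a, keep in zip(arcs, mask) if keep]
        parents = {v: {i for i, j in g if j == v} for v in range(n_vars)}
        placed = set()
        while True:  # Kahn-style check for acyclicity
            free = [v for v in range(n_vars)
                    if v not in placed and parents[v] <= placed]
            if not free:
                break
            placed.update(free)
        if len(placed) == n_vars:
            yield g

random.seed(1)
# Synthetic binary data whose generating structure is X1 <- X0 -> X2.
data = []
for _ in range(300):
    x0 = int(random.random() < 0.5)
    x1 = int(random.random() < (0.9 if x0 else 0.1))
    x2 = int(random.random() < (0.8 if x0 else 0.2))
    data.append([x0, x1, x2])

graphs, scores = [], []
for g in all_dags(3):
    pa = {v: tuple(i for i, j in g if j == v) for v in range(3)}
    graphs.append(g)
    scores.append(sum(bic_family(data, v, pa[v]) for v in range(3)))

# Posterior weight of each DAG: exp(score), normalized (uniform prior).
m = max(scores)
weights = [math.exp(s - m) for s in scores]
z = sum(weights)

# Posterior probability of each directed edge: mass of the DAGs containing it.
edge_post = {}
for g, w in zip(graphs, weights):
    for e in g:
        edge_post[e] = edge_post.get(e, 0.0) + w / z

for e in sorted(edge_post):
    print(e, round(edge_post[e], 3))
```

Because BIC is score-equivalent, Markov-equivalent DAGs split the posterior mass between the two directions of each true edge, so it is the summed (skeleton) probability of an edge in either direction that approaches one — one reason edge posteriors are more informative than a single MAP structure.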
Learning All Credible Bayesian Network Structures for Model Averaging
A Bayesian network is a widely used probabilistic graphical model with applications in knowledge discovery and prediction. Learning a Bayesian network (BN) from data can be cast as an optimization problem using the well-known score-and-search approach. However, selecting a single model (i.e., the best-scoring BN) can be misleading or may not achieve the best possible accuracy. An alternative to committing to a single model is to perform some form of Bayesian or frequentist model averaging, where the space of possible BNs is sampled or enumerated in some fashion. Unfortunately, existing approaches to model averaging either severely restrict the structure of the Bayesian network or have only been shown to scale to networks with fewer than 30 random variables. In this paper, we propose a novel approach to model averaging inspired by performance guarantees in approximation algorithms. Our approach has two primary advantages. First, it considers only credible models, in the sense that they are optimal or near-optimal in score. Second, it is more efficient and scales to significantly larger Bayesian networks than existing approaches.
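The score-and-search formulation with averaging over only near-optimal models can be sketched on a toy problem. The binary data, BIC score, brute-force DAG enumeration, uniform averaging weights, and the threshold `epsilon` below are all illustrative assumptions; the paper's actual algorithm scales far beyond what exhaustive enumeration permits:

```python
import itertools
import math
import random

def bic_family(data, child, parents):
    # Decomposable BIC score of one child-given-parents family (binary variables).
    n = len(data)
    counts, n_pa = {}, {}
    for row in data:
        pa = tuple(row[p] for p in parents)
        counts[(pa, row[child])] = counts.get((pa, row[child]), 0) + 1
        n_pa[pa] = n_pa.get(pa, 0) + 1
    ll = sum(c * math.log(c / n_pa[pa]) for (pa, _), c in counts.items())
    return ll - 0.5 * (2 ** len(parents)) * math.log(n)

def all_dags(n_vars):
    # Brute-force DAG enumeration; only workable for a handful of variables.
    arcs = [(i, j) for i in range(n_vars) for j in range(n_vars) if i != j]
    for mask in itertools.product([0, 1], repeat=len(arcs)):
        g = [a for a, keep in zip(arcs, mask) if keep]
        parents = {v: {i for i, j in g if j == v} for v in range(n_vars)}
        placed = set()
        while True:  # acyclicity check by repeatedly placing parent-satisfied nodes
            free = [v for v in range(n_vars)
                    if v not in placed and parents[v] <= placed]
            if not free:
                break
            placed.update(free)
        if len(placed) == n_vars:
            yield g

random.seed(7)
# Synthetic binary data whose generating structure is X1 <- X0 -> X2.
data = []
for _ in range(300):
    x0 = int(random.random() < 0.5)
    x1 = int(random.random() < (0.9 if x0 else 0.1))
    x2 = int(random.random() < (0.8 if x0 else 0.2))
    data.append([x0, x1, x2])

# Score-and-search: score every candidate structure.
scored = []
for g in all_dags(3):
    pa = {v: tuple(i for i, j in g if j == v) for v in range(3)}
    scored.append((sum(bic_family(data, v, pa[v]) for v in range(3)), g))

# Keep only "credible" networks: those within epsilon of the optimal score.
best = max(s for s, _ in scored)
epsilon = 10.0  # illustrative tolerance, not a value from the paper
credible = [g for s, g in scored if s >= best - epsilon]

# Average an edge feature uniformly over the credible set.
freq = {}
for g in credible:
    for e in g:
        freq[e] = freq.get(e, 0) + 1
for e in sorted(freq):
    print(e, round(freq[e] / len(credible), 2))
```

Restricting the average to near-optimal networks is what keeps low-scoring, implausible structures from diluting the feature estimates; a score-weighted average over the same credible set would be the natural Bayesian variant of the uniform weighting used here.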