Bayesian Network Structure Learning with Permutation Tests
In the literature there are several studies on the performance of Bayesian
network structure learning algorithms. The focus of these studies is almost
always on the heuristics the learning algorithms are based on, i.e. the
maximisation algorithms (in score-based algorithms) or the techniques for
learning the dependencies of each variable (in constraint-based algorithms). In
this paper we investigate how the use of permutation tests instead of
parametric ones affects the performance of Bayesian network structure learning
from discrete data. Shrinkage tests are also covered to provide a broad
overview of the techniques developed in the current literature.
Comment: 13 pages, 4 figures. Presented at the Conference 'Statistics for Complex Problems', Padova, June 15, 201
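To make the core idea concrete: a permutation test replaces the parametric null distribution of an independence statistic (here, mutual information) with one simulated by shuffling. The following is a minimal NumPy sketch of the unconditional two-variable case, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (in nats) of two integer-coded samples."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)          # contingency table of counts
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def permutation_test(x, y, n_perm=1000, seed=0):
    """p-value for independence of x and y: permuting y simulates the null."""
    rng = np.random.default_rng(seed)
    observed = mutual_information(x, y)
    null = [mutual_information(x, rng.permutation(y)) for _ in range(n_perm)]
    # The +1 correction keeps the estimated p-value strictly positive.
    return (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```

In constraint-based structure learning the same idea applies to conditional tests, permuting y within each configuration of the conditioning set.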
Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks
We present a procedure for effective estimation of entropy and mutual
information from small-sample data, and apply it to the problem of inferring
high-dimensional gene association networks. Specifically, we develop a
James-Stein-type shrinkage estimator, resulting in a procedure that is highly
efficient statistically as well as computationally. Despite its simplicity, we
show that it outperforms eight other entropy estimation procedures across a
diverse range of sampling scenarios and data-generating models, even in cases
of severe undersampling. We illustrate the approach by analyzing E. coli gene
expression data and computing an entropy-based gene-association network. A
computer program implementing the proposed shrinkage estimator is available.
Comment: 18 pages, 3 figures, 1 table
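The estimator is simple enough to sketch: shrink the observed cell frequencies towards the uniform distribution, with an intensity estimated from the data in James-Stein fashion, then plug the shrunken frequencies into the entropy formula. The intensity formula below follows the standard James-Stein form; details may differ from the paper's exact estimator.

```python
import numpy as np

def shrinkage_entropy(counts):
    """James-Stein-type shrinkage estimate of entropy (in nats).

    Shrinks the maximum-likelihood cell frequencies towards the uniform
    distribution, with a data-driven shrinkage intensity (a sketch).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts.size
    theta_ml = counts / n                  # maximum-likelihood frequencies
    target = np.full(p, 1.0 / p)           # shrinkage target: uniform
    # Estimated optimal shrinkage intensity, clipped to [0, 1].
    num = 1.0 - np.sum(theta_ml ** 2)
    den = (n - 1.0) * np.sum((target - theta_ml) ** 2)
    lam = 1.0 if den == 0 else min(1.0, max(0.0, num / den))
    theta = lam * target + (1.0 - lam) * theta_ml
    nz = theta > 0
    return -np.sum(theta[nz] * np.log(theta[nz]))
```

The shrunken frequencies never hit zero for small samples, which is what stabilizes the entropy and mutual information estimates under severe undersampling.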
Bayesian Deep Net GLM and GLMM
Deep feedforward neural networks (DFNNs) are a powerful tool for functional
approximation. We describe flexible versions of generalized linear and
generalized linear mixed models incorporating basis functions formed by a DFNN.
Neural networks with random effects are not widely considered in the
literature, perhaps because of the computational challenges of incorporating
subject-specific parameters into already complex models.
Efficient computational methods for high-dimensional Bayesian inference are
developed using Gaussian variational approximation, with a parsimonious but
flexible factor parametrization of the covariance matrix. We implement natural
gradient methods for the optimization, exploiting the factor structure of the
variational covariance matrix in computation of the natural gradient. Our
flexible DFNN models and Bayesian inference approach lead to a regression and
classification method that has high prediction accuracy, and is able to
quantify the prediction uncertainty in a principled and convenient way. We also
describe how to perform variable selection in our deep learning method. The
proposed methods are illustrated in a wide range of simulated and real-data
examples, and the results compare favourably to a state-of-the-art flexible
regression and classification method in the statistical literature, the
Bayesian additive regression trees (BART) method. User-friendly software
packages in Matlab, R and Python implementing the proposed methods are
available at https://github.com/VBayesLab
Comment: 35 pages, 7 figures, 10 tables
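The factor parametrization mentioned in the abstract is what makes the approach scale: the variational covariance is modeled as BBᵀ + D² with a tall, thin B, so the number of variational parameters grows linearly in the model dimension. Below is a hedged NumPy sketch of reparameterized sampling under that structure; the natural-gradient updates are omitted.

```python
import numpy as np

def sample_factor_gaussian(mu, B, d, seed=None):
    """One reparameterized draw from q(theta) = N(mu, B @ B.T + diag(d**2)).

    mu: (p,) variational mean; B: (p, k) factor loadings with k << p;
    d: (p,) diagonal standard deviations. The factor structure needs only
    O(p * k) parameters instead of the O(p^2) of a full covariance.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(B.shape[1])     # shared low-dimensional noise
    eps = rng.standard_normal(B.shape[0])   # independent per-coordinate noise
    return mu + B @ z + d * eps
```

Gradients of the ELBO with respect to mu, B and d flow through this sampling step, which is what makes a Gaussian variational approximation trainable at DFNN scale.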
Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks
The PC algorithm is a popular method for learning the structure of Gaussian
Bayesian networks. It carries out statistical tests to determine absent edges
in the network. It is hence governed by two parameters: (i) the type of test,
and (ii) its significance level. These parameters are usually set to values
recommended by an expert. Nevertheless, such an approach can suffer from human
bias, leading to suboptimal reconstruction results. In this paper we consider a
more principled approach for choosing these parameters in an automatic way. For
this we optimize a reconstruction score evaluated on a set of different
Gaussian Bayesian networks. This objective is expensive to evaluate and lacks a
closed-form expression, which means that Bayesian optimization (BO) is a
natural choice. BO methods use a model to guide the search and are hence able
to exploit smoothness properties of the objective surface. We show that the
parameters found by a BO method outperform those found by a random search
strategy and the expert recommendation. Importantly, we have found that an
often-overlooked statistical test provides the best overall reconstruction
results.
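As a sketch of the tuning loop: treat the PC algorithm as a black box mapping (test, significance level) to a reconstruction error averaged over benchmark networks, and hand that box to a BO routine. The sketch below assumes scikit-optimize's gp_minimize; run_pc_benchmark and the test names are hypothetical placeholders, not the paper's setup.

```python
from skopt import gp_minimize
from skopt.space import Categorical, Real

def pc_reconstruction_error(params):
    """Objective for BO: lower is better (e.g. structural Hamming distance)."""
    test, log10_alpha = params
    # run_pc_benchmark is a hypothetical helper that runs PC with the given
    # test and significance level over a suite of Gaussian Bayesian networks.
    return run_pc_benchmark(test=test, alpha=10.0 ** log10_alpha)

space = [
    Categorical(["fisher-z", "partial-cor", "rank-cor"], name="test"),
    Real(-8.0, -1.0, name="log10_alpha"),   # search alpha on a log scale
]

result = gp_minimize(pc_reconstruction_error, space, n_calls=50, random_state=0)
print(result.x, result.fun)
```

Searching the significance level on a log scale matters here: useful values of alpha span several orders of magnitude, and the GP surrogate models the smoothed objective far better in log space.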
Bayesian Compression for Deep Learning
Compression and computational efficiency in deep learning have become a
problem of great significance. In this work, we argue that the most principled
and effective way to attack this problem is by adopting a Bayesian point of
view, where through sparsity inducing priors we prune large parts of the
network. We introduce two novelties in this paper: 1) we use hierarchical
priors to prune nodes instead of individual weights, and 2) we use the
posterior uncertainties to determine the optimal fixed point precision to
encode the weights. Both factors significantly contribute to achieving the
state of the art in terms of compression rates, while still staying competitive
with methods designed to optimize for speed or energy efficiency.
Comment: Published as a conference paper at NIPS 201
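A hedged illustration of the two ideas in the abstract, node-level pruning and uncertainty-driven bit widths, applied post hoc to a factorized weight posterior. All names and thresholds here are invented for illustration; this is not the paper's variational procedure.

```python
import numpy as np

def compress_layer(post_mean, post_std, snr_threshold=1.0):
    """Toy compression of one layer from posterior moments (illustrative).

    post_mean, post_std: (n_out, n_in) arrays of posterior weight moments.
    1) Prune whole input nodes whose average signal-to-noise ratio is low,
       mirroring node-level (rather than weight-level) sparsity.
    2) Choose a fixed-point step so quantization noise stays below the
       posterior uncertainty: finer precision would only encode noise.
    """
    snr = np.abs(post_mean) / post_std
    keep = snr.mean(axis=0) > snr_threshold   # one decision per input node
    w = post_mean[:, keep]
    step = post_std[:, keep].min()            # step sized to smallest post. std
    bits = int(np.ceil(np.log2((w.max() - w.min()) / step + 1)))
    return np.round(w / step) * step, keep, bits
```

The point of the sketch is the coupling the abstract describes: the same posterior that induces sparsity also tells you how coarsely the surviving weights can be encoded.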