Efficient inference for genetic association studies with multiple outcomes
Combined inference for heterogeneous high-dimensional data is critical in
modern biology, where clinical and various kinds of molecular data may be
available from a single study. Classical genetic association studies regress a
single clinical outcome on many genetic variants one by one, but there is an
increasing demand for joint analysis of many molecular outcomes and genetic
variants in order to unravel functional interactions. Unfortunately, most
existing approaches to joint modelling are either too simplistic to be powerful
or are impracticable for computational reasons. Inspired by Richardson et al.
(2010, Bayesian Statistics 9), we consider a sparse multivariate regression
model that allows simultaneous selection of predictors and associated
responses. As Markov chain Monte Carlo (MCMC) inference on such models can be
prohibitively slow when the number of genetic variants exceeds a few thousand,
we propose a variational inference approach which produces posterior
information very close to that of MCMC inference, at a much reduced
computational cost. Extensive numerical experiments show that our approach
outperforms popular variable selection methods and tailored Bayesian
procedures, handling problems involving hundreds of thousands of genetic
variants and tens to hundreds of clinical or molecular outcomes within hours.
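
The scaling argument is concrete enough to sketch. Below is a minimal,
single-outcome version of coordinate-ascent variational inference for
spike-and-slab regression, in the spirit of Carbonetto & Stephens (2012);
the paper's multivariate outcome-selection layer is omitted, and all names
and hyper-parameter values are illustrative, not the authors' code.

```python
# Mean-field VI for y = X beta + eps, with beta_j ~ pi0*N(0, tau2) + (1-pi0)*delta_0.
import numpy as np

def cavi_spike_slab(X, y, sigma2=1.0, tau2=1.0, pi0=0.1, n_iter=50):
    n, p = X.shape
    xx = (X ** 2).sum(axis=0)                   # x_j' x_j for each predictor
    s2 = 1.0 / (xx / sigma2 + 1.0 / tau2)       # slab variances under q (fixed)
    alpha = np.full(p, pi0)                     # q(gamma_j = 1): inclusion probabilities
    mu = np.zeros(p)                            # slab means under q
    Xb = X @ (alpha * mu)                       # current fitted values
    for _ in range(n_iter):
        for j in range(p):
            Xb -= X[:, j] * (alpha[j] * mu[j])  # drop predictor j's contribution
            mu[j] = s2[j] / sigma2 * (X[:, j] @ (y - Xb))
            logit = (np.log(pi0 / (1 - pi0))
                     + 0.5 * np.log(s2[j] / tau2)
                     + mu[j] ** 2 / (2 * s2[j]))
            alpha[j] = 1.0 / (1.0 + np.exp(-logit))
            Xb += X[:, j] * (alpha[j] * mu[j])  # restore with updated values
    return alpha, mu                            # inclusion probabilities, effect sizes
```

Each sweep costs O(np), which is why updates of this kind can scale to
hundreds of thousands of variants where per-iteration MCMC cannot.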
Fast Exact Bayesian Inference for Sparse Signals in the Normal Sequence Model
We consider exact algorithms for Bayesian inference with model selection
priors (including spike-and-slab priors) in the sparse normal sequence model.
Because the best existing exact algorithm becomes numerically unstable for
sample sizes over n=500, much attention has turned to alternative
approaches such as approximate algorithms (Gibbs sampling, variational Bayes,
etc.), shrinkage priors (e.g. the Horseshoe prior and the Spike-and-Slab LASSO)
or empirical Bayesian methods. However, by introducing algorithmic ideas from
online sequential prediction, we show that exact calculations are feasible for
much larger sample sizes: for general model selection priors we reach n=25000,
and for certain spike-and-slab priors we can easily reach n=100000. We further
prove a de Finetti-like result for finite sample sizes that characterizes
exactly which model selection priors can be expressed as spike-and-slab priors.
The computational speed and numerical accuracy of the proposed methods are
demonstrated in experiments on simulated data, on a differential gene
expression data set, and in a comparison of multiple hyper-parameter
settings for the beta-binomial prior. In our experimental evaluation we compute
guaranteed bounds on the numerical accuracy of all new algorithms, which shows
that the proposed methods are numerically reliable whereas an alternative based
on long division is not.
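
To see why exactness is on the table at all: with an i.i.d. spike-and-slab
prior, the posterior in the normal sequence model factorizes over coordinates,
so exact inclusion probabilities reduce to a coordinate-wise Bayes formula. The
hard case the paper addresses is general model-selection priors, which couple
coordinates through the model size; the sketch below covers only the easy
factorized case, with illustrative parameter names and values.

```python
# Normal sequence model X_i = theta_i + eps_i, eps_i ~ N(0, 1), with an i.i.d.
# spike-and-slab prior theta_i ~ (1-w)*delta_0 + w*N(0, tau2). The posterior
# factorizes, so exact inclusion probabilities come coordinate by coordinate.
# Computed in log space for the numerical reliability the paper emphasizes.
import numpy as np
from scipy.stats import norm

def inclusion_probs(x, w=0.1, tau2=4.0):
    """Exact P(theta_i != 0 | x_i) under the i.i.d. spike-and-slab prior."""
    log_slab = norm.logpdf(x, scale=np.sqrt(1.0 + tau2))  # marginal if theta_i ~ slab
    log_spike = norm.logpdf(x)                            # marginal if theta_i = 0
    log_num = np.log(w) + log_slab
    log_den = np.logaddexp(log_num, np.log1p(-w) + log_spike)
    return np.exp(log_num - log_den)
```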
Variational Dropout and the Local Reparameterization Trick
We investigate a local reparameterization technique for greatly reducing the
variance of stochastic gradient variational Bayes (SGVB) estimators of a
posterior over model parameters, while retaining parallelizability. This local
reparameterization translates uncertainty about global parameters into local
noise that is independent across datapoints in the minibatch. Such
parameterizations can be trivially parallelized and have variance that is
inversely proportional to the minibatch size, generally leading to much faster
convergence. Additionally, we explore a connection with dropout: Gaussian
dropout objectives correspond to SGVB with local reparameterization, a
scale-invariant prior and proportionally fixed posterior variance. Our method
allows inference of more flexibly parameterized posteriors; specifically, we
propose variational dropout, a generalization of Gaussian dropout where the
dropout rates are learned, often leading to better models. The method is
demonstrated through several experiments.
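
The trick is compact enough to sketch. Rather than sampling a weight matrix
W ~ N(mu, sigma^2) once per minibatch, one samples the pre-activations directly
from the implied Gaussian B ~ N(A mu, A^2 sigma^2), so each datapoint receives
independent noise. Below is a minimal PyTorch layer in this style; the class
name, sizes, and initialization are illustrative, not the paper's code.

```python
# Local reparameterization: sample pre-activations B ~ N(A @ mu, (A**2) @ sigma2)
# instead of sampling W itself, so noise is independent across datapoints and
# gradient variance shrinks with the minibatch size.
import torch
import torch.nn as nn

class LocalReparamLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(0.01 * torch.randn(in_features, out_features))
        self.w_logvar = nn.Parameter(torch.full((in_features, out_features), -10.0))

    def forward(self, a):
        act_mu = a @ self.w_mu                    # gamma = A mu
        act_var = (a ** 2) @ self.w_logvar.exp()  # delta = A^2 sigma^2
        eps = torch.randn_like(act_mu)            # fresh noise for every datapoint
        return act_mu + act_var.sqrt() * eps      # B = gamma + sqrt(delta) * eps

layer = LocalReparamLinear(784, 256)
out = layer(torch.randn(32, 784))  # all 32 rows receive independent activation noise
```

Sampling W directly would share one noise draw across the whole minibatch;
sampling B makes per-example contributions independent, which is where the
variance reduction comes from.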
- …