Sparse Probit Linear Mixed Model
Linear Mixed Models (LMMs) are important tools in statistical genetics. When
used for feature selection, they make it possible to find a sparse set of genetic traits
that best predict a continuous phenotype of interest, while simultaneously
correcting for various confounding factors such as age, ethnicity and
population structure. Formulated as models for linear regression, LMMs have
been restricted to continuous phenotypes. We introduce the Sparse Probit Linear
Mixed Model (Probit-LMM), where we generalize the LMM modeling paradigm to
binary phenotypes. As a technical challenge, the model no longer possesses a
closed-form likelihood function. In this paper, we present a scalable
approximate inference algorithm that lets us fit the model to high-dimensional
data sets. We show on three real-world examples from different domains that in
the setup of binary labels, our algorithm leads to better prediction accuracies
and also selects features which show less correlation with the confounding
factors.

Comment: Published version, 21 pages, 6 figures
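Since the Probit-LMM's marginal likelihood has no closed form, it helps to see what the intractable objective looks like. The sketch below is not the paper's scalable inference algorithm; it is a minimal illustration, with invented names, that approximates the marginal likelihood by plain Monte Carlo over pre-drawn correlated noise (standing in for the confounder term) and adds an L1 penalty to encourage a sparse weight vector.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import norm

def probit_lmm_nll(w, X, y, noise, lam):
    """Monte Carlo estimate of the negative log marginal likelihood of a
    probit model with correlated noise, plus an L1 penalty on w.
    noise holds pre-drawn samples e ~ N(0, sigma^2 K), shape (n, S)."""
    z = y[:, None] * ((X @ w)[:, None] + noise)   # y_i * (x_i^T w + e_i)
    joint = norm.logcdf(z).sum(axis=0)            # log P(y | e), one per draw
    log_lik = logsumexp(joint) - np.log(noise.shape[1])
    return -log_lik + lam * np.abs(w).sum()

rng = np.random.default_rng(0)
n, d, S = 80, 10, 200
X = rng.standard_normal((n, d))
conf = rng.standard_normal((n, 1))                # one synthetic confounder
K = conf @ conf.T + np.eye(n)                     # confounder covariance
noise = 0.5 * np.linalg.cholesky(K) @ rng.standard_normal((n, S))
y = np.sign(X[:, 0] - X[:, 1] + conf[:, 0] + 0.3 * rng.standard_normal(n))

# derivative-free search, since the L1 term is not differentiable at zero
res = minimize(probit_lmm_nll, np.zeros(d), args=(X, y, noise, 1.0),
               method="Powell")
print(np.round(res.x, 2))
```

This naive estimator scales poorly in the number of samples and noise draws, which is exactly why the paper develops a dedicated approximate inference scheme.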
The Infinite Hierarchical Factor Regression Model
We propose a nonparametric Bayesian factor regression model that accounts for
uncertainty in the number of factors, and the relationship between factors. To
accomplish this, we propose a sparse variant of the Indian Buffet Process and
couple this with a hierarchical model over factors, based on Kingman's
coalescent. We apply this model to two problems (factor analysis and factor
regression) in gene-expression data analysis.
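The Indian Buffet Process at the heart of the model can be sampled in a few lines. Below is a minimal sketch of the standard IBP prior (the paper uses a sparse variant, which this does not implement): customer i takes each existing dish k with probability m_k / i and then samples Poisson(alpha / i) new dishes, yielding a binary matrix with an unbounded number of columns (latent factors).

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature-ownership matrix Z from the Indian Buffet
    Process: customer i takes existing dish k with probability m_k / i
    (m_k = how often dish k was taken so far), then tries
    Poisson(alpha / i) brand-new dishes."""
    if rng is None:
        rng = np.random.default_rng()
    counts, rows = [], []                  # counts[k] = m_k
    for i in range(1, n_customers + 1):
        row = [rng.random() < m / i for m in counts]
        for k, taken in enumerate(row):
            counts[k] += taken
        new = rng.poisson(alpha / i)
        counts.extend([1] * new)
        rows.append(row + [True] * new)
    Z = np.zeros((n_customers, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, : len(row)] = row
    return Z

print(sample_ibp(8, alpha=2.0, rng=np.random.default_rng(1)))
```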
Incremental Sparse Bayesian Ordinal Regression
Ordinal Regression (OR), a crucial topic in multi-label learning, aims to model
the ordering information between different data categories. An
important class of approaches to OR models the problem as a linear combination
of basis functions that map features to a high dimensional non-linear space.
However, most of the basis function-based algorithms are time consuming. We
propose an incremental sparse Bayesian approach to OR tasks and introduce an
algorithm to sequentially learn the relevant basis functions in the ordinal
scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression
(ISBOR), automatically optimizes the hyper-parameters via the type-II maximum
likelihood method. By exploiting fast marginal likelihood optimization, ISBOR
can avoid big matrix inverses, which is the main bottleneck in applying basis
function-based algorithms to OR tasks on large-scale datasets. We show that
ISBOR can make accurate predictions with parsimonious basis functions while
offering automatic estimates of the prediction uncertainty. Extensive
experiments on synthetic and real-world datasets demonstrate the efficiency and
effectiveness of ISBOR compared to other basis function-based OR approaches.
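The fast marginal-likelihood idea that lets ISBOR avoid big matrix inverses can be illustrated in the regression setting it was developed for (Tipping and Faul's sequential scheme; ISBOR adapts it to the ordinal likelihood). The sketch below, with invented names and a deliberately naive inverse for readability, scores each candidate basis function by its sparsity factor s and quality factor q; a candidate improves the type-II marginal likelihood exactly when q^2 > s.

```python
import numpy as np

def next_basis(Phi_in, alphas, candidates, t, beta):
    """One step of a fast marginal-likelihood basis search (Tipping &
    Faul style). For a candidate basis vector phi, the sparsity factor
    is s = phi^T C^{-1} phi and the quality factor is q = phi^T C^{-1} t,
    where C = (1/beta) I + Phi_in A^{-1} Phi_in^T involves only the
    currently included basis functions. phi raises the type-II marginal
    likelihood iff q^2 > s, with optimal alpha = s^2 / (q^2 - s)."""
    n = len(t)
    C = np.eye(n) / beta
    if Phi_in.size:
        C += (Phi_in / alphas) @ Phi_in.T   # Phi A^{-1} Phi^T, column-wise
    Cinv = np.linalg.inv(C)                 # naive; real implementations use
                                            # rank-one updates, never inverses
    best, best_gain = None, 0.0
    for k, phi in enumerate(candidates.T):
        s = phi @ Cinv @ phi
        q = phi @ Cinv @ t
        if q * q > s:                       # relevance test
            gain = (q * q - s) / s + np.log(s / (q * q))  # = 2 * delta(logML)
            if gain > best_gain:
                best, best_gain = k, gain
    return best                             # candidate to add, or None

# toy usage: empty model, pick the first basis function to include
rng = np.random.default_rng(0)
t = rng.standard_normal(50)
candidates = rng.standard_normal((50, 30))
print(next_basis(np.empty((50, 0)), np.empty(0), candidates, t, beta=1.0))
```

Because each step touches only scalar factors per candidate, the basis set can be grown (and pruned) incrementally, which is the source of the scalability claimed in the abstract.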