Marginal likelihoods in phylogenetics: a review of methods and applications
By providing a framework of accounting for the shared ancestry inherent to
all life, phylogenetics is becoming the statistical foundation of biology. The
importance of model choice continues to grow as phylogenetic models continue to
increase in complexity to better capture micro- and macroevolutionary processes.
In a Bayesian framework, the marginal likelihood is how data update our prior
beliefs about models, giving us an intuitive measure for comparing model
fit that is grounded in probability theory. Given the rapid increase in the
number and complexity of phylogenetic models, methods for approximating
marginal likelihoods are increasingly important. Here we try to provide an
intuitive description of marginal likelihoods and why they are important in
Bayesian model testing. We also categorize and review methods for estimating
marginal likelihoods of phylogenetic models, highlighting several recent
methods that provide well-behaved estimates. Furthermore, we review some
empirical studies that demonstrate how marginal likelihoods can be used to
learn about models of evolution from biological data. We discuss promising
alternatives that can complement marginal likelihoods for Bayesian model
choice, including posterior-predictive methods. Using simulations, we find one
alternative method based on approximate-Bayesian computation (ABC) to be
biased. We conclude by discussing the challenges of Bayesian model choice and
future directions that promise to improve the approximation of marginal
likelihoods and Bayesian phylogenetics as a whole.
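As an illustrative sketch (not from the paper above), the marginal likelihood is the probability of the data averaged over the prior, and the ratio of two marginal likelihoods is the Bayes factor used for model comparison. For a binomial likelihood with a Beta prior the integral has a closed form, which makes the idea concrete; the priors and data below are arbitrary toy choices:

```python
import math

def log_marginal_beta_binomial(k, n, a, b):
    """Log marginal likelihood of k successes in n trials under a
    Binomial likelihood with a Beta(a, b) prior on the success
    probability: C(n, k) * B(a + k, b + n - k) / B(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(a + k) + math.lgamma(b + n - k) - math.lgamma(a + b + n)
            - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))

# Toy data: 7 successes in 10 trials, compared under two priors.
k, n = 7, 10
m1 = log_marginal_beta_binomial(k, n, 1.0, 1.0)   # flat Beta(1, 1) prior
m2 = log_marginal_beta_binomial(k, n, 30.0, 2.0)  # prior concentrated near 1
bayes_factor = math.exp(m1 - m2)  # > 1 favors the flat-prior model
print(bayes_factor)
```

Phylogenetic models have no such closed form, which is exactly why the approximation methods the review surveys are needed.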
Efficient Localized Inference for Large Graphical Models
We propose a new localized inference algorithm for answering marginalization
queries in large graphical models with the correlation decay property. Given a
query variable and a large graphical model, we define a much smaller model in a
local region around the query variable in the target model so that the marginal
distribution of the query variable can be accurately approximated. We introduce
two approximation error bounds based on Dobrushin's comparison theorem and
apply our bounds to derive a greedy expansion algorithm that efficiently guides
the selection of neighbor nodes for localized inference. We verify our
theoretical bounds on various datasets and demonstrate that our localized
inference algorithm can provide fast and accurate approximation for large
graphical models.
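The core idea can be sketched on a toy model (this is an illustration of localized inference under correlation decay, not the paper's algorithm): in a weakly coupled chain MRF, a potential attached to a distant node barely moves the query node's marginal, so exact inference restricted to a small region around the query is already accurate. The chain structure, coupling value, and field placement below are arbitrary choices:

```python
import itertools
import math

def chain_marginal(n, coupling, fields, query=0, radius=None):
    """Exact P(x_query = +1) by enumeration for a binary chain MRF with
    pairwise potentials exp(coupling * x_i * x_{i+1}) and unary
    potentials exp(fields[i] * x_i), x_i in {-1, +1}.  If `radius` is
    given, only nodes within that distance of the query are kept,
    i.e. inference runs on a localized submodel."""
    if radius is None:
        nodes = list(range(n))
    else:
        nodes = [i for i in range(n) if abs(i - query) <= radius]
    pos = {v: i for i, v in enumerate(nodes)}
    z = z_plus = 0.0
    for x in itertools.product((-1, 1), repeat=len(nodes)):
        logw = sum(fields.get(v, 0.0) * x[pos[v]] for v in nodes)
        for v in nodes:            # each chain edge (v, v+1) counted once
            if v + 1 in pos:
                logw += coupling * x[pos[v]] * x[pos[v + 1]]
        w = math.exp(logw)
        z += w
        if x[pos[query]] == 1:
            z_plus += w
    return z_plus / z

# Ten-node chain, weak coupling, strong field on the far end: the
# localized marginal (radius 3) is already very close to the full one.
full  = chain_marginal(10, 0.3, {9: 1.0}, query=0)
local = chain_marginal(10, 0.3, {9: 1.0}, query=0, radius=3)
print(abs(full - local))
```

The greedy expansion in the paper chooses which neighbors to add to the local region using the error bounds; here the region is just a fixed radius to keep the sketch short.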
Distribution-Based Categorization of Classifier Transfer Learning
Transfer Learning (TL) aims to transfer knowledge acquired in one problem,
the source problem, onto another problem, the target problem, dispensing with
the bottom-up construction of the target model. Due to its relevance, TL has
gained significant interest in the Machine Learning community since it paves
the way to devise intelligent learning models that can easily be tailored to
many different applications. As it is natural in a fast evolving area, a wide
variety of TL methods, settings and nomenclature have been proposed so far.
However, many works report different names for the same concepts, and this
mixture of terminology obscures the TL field and hinders its proper
consideration. In this paper we present a
review of the literature on the majority of classification TL methods, and also
a distribution-based categorization of TL with a common nomenclature suitable
to classification problems. Under this perspective three main TL categories are
presented, discussed and illustrated with examples.