Just Another Gibbs Additive Modeller: Interfacing JAGS and mgcv
The BUGS language offers a very flexible way of specifying complex
statistical models for the purposes of Gibbs sampling, while its JAGS variant
offers very convenient R integration via the rjags package. However, including
smoothers in JAGS models can involve some quite tedious coding, especially for
multivariate or adaptive smoothers. Further, if an additive smooth structure is
required then some care is needed in order to centre smooths appropriately and
to find appropriate starting values. R package mgcv implements a wide range
of smoothers, all in a manner appropriate for inclusion in JAGS code, and
automates centring and other smooth setup tasks. The purpose of this note is to
describe an interface between mgcv and JAGS, based around an R function,
`jagam', which takes a generalized additive model (GAM) as specified in mgcv
and automatically generates the JAGS model code and data required for inference
about the model via Gibbs sampling. Although the auto-generated JAGS code can
be run as is, the expectation is that the user would wish to modify it in order
to add complex stochastic model components readily specified in JAGS. A simple
interface is also provided for visualisation and further inference about the
estimated smooth components using standard mgcv functionality. The methods
described here will be unnecessarily inefficient if all that is required is
fully Bayesian inference about a standard GAM, rather than the full flexibility
of JAGS. In that case the BayesX package would be more efficient.
Comment: Submitted to the Journal of Statistical Software
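The workflow the abstract describes can be sketched in R roughly as follows. This is a hedged illustration, not code from the paper: the data frame `dat`, the formula `y ~ s(x)`, the file name `"model.jags"`, and the sampler settings are all assumed for the example; the monitored nodes `b` (coefficients) and `rho` (log smoothing parameters) follow the naming used by jagam's auto-generated code.

```r
## Sketch of the jagam workflow: mgcv sets up the smooths and writes
## JAGS code; rjags does the sampling; sim2jam returns the results to
## mgcv for visualisation. `dat` (with columns y and x) is assumed.
library(mgcv)
library(rjags)

## Write auto-generated JAGS model code to "model.jags" and obtain the
## data and initial values required for sampling.
jd <- jagam(y ~ s(x), data = dat, family = gaussian,
            file = "model.jags")

## Compile and sample. In practice the user may first edit
## "model.jags" by hand to add further stochastic model components.
jm  <- jags.model("model.jags", data = jd$jags.data,
                  inits = jd$jags.ini, n.chains = 1)
sam <- jags.samples(jm, c("b", "rho"), n.iter = 10000, thin = 10)

## Convert the posterior simulations into a gam-like object so that
## standard mgcv functionality (plot, predict) can be used.
jam <- sim2jam(sam, jd$pregam)
plot(jam)
```

As the abstract notes, running the generated code unmodified is mainly a starting point; the intended use is to edit `model.jags` before sampling.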
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation and manifold learning.