Learning the Structure of Deep Sparse Graphical Models
Deep belief networks are a powerful way to model complex probability
distributions. However, learning the structure of a belief network,
particularly one with hidden units, is difficult. The Indian buffet process has
been used as a nonparametric Bayesian prior on the directed structure of a
belief network with a single infinitely wide hidden layer. In this paper, we
introduce the cascading Indian buffet process (CIBP), which provides a
nonparametric prior on the structure of a layered, directed belief network that
is unbounded in both depth and width, yet allows tractable inference. We use
the CIBP prior with the nonlinear Gaussian belief network so each unit can
additionally vary its behavior between discrete and continuous representations.
We provide Markov chain Monte Carlo algorithms for inference in these belief
networks and explore the structures learned on several image data sets.
Comment: 20 pages, 6 figures, AISTATS 2010, Revised
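The Indian buffet process underlying the CIBP can be illustrated by its standard "restaurant" generative scheme: each customer (unit) takes existing dishes (connections) in proportion to their popularity and samples a Poisson number of new ones. The sketch below is a plain IBP sampler, not the cascading variant and not the paper's MCMC inference; the function name and arguments are hypothetical.

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    """Sample a binary feature matrix Z from the Indian buffet process.

    Rows are customers (units), columns are dishes (features).
    alpha controls the expected number of new dishes per customer.
    """
    rng = np.random.default_rng(seed)
    dish_counts = []  # how many customers have taken each dish so far
    rows = []
    for i in range(1, n_customers + 1):
        # take each existing dish k with probability m_k / i
        row = [rng.random() < m / i for m in dish_counts]
        for k, taken in enumerate(row):
            if taken:
                dish_counts[k] += 1
        # sample Poisson(alpha / i) brand-new dishes
        n_new = rng.poisson(alpha / i)
        row.extend([True] * n_new)
        dish_counts.extend([1] * n_new)
        rows.append(row)
    # pad rows to a common width and stack into a 0/1 matrix
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z
```

The CIBP extends this idea recursively: the dishes of one layer become the customers of the next, yielding a network unbounded in depth and width.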
Homological Neural Networks: A Sparse Architecture for Multivariate Complexity
The rapid progress of Artificial Intelligence research came with the
development of increasingly complex deep learning models, leading to growing
challenges in terms of computational complexity, energy efficiency and
interpretability. In this study, we apply advanced network-based information
filtering techniques to design a novel deep neural network unit characterized
by a sparse higher-order graphical architecture built over the homological
structure of underlying data. We demonstrate its effectiveness in two
application domains which are traditionally challenging for deep learning:
tabular data and time series regression problems. Results demonstrate the
advantages of this novel design, which can match or exceed the results of
state-of-the-art machine learning and deep learning models using only a
fraction of the parameters.
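As a rough illustration of a sparsity-constrained unit, one can fix a binary connectivity mask and zero out every weight outside it; the paper derives such a mask from the homological structure of the data, which is not reproduced here. The class and mask below are hypothetical stand-ins.

```python
import numpy as np

class SparseLinear:
    """A linear layer whose weights obey a fixed 0/1 connectivity mask.

    Only the connections allowed by `mask` (shape: out x in) carry
    weight; everything else stays exactly zero.
    """
    def __init__(self, mask, seed=0):
        rng = np.random.default_rng(seed)
        self.mask = np.asarray(mask, dtype=float)
        self.W = rng.standard_normal(self.mask.shape) * self.mask
        self.b = np.zeros(self.mask.shape[0])

    def __call__(self, x):
        # masked matrix-vector product: disallowed connections contribute 0
        return (self.W * self.mask) @ x + self.b
```

The parameter count scales with the number of allowed connections rather than with the full dense layer, which is the source of the "fraction of parameters" claim above.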
Learning to Discover Sparse Graphical Models
We consider structure discovery of undirected graphical models from
observational data. Inferring likely structures from few examples is a complex
task often requiring the formulation of priors and sophisticated inference
procedures. Popular methods rely on estimating a penalized maximum likelihood
of the precision matrix. However, in these approaches structure recovery is an
indirect consequence of the data-fit term, the penalty can be difficult to
adapt for domain-specific knowledge, and the inference is computationally
demanding. By contrast, it may be easier to generate training samples of data
that arise from graphs with the desired structure properties. We propose here
to leverage this latter source of information as training data to learn a
function, parametrized by a neural network, that maps empirical covariance
matrices to estimated graph structures. Learning this function brings two
benefits: it implicitly models the desired structure or sparsity properties to
form suitable priors, and it can be tailored to the specific problem of edge
structure discovery, rather than maximizing data likelihood. Applying this
framework, we find that our learnable graph-discovery method, trained on
synthetic data, generalizes well: it identifies relevant edges in both
synthetic and real data that were completely unknown at training time. On
genetics, brain imaging, and simulation data, we obtain performance generally
superior to that of analytical methods.
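For context, the precision-matrix approach the authors contrast with can be approximated by a simple classical baseline: invert the empirical covariance and threshold the resulting partial correlations to read off edges. The function name and threshold below are illustrative assumptions, not the paper's learned method.

```python
import numpy as np

def edges_from_covariance(X, threshold=0.2):
    """Baseline edge discovery from data matrix X (samples x variables).

    Estimates the precision matrix from the empirical covariance and
    keeps edges whose absolute partial correlation exceeds `threshold`.
    """
    cov = np.cov(X, rowvar=False)
    # small ridge term for numerical stability before inverting
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)  # off-diagonal partial correlations
    np.fill_diagonal(partial_corr, 0.0)
    return np.abs(partial_corr) > threshold  # boolean adjacency matrix
```

The learned approach described above replaces this hand-set threshold and inversion with a neural network trained on synthetic graphs with the desired structural properties.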
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide, to varying degrees, the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
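As a minimal, self-contained illustration of one family the review covers, the sketch below trains a toy linear auto-encoder by gradient descent on reconstruction error. It is a generic example, not any specific model from the review; all names and hyperparameters are hypothetical.

```python
import numpy as np

def train_autoencoder(X, k, steps=2000, lr=0.05, seed=0):
    """Train a linear auto-encoder: encode X into k features, decode,
    and minimize mean squared reconstruction error by gradient descent.
    Returns encoder and decoder weight matrices (We, Wd).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    We = rng.standard_normal((d, k)) * 0.1  # encoder weights
    Wd = rng.standard_normal((k, d)) * 0.1  # decoder weights
    for _ in range(steps):
        H = X @ We          # learned representation (code)
        Xhat = H @ Wd       # reconstruction
        err = Xhat - X
        # gradients of (1/2n) * squared reconstruction error
        gWd = H.T @ err / n
        gWe = X.T @ (err @ Wd.T) / n
        We -= lr * gWe
        Wd -= lr * gWd
    return We, Wd
```

The code `H = X @ We` is the learned representation in the review's sense: a new coordinate system in which the explanatory factors of the data are, ideally, easier to separate.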