Learning Word Representations with Hierarchical Sparse Coding
We propose a new method for learning word representations using hierarchical
regularization in sparse coding inspired by the linguistic study of word
meanings. We present an efficient learning algorithm based on stochastic proximal
methods that is significantly faster than previous approaches, making it
possible to perform hierarchical sparse coding on a corpus of billions of word
tokens. Experiments on various benchmark tasks---word similarity ranking,
analogies, sentence completion, and sentiment analysis---demonstrate that the
method outperforms or is competitive with state-of-the-art methods. Our word
representations are available at
\url{http://www.ark.cs.cmu.edu/dyogatam/wordvecs/}
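To make the optimization concrete, here is a minimal sketch of one stochastic proximal update under a tree-structured (hierarchical) group-lasso penalty, the standard regularizer behind hierarchical sparse coding. The group ordering, the squared-error loss, and all names below are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    # Assumed setup: `groups` lists index sets ordered children-before-parents,
    # so each group is processed before any of its ancestors. For nested
    # (tree-structured) groups, this sequential soft-thresholding computes
    # the exact proximal operator of the hierarchical group-lasso penalty.

    def prox_tree_group_lasso(a, groups, lam, step):
        """Proximal map of lam * sum_g ||a_g||_2 for tree-structured groups."""
        a = a.copy()
        for g in groups:                       # children before parents
            norm = np.linalg.norm(a[g])
            if norm <= lam * step:
                a[g] = 0.0                     # zero the whole subtree group
            else:
                a[g] *= 1.0 - lam * step / norm
        return a

    def stochastic_proximal_step(a, D, x, lam, step, groups):
        """One update of a word's code `a` for one sampled example `x`:
        gradient step on 0.5 * ||x - D a||^2, then the proximal map."""
        grad = D.T @ (D @ a - x)
        return prox_tree_group_lasso(a - step * grad, groups, lam, step)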
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
We show how to perform sparse approximate Gaussian elimination for Laplacian
matrices. We present a simple, nearly linear time algorithm that approximates a
Laplacian by a matrix with a sparse Cholesky factorization, the version of
Gaussian elimination for symmetric matrices. This is the first nearly linear
time solver for Laplacian systems that is based purely on random sampling, and
does not use any graph theoretic constructions such as low-stretch trees,
sparsifiers, or expanders. The crux of our analysis is a novel concentration
bound for matrix martingales where the differences are sums of conditionally
independent variables.
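The sampling idea can be illustrated with a toy elimination step: when a vertex is removed, the dense clique that exact Gaussian elimination would add on its neighbors is replaced by a few randomly sampled, reweighted edges. This is a minimal sketch in the spirit of randomized approximate Cholesky; the sampling scheme and bookkeeping are illustrative assumptions, not the paper's exact procedure.

    import random
    from collections import defaultdict

    def eliminate_vertex(adj, v, n_samples):
        """Eliminate v from a weighted graph adj[u][w] = weight, approximating
        the Schur-complement clique on v's neighbors by sampled edges."""
        nbrs = list(adj[v].items())            # (neighbor, weight) pairs
        deg = sum(w for _, w in nbrs)          # weighted degree of v
        # clique edge (a, b) gets Schur-complement weight w_a * w_b / deg
        edges, weights = [], []
        for i, (a, wa) in enumerate(nbrs):
            for b, wb in nbrs[i + 1:]:
                edges.append((a, b))
                weights.append(wa * wb / deg)
        if edges:
            total = sum(weights)
            # sample proportionally to weight; assigning each sampled edge
            # weight total/n_samples keeps the sum unbiased in expectation
            for a, b in random.choices(edges, weights=weights, k=n_samples):
                adj[a][b] += total / n_samples
                adj[b][a] += total / n_samples
        for a, _ in nbrs:                      # remove v and its edges
            del adj[a][v]
        del adj[v]

    # Usage: adj = defaultdict(lambda: defaultdict(float)); add symmetric
    # edges, then eliminate vertices in some order, recording each eliminated
    # row to assemble the sparse Cholesky factor.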
JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics
In applications of machine learning to particle physics, a persistent
challenge is how to go beyond discrimination to learn about the underlying
physics. To this end, a powerful tool would be a framework for unsupervised
learning, where the machine learns the intricate high-dimensional contours of
the data upon which it is trained, without reference to pre-established labels.
In order to approach such a complex task, an unsupervised network must be
structured intelligently, based on a qualitative understanding of the data. In
this paper, we scaffold the neural network's architecture around a
leading-order model of the physics underlying the data. In addition to making
unsupervised learning tractable, this design actually alleviates existing
tensions between performance and interpretability. We call the framework
JUNIPR: "Jets from UNsupervised Interpretable PRobabilistic models". In this
approach, the set of particle momenta composing a jet is clustered into a
binary tree that the neural network examines sequentially. Training is
unsupervised and unrestricted: the network could decide that the data bears
little correspondence to the chosen tree structure. However, when there is a
correspondence, the network's output along the tree has a direct physical
interpretation. JUNIPR models can perform discrimination tasks, through the
statistically optimal likelihood-ratio test, and they permit visualizations of
discrimination power at each branching in a jet's tree. Additionally, JUNIPR
models provide a probability distribution from which events can be drawn,
yielding a data-driven Monte Carlo generator. As a third application, JUNIPR
models can reweight events from one (e.g. simulated) data set to agree with
distributions from another (e.g. experimental) data set.
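Since JUNIPR models assign explicit probabilities to jets, both the discrimination and the reweighting applications reduce to likelihood ratios. Below is a minimal sketch assuming two trained models exposed as hypothetical log-probability callables (`log_prob_data`, `log_prob_sim`, and so on); these names stand in for whatever interface a trained model provides.

    import numpy as np

    def discriminant(jet, log_prob_sig, log_prob_bkg):
        """Neyman-Pearson optimal test statistic: the log likelihood ratio."""
        return log_prob_sig(jet) - log_prob_bkg(jet)

    def likelihood_ratio_weights(jets, log_prob_sim, log_prob_data):
        """Per-event weights that reweight simulated jets so their
        distributions agree with the data model, w = p_data / p_sim."""
        log_r = np.array([log_prob_data(j) - log_prob_sim(j) for j in jets])
        return np.exp(log_r)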
Probabilistic Graphical Model Representation in Phylogenetics
Recent years have seen a rapid expansion of the model space explored in
statistical phylogenetics, emphasizing the need for new approaches to
statistical model representation and software development. Clear communication
and representation of the chosen model is crucial for: (1) reproducibility of
an analysis, (2) model development, and (3) software design. Moreover, a
unified, clear and understandable framework for model representation lowers the
barrier for beginners and non-specialists to grasp complex phylogenetic models,
including their assumptions and parameter/variable dependencies.
Graphical modeling is a unifying framework that has gained in popularity in
the statistical literature in recent years. The core idea is to break complex
models into conditionally independent distributions. The strength lies in the
comprehensibility, flexibility, and adaptability of this formalism, and the
large body of computational work based on it. Graphical models are well suited
to teaching statistical models, facilitating communication among
phylogeneticists, and supporting the development of generic software for
simulation and statistical inference.
Here, we provide an introduction to graphical models for phylogeneticists and
extend the standard graphical model representation to the realm of
phylogenetics. We introduce a new graphical model component, tree plates, to
capture the changing structure of the subgraph corresponding to a phylogenetic
tree. We describe a range of phylogenetic models using the graphical model
framework and introduce modules to simplify the representation of standard
components in large and complex models. Phylogenetic model graphs can be
readily used in simulation, maximum likelihood inference, and Bayesian
inference using, for example, Metropolis-Hastings or Gibbs sampling of the
posterior distribution.
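The computational payoff of the factorized representation is that the unnormalized log posterior is just a sum of per-node log densities, which is all a generic sampler needs. The following is a minimal sketch with placeholder factors (an exponential prior on a rate and a toy per-branch likelihood), not a real phylogenetic likelihood such as Felsenstein's pruning algorithm.

    import math
    import random

    def log_prior_rate(rate, mean=1.0):
        # placeholder factor: exponential prior on the rate parameter
        return -rate / mean - math.log(mean)

    def log_lik_branch(branch_len, rate):
        # placeholder factor: exponential waiting time along one branch
        return math.log(rate) - rate * branch_len

    def log_posterior(rate, branch_lens):
        # graphical-model structure: the joint factorizes, so the log
        # posterior is a sum over the model's conditional distributions
        if rate <= 0.0:
            return -math.inf
        return log_prior_rate(rate) + sum(log_lik_branch(b, rate)
                                          for b in branch_lens)

    def metropolis_hastings(branch_lens, n_iter=10000, step=0.1):
        """Random-walk Metropolis sampling of the rate parameter."""
        rate = 1.0
        lp = log_posterior(rate, branch_lens)
        samples = []
        for _ in range(n_iter):
            prop = rate + random.gauss(0.0, step)     # symmetric proposal
            lp_prop = log_posterior(prop, branch_lens)
            if lp_prop > lp or random.random() < math.exp(lp_prop - lp):
                rate, lp = prop, lp_prop              # accept
            samples.append(rate)
        return samples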