3,937 research outputs found
Training Restricted Boltzmann Machines on Word Observations
The restricted Boltzmann machine (RBM) is a flexible tool for modeling
complex data; however, there have been significant computational difficulties in
using RBMs to model high-dimensional multinomial observations. In natural
language processing applications, words are naturally modeled by K-ary discrete
distributions, where K is determined by the vocabulary size and can easily be
in the hundreds of thousands. The conventional approach to training RBMs on
word observations is limited because it requires sampling the states of K-way
softmax visible units during block Gibbs updates, an operation that takes time
linear in K. In this work, we address this issue by employing a more general
class of Markov chain Monte Carlo operators on the visible units, yielding
updates with computational complexity independent of K. We demonstrate the
success of our approach by training RBMs on hundreds of millions of word
n-grams using larger vocabularies than previously feasible and using the
learned features to improve performance on chunking and sentiment
classification tasks, achieving state-of-the-art results on the latter.
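To make the complexity contrast concrete, here is a minimal sketch (illustrative, not the authors' code; the proposal helpers are assumptions) of an exact Gibbs draw for a K-way softmax visible unit, which costs time linear in K, versus a Metropolis-Hastings update whose cost is independent of K:

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_softmax_sample(b, W, h):
        """Exact block-Gibbs draw for one K-way softmax visible unit: O(K)."""
        logits = b + W @ h                  # (K,) unnormalized log-probabilities
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(len(p), p=p)

    def mh_softmax_update(b, W, h, w_old, proposal_draw, log_q):
        """One Metropolis-Hastings update with a K-independent proposal.

        proposal_draw() returns a candidate word index (e.g., drawn from a
        unigram distribution via a precomputed alias table, in O(1));
        log_q(w) is its log proposal probability. Only two energy
        evaluations are needed, so the cost does not grow with K.
        """
        w_new = proposal_draw()
        log_alpha = ((b[w_new] + W[w_new] @ h) - (b[w_old] + W[w_old] @ h)
                     + log_q(w_old) - log_q(w_new))
        return w_new if np.log(rng.random()) < log_alpha else w_old

With a fixed proposal distribution stored in an alias table, each proposal draw takes constant time, so the per-update cost is dominated by the two energy evaluations above.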
A Deep Embedding Model for Co-occurrence Learning
Co-occurrence data is a common and important information source in many
areas, such as word co-occurrence in sentences, friend co-occurrence in
social networks, and product co-occurrence in commercial transaction data,
all of which contain rich correlation and clustering information about the
items. In this paper, we study co-occurrence data using a general energy-based
probabilistic model, and we analyze three different categories of energy-based
models, which are able to capture different levels of dependency in the
co-occurrence data. We also discuss how several typical existing models relate
to these three categories, including the Fully Visible Boltzmann Machine
(FVBM), Matrix Factorization, Log-BiLinear (LBL) models, and the Restricted
Boltzmann Machine (RBM). Then, we propose a Deep Embedding Model
(DEM), derived from the energy model in a \emph{principled} manner.
Furthermore, motivated by the observation that the partition function in the
energy model is intractable and the fact that the major objective of modeling
the co-occurrence data is to predict using the conditional probability, we
apply the \emph{maximum pseudo-likelihood} method to learn DEM. Consequently,
the developed model and its learning method naturally avoid the above
difficulties and can be easily used to compute the conditional probability in
prediction. Interestingly, our method is equivalent to learning a specially
structured deep neural network with back-propagation and a particular sampling
strategy, which makes it scalable to large-scale datasets. Finally, our
experiments show that DEM achieves results comparable to or better than
state-of-the-art methods on datasets across several application domains.
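As a minimal sketch of the pseudo-likelihood idea for a generic binary energy-based model (illustrative only; "energy" is an assumed callable, and the paper applies the principle to its DEM rather than to this toy setting):

    import numpy as np

    def pseudo_log_likelihood(x, energy):
        """Sum_i log p(x_i | x_{-i}) for a binary energy-based model.

        Each conditional depends only on the energy change from flipping
        one coordinate, so the intractable global partition function
        cancels and never has to be computed.
        """
        total = 0.0
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = 1 - x_flip[i]
            delta = energy(x_flip) - energy(x)    # energy gap from one bit flip
            total += -np.logaddexp(0.0, -delta)   # log sigmoid(delta)
        return total

Maximizing this sum fits each tractable conditional directly, which is what makes the conditional probabilities cheap to evaluate at prediction time.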
Efficient Learning for Undirected Topic Models
The Replicated Softmax model, a well-known undirected topic model, is powerful
at extracting semantic representations of documents, but traditional learning
strategies such as Contrastive Divergence are very inefficient. This paper
provides a novel estimator that speeds up learning, based on Noise Contrastive
Estimation and extended to documents of varying length and to weighted inputs.
Experiments on two benchmarks show that the new estimator achieves greatly
improved learning efficiency and high accuracy on document retrieval and
classification.
Comment: Accepted by ACL-IJCNLP 2015 as a short paper; 6 pages.
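For orientation, here is a rough sketch of the generic Noise Contrastive Estimation objective that this kind of estimator builds on (not the paper's exact extension to variable-length, weighted documents; log_model and log_noise are assumed callables):

    import numpy as np

    def nce_objective(log_model, log_noise, data, noise, k):
        """Generic NCE objective (to be maximized).

        log_model(x): unnormalized model log-score
        log_noise(x): log-probability under the noise distribution
        data: observed samples; noise: k noise samples per observation
        """
        def log_sigmoid(z):
            return -np.logaddexp(0.0, -z)
        total = 0.0
        for x in data:                      # push data toward "real"
            total += log_sigmoid(log_model(x) - log_noise(x) - np.log(k))
        for y in noise:                     # push noise toward "fake"
            total += log_sigmoid(-(log_model(y) - log_noise(y) - np.log(k)))
        return total

Because the objective only compares model scores against noise probabilities, the partition function can be treated as a learned constant rather than computed, which avoids the expensive sampling that makes Contrastive Divergence inefficient here.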
Inducing Features of Random Fields
We present a technique for constructing random fields from a set of training
samples. The learning paradigm builds increasingly complex fields by allowing
potential functions, or features, that are supported by increasingly large
subgraphs. Each feature has a weight that is trained by minimizing the
Kullback-Leibler divergence between the model and the empirical distribution of
the training data. A greedy algorithm determines how features are incrementally
added to the field and an iterative scaling algorithm is used to estimate the
optimal values of the weights.
The statistical modeling techniques introduced in this paper differ from
those common to much of the natural language processing literature since there
is no probabilistic finite-state or push-down automaton on which the model is
built. Our approach also differs from the techniques common to the computer
vision literature in that the underlying random fields are non-Markovian and
have a large number of parameters that must be estimated. Relations to other
learning approaches including decision trees and Boltzmann machines are given.
As a demonstration of the method, we describe its application to the problem of
automatic word classification in natural language processing.
Key words: random field, Kullback-Leibler divergence, iterative scaling,
divergence geometry, maximum entropy, EM algorithm, statistical learning,
clustering, word morphology, natural language processing.
Comment: 34 pages, compressed PostScript.
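For intuition, a minimal sketch of one iterative scaling update on a tiny, fully enumerable field (this shows the Generalized Iterative Scaling variant; the paper's improved iterative scaling relaxes the constant-feature-sum requirement, and all names here are illustrative):

    import numpy as np

    def iterative_scaling_step(weights, feats, emp_mean, M):
        """One Generalized Iterative Scaling update for a log-linear field.

        feats: (num_states, num_features) binary feature matrix over a
        small, enumerable state space (for illustration only).
        emp_mean: empirical expectation of each feature in training data.
        M: an upper bound on sum_j f_j(x) over all states x.
        """
        logits = feats @ weights
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # exact model distribution
        model_mean = p @ feats              # E_model[f_i] for each feature
        return weights + np.log(emp_mean / model_mean) / M

Each step moves every weight so that the model expectation of its feature approaches the empirical expectation; since minimizing the Kullback-Leibler divergence to the empirical distribution is equivalent to maximum-likelihood fitting, this is exactly the condition the training procedure targets.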
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide, to varying degrees, the different
explanatory factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
…