A Simple Language Model based on PMI Matrix Approximations
In this study, we introduce a new approach for learning language models by
training them to estimate word-context pointwise mutual information (PMI), and
then deriving the desired conditional probabilities from PMI at test time.
Specifically, we show that with minor modifications to word2vec's algorithm, we
get principled language models that are closely related to the well-established
Noise Contrastive Estimation (NCE) based language models. A compelling aspect
of our approach is that our models are trained with the same simple negative
sampling objective function that is commonly used in word2vec to learn word
embeddings.

Comment: Accepted to EMNLP 201
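The key step described above, deriving conditional probabilities from PMI estimates, follows directly from the definition PMI(w, c) = log(P(w|c) / P(w)). A minimal sketch, assuming a toy vocabulary, illustrative unigram probabilities, and an already-estimated PMI matrix (none of these values are from the paper):

```python
import numpy as np

# Toy setup: vocabulary, unigram probabilities P(w), and a word-context
# PMI matrix. All values are illustrative, not from the paper.
vocab = ["peach", "beach", "eat"]
p_word = np.array([0.5, 0.3, 0.2])           # unigram probabilities P(w)
pmi = np.array([[0.8, -0.4, 0.1],            # pmi[i, j] ~ PMI(w_i, c_j)
                [-0.2, 0.9, 0.0],
                [0.3, 0.1, 0.5]])

def conditional_from_pmi(pmi_col, p_word):
    """Recover P(w | c) from PMI(w, c) = log(P(w|c) / P(w)).

    Since PMI(w, c) = log P(w|c) - log P(w), the score
    P(w) * exp(PMI(w, c)) is proportional to P(w|c); a final
    normalization over the vocabulary yields a proper distribution.
    """
    scores = p_word * np.exp(pmi_col)
    return scores / scores.sum()

probs = conditional_from_pmi(pmi[:, 0], p_word)  # P(w | c = vocab[0])
```

In the paper's setting the PMI values come from a word2vec-style model trained with negative sampling; here they are simply given, since only the test-time derivation is being sketched.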
Mistake-Driven Learning in Text Categorization
Learning problems in the text processing domain often map the text to a space
whose dimensions are the measured features of the text, e.g., its words. Three
characteristic properties of this domain are (a) very high dimensionality, (b)
both the learned concepts and the instances are represented very sparsely in the
feature space, and (c) a high variation in the number of active features in an
instance. In this work we study three mistake-driven learning algorithms for a
typical task of this nature -- text categorization. We argue that these
algorithms -- which categorize documents by learning a linear separator in the
feature space -- have a few properties that make them ideal for this domain. We
then show that a substantial improvement in performance is achieved when we further modify
the algorithms to better address some of the specific characteristics of the
domain. In particular, we demonstrate (1) how variation in document length can
be tolerated by either normalizing feature weights or by using negative
weights, (2) the positive effect of applying a threshold range in training, (3)
alternatives in considering feature frequency, and (4) the benefits of
discarding features while training. Overall, we present an algorithm, a
variation of Littlestone's Winnow, which performs significantly better than any
other algorithm tested on this task using a similar feature set.

Comment: 9 pages, uses aclap.st
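The mistake-driven, multiplicative-update scheme at the heart of Winnow can be sketched in a few lines. This is a minimal positive-weight Winnow over sparse binary features, not the paper's modified variant; the multiplier alpha, threshold theta, and the toy examples are illustrative:

```python
# Minimal sketch of positive Winnow: a mistake-driven algorithm that
# learns a linear separator over sparse binary features. The parameter
# values below are illustrative, not the paper's tuned settings.

def winnow_train(examples, n_features, alpha=2.0, theta=None, epochs=5):
    """examples: list of (active_feature_indices, label), label in {0, 1}."""
    theta = theta if theta is not None else n_features / 2.0
    w = [1.0] * n_features                 # all weights start at 1
    for _ in range(epochs):
        for active, label in examples:
            pred = 1 if sum(w[i] for i in active) >= theta else 0
            if pred == label:
                continue                   # mistake-driven: update only on errors
            factor = alpha if label == 1 else 1.0 / alpha
            for i in active:               # promote or demote active features only
                w[i] *= factor
    return w, theta

def winnow_predict(w, theta, active):
    return 1 if sum(w[i] for i in active) >= theta else 0

# Tiny separable example: class 1 iff feature 0 is active.
examples = [([0, 1], 1), ([1, 2], 0), ([0, 3], 1), ([2, 3], 0)]
w, theta = winnow_train(examples, n_features=4)
```

Because only the features active in a misclassified instance are updated, the cost of each update scales with instance sparsity rather than dimensionality, which is what makes this family of algorithms attractive for text.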
Similarity-Based Models of Word Cooccurrence Probabilities
In many applications of natural language processing (NLP) it is necessary to
determine the likelihood of a given word combination. For example, a speech
recognizer may need to determine which of the two word combinations ``eat a
peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine
the likelihood of a word combination from its frequency in a training corpus.
However, the nature of language is such that many word combinations are
infrequent and do not occur in any given corpus. In this work we propose a
method for estimating the probability of such previously unseen word
combinations using available information on ``most similar'' words.
We describe probabilistic word association models based on distributional
word similarity, and apply them to two tasks, language modeling and pseudo-word
disambiguation. In the language modeling task, a similarity-based model is used
to improve probability estimates for unseen bigrams in a back-off language
model. The similarity-based method yields a 20% perplexity improvement in the
prediction of unseen bigrams and statistically significant reductions in
speech-recognition error.
We also compare four similarity-based estimation methods against back-off and
maximum-likelihood estimation methods on a pseudo-word sense disambiguation
task in which we controlled for both unigram and bigram frequency to avoid
giving too much weight to easy-to-disambiguate high-frequency configurations.
The similarity-based methods perform up to 40% better on this particular task.

Comment: 26 pages, 5 figure
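The core estimation idea, approximating the probability of an unseen bigram by averaging the maximum-likelihood estimates of its "most similar" seen bigrams, can be sketched as follows. The counts, the neighbor set, and the similarity weights are all illustrative stand-ins (in the paper, similarity is derived from distributional word statistics):

```python
# Sketch of similarity-based estimation for an unseen bigram (w1, w2):
# P(w2 | w1) is approximated by a similarity-weighted average of
# P(w2 | w1') over words w1' most similar to w1. All counts and
# similarity weights below are illustrative.

bigram_counts = {("drink", "a"): 3, ("consume", "a"): 2,
                 ("drink", "the"): 1, ("consume", "the"): 2}
unigram_counts = {"drink": 4, "consume": 4, "eat": 2, "a": 5, "the": 3}

def mle(w1, w2):
    """Maximum-likelihood estimate P(w2 | w1) = count(w1, w2) / count(w1)."""
    if unigram_counts.get(w1, 0) == 0:
        return 0.0
    return bigram_counts.get((w1, w2), 0) / unigram_counts[w1]

def similarity_estimate(w1, w2, neighbors):
    """neighbors: list of (w1_prime, sim) for words most similar to w1."""
    norm = sum(sim for _, sim in neighbors)
    return sum(sim * mle(wp, w2) for wp, sim in neighbors) / norm

# "eat a" never occurs in the toy counts, so its MLE is zero, yet the
# similarity-based estimate borrows mass from "drink" and "consume".
p_eat_a = similarity_estimate("eat", "a", [("drink", 0.6), ("consume", 0.4)])
```

In a back-off language model, an estimate like this would replace the usual unigram back-off term for bigrams unseen in training.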