Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation
We present a novel family of language model (LM) estimation techniques named
Sparse Non-negative Matrix (SNM) estimation. A first set of experiments
empirically evaluating it on the One Billion Word Benchmark shows that SNM
n-gram LMs perform almost as well as the well-established Kneser-Ney (KN)
models. When using skip-gram features, the models match the
state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling
techniques yields the best known result on the benchmark. The computational
advantages of SNM over both maximum entropy and RNN LM estimation are probably
its main strength, promising an approach that has the same flexibility in
combining arbitrary features effectively and yet should scale to very large
amounts of data as gracefully as n-gram LMs do.
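The abstract leaves the estimation rule implicit. As a rough, hypothetical sketch (our names, and a deliberate simplification: plain relative frequencies in place of the paper's adjusted counts and learned meta-features), an SNM-style model can be pictured as a sparse non-negative matrix of feature-to-word weights, where the features are n-gram suffixes and skip-gram patterns of the history, and a word's probability is its summed weight normalized over the vocabulary:

```python
from collections import defaultdict

class SparseNonNegativeLM:
    """Toy SNM-style LM: P(w | h) is proportional to the sum, over
    features f extracted from the history h, of non-negative weights
    M[f][w]. A sketch of the general form, not the paper's estimator."""

    def __init__(self):
        # Sparse matrix M: feature -> {word: non-negative weight}.
        self.M = defaultdict(lambda: defaultdict(float))

    def features(self, history, max_order=3):
        """Hypothetical feature set: n-gram suffixes of the history plus
        a 1-skip bigram (the "skip-gram" features of the abstract)."""
        feats = [("ngram", tuple(history[-n:]))
                 for n in range(1, max_order + 1) if len(history) >= n]
        if len(history) >= 3:
            feats.append(("skip1", (history[-3], history[-1])))
        return feats

    def add_count(self, history, word, value=1.0):
        # Counting keeps every weight non-negative by construction.
        for f in self.features(history):
            self.M[f][word] += value

    def prob(self, history, word):
        num = den = 0.0
        for f in self.features(history):
            row = self.M.get(f)
            if row:
                num += row.get(word, 0.0)
                den += sum(row.values())
        return num / den if den > 0.0 else 0.0

lm = SparseNonNegativeLM()
tokens = "the cat sat on the mat".split()
for i in range(1, len(tokens)):
    lm.add_count(tokens[:i], tokens[i])
print(lm.prob("the cat sat on the".split(), "mat"))
```

Because the matrix is sparse and updates are additive counts, training cost scales with the number of active (feature, word) pairs rather than the full vocabulary cross-product, which is consistent with the scalability the abstract claims.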
Improving Negative Sampling for Word Representation using Self-embedded Features
Although the word-popularity based negative sampler has shown superb
performance in the skip-gram model, the theoretical motivation behind
oversampling popular (non-observed) words as negative samples is still not well
understood. In this paper, we start from an investigation of the gradient
vanishing issue in the skip-gram model without a proper negative sampler. By
performing an insightful analysis from the stochastic gradient descent (SGD)
learning perspective, we demonstrate that, both theoretically and intuitively,
negative samples with larger inner product scores are more informative than
those with lower scores for the SGD learner in terms of both convergence rate
and accuracy. Based on this insight, we propose an alternative sampling algorithm
that dynamically selects informative negative samples during each SGD update.
More importantly, the proposed sampler accounts for multi-dimensional
self-embedded features during the sampling process, which essentially makes it
more effective than the original popularity-based (one-dimensional) sampler.
Experiments further verify our observations and show that our
fine-grained samplers yield significant improvements over existing ones
without increasing computational complexity.

Comment: Accepted in WSDM 201
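For concreteness, a minimal sketch of the idea (not the authors' implementation; the sizes, names, and pool-then-rank rule below are our assumptions, and the paper's multi-dimensional self-embedded features are omitted): at each SGD step, draw a candidate pool, score it against the current center-word vector, and keep the candidates with the largest inner products as the informative negatives.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 1000, 50                              # toy vocabulary size / dimension
W_in = rng.normal(scale=0.1, size=(V, D))    # input ("center") vectors
W_out = rng.normal(scale=0.1, size=(V, D))   # output ("context") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(center, context, n_candidates=64, n_neg=5, lr=0.025):
    """One skip-gram negative-sampling update with dynamically chosen,
    score-ranked negatives (a sketch of the 'informative sample' idea)."""
    # Draw a uniform candidate pool and drop the observed context word.
    pool = rng.choice(V, size=n_candidates, replace=False)
    pool = pool[pool != context]
    # Rank candidates by inner product with the current center vector:
    # high-scoring negatives carry larger gradients, hence more signal.
    scores = W_out[pool] @ W_in[center]
    negatives = pool[np.argsort(scores)[-n_neg:]]

    v = W_in[center].copy()
    grad_v = np.zeros_like(v)
    # Positive pair: pull the context vector toward the center vector.
    g = sigmoid(W_out[context] @ v) - 1.0
    grad_v += g * W_out[context]
    W_out[context] -= lr * g * v
    # Informative negatives: push their output vectors away.
    for neg in negatives:
        g = sigmoid(W_out[neg] @ v)
        grad_v += g * W_out[neg]
        W_out[neg] -= lr * g * v
    W_in[center] -= lr * grad_v

sgd_step(center=3, context=7)
```

Scoring a fixed-size pool keeps the per-update cost at O(n_candidates x D), consistent with the abstract's claim that the finer-grained sampler adds no computational complexity over the popularity-based one.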