
    Word Embeddings: A Survey

    This work lists and describes the main recent strategies for building fixed-length, dense and distributed representations for words, based on the distributional hypothesis. These representations are now commonly called word embeddings and, in addition to encoding surprisingly good syntactic and semantic information, have proven useful as extra features in many downstream NLP tasks.
    Comment: 10 pages, 2 tables, 1 image
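
    As a hedged illustration of the distributional hypothesis behind these representations, the sketch below builds dense, fixed-length word vectors for a toy corpus by factorizing a positive PMI co-occurrence matrix with a truncated SVD; the corpus, window size, and dimensionality are illustrative choices, not taken from the survey.

        # Count-based embedding sketch: PPMI co-occurrence matrix + truncated SVD.
        import numpy as np

        corpus = [
            "the cat sat on the mat".split(),
            "the dog sat on the rug".split(),
            "a cat and a dog played".split(),
        ]

        # Vocabulary and symmetric co-occurrence counts within a +/-2 word window.
        vocab = sorted({w for sent in corpus for w in sent})
        idx = {w: i for i, w in enumerate(vocab)}
        window = 2
        counts = np.zeros((len(vocab), len(vocab)))
        for sent in corpus:
            for i, w in enumerate(sent):
                for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if i != j:
                        counts[idx[w], idx[sent[j]]] += 1

        # Positive pointwise mutual information (PPMI) weighting.
        total = counts.sum()
        row = counts.sum(axis=1, keepdims=True)
        col = counts.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log(counts * total / (row * col))
        ppmi = np.maximum(np.nan_to_num(pmi, neginf=0.0), 0.0)

        # Truncated SVD yields fixed-length, dense word vectors.
        dim = 4
        U, S, _ = np.linalg.svd(ppmi)
        embeddings = U[:, :dim] * S[:dim]
        print(vocab[0], embeddings[0])

    Skip-gram with negative sampling, as popularized by word2vec, has been shown to implicitly factorize a similarly shifted PMI matrix, which is one reason count-based and prediction-based embeddings end up so closely related.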

    A Batch Noise Contrastive Estimation Approach for Training Large Vocabulary Language Models

    Training large vocabulary Neural Network Language Models (NNLMs) is a difficult task due to the explicit requirement of output layer normalization, which typically involves evaluating the full softmax function over the complete vocabulary. This paper proposes a Batch Noise Contrastive Estimation (B-NCE) approach to alleviate this problem. This is achieved by reducing the vocabulary, at each time step, to the target words in the batch and then replacing the softmax with the noise contrastive estimation approach, where these words play the role of targets and noise samples at the same time. In doing so, the proposed approach can be fully formulated and implemented using optimal dense matrix operations. Applying B-NCE to train different NNLMs on the Large Text Compression Benchmark (LTCB) and the One Billion Word Benchmark (OBWB) shows a significant reduction in training time with no noticeable degradation of the models' performance. This paper also presents a new baseline comparative study of different standard NNLMs on the large OBWB on a single Titan-X GPU.
    Comment: Accepted for publication at INTERSPEECH'17
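
    The core trick is easy to sketch: at each step the output layer is restricted to the unique target words appearing in the current batch, and those words simultaneously serve as targets and shared noise samples, so the loss reduces to dense matrix products. The sketch below is a simplified illustration of that idea, not the paper's exact B-NCE formulation; the dimensions, uniform noise distribution, and function names are placeholders.

        # Simplified batch-NCE-style loss: score only the unique targets in the batch.
        import math
        import torch
        import torch.nn.functional as F

        def batch_nce_loss(hidden, targets, out_emb, out_bias, noise_logprob):
            """hidden: (B, d) context vectors; targets: (B,) word ids;
            out_emb: (V, d) output embeddings; out_bias: (V,) output biases;
            noise_logprob: (V,) log-probabilities of the noise distribution."""
            uniq, inverse = torch.unique(targets, return_inverse=True)  # K unique targets
            k = uniq.numel()
            # Dense score matrix over the reduced vocabulary only: (B, K)
            scores = hidden @ out_emb[uniq].t() + out_bias[uniq]
            # NCE treats each entry as a binary "data vs. noise" decision,
            # shifted by log(k * q(w)) for the shared noise words.
            logits = scores - (noise_logprob[uniq] + math.log(k))
            labels = F.one_hot(inverse, num_classes=k).float()          # 1 at the true target
            return F.binary_cross_entropy_with_logits(logits, labels)

        # Toy usage with random tensors standing in for an NNLM's hidden states.
        B, d, V = 8, 16, 100
        hidden, targets = torch.randn(B, d), torch.randint(0, V, (B,))
        out_emb, out_bias = torch.randn(V, d), torch.zeros(V)
        noise_logprob = torch.full((V,), -math.log(V))   # uniform noise for illustration
        print(batch_nce_loss(hidden, targets, out_emb, out_bias, noise_logprob))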

    A Simple Language Model based on PMI Matrix Approximations

    In this study, we introduce a new approach to learning language models by training them to estimate word-context pointwise mutual information (PMI) and then deriving the desired conditional probabilities from PMI at test time. Specifically, we show that with minor modifications to word2vec's algorithm, we get principled language models that are closely related to the well-established Noise Contrastive Estimation (NCE) based language models. A compelling aspect of our approach is that our models are trained with the same simple negative sampling objective function that is commonly used in word2vec to learn word embeddings.
    Comment: Accepted to EMNLP 2017
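
    At test time, the trained embeddings are turned back into a language model through the PMI identity PMI(w, c) = log p(w | c) / p(w). The sketch below illustrates that derivation step only, assuming embeddings already trained with a word2vec-style negative sampling objective so that dot products approximate PMI; the random arrays and uniform unigram distribution are placeholders, not the paper's setup.

        # Deriving conditional probabilities from estimated PMI values at test time.
        import numpy as np

        def conditional_probs(context_vec, target_emb, unigram_prob):
            """Since PMI(w, c) = log p(w | c) / p(w), we have p(w | c) proportional
            to p(w) * exp(PMI(w, c)); normalize over the vocabulary at test time."""
            pmi_est = target_emb @ context_vec          # (V,) estimated PMI(w, c) values
            unnorm = unigram_prob * np.exp(pmi_est)
            return unnorm / unnorm.sum()

        # Toy usage with random vectors standing in for trained embeddings.
        V, d = 1000, 50
        rng = np.random.default_rng(0)
        target_emb = rng.normal(scale=0.1, size=(V, d))
        context_vec = rng.normal(scale=0.1, size=d)
        unigram_prob = np.ones(V) / V
        p = conditional_probs(context_vec, target_emb, unigram_prob)
        print(p.sum())                                  # sums to 1 by construction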

    Importance Sampling for Objective Function Estimations in Neural Detector Training Driven by Genetic Algorithms

    To train Neural Networks (NNs) in a supervised way, the value of an objective function must be estimated. This value decreases as training progresses, so the number of test observations needed for an accurate estimate has to increase. Consequently, the computational cost of training becomes unaffordable when estimating very low objective function values, and the use of Importance Sampling (IS) techniques becomes convenient. Three different objective functions are studied, and IS-based estimators are proposed for each: the Mean-Square Error, the Cross-Entropy Error, and the Misclassification Error criteria. The values of these functions are estimated with IS techniques, and the estimates are used to train NNs by means of Genetic Algorithms. Results are provided for binary detection in Gaussian noise, showing the evolution of the parameters during training and the performance of the proposed detectors in terms of error probability and Receiver Operating Characteristic curves. The results obtained justify the convenience of using IS in the training process.
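
    The reason IS helps is that the errors of a good detector become rare events, so a naive Monte Carlo estimate of, say, the Misclassification Error would need an enormous number of observations. The sketch below estimates a small false-alarm probability for a toy threshold detector in Gaussian noise by sampling from a shifted proposal and reweighting; the detector, threshold, and proposal are illustrative choices, not the GA-trained neural detectors studied in the paper.

        # Importance-sampling estimate of a rare error probability under Gaussian noise.
        import numpy as np

        def is_error_probability(detector, n_samples=100_000, shift=3.0, seed=0):
            """Estimate P(detector(x) = 1) for x ~ N(0, 1) by drawing from the
            shifted proposal N(shift, 1), so that errors are sampled often, and
            reweighting each sample by the likelihood ratio N(0,1)/N(shift,1)."""
            rng = np.random.default_rng(seed)
            x = rng.normal(loc=shift, scale=1.0, size=n_samples)    # proposal samples
            weights = np.exp(-0.5 * x**2 + 0.5 * (x - shift) ** 2)  # likelihood ratio
            errors = detector(x).astype(float)                      # 1 where the detector fires
            return float(np.mean(errors * weights))

        # Toy detector: declare a target whenever the observation exceeds a threshold.
        threshold = 3.5
        print(is_error_probability(lambda x: x > threshold))
        # A plain Monte Carlo estimate of this ~2e-4 probability would need far more samples.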