4 research outputs found

    Correlated Bigram LSA for Unsupervised LM adaptation


    A Survey on Language Modeling using Neural Networks

    A Language Model (LM) is a core component of many Natural Language Processing (NLP) systems today. For speech recognition, machine translation, information retrieval, word sense disambiguation, etc., an LM contributes features and estimates of the probability of word sequences, their grammaticality, and their semantic meaningfulness. What makes language modeling a challenge for machine learning algorithms is the sheer number of possible word sequences: the curse of dimensionality is especially acute when modeling natural language. This survey summarizes and groups the literature that has addressed this problem, and examines promising recent research on neural network techniques applied to language modeling that aim to overcome this curse and achieve better generalization over word sequences.
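    As a minimal illustration of the probability estimates such an LM provides (a toy sketch, not from the survey itself), a maximum-likelihood bigram model can be estimated from raw counts; the tiny corpus and vocabulary here are invented for the example:

    ```python
    from collections import Counter

    # Toy corpus; a real LM is trained on vastly more data, which is
    # exactly where the curse of dimensionality bites: most word
    # sequences are never observed even in huge corpora.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Maximum-likelihood bigram estimate: P(w2 | w1) = count(w1 w2) / count(w1)
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def bigram_prob(w1, w2):
        return bigrams[(w1, w2)] / unigrams[w1]

    def sequence_prob(words):
        # Chain rule under a first-order Markov approximation:
        # P(w1..wn) ~ product of P(wi | wi-1)
        p = 1.0
        for w1, w2 in zip(words, words[1:]):
            p *= bigram_prob(w1, w2)
        return p

    print(sequence_prob("the cat sat".split()))  # → 0.25
    ```

    Unseen bigrams get probability zero under this estimator, which is the sparsity problem that smoothing and, later, the neural network techniques surveyed here were designed to address.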

    Training connectionist models for the structured language model

    No full text

    We investigate the performance of the Structured Language Model (SLM), measured in terms of perplexity (PPL), when its components are modeled by connectionist models.
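    Perplexity, the evaluation metric named in this abstract, can be computed from a model's per-word probabilities on held-out text; the following is a generic sketch of the standard formula, not the SLM's own evaluation code:

    ```python
    import math

    def perplexity(word_probs):
        # PPL = exp(-(1/N) * sum_i log p_i), the exponentiated average
        # negative log-likelihood per word; lower is better.
        n = len(word_probs)
        return math.exp(-sum(math.log(p) for p in word_probs) / n)

    # A model assigning probability 0.25 to each of four words is as
    # uncertain as a uniform choice among 4 outcomes:
    print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
    ```

    Intuitively, a perplexity of k means the model is, on average, as uncertain as if it were choosing uniformly among k words at each position.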