Unsupervised methods for language modeling: technical report no. DCSE/TR-2012-03

Abstract

Language models are crucial for many tasks in NLP, and n-grams are the best way to build them. Huge effort is being invested in improving n-gram language models. By introducing external information (morphology, syntax, partitioning into documents, etc.) into the models, a significant improvement can be achieved. The models can, however, also be improved with no external information, and smoothing is an excellent example of such an improvement. The thesis summarizes the state-of-the-art approaches to unsupervised language modeling with emphasis on inflectional languages, which are particularly hard to model. It focuses on methods that can discover hidden patterns already present in a training corpus. These patterns can be very useful for enhancing the performance of language modeling; moreover, they do not require additional information sources.
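To illustrate the kind of improvement smoothing gives without any external information, the following is a minimal sketch (not taken from the report) of a bigram language model with add-one (Laplace) smoothing; the toy corpus, function names, and vocabulary handling are all assumptions made purely for the example.

```python
from collections import Counter

def train_bigram_counts(tokens):
    """Count unigrams and bigrams from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """P(w | w_prev) with add-one (Laplace) smoothing:
    unseen bigrams receive a small non-zero probability instead of zero."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

# Toy corpus -- purely illustrative, not from the report.
corpus = "the cat sat on the mat the cat ate".split()
unigrams, bigrams = train_bigram_counts(corpus)
V = len(unigrams)

print(bigram_prob("the", "cat", unigrams, bigrams, V))  # seen bigram
print(bigram_prob("the", "dog", unigrams, bigrams, V))  # unseen, but still non-zero
```

Without smoothing, the unseen bigram would get probability zero and make any sentence containing it impossible under the model; smoothing redistributes a little probability mass to such events, which is exactly the sort of improvement that needs no external information source.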
