2 research outputs found

    Enhanced word length and model elimination algorithms for language identification

    Language identification is the process of determining the natural language of text documents using computational methods. The quality and size of the text available for building the necessary models have a significant impact on the performance of language identification algorithms, and correctly identifying the language of a document is required for information retrieval systems to be effective in a multilingual setting. Unfortunately, existing methods for modelling natural language suffer from several limitations: they cannot produce reliable models from small amounts of training text, they handle multilingual documents inconsistently, they have long training times, and they struggle to distinguish closely related languages. The spelling checker technique has been shown to distinguish closely related languages successfully, but it is hampered by two important constraints: inefficient run time performance and the non-availability of spelling checkers for many languages.

    The aim of this study is to address these problems by developing improved algorithms that enhance run time performance and accuracy irrespective of the size of the available corpus. This thesis therefore proposes three algorithms. Firstly, the word length algorithm implements the bag-of-words model using word length information. Secondly, the model elimination algorithm further improves run time performance by exploiting word frequency in the training and testing documents: by monitoring the performance of the models as the text is processed, it dynamically selects non-performing models for elimination without compromising accuracy. Thirdly, the linear combination algorithm merges the strengths of the first two by feeding word length features into the model elimination algorithm.

    Empirical results on test collections from standard corpora show that the proposed algorithms are superior to existing methods at distinguishing closely related languages and at multilingual identification. In addition, the word length, model elimination and linear combination algorithms run faster than the spelling checker method, which uses a similar scoring technique, yielding average time gains of 57%, 83% and 98.4% respectively when identifying 140-byte texts.

    Language Identification of Search Engine Queries

    We consider the language identification problem for search engine queries. First, we propose a method to automatically generate a data set, using clickthrough logs of the Yahoo! Search Engine to derive the language of a query indirectly from the language of the documents clicked by users. Next, we use this data set to train two decision tree classifiers: one that uses only linguistic features and is aimed at textual language identification, and one that additionally uses a non-linguistic feature and is geared towards identifying the language intended by the users of the search engine. Our results show that our method produces a highly reliable data set very efficiently, and that our decision tree classifier outperforms some of the best methods proposed for written language identification on the domain of search engine queries.
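    The sketch below illustrates the two stages described in this abstract: labelling queries by majority vote over the languages of their clicked documents, and training a decision tree on the labelled queries. It is a minimal sketch under stated assumptions; the character n-gram features, the scikit-learn components, and the toy click log stand in for the paper's actual features and data.

        from collections import Counter
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        def label_queries_from_clicks(click_log):
            # click_log: iterable of (query, language of clicked document) pairs.
            # Each query is labelled with the majority language among its clicks.
            votes = {}
            for query, doc_lang in click_log:
                votes.setdefault(query, Counter())[doc_lang] += 1
            return {q: langs.most_common(1)[0][0] for q, langs in votes.items()}

        def train_query_classifier(labelled_queries):
            # labelled_queries: {query: language}.  Character n-gram counts stand in
            # for the paper's linguistic features, which are not detailed here.
            queries = list(labelled_queries)
            labels = [labelled_queries[q] for q in queries]
            model = make_pipeline(
                CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
                DecisionTreeClassifier(max_depth=20),
            )
            return model.fit(queries, labels)

        # Toy click log; real labels would come from search engine clickthrough data.
        log = [("boulangerie paris", "fr"), ("boulangerie paris", "fr"),
               ("weather tomorrow", "en"), ("wetter morgen", "de")]
        clf = train_query_classifier(label_queries_from_clicks(log))
        print(clf.predict(["wetter heute"]))   # predicted language for an unseen query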