
    Improving the quality of Gujarati-Hindi Machine Translation through part-of-speech tagging and stemmer-assisted transliteration

    Machine translation for Indian languages is an emerging research area. Transliteration is one of the modules designed when building a translation system: it maps source-language text into the target language. Simple character mapping decreases the efficiency of the overall translation system, so we propose stemming- and part-of-speech-assisted transliteration. The effectiveness of translation improves when transliteration is assisted by part-of-speech tagging and stemming. We show that much of the Gujarati content is transliterated correctly while being processed for translation into Hindi.
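The stemmer-assisted idea described above can be sketched as follows. The suffix list, character map, and function names are illustrative assumptions for this sketch, not the authors' actual implementation:

```python
# Illustrative sketch of stemmer-assisted transliteration (assumed details).
# Gujarati -> Devanagari character map, heavily abbreviated for the example.
CHAR_MAP = {"ક": "क", "મ": "म", "લ": "ल", "ા": "ा", "ળ": "ळ"}

SUFFIXES = ["ો", "ી", "ે"]  # hypothetical Gujarati inflectional suffixes
SUFFIX_MAP = {"ો": "ो", "ી": "ी", "ે": "े"}

def stem(word):
    """Strip one known suffix, returning (stem, suffix)."""
    for suf in SUFFIXES:
        if word.endswith(suf):
            return word[: -len(suf)], suf
    return word, ""

def transliterate(word):
    """Transliterate stem and suffix separately, then rejoin."""
    base, suf = stem(word)
    mapped = "".join(CHAR_MAP.get(ch, ch) for ch in base)
    return mapped + SUFFIX_MAP.get(suf, suf)
```

Handling the stem and the inflectional suffix separately is what lets POS information disambiguate the mapping of the suffix.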

    Telugu Text Categorization using Language Models

    Document categorization has become an emerging research technique due to the abundance of documents available in digital form. In this paper we propose language-dependent and language-independent models applicable to the categorization of Telugu documents. India is a multilingual country: each Indian state may choose its own authorized language for official communication at the state level. The amount of textual data in the various Indian regional languages available in electronic form is constantly increasing; hence, the classification of text documents by language is crucial. Telugu is the third most spoken language in India and one of the fifteen most spoken languages in the world, and it is the official language of the states of Telangana and Andhra Pradesh. A variant of the k-nearest neighbors algorithm is used for the categorization process, and the results of the language-dependent and language-independent models are compared.
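The k-nearest-neighbors step can be sketched as below; the term-frequency representation and cosine similarity are conventional choices assumed for illustration, not necessarily the paper's exact variant:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def knn_classify(doc, labeled_docs, k=3):
    """Assign the majority label among the k most similar training docs."""
    vec = Counter(doc.split())
    ranked = sorted(labeled_docs,
                    key=lambda d: cosine(vec, Counter(d[0].split())),
                    reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

A language-independent model would build the term vectors from character n-grams instead of whitespace tokens; the skeleton is otherwise the same.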

    NLP Challenges for Machine Translation from English to Indian Languages

    This natural language processing work focuses particularly on English-Kannada/Telugu machine translation. Kannada is a language of India, classified as Dravidian, Southern, Tamil-Kannada, Kannada. Regions spoken: Kannada is spoken in Karnataka, Andhra Pradesh, Tamil Nadu, and Maharashtra. Population: the total number of Kannada speakers was 35,346,000 as of 1997. Alternate names: other names for Kannada are Kanarese, Canarese, Banglori, and Madrassi. Dialects: some dialects of Kannada are Bijapur, Jeinu Kuruba, and Aine Kuruba; there are about 20 dialects, and Badaga may be one of them. Kannada is the state language of Karnataka. About 9,000,000 people speak Kannada as a second language. The literacy rate for people who speak Kannada as a first language is about 60%, which is the same as for those who speak it as a second language (in India). Bible translations into Kannada date from 1831 to 2000. Statistical machine translation (SMT) is a machine translation paradigm in which translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The statistical approach contrasts with rule-based approaches to machine translation as well as with example-based machine translation.
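The SMT paradigm mentioned above is conventionally expressed as a noisy-channel model; this standard formulation (not specific to this paper) chooses the target sentence e for a source sentence f by

```latex
\hat{e} = \operatorname*{arg\,max}_{e} P(e \mid f)
        = \operatorname*{arg\,max}_{e} P(f \mid e)\, P(e)
```

where P(f | e) is the translation model estimated from the bilingual corpus and P(e) is the target language model.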

    Automatic summarization of Malayalam documents using clause identification method

    Text summarization is an active research area in the field of natural language processing. The huge amount of information on the internet necessitates the development of automatic summarization systems. There are two types of summarization techniques: extractive and abstractive. Extractive summarization selects important sentences from the text and produces a summary using them as they appear in the original document. Abstractive summarization systems provide a summary of the input text similar to one a human would write, which requires semantic analysis of the text. Limited work has been carried out on abstractive summarization in Indian languages, especially Malayalam, for which only extractive methods have previously been proposed. In this paper, an abstractive summarization system for Malayalam documents using a clause identification method is proposed. As part of this research work, a POS tagger and a morphological analyzer for Malayalam words in the cricket domain are also developed. The clauses of the input sentences are identified using a modified clause identification algorithm and then semantically analyzed to identify semantic triples: subject, object and predicate. The score of each clause is calculated using feature extraction, and the important clauses to be included in the summary are selected based on this score. Finally, an algorithm generates sentences from the semantic triples of the selected clauses, yielding the abstractive summary of the input documents.
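The clause-scoring and selection steps can be sketched as follows; the single frequency-based feature and the weights here are illustrative assumptions, not the paper's actual feature set:

```python
# Hypothetical sketch of clause scoring over (subject, predicate, object)
# triples; a real system would combine several extracted features.

def score_clause(triple, keyword_freq):
    """Score a semantic triple by the salience of its terms."""
    return float(sum(keyword_freq.get(term, 0) for term in triple))

def select_clauses(triples, keyword_freq, top_n=2):
    """Keep the top-n highest-scoring triples for the summary."""
    ranked = sorted(triples,
                    key=lambda t: score_clause(t, keyword_freq),
                    reverse=True)
    return ranked[:top_n]
```

Sentence generation would then realize each selected triple back into a Malayalam sentence using the morphological analyzer.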

    Sentiment analysis on Twitter data using machine learning

    On social media, people respond readily to products and to events as they occur. These responses take the form of raw, semi-structured textual data in many languages, containing both noise and critical information that encourages analysts to discover knowledge and patterns from the available dataset. Such patterns are useful for decision making and for future market strategy. Natural Language Processing (NLP) and data mining are the techniques most commonly applied to extract this unknown information for sentiment analysis. The approach derived here analyzes Twitter data to detect the sentiment of people throughout the world using machine learning techniques. The dataset was collected from Twitter during the 2014 World Cup of soccer held in Brazil, a period in which many people expressed their opinions, emotions and attitudes about the game, its promotion and its players. The data are filtered and analyzed using natural language processing techniques, and sentiment polarity is calculated from the emotion words detected in the user tweets. The dataset is normalized for use by machine learning algorithms and prepared with NLP techniques such as word tokenization, stemming and lemmatization, part-of-speech (POS) tagging, named-entity recognition (NER) and parsing, to extract emotions from the text of each tweet. The approach is implemented in Python with the Natural Language Toolkit (NLTK), which is openly available for academic and research purposes. The derived algorithm extracts emotional words using WordNet together with each word's POS in the sentence, so that the word's meaning fits the current context, and assigns sentiment polarity using the SentiWordNet dictionary or a lexicon-based method.
    The resulting polarities are further analyzed using the Naïve Bayes and support vector machine (SVM) machine learning algorithms, with the data visualized on the WEKA platform. Finally, the goal is to compare the results of both implementations and establish the better approach for sentiment analysis of semi-structured social media data.
    Master of Science (MSc) in Computational Science
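The lexicon-based polarity step can be sketched as below; the tiny lexicon and the sign-based decision rule are illustrative assumptions standing in for SentiWordNet, not the actual resource:

```python
# Minimal sketch of lexicon-based polarity assignment (assumed lexicon).
POLARITY = {"love": 1.0, "great": 0.8, "boring": -0.7, "hate": -1.0}

def tweet_polarity(tweet):
    """Sum lexicon scores of tokens; the sign gives the overall sentiment."""
    score = sum(POLARITY.get(t, 0.0) for t in tweet.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

In the full pipeline each token would first be lemmatized and POS-tagged so the lexicon lookup matches the word sense in context.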

    Aspects of the Syntax, Production and Pragmatics of code-switching - with special reference to Cantonese-English

    This dissertation argues for the position that code-switching utterances are constrained by the same set of mechanisms as those which govern monolingual utterances. While this thesis is in line with more recent code-switching theories (e.g. Belazi et al. 1994, MacSwan 1997, Mahootian 1993), this dissertation differs from those works in making two specific claims: Firstly, functional categories and lexical categories exhibit different syntactic behaviour in code-switching. Secondly, code-switching is subject to the same principles not only in syntax, but also in production and pragmatics. Chapter 2 presents a critical review of constraints and processing models previously proposed in the literature. It is suggested that in view of the vast variety of data, no existing model is completely adequate. Nevertheless, it is argued that a model which does not postulate syntactic constraints (along the lines of Mahootian 1993, MacSwan 1997) or production principles (along the lines of de Bot 1992) specific to code-switching is to be preferred on cognitive and theoretical grounds. Chapter 3 concerns word order between lexical heads and their complements in code-switching. It is shown that the language of a lexical head (i.e. noun or verb) may or may not determine the word order of its complement. Chapter 4 investigates word order between functional heads and their complements in code-switching. Contrary to the case with lexical categories, the language of functional heads (e.g. D, I and C) is shown to determine the word order of their complements in code-switching. It is proposed that word order between heads (lexical or functional) and complements is governed by head-parameters, and the difference between lexical heads and functional heads is due to their differential processing and production in terms of Levelt's (1989) algorithm. Chapter 5 investigates the selection properties of functional categories in code-switching, with special reference to Cantonese-English.
    Contrary to the Functional Head Constraint (Belazi et al. 1994), it is shown that code-switching can occur freely between functional heads and their complements, provided that the c-selection requirements of the functional heads are satisfied. Chapter 6 investigates the selection properties of lexical categories in code-switching, again with special reference to Cantonese-English. It is shown that "language-specific" c-selection properties need not be observed: a Cantonese verb may take an English DP whereas an English verb may take a Cantonese demonstrative phrase (DemP). Similar phenomena are drawn from other language pairs involving a language with morphological case and a language without morphological case. The difference between functional categories and lexical categories in their selection properties is again explained in terms of the different production processes they undergo. Chapter 7 is devoted to prepositions, which have been problematic in terms of their status as a functional category or a lexical category. Based on the behaviour of prepositions in code-switching, it is suggested that prepositions display a dual character. It is proposed that prepositions may well point to the fact that the conventional dichotomy between functional categories and lexical categories is not a primitive one in the lexicon. Chapter 8 looks at code-switching in a wider perspective, and explores the pragmatic determinants of code-switching in the light of Relevance Theory (Sperber and Wilson 1995). It is argued that many types of code-switching (e.g. repetitions, quotations, etc.) are motivated by the desire to optimize the "relevance" of a message, with "relevance" as defined in Relevance Theory.

    Iterated learning framework for unsupervised part-of-speech induction

    Computational approaches to linguistic analysis have been used for more than half a century. The main tools come from the field of Natural Language Processing (NLP) and are based on rule-based or corpora-based (supervised) methods. Despite the undeniable success of supervised learning methods in NLP, they have two main drawbacks: on the practical side, it is expensive to produce the manual annotation (or the rules) required and it is not easy to find annotators for less common languages. A theoretical disadvantage is that the computational analysis produced is tied to a specific theory or annotation scheme. Unsupervised methods offer the possibility to expand our analyses into more resource-poor languages, and to move beyond the conventional linguistic theories. They are a way of observing patterns and regularities emerging directly from the data and can provide new linguistic insights. In this thesis I explore unsupervised methods for inducing parts of speech across languages. I discuss the challenges in evaluation of unsupervised learning and at the same time, by looking at the historical evolution of part-of-speech systems, I make the case that the compartmentalised, traditional pipeline approach of NLP is not ideal for the task. I present a generative Bayesian system that makes it easy to incorporate multiple diverse features, spanning different levels of linguistic structure, like morphology, lexical distribution, syntactic dependencies and word alignment information, that allow for the examination of cross-linguistic patterns. I test the system using features provided by unsupervised systems in a pipeline mode (where the output of one system is the input to another) and show that the performance of the baseline (distributional) model increases significantly, reaching and in some cases surpassing the performance of state-of-the-art part-of-speech induction systems.
    I then turn to the unsupervised systems that provided these sources of information (morphology, dependencies, word alignment) and examine the way that part-of-speech information influences their inference. Having established a bi-directional relationship between each system and my part-of-speech inducer, I describe an iterated learning method, where each component system is trained using the output of the other system in each iteration. The iterated learning method improves the performance of both component systems in each task. Finally, using this iterated learning framework, and by using parts of speech as the central component, I produce chains of linguistic structure induction that combine all the component systems to offer a more holistic view of NLP. To show the potential of this multi-level system, I demonstrate its use 'in the wild'. I describe the creation of a vastly multilingual parallel corpus based on 100 translations of the Bible in a diverse set of languages. Using the multi-level induction system, I induce cross-lingual clusters, and provide some qualitative results of my approach. I show that it is possible to discover similarities between languages that correspond to 'hidden' morphological, syntactic or semantic elements.
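The iterated learning loop described above can be sketched with two toy stand-in systems; the suffix-based "POS inducer" and "morphological segmenter" below are trivial placeholders for the thesis's actual models:

```python
# Toy sketch of iterated learning between two mutually informing systems.

def train_pos_inducer(text, morph_segments):
    """Stand-in POS inducer: tag words by (assumed) suffix evidence."""
    return {w: ("VERB" if any(w.endswith(s) for s in morph_segments) else "NOUN")
            for w in text.split()}

def train_morph_segmenter(text, pos_tags):
    """Stand-in segmenter: collect final trigrams of words tagged VERB."""
    return {w[-3:] for w, tag in pos_tags.items() if tag == "VERB" and len(w) > 3}

def iterated_learning(text, iterations=3):
    """Alternate training each system on the other's latest output."""
    segments = {"ing"}  # seed morphology hypothesis
    tags = {}
    for _ in range(iterations):
        tags = train_pos_inducer(text, segments)
        segments = train_morph_segmenter(text, tags) | segments
    return tags, segments
```

The point of the loop is that each iteration feeds a component a (hopefully) improved version of the other component's analysis rather than a fixed pipeline input.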

    Using community trained recommender models for enhanced information retrieval

    Research in Information Retrieval (IR) seeks to develop methods which better assist users in finding information relevant to their current information needs. Personalization is a significant focus of research for the development of the next generation of IR systems. Commercial search engines are exploring methods to incorporate models of the user's interests to facilitate personalization in IR and improve retrieval effectiveness. However, in some situations there may be no opportunity to learn about the interests of a specific user on a certain topic. This is a significant challenge for IR researchers attempting to improve search effectiveness by exploiting user search behaviour. We propose a solution to this problem based on recommender systems (RSs): a novel IR model which combines a recommender model with traditional IR methods to improve retrieval results for search tasks where the IR system has no opportunity to acquire prior information about the user's knowledge of a domain for which they have not previously entered a query. We use search behaviour data from previous users to build topic category models based on topic interests. When a user enters a query on a topic which is new to this user, but related to a topical search category, the appropriate topic category model is selected and used to predict a ranking which this user may find interesting based on previous search behaviour. The recommender outputs are combined with the output of a standard IR system to produce the overall output to the user. In this thesis, the IR and recommender components of this integrated model are investigated.
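A common way to combine the two components' outputs, sketched here as an assumption since the thesis does not specify its fusion method in this abstract, is linear interpolation of the per-document scores:

```python
# Hypothetical sketch of score fusion between an IR system and a
# recommender model; the weight lam is an illustrative parameter.

def combine_scores(ir_scores, rec_scores, lam=0.7):
    """Return lam * IR score + (1 - lam) * recommender score per document."""
    docs = set(ir_scores) | set(rec_scores)
    return {d: lam * ir_scores.get(d, 0.0) + (1 - lam) * rec_scores.get(d, 0.0)
            for d in docs}

def rank(scores):
    """Return document ids in descending combined-score order."""
    return sorted(scores, key=scores.get, reverse=True)
```

Documents known only to the recommender (from other users' behaviour) still enter the final ranking, which is the point of the combination.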