7,752 research outputs found

    Improving Sampling-based Alignment by Investigating the Distribution of N-grams in Phrase Translation Tables

    Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features

    The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question of whether similar methods could be derived to improve embeddings (i.e., semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings. Comment: NAACL 201
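    As a rough illustration of the compositional idea (not necessarily the paper's exact training objective), a sentence vector can be formed by averaging the vectors of the unigrams and bigrams it contains; the vocabulary and random vectors below are hypothetical placeholders standing in for trained embeddings.

```python
import numpy as np

# Toy lookup table: one vector per unigram/bigram. In a trained model these
# would come from an unsupervised objective; random values here only
# illustrate the compositional averaging step.
rng = np.random.default_rng(0)
DIM = 100
ngram_vecs = {g: rng.normal(size=DIM) for g in
              ["the", "cat", "sat", "on", "mat",
               "the cat", "cat sat", "sat on", "on the", "the mat"]}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the embeddings of all known unigrams and bigrams in the sentence."""
    tokens = sentence.lower().split()
    grams = tokens + [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    hits = [ngram_vecs[g] for g in grams if g in ngram_vecs]
    if not hits:                      # no known n-grams: fall back to a zero vector
        return np.zeros(DIM)
    return np.mean(hits, axis=0)

print(sentence_embedding("The cat sat on the mat").shape)  # (100,)
```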

    Can Word Segmentation be Considered Harmful for Statistical Machine Translation Tasks between Japanese and Chinese?

    Unsupervised Machine Translation Using Cross-Lingual N-gram Embeddings

    Current state-of-the-art machine translation systems achieve excellent results, but they rely heavily on large parallel corpora. Much work has been done to obtain good translations for language pairs with little parallel data, yet results comparable to those for high-resource pairs have not been achieved. In this work, I propose a novel unsupervised machine translation system that translates using n-gram (phrase) embeddings, from which cross-lingual phrase embeddings are learned. The solution requires only monolingual corpora; not a single parallel sentence is needed, which is achieved by using unsupervised word translation. I report my findings for the Estonian-English language pair in both directions. The developed system does not work as well as hoped, but the tests suggest that it performs better than simple word-by-word translation.
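    The abstract does not spell out the retrieval step, so the following is only a minimal sketch, assuming the n-gram embeddings of both languages have already been mapped into one shared space; the phrases, vectors, and the `translate` helper are hypothetical placeholders, not the thesis implementation.

```python
import numpy as np

# Placeholder cross-lingual embeddings: in practice these would be learned
# from monolingual corpora and mapped into a shared space; random vectors
# here only illustrate nearest-neighbour translation by cosine similarity.
rng = np.random.default_rng(1)
DIM = 50
src_vecs = {p: rng.normal(size=DIM) for p in ["tere hommikust", "suur maja", "punane auto"]}
tgt_vecs = {p: rng.normal(size=DIM) for p in ["good morning", "big house", "red car"]}

def translate(src_phrase: str, k: int = 1) -> list[str]:
    """Return the k target n-grams closest to the source n-gram by cosine similarity."""
    v = src_vecs[src_phrase]
    v = v / np.linalg.norm(v)
    scored = [(float(v @ (w / np.linalg.norm(w))), tgt) for tgt, w in tgt_vecs.items()]
    scored.sort(reverse=True)
    return [tgt for _, tgt in scored[:k]]

print(translate("tere hommikust"))
```

    In a real system the candidate set would be the full target-side n-gram vocabulary, so retrieval would typically go through an approximate nearest-neighbour index rather than an exhaustive scan.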

    Supervised Learning for Multi-Domain Text Classification

    Digital information available on the Internet is increasing day by day. As a result, the demand for tools that help people find and analyze all these resources is also growing. Text classification, in particular, has been very useful in managing this information. Text classification is the process of assigning natural-language text to one or more categories based on its content, and it has many important applications in the real world; for example, determining the sentiment of reviews that people post about restaurants, movies, and similar items is an application of text classification. This project focuses on sentiment analysis, which identifies the opinions expressed in a piece of text by categorizing them into classes such as 'positive' or 'negative'. Existing work in sentiment analysis has focused on determining the polarity (positive or negative) of a sentence, which is binary classification: the given set of elements is divided into two groups. The purpose of this research is to address a different approach, multi-class sentiment classification, in which sentences are assigned to multiple sentiment classes such as positive, negative, and neutral. Classifiers are built on a predictive model that consists of multiple phases. Different feature sets over the data, such as stemming, n-grams, and tf-idf, are considered for classification. Different classification models, namely a Bayesian classifier, Random Forest, and an SGD classifier, are used to classify the data and their results are compared. Frameworks such as Weka, Apache Mahout, and Scikit are used for building the classifiers.
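    A minimal scikit-learn sketch of the kind of pipeline the abstract describes, assuming tf-idf features over word n-grams feeding an SGD classifier alongside a naive Bayes baseline; the tiny inline corpus and labels are placeholders, not the project's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny placeholder corpus with multi-class sentiment labels.
texts = [
    "the food was wonderful and the staff were friendly",
    "terrible service, I will never come back",
    "it was fine, nothing special",
    "absolutely loved the movie, great acting",
    "the plot was dull and the ending was worse",
    "an average film, watchable but forgettable",
]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

# tf-idf over unigrams and bigrams feeding two of the compared classifiers.
sgd_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", SGDClassifier(random_state=0)),
])
nb_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", MultinomialNB()),
])

for name, model in [("SGD", sgd_model), ("NaiveBayes", nb_model)]:
    model.fit(texts, labels)
    print(name, model.predict(["the staff were great but the food was dull"]))
```

    Swapping in RandomForestClassifier as the final estimator gives the third model the abstract compares, without changing the rest of the pipeline.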