
    Interlingual Machine Translation

    An interlingua is an artificial language used to represent the meaning of natural languages, for example for the purposes of machine translation. It is an intermediate form between two or more languages. Machine translation is the process of translating source-language text into the target language. This paper proposes a new model of machine translation system in which rule-based and example-based approaches are applied to English-to-Kannada/Telugu sentence translation. The proposed method has four steps: 1) analyze an English sentence into a string of grammatical nodes, based on Phrase Structure Grammar; 2) map the input pattern against a table of English-Kannada/Telugu sentence patterns; 3) look up the bilingual dictionary for the equivalent Kannada/Telugu words, reorder them, and generate candidate output sentences; and 4) rank the possible combinations and eliminate ambiguous outputs using a statistical method. The translated sentences are then stored in a bilingual corpus to serve as a guide or template for imitating the translation, i.e., the example-based approach. Future work will focus on sentence translation using semantic features to produce more precise translations.
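    The four-step pipeline can be pictured as follows. This is a minimal, illustrative sketch: the pattern table, the toy dictionary entries, and the dummy scoring function are all hypothetical stand-ins, not the paper's actual resources.

```python
from itertools import product

# Toy stand-ins for the paper's resources: a sentence-pattern table,
# a bilingual dictionary, and a statistical scorer.
PATTERN_TABLE = {
    # English node order -> Kannada/Telugu node order (SVO -> SOV)
    ("SUBJ", "VERB", "OBJ"): ("SUBJ", "OBJ", "VERB"),
}
BILINGUAL_DICT = {
    "rama": ["raama"],
    "reads": ["oduttaane"],
    "book": ["pustaka", "grantha"],  # ambiguous entry: two candidates
}

def analyze(sentence):
    """Step 1: analyze the sentence into grammatical nodes (toy version)."""
    roles = ("SUBJ", "VERB", "OBJ")
    return list(zip(roles, sentence.lower().split()))

def map_pattern(nodes):
    """Step 2: map the input pattern onto the target sentence pattern."""
    target_order = PATTERN_TABLE[tuple(role for role, _ in nodes)]
    by_role = dict(nodes)
    return [by_role[role] for role in target_order]

def transfer(words):
    """Step 3: dictionary lookup; expand every candidate combination."""
    candidates = [BILINGUAL_DICT.get(w, [w]) for w in words]
    return [" ".join(combo) for combo in product(*candidates)]

def rank(outputs, score):
    """Step 4: keep the best candidate under a statistical score
    (a dummy scorer here stands in for a target-language model)."""
    return max(outputs, key=score)

nodes = analyze("Rama reads book")
outputs = transfer(map_pattern(nodes))
print(rank(outputs, score=len))  # dummy scorer for illustration only
```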

    Guidelines to Study Differences in Expressiveness between Ontology Specification Languages: A Case Study

    We report on our experience translating ontologies between two ontology languages, FLogic and Ontolingua, in the framework of Methontology and ODE. Rather than building "ad hoc" translators between languages or using KIF, our approach consists of translating through ODE intermediate representations. We have therefore built direct translators from ODE intermediate representations to Ontolingua and FLogic, as well as reverse translators from these two languages back to ODE intermediate representations. Expressiveness of the target languages is the main feature to analyse when automatically generating ontologies from ODE intermediate representations. We therefore analyse the expressiveness of Ontolingua and FLogic for creating classes, instances, relations, functions and axioms, which are the essential components of ontologies. The motivation for this analysis comes from the (KA)² initiative, and the approach can easily be extended to other domains and languages.
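    The appeal of this design is that each language needs only one translator to and from the shared intermediate representation, so supporting n languages requires 2n translators instead of the n(n-1) needed for direct pairwise translation. A minimal sketch of the idea, with a hypothetical, heavily simplified IR and schematic emitters (the real ODE representations and the full Ontolingua and FLogic syntax are far richer):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IRClass:
    """Hypothetical, heavily simplified intermediate representation
    of an ontology class: name, superclass, and typed slots."""
    name: str
    superclass: Optional[str] = None
    slots: dict = field(default_factory=dict)  # slot name -> slot type

def to_ontolingua(c: IRClass) -> str:
    """Emit a schematic Ontolingua-style class definition."""
    body = f" :def (and ({c.superclass} ?x))" if c.superclass else ""
    return f"(define-class {c.name} (?x){body})"

def to_flogic(c: IRClass) -> str:
    """Emit a schematic FLogic-style class definition."""
    lines = [f"{c.name} :: {c.superclass}." if c.superclass else f"{c.name}."]
    lines += [f"{c.name}[{slot} => {typ}]." for slot, typ in c.slots.items()]
    return "\n".join(lines)

# One emitter per target language, all reading the same shared IR.
person = IRClass("Person", superclass="Agent", slots={"name": "String"})
print(to_ontolingua(person))
print(to_flogic(person))
```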

    Sentence Similarity and Machine Translation

    Neural machine translation (NMT) systems encode an input sentence into an intermediate representation and then decode that representation into the output sentence. Translation requires deep understanding of language; as a result, NMT models trained on large amounts of data develop a semantically rich intermediate representation. We leverage this rich intermediate representation of NMT systems, in particular multilingual NMT systems, which learn to map many languages into and out of a joint space, for bitext curation, paraphrasing, and automatic machine translation (MT) evaluation. At a high level, all of these tasks are rooted in similarity: sentence and document alignment requires measuring the similarity of sentences and documents, respectively; paraphrasing requires producing output which is similar to an input; and automatic MT evaluation requires measuring the similarity between MT system outputs and the corresponding human reference translations. We use multilingual NMT for similarity in two ways. First, we use a multilingual NMT model with a fixed-size intermediate representation (Artetxe and Schwenk, 2018) to produce multilingual sentence embeddings, which we use in both sentence and document alignment. Second, we train a multilingual NMT model and show that it generalizes to the task of generative paraphrasing (i.e., "translating" from Russian to Russian) when used in conjunction with a simple generation algorithm that discourages copying from the input to the output. We also use this model for automatic MT evaluation, force-decoding and scoring MT system outputs conditioned on their respective human reference translations. Since we leverage multilingual NMT models, each method works in many languages using a single model. We show that these simple methods, which leverage the intermediate representation of multilingual NMT models trained on large amounts of bitext, outperform prior work on paraphrasing, sentence alignment, document alignment, and automatic MT evaluation. This finding is consistent with recent trends in the natural language processing community, where large language models trained on huge amounts of unlabeled text have achieved state-of-the-art results on tasks such as question answering, named entity recognition, and parsing.
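    To make the similarity use concrete, here is a rough sketch of embedding-based sentence alignment. The embed() function is a placeholder for a fixed-size multilingual encoder in the spirit of Artetxe and Schwenk (2018); the random vectors, the greedy pairing, and the threshold are illustrative simplifications, not the thesis's actual methods.

```python
import numpy as np

def embed(sentences):
    """Placeholder for a fixed-size multilingual NMT encoder; a real
    system maps sentences in any language into one joint vector space."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 1024))

def cosine_matrix(a, b):
    """Pairwise cosine similarity between two sets of row vectors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def greedy_align(src_sents, tgt_sents, threshold=0.3):
    """Pair each source sentence with its most similar target sentence,
    keeping pairs above a threshold: a crude stand-in for the
    margin-based bitext mining used in practice."""
    sim = cosine_matrix(embed(src_sents), embed(tgt_sents))
    pairs = []
    for i, row in enumerate(sim):
        j = int(row.argmax())
        if row[j] >= threshold:
            pairs.append((src_sents[i], tgt_sents[j], float(row[j])))
    return pairs
```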

    Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

    We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without orthography. We study the replacement of orthographic transcriptions by images and/or translated text in a well-resourced language to help unsupervised discovery from raw speech.
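    For a sense of what unsupervised unit discovery involves, here is a crude monolingual baseline (not one of the workshop's systems): cluster frame-level acoustic features into a discrete inventory, then collapse runs of identical labels into pseudo-unit sequences. The feature dimensionality and cluster count below are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_units(frames, n_units=50):
    """Cluster acoustic feature frames (n_frames x n_features, e.g. MFCCs)
    into n_units discrete labels, then collapse consecutive repeats of a
    label into a single pseudo-unit token."""
    labels = KMeans(n_clusters=n_units, n_init=10,
                    random_state=0).fit_predict(frames)
    units = [int(labels[0])]
    for lab in labels[1:]:
        if lab != units[-1]:
            units.append(int(lab))
    return units

fake_mfccs = np.random.default_rng(0).normal(size=(200, 13))  # fake features
print(discover_units(fake_mfccs, n_units=8))
```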