
    Automatic interpretation or automatic translation of speech: concepts, definitions and software architecture

    Machine Interpreting (MI), or speech-to-speech translation (S2S), is a technology that converts spoken utterances from one language into another through three functionalities coupled in a single software system: Automatic Speech Recognition (ASR), Machine Translation (MT), and speech synthesis (TTS, text-to-speech). First presented in 1983 at the ITU Telecom convention in Geneva, the concept of MI conveys the idea of systems capable of enabling spontaneous and efficient communication between speakers of different languages (Lee 2015). Nevertheless, MI and its systems remain scarcely studied, especially in Interpreting Studies, and are investigated mostly within Computer Science (Pöchhacker 2004). Accordingly, this paper presents the results of a bibliographical-documentary study whose aim was to investigate how MI is conceived by the scholars who study it.
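
    As a rough illustration of the three-stage architecture described above, the following sketch chains ASR, MT, and TTS behind hypothetical interfaces; the class and function names are placeholders, not components of any cited system.

```python
# Minimal sketch of the three-stage machine interpreting pipeline
# (ASR -> MT -> TTS). The component classes are hypothetical stand-ins;
# a real system would wrap concrete ASR, MT, and TTS engines behind them.

class SpeechRecognizer:
    def transcribe(self, audio: bytes, language: str) -> str:
        raise NotImplementedError  # wrap an ASR engine here

class Translator:
    def translate(self, text: str, src: str, tgt: str) -> str:
        raise NotImplementedError  # wrap an MT engine here

class SpeechSynthesizer:
    def synthesize(self, text: str, language: str) -> bytes:
        raise NotImplementedError  # wrap a TTS engine here

def interpret(audio: bytes, src: str, tgt: str,
              asr: SpeechRecognizer, mt: Translator,
              tts: SpeechSynthesizer) -> bytes:
    """Translate spoken input in `src` into spoken output in `tgt`."""
    transcript = asr.transcribe(audio, language=src)   # 1. speech -> text
    translation = mt.translate(transcript, src, tgt)   # 2. text -> text
    return tts.synthesize(translation, language=tgt)   # 3. text -> speech
```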

    Crowdsourcing High-Quality Parallel Data Extraction from Twitter

    High-quality parallel data is crucial for a range of multilingual applications, from tuning and evaluating machine translation systems to cross-lingual annotation projection. Unfortunately, automatically obtained parallel data (which is available in relative abundance) tends to be quite noisy. To obtain high-quality parallel data, we introduce a crowdsourcing paradigm in which workers with only basic bilingual proficiency identify translations from an automatically extracted corpus of parallel microblog messages. For less than $350, we obtained over 5,000 parallel segments in five language pairs. Evaluated against expert annotations, the quality of the crowdsourced corpus is significantly better than that of existing automatic methods: it achieves performance comparable to expert annotations when used in MERT tuning of a microblog MT system, and training a parallel sentence classifier with it also leads to improved results. The crowdsourced corpora will be made available.
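
    As a rough illustration of the parallel sentence classifier mentioned above, the sketch below trains a toy classifier on two simplified features (length ratio and bilingual-dictionary overlap); the features and toy data are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch of a parallel-sentence classifier. The two features
# (length ratio and dictionary overlap) are simplified assumptions, not
# the feature set used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(src: str, tgt: str, bilingual_dict: dict) -> list:
    src_tokens, tgt_tokens = src.lower().split(), tgt.lower().split()
    # Feature 1: sentence length ratio (true translations tend toward ~1).
    length_ratio = len(src_tokens) / max(len(tgt_tokens), 1)
    # Feature 2: fraction of source words whose dictionary translation
    # appears in the target sentence.
    hits = sum(1 for w in src_tokens if bilingual_dict.get(w) in tgt_tokens)
    overlap = hits / max(len(src_tokens), 1)
    return [length_ratio, overlap]

# Toy training data: (source, target, is_parallel) triples.
toy_dict = {"hello": "hola", "world": "mundo", "good": "buen"}
pairs = [("hello world", "hola mundo", 1),
         ("hello world", "no relacionado en absoluto", 0),
         ("good morning world", "buen dia mundo", 1),
         ("good night", "texto aleatorio distinto", 0)]

X = np.array([features(s, t, toy_dict) for s, t, _ in pairs])
y = np.array([label for _, _, label in pairs])
clf = LogisticRegression().fit(X, y)
print(clf.predict([features("hello world", "hola mundo", toy_dict)]))
```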

    A linguistically motivated taxonomy for Machine Translation error analysis

    A detailed error analysis is a fundamental step in every natural language processing task, as being able to diagnose what went wrong provides cues on which research directions to follow. In this paper we focus on error analysis in Machine Translation. We substantially extend previous error taxonomies so that translation errors associated with the specificities of Romance languages can be accommodated. Based on the proposed taxonomy, we then carry out an extensive analysis of the errors generated by four different systems: two mainstream online translation systems, Google Translate (statistical) and Systran (hybrid machine translation), and two in-house Machine Translation systems, in three scenarios representing different challenges in translation from English to European Portuguese. Additionally, we comment on how distinct error types impact translation quality differently.
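
    The sketch below shows one way such a hierarchical error taxonomy could be represented and used to aggregate annotated errors per system; the category names are illustrative examples, not the taxonomy actually proposed in the paper.

```python
# Minimal sketch: a two-level MT error taxonomy as a dictionary, plus a
# helper that validates annotations and builds an error profile for one
# system's output. Category names are illustrative assumptions.
from collections import Counter

TAXONOMY = {
    "orthography": ["punctuation", "capitalization"],
    "lexis": ["mistranslation", "omission", "untranslated"],
    "grammar": ["agreement", "verb_inflection", "word_order"],
    "discourse": ["style", "coreference"],
}

def validate(category: str, subcategory: str) -> None:
    if subcategory not in TAXONOMY.get(category, []):
        raise ValueError(f"unknown error type: {category}/{subcategory}")

def error_profile(annotations: list[tuple[str, str]]) -> Counter:
    """Count annotated (category, subcategory) errors for one system."""
    for cat, sub in annotations:
        validate(cat, sub)
    return Counter(annotations)

# e.g. errors annotated on one system's output:
annotated = [("grammar", "agreement"), ("lexis", "omission"),
             ("grammar", "agreement")]
print(error_profile(annotated).most_common())
```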

    META-NET Strategic Research Agenda for Multilingual Europe 2020

    In everyday communication, Europe’s citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, which cooperate with stakeholders from the economy, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET’s vision is high-quality language technology for all European languages.
    “The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society.” — Dr. Pedro Passos Coelho (Prime Minister of Portugal)
    “It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world.” — Dr. Danilo Türk (President of the Republic of Slovenia)
    “For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity.” — Valdis Dombrovskis (Prime Minister of Latvia)
    “Europe’s inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies.” — Prof. Dr. Annette Schavan (German Minister of Education and Research)

    Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution

    Unsupervised Machine Translation has been advancing our ability to translate without parallel data, but state-of-the-art methods assume an abundance of monolingual data. This paper investigates the scenario where monolingual data is limited as well, finding that current unsupervised methods suffer in performance under this stricter setting. We find that the performance loss originates from the poor quality of the pretrained monolingual embeddings, and we propose using linguistic information in the embedding training scheme. To support this, we look at two linguistic features that may help improve alignment quality: dependency information and sub-word information. Using dependency-based embeddings results in a complementary word representation which offers a boost in performance of around 1.5 BLEU points compared to standard word2vec when monolingual data is limited to 1 million sentences per language. We also find that the inclusion of sub-word information is crucial to improving the quality of the embeddings.
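
    A minimal sketch of the contrast drawn above, using gensim: a plain word2vec baseline versus sub-word-aware FastText embeddings (dependency-based embeddings would additionally require word and syntactic-context pairs from a parser, omitted here); the tiny corpus and hyperparameters are purely illustrative.

```python
# Plain word2vec baseline versus sub-word-aware FastText embeddings.
# FastText composes each word vector from its character n-grams, which
# is the kind of sub-word information the abstract finds crucial when
# monolingual data is limited. Corpus and settings are toy values.
from gensim.models import Word2Vec, FastText

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["a", "dog", "slept", "on", "the", "rug"]]

# Standard word2vec baseline (skip-gram).
w2v = Word2Vec(sentences=corpus, vector_size=50, window=3,
               min_count=1, sg=1, epochs=50)
print(w2v.wv.most_similar("cat", topn=2))

# Sub-word aware embeddings: vectors are built from character n-grams
# of length min_n..max_n in addition to the word itself.
ft = FastText(sentences=corpus, vector_size=50, window=3,
              min_count=1, sg=1, epochs=50, min_n=3, max_n=6)

# FastText can embed an out-of-vocabulary word from its n-grams alone.
print(ft.wv["cats"][:5])  # works even though "cats" never occurred
```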

    Distributed representations for multilingual language processing

    Distributed representations are a central element in natural language processing. Units of text such as words, n-grams, or characters are mapped to real-valued vectors so that they can be processed by computational models. Representations trained on large amounts of text, called static word embeddings, have been found to work well across a variety of tasks such as sentiment analysis or named entity recognition. More recently, pretrained language models are used as contextualized representations that have been found to yield even better task performance. Multilingual representations that are invariant with respect to languages are useful for multiple reasons. Models using such representations would only require training data in one language and still generalize across multiple languages. This is especially useful for languages that exhibit data sparsity. Further, machine translation models can benefit from source and target representations in the same space. Last, knowledge extraction models could access not only English data but data in any natural language, and thus exploit a richer source of knowledge. Given that several thousand languages exist in the world, the need for multilingual language processing seems evident. However, it is not immediately clear which properties multilingual embeddings should exhibit, how current multilingual representations work, and how they could be improved. This thesis investigates some of these questions. In the first publication, we explore the boundaries of multilingual representation learning by creating an embedding space across more than one thousand languages. We analyze existing methods and propose concept-based embedding learning methods. The second paper investigates differences between creating representations for one thousand languages with little data versus considering few languages with abundant data. In the third publication, we refine a method to obtain interpretable subspaces of embeddings. This method can be used to investigate the workings of multilingual representations. The fourth publication finds that multilingual pretrained language models exhibit a high degree of multilinguality in the sense that high-quality word alignments can be easily extracted. The fifth paper investigates reasons why multilingual pretrained language models are multilingual despite lacking any kind of crosslingual supervision during training. Based on our findings, we propose a training scheme that leads to improved multilinguality. Last, the sixth paper investigates the use of multilingual pretrained language models as multilingual knowledge bases.
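
    As a rough sketch of the alignment-extraction idea from the fourth publication, the snippet below embeds a sentence pair with a multilingual pretrained model and aligns subword tokens by embedding similarity; the model choice and the simple argmax strategy are illustrative assumptions, not the thesis's exact method.

```python
# Sketch: extract word alignments from a multilingual pretrained model by
# comparing contextual token embeddings of a sentence pair. The argmax
# alignment is a deliberately simple stand-in for more refined strategies.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"  # assumed model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def token_embeddings(sentence: str):
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state[0]
    tokens = tok.convert_ids_to_tokens(batch["input_ids"][0].tolist())
    return tokens[1:-1], hidden[1:-1]  # drop [CLS] and [SEP]

src_toks, src_emb = token_embeddings("The house is small")
tgt_toks, tgt_emb = token_embeddings("Das Haus ist klein")

# Cosine similarity between every source and target subword token.
src_n = torch.nn.functional.normalize(src_emb, dim=-1)
tgt_n = torch.nn.functional.normalize(tgt_emb, dim=-1)
sim = src_n @ tgt_n.T

# Align each source token to its most similar target token.
for i, j in enumerate(sim.argmax(dim=-1).tolist()):
    print(f"{src_toks[i]} -> {tgt_toks[j]}")
```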