5 research outputs found

    Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution

    Unsupervised Machine Translation has been advancing our ability to translate without parallel data, but state-of-the-art methods assume an abundance of monolingual data. This paper investigates the scenario where monolingual data is limited as well, finding that current unsupervised methods suffer in performance under this stricter setting. We find that the performance loss originates from the poor quality of the pretrained monolingual embeddings, and we propose using linguistic information in the embedding training scheme. To support this, we look at two linguistic features that may help improve alignment quality: dependency information and sub-word information. Using dependency-based embeddings results in a complementary word representation which offers a boost in performance of around 1.5 BLEU points compared to standard WORD2VEC when monolingual data is limited to 1 million sentences per language. We also find that the inclusion of sub-word information is crucial to improving the quality of the embedding.
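    The key difference between standard WORD2VEC and dependency-based embeddings lies in how training contexts are chosen: linear neighbours within a window versus words connected by dependency arcs. The minimal sketch below illustrates that contrast on a toy parsed sentence; the sentence, arc labels, and function names are illustrative assumptions, not material from the paper.

    ```python
    # Hypothetical sketch: linear-window contexts (standard word2vec)
    # versus typed dependency-arc contexts (dependency-based embeddings).
    # The toy parse below is an assumption for illustration only.

    def linear_contexts(tokens, window=2):
        """(word, context) pairs from a +/-window linear window."""
        pairs = []
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    pairs.append((w, tokens[j]))
        return pairs

    def dependency_contexts(tokens, heads, labels):
        """(word, context) pairs from dependency arcs; each context is a
        head or child typed by its relation, not a linear neighbour."""
        pairs = []
        for i, w in enumerate(tokens):
            h = heads[i]
            if h >= 0:  # attach the head as a typed context
                pairs.append((w, f"{labels[i]}_{tokens[h]}"))
                # and the child as an inverse-typed context of the head
                pairs.append((tokens[h], f"{labels[i]}I_{w}"))
        return pairs

    # Toy sentence: "scientist discovers star", with "scientist" (nsubj)
    # and "star" (dobj) both attached to the verb "discovers".
    tokens = ["scientist", "discovers", "star"]
    heads = [1, -1, 1]          # head index per token (-1 = root)
    labels = ["nsubj", "root", "dobj"]

    print(linear_contexts(tokens))
    print(dependency_contexts(tokens, heads, labels))
    ```

    The resulting (word, context) pairs would then feed a skip-gram-style trainer; syntactic contexts of this kind tend to produce embeddings that group words by function rather than by topic, which is why they can complement window-based vectors.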

    Handling Ellipsis in a Spoken Medical Phraselator

    We consider methods for handling incomplete (elliptical) utterances in spoken phraselators, and describe how they have been implemented inside BabelDr, a substantial spoken medical phraselator. The challenge is to extend the phrase matching process so that it is sensitive to preceding dialogue context. We contrast two methods, one using limited-vocabulary strict grammar-based speech and language processing and one using large-vocabulary speech recognition with fuzzy grammar-based processing, and present an initial evaluation on a spoken corpus of 821 context-sentence/elliptical-phrase pairs. The large-vocabulary/fuzzy method strongly outperforms the limited-vocabulary/strict method over the whole corpus, though it is slightly inferior for the subset that is within grammar coverage. We investigate possibilities for combining the two processing paths, using several machine learning frameworks, and demonstrate that hybrid methods strongly outperform the large-vocabulary/fuzzy method.
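    The large-vocabulary/fuzzy path can be pictured as matching free-form recogniser output against a fixed phrase inventory by string similarity. The sketch below uses Python's standard-library `difflib` to make that idea concrete; the phrase list, threshold, and function name are illustrative assumptions, not BabelDr's actual inventory or matching algorithm.

    ```python
    # Hypothetical sketch of large-vocabulary + fuzzy matching: noisy ASR
    # text is matched against canonical phrases by similarity ratio.
    from difflib import SequenceMatcher

    CANONICAL_PHRASES = [          # stand-in for a phraselator's inventory
        "do you have a headache",
        "do you have a fever",
        "where does it hurt",
    ]

    def fuzzy_match(utterance, phrases=CANONICAL_PHRASES, threshold=0.6):
        """Return the best-matching canonical phrase, or None if every
        candidate falls below the similarity threshold."""
        best, best_score = None, threshold
        for p in phrases:
            score = SequenceMatcher(None, utterance.lower(), p).ratio()
            if score >= best_score:
                best, best_score = p, score
        return best

    print(fuzzy_match("you have headache"))            # noisy ASR output
    print(fuzzy_match("completely unrelated sentence"))  # should reject
    ```

    A strict grammar-based path, by contrast, would only accept utterances inside its coverage; a hybrid along the lines the paper evaluates would let a learned combiner choose between the two paths' candidate matches.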