3 research outputs found

    Attention-based Vocabulary Selection for NMT Decoding

    Neural Machine Translation (NMT) models usually use large target vocabularies to capture most of the words in the target language. The vocabulary size is a major factor in decoding speed, as the final softmax layer normalizes over all possible target words. To address this problem, it is common to restrict the target vocabulary with candidate lists derived from the source sentence, usually built from an external word-to-word aligner, phrase-table entries, or the most frequent words. In this work, we propose a simple yet novel approach that learns candidate lists directly from the attention layer during NMT training. The candidate lists are highly optimized for the current NMT model and do not need any external computation of the candidate pool. We show a significant decoding speedup compared with using the entire vocabulary, without losing translation quality, on two language pairs.
    Comment: Submitted to the Second Conference on Machine Translation (WMT-17); 7 pages.
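
    As a rough illustration of the idea, the sketch below accumulates attention mass between source and target words during training and keeps the top-K target words per source word as its candidate list. Every name here (accumulate, build_candidate_table, K=50) is a hypothetical stand-in under stated assumptions, not the paper's code.

        # Minimal sketch: deriving decoding candidate lists from attention
        # weights. All names and the K=50 setting are illustrative assumptions.
        from collections import defaultdict

        K = 50  # candidates kept per source word

        # cooc[src][tgt] accumulates attention mass from source word src to
        # target word tgt over the training data.
        cooc = defaultdict(lambda: defaultdict(float))

        def accumulate(src_ids, tgt_ids, attention):
            # attention[t][s]: weight on source position s when emitting target t.
            for t, tgt in enumerate(tgt_ids):
                for s, src in enumerate(src_ids):
                    cooc[src][tgt] += attention[t][s]

        def build_candidate_table():
            # Keep the K target words that received the most attention mass.
            return {src: set(sorted(tgts, key=tgts.get, reverse=True)[:K])
                    for src, tgts in cooc.items()}

        def candidate_vocab(src_ids, table, always_keep):
            # Restrict the decoder softmax to candidates of the source words,
            # plus a fixed shortlist (frequent words, special tokens).
            vocab = set(always_keep)
            for src in src_ids:
                vocab |= table.get(src, set())
            return sorted(vocab)

    At decoding time, the softmax would then be computed only over candidate_vocab for the input sentence, which is where a speedup of this kind comes from.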

    Predicting protein secondary structure with Neural Machine Translation

    We present an analysis of a novel tool for protein secondary structure prediction that uses the recently investigated Neural Machine Translation framework. The tool provides fast and accurate folding prediction from the primary structure, with sub-second prediction time even for batched inputs. We hypothesize that Neural Machine Translation can improve upon current predictive accuracy by better encoding complex relationships between nearby but non-adjacent amino acids. We describe our modifications to the framework to improve accuracy on protein sequences. We report 65.9% Q3 accuracy and analyze the strengths and weaknesses of our predictive model.
    Comment: 9 pages, 9 figures, 2 tables.
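
    To make the framing concrete, the sketch below treats the amino-acid sequence as the source "language" and the three Q3 states {H, E, C} as the target. It is a simplified per-residue tagging stand-in built on a bidirectional LSTM encoder, not the paper's actual architecture or hyperparameters.

        # Minimal sketch (untrained, illustrative): secondary-structure
        # prediction framed as sequence-to-sequence labeling.
        import torch
        import torch.nn as nn

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # source alphabet: residues
        SS_LABELS = "HEC"                      # target alphabet: Q3 states

        class Seq2SeqTagger(nn.Module):
            def __init__(self, d=128):
                super().__init__()
                self.embed = nn.Embedding(len(AMINO_ACIDS), d)
                # Bidirectional encoder lets each residue see context from
                # nearby but non-adjacent positions in both directions.
                self.encoder = nn.LSTM(d, d, batch_first=True, bidirectional=True)
                self.proj = nn.Linear(2 * d, len(SS_LABELS))

            def forward(self, residue_ids):     # (batch, length)
                h, _ = self.encoder(self.embed(residue_ids))
                return self.proj(h)             # per-residue Q3 logits

        # Usage: one Q3 label per residue of a primary sequence.
        seq = "MKTAYIAKQR"
        ids = torch.tensor([[AMINO_ACIDS.index(a) for a in seq]])
        logits = Seq2SeqTagger()(ids)
        pred = "".join(SS_LABELS[i] for i in logits.argmax(-1)[0].tolist())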

    Neural Machine Translation: A Review and Survey

    The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which relies mainly on count-based models and dominated MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we trace the origins of modern NMT architectures back to word and sentence embeddings and to earlier examples of the encoder-decoder network family. We conclude with a survey of recent trends in the field.
    Comment: Extended version of "Neural Machine Translation: A Review", accepted by the Journal of Artificial Intelligence Research (JAIR).
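
    The "single neural network" formulation shared by the architectures such a review surveys is the standard left-to-right factorization of the target sentence probability (a textbook rendering, not a formula quoted from the paper):

        P(y_1, \dots, y_T \mid x) = \prod_{t=1}^{T} P(y_t \mid y_{<t}, x; \theta)

    where an encoder embeds the source sentence x and a decoder generates the target words y_t one at a time, conditioned on the previously generated words.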