Efficient Learning for Undirected Topic Models
Replicated Softmax model, a well-known undirected topic model, is powerful in
extracting semantic representations of documents. Traditional learning
strategies such as Contrastive Divergence, however, are very inefficient. This
paper provides a novel estimator that speeds up learning, based on
Noise-Contrastive Estimation and extended to documents of varying lengths and
weighted inputs. Experiments on two benchmarks show that the new estimator
achieves high learning efficiency and high accuracy on document retrieval and
classification.
Comment: Accepted by ACL-IJCNLP 2015 as a short paper; 6 pages.
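The estimator builds on noise-contrastive estimation, which recasts learning an unnormalized model as a logistic classification between data samples and samples from a known noise distribution. A minimal sketch of the generic NCE objective (not the paper's document-specific extension; all names here are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nce_loss(model_data, noise_data, model_noise, noise_noise, k):
    """Generic noise-contrastive estimation objective.

    model_*: unnormalized model log-scores; noise_*: log-probabilities
    under the known noise distribution, for the data samples and for
    the noise samples (k noise samples drawn per data sample)."""
    log_k = math.log(k)
    # Logit that a sample came from the data rather than the noise
    # distribution: log p_model(x) - log(k * q(x)).
    d_data = [m - (log_k + q) for m, q in zip(model_data, noise_data)]
    d_noise = [m - (log_k + q) for m, q in zip(model_noise, noise_noise)]
    # Negative log-likelihood of the "data vs. noise" classification.
    loss = -sum(math.log(sigmoid(d)) for d in d_data) / len(d_data)
    loss -= k * sum(math.log(sigmoid(-d)) for d in d_noise) / len(d_noise)
    return loss
```

Because the classifier only ever compares log-scores, the model's partition function never has to be computed, which is what makes NCE cheaper than Contrastive Divergence-style training.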
Learning to Translate in Real-time with Neural Machine Translation
Translating in real-time, a.k.a. simultaneous translation, outputs
translation words before the input sentence ends, which is a challenging
problem for conventional machine translation methods. We propose a neural
machine translation (NMT) framework for simultaneous translation in which an
agent learns, through interaction with a pre-trained NMT environment, when to
translate. To trade off quality against delay, we extensively explore various
delay targets and design a beam-search method applicable in the simultaneous
MT setting. Experiments against state-of-the-art baselines on two language
pairs demonstrate the efficacy of the proposed framework both quantitatively
and qualitatively.
Comment: 10 pages, camera-ready.
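The agent's interface to the NMT environment reduces to two actions: READ the next source token, or WRITE the next target token. A toy sketch of that loop, substituting a fixed wait-k rule for the learned policy; `translate_step` is a hypothetical stand-in for the pre-trained NMT environment, not an API from the paper:

```python
def wait_k_translate(source_tokens, translate_step, k=2):
    """Toy READ/WRITE loop for simultaneous translation.

    READ consumes the next source token; WRITE emits the next target
    token.  Here a fixed wait-k rule (stay k tokens behind the reader)
    decides which action to take, whereas the paper's agent learns
    this decision from interaction with the NMT environment.
    translate_step(read, written) returns the next target token given
    the visible source prefix, or None when translation is complete."""
    read, written, i = [], [], 0
    while True:
        if i < len(source_tokens) and len(read) < len(written) + k:
            read.append(source_tokens[i])          # READ action
            i += 1
        else:
            token = translate_step(read, written)  # WRITE action
            if token is None:
                break
            written.append(token)
    return written
```

The key property the loop illustrates is that writing starts before the full source is read, which is exactly the quality/delay trade-off the delay targets control.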
Neural Machine Translation with Byte-Level Subwords
Almost all existing machine translation models are built on top of
character-based vocabularies: characters, subwords, or words. Rare characters
from noisy text or from character-rich languages such as Japanese and Chinese,
however, can unnecessarily occupy vocabulary slots and limit the vocabulary's
compactness. Representing text at the level of bytes and using the 256-byte
set as the vocabulary is a potential solution to this issue. High
computational cost, however, has prevented this approach from being widely
deployed or used in practice. In this paper, we investigate byte-level
subwords, specifically byte-level BPE (BBPE), which is more compact than a
character vocabulary and has no out-of-vocabulary tokens, yet is more
efficient than using pure bytes alone. We claim that
contextualizing BBPE embeddings is necessary, which can be implemented by a
convolutional or recurrent layer. Our experiments show that BBPE has comparable
performance to BPE while its vocabulary is only 1/8 the size of BPE's. In the
multilingual setting, BBPE maximizes vocabulary sharing across many languages
and achieves better translation quality. Moreover, we show that BBPE enables
transferring models between languages with non-overlapping character sets.
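The appeal of the byte-level representation is that every string, including rare CJK characters, decomposes into a fixed base vocabulary of at most 256 symbols. A minimal sketch of that base representation (the BPE merges BBPE learns on top of these byte sequences are omitted):

```python
def to_byte_tokens(text):
    """Map text onto the fixed 256-symbol byte vocabulary via UTF-8.

    Every character, however rare, decomposes into known byte tokens,
    so there are no out-of-vocabulary symbols; BBPE then learns BPE
    merges over these byte sequences instead of over characters."""
    return list(text.encode("utf-8"))

# An ASCII string uses one byte token per character, while a CJK
# character such as "訳" becomes three byte tokens, all drawn from
# the same base vocabulary of at most 256 symbols.
```

This also shows why cross-lingual transfer with non-overlapping character sets is possible: the byte vocabulary is shared by construction, whatever the script.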
VizSeq: A Visual Analysis Toolkit for Text Generation Tasks
Automatic evaluation of text generation tasks (e.g. machine translation, text
summarization, image captioning and video description) usually relies heavily
on task-specific metrics such as BLEU and ROUGE. These metrics, however, are
abstract numbers that are not perfectly aligned with human assessment, which
suggests inspecting detailed examples as a complement in order to identify
system error patterns.
In this paper, we present VizSeq, a visual analysis toolkit for instance-level
and corpus-level system evaluation on a wide variety of text generation tasks.
It supports multimodal sources and multiple text references, and provides
visualization in a Jupyter notebook or through a web app interface. It can be
used locally or deployed onto public servers for centralized data hosting and
benchmarking. It covers the most common n-gram-based metrics, accelerated
with multiprocessing, and also provides the latest embedding-based metrics
such as BERTScore.
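N-gram metrics of the kind VizSeq covers are computed independently per sentence, which is what makes multiprocessing acceleration effective. A hedged sketch of the idea, using a clipped bigram precision as a stand-in for the toolkit's actual metric implementations (these function names are illustrative, not VizSeq's API):

```python
from collections import Counter
from multiprocessing import Pool

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(pair, n=2):
    """Clipped n-gram precision for one (hypothesis, reference) pair,
    the kind of per-sentence statistic behind BLEU-style metrics."""
    hyp, ref = (s.split() for s in pair)
    hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
    overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
    return overlap / max(sum(hyp_counts.values()), 1)

def corpus_scores(hypotheses, references, workers=4):
    # Sentence-level scoring is embarrassingly parallel, so a process
    # pool spreads it across cores, mirroring the toolkit's use of
    # multiprocessing to speed up corpus-level metric computation.
    with Pool(workers) as pool:
        return pool.map(ngram_precision, list(zip(hypotheses, references)))
```

Embedding-based metrics such as BERTScore do not parallelize this cheaply, since each worker would need its own copy of the model, which is one reason they are handled separately.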
- …