A Continuously Growing Dataset of Sentential Paraphrases
A major challenge in paraphrase research is the lack of parallel corpora. In
this paper, we present a new method to collect large-scale sentential
paraphrases from Twitter by linking tweets through shared URLs. The main
advantage of our method is its simplicity: it removes the classifier or
human-in-the-loop data selection that previous work required before annotation
and subsequent application of paraphrase identification algorithms. We
present the largest human-labeled paraphrase corpus to date of 51,524 sentence
pairs and the first cross-domain benchmarking for automatic paraphrase
identification. In addition, we show that more than 30,000 new sentential
paraphrases can be easily and continuously captured every month at ~70%
precision, and demonstrate their utility for downstream NLP tasks through
phrasal paraphrase extraction. We make our code and data freely available.
Comment: 11 pages, accepted to EMNLP 201
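The URL-linking idea described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the input format, example tweets, and the `candidate_paraphrases` helper are all assumptions made for the sketch.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical input: (tweet_text, shared_url) pairs.
tweets = [
    ("Scientists discover water on Mars", "http://example.com/a"),
    ("Water found on Mars, scientists say", "http://example.com/a"),
    ("Markets rally after rate cut", "http://example.com/b"),
    ("Stocks surge as rates are cut", "http://example.com/b"),
]

def candidate_paraphrases(tweets):
    """Group tweets by the URL they share, then pair up tweets
    within each group as candidate sentential paraphrases."""
    by_url = defaultdict(list)
    for text, url in tweets:
        by_url[url].append(text)
    pairs = []
    for texts in by_url.values():
        pairs.extend(combinations(texts, 2))
    return pairs

pairs = candidate_paraphrases(tweets)
```

Because tweets sharing a URL tend to report the same event, no trained classifier is needed to surface candidates; the pairs can go straight to human annotation.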
Contextualized Translation of Automatically Segmented Speech
Direct speech-to-text translation (ST) models are usually trained on corpora
segmented at sentence level, but at inference time they are commonly fed with
audio split by a voice activity detector (VAD). Since VAD segmentation is not
syntax-informed, the resulting segments do not necessarily correspond to
well-formed sentences uttered by the speaker but, most likely, to fragments of
one or more sentences. This segmentation mismatch considerably degrades the
quality of ST models' output. So far, researchers have focused on improving
audio segmentation towards producing sentence-like splits. In this paper,
instead, we address the issue in the model, making it more robust to a
different, potentially sub-optimal segmentation. To this end, we train our
models on randomly segmented data and compare two approaches: fine-tuning and
adding the previous segment as context. We show that our context-aware solution
is more robust to VAD-segmented input, outperforming a strong base model and
the fine-tuning on different VAD segmentations of an English-German test set by
up to 4.25 BLEU points.
Comment: Interspeech 202
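The two ingredients above, random re-segmentation for training and pairing each segment with its predecessor as context, can be sketched on token streams. This is a toy illustration under assumptions of my own (the actual models operate on audio, and the length bounds and helper names here are made up):

```python
import random

def random_resegment(tokens, min_len=3, max_len=10, seed=0):
    """Split a token stream at random points, mimicking VAD-like,
    syntax-unaware segmentation (length bounds are illustrative)."""
    rng = random.Random(seed)
    segments, i = [], 0
    while i < len(tokens):
        n = rng.randint(min_len, max_len)
        segments.append(tokens[i:i + n])
        i += n
    return segments

def with_previous_context(segments):
    """Pair each segment with the previous one as context,
    mirroring the context-aware training setup."""
    prev, examples = [], []
    for seg in segments:
        examples.append((prev, seg))
        prev = seg
    return examples

tokens = ("this is a long utterance that a vad would split "
          "without caring about syntax").split()
examples = with_previous_context(random_resegment(tokens))
```

Training on such randomly split, context-paired data is what makes the model robust to whatever segmentation the VAD produces at inference time.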
Con-S2V: A Generic Framework for Incorporating Extra-Sentential Context into Sen2Vec
We present a novel approach to learn distributed representations of sentences from unlabeled data by modeling both the content and the context of a sentence. The content model learns a sentence representation by predicting its words. The context model, on the other hand, comprises a neighbor prediction component and a regularizer, which model the distributional and proximity hypotheses, respectively. We propose an online algorithm to train the model components jointly. We evaluate the models in a setup where contextual information is available. Experimental results on tasks involving classification, clustering, and ranking of sentences show that our model outperforms the best existing models by a wide margin across multiple datasets.
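The three components described above can be sketched as a toy joint objective. Everything here is an illustrative assumption, the toy vectors, the logistic losses, and the weight `lam`; the actual Con-S2V objective and training algorithm differ in detail:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def joint_loss(v_s, word_vecs, v_neighbor, lam=1.0):
    """Toy joint objective with Con-S2V's three ingredients:
    content  -- the sentence vector should score its own words highly;
    context  -- it should also score its neighboring sentence highly
                (distributional hypothesis);
    proximity -- it should stay close to the neighbor's vector
                (regularizer for the proximity hypothesis)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    content = -sum(math.log(sigmoid(dot(w, v_s))) for w in word_vecs)
    context = -math.log(sigmoid(dot(v_neighbor, v_s)))
    proximity = lam * sum((a - b) ** 2 for a, b in zip(v_s, v_neighbor))
    return content + context + proximity

# Tiny 2-d example: a sentence vector, its two word vectors, a neighbor.
loss = joint_loss([0.1, 0.2], [[1.0, 0.0], [0.0, 1.0]], [0.1, 0.3])
```

Minimizing the sum jointly, as the paper's online algorithm does, pushes a sentence's representation toward both its own words and its discourse neighbors.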