A Deep Network Model for Paraphrase Detection in Short Text Messages
This paper is concerned with paraphrase detection. The ability to detect
similar sentences written in natural language is crucial for several
applications, such as text mining, text summarization, plagiarism detection,
authorship authentication and question answering. Given two sentences, the
objective is to detect whether they are semantically identical. An important
insight from this work is that existing paraphrase systems perform well when
applied on clean texts, but they do not necessarily deliver good performance
against noisy texts. Challenges with paraphrase detection on user generated
short texts, such as Twitter, include language irregularity and noise. To cope
with these challenges, we propose a novel deep neural network-based approach
that relies on coarse-grained sentence modeling using a convolutional neural
network and a long short-term memory model, combined with a specific
fine-grained word-level similarity matching model. Our experimental results
show that the proposed approach outperforms existing state-of-the-art
approaches on user-generated noisy social media data, such as Twitter texts,
and achieves highly competitive performance on a cleaner corpus.
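As a rough illustration (not the authors' actual model), the fine-grained word-level similarity matching component can be sketched as a matrix of cosine similarities between every word pair of the two sentences; `vectors` is a hypothetical word-to-vector lookup standing in for pretrained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def similarity_matrix(sent_a, sent_b, vectors):
    """Fine-grained word-level matching: one cosine score per
    word pair across the two sentences."""
    return [[cosine(vectors[wa], vectors[wb]) for wb in sent_b]
            for wa in sent_a]
```

A coarse-grained model (e.g. a CNN or LSTM sentence encoder) would then be combined with features pooled from this matrix.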
A Continuously Growing Dataset of Sentential Paraphrases
A major challenge in paraphrase research is the lack of parallel corpora. In
this paper, we present a new method to collect large-scale sentential
paraphrases from Twitter by linking tweets through shared URLs. The main
advantage of our method is its simplicity: it removes the classifier or
human-in-the-loop step that previous work required to select data before
annotation and the subsequent application of paraphrase identification
algorithms. We
present the largest human-labeled paraphrase corpus to date of 51,524 sentence
pairs and the first cross-domain benchmarking for automatic paraphrase
identification. In addition, we show that more than 30,000 new sentential
paraphrases can be easily and continuously captured every month at ~70%
precision, and demonstrate their utility for downstream NLP tasks through
phrasal paraphrase extraction. We make our code and data freely available.
Comment: 11 pages, accepted to EMNLP 201
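The URL-linking idea above can be sketched in a few lines: tweets that share the same URL become candidate sentential paraphrase pairs (a simplified illustration of the collection step, not the authors' pipeline):

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(tweets):
    """Group tweets by the URL they link to; tweets sharing a URL
    yield candidate sentential paraphrase pairs.

    `tweets` is an iterable of (text, url) tuples."""
    by_url = defaultdict(list)
    for text, url in tweets:
        by_url[url].append(text)
    pairs = []
    for texts in by_url.values():
        pairs.extend(combinations(texts, 2))
    return pairs
```

The resulting candidates would still need human labeling (or a filter) to reach the reported ~70% precision.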
Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features
The recent tremendous success of unsupervised word embeddings in a multitude
of applications raises the obvious question of whether similar methods could be derived
to improve embeddings (i.e. semantic representations) of word sequences as
well. We present a simple but efficient unsupervised objective to train
distributed representations of sentences. Our method outperforms the
state-of-the-art unsupervised models on most benchmark tasks, highlighting the
robustness of the produced general-purpose sentence embeddings.
Comment: NAACL 201
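A minimal sketch of the compositional n-gram idea (under the assumption that unigrams and n-grams share one embedding table, here a hypothetical `vectors` dict): the sentence embedding is the average of the vectors of all unigrams and n-grams found in the table.

```python
def sentence_embedding(tokens, vectors, n=2):
    """Average the vectors of all unigrams and n-grams of the
    sentence that appear in the lookup table `vectors`."""
    grams = list(tokens)
    grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    hits = [vectors[g] for g in grams if g in vectors]
    if not hits:
        return []
    dim = len(hits[0])
    return [sum(v[i] for v in hits) / len(hits) for i in range(dim)]
```

In the actual unsupervised setting the table itself is learned; this sketch only shows the compositional averaging at inference time.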
Representation learning for very short texts using weighted word embedding aggregation
Short text messages such as tweets are very noisy and sparse in their use of
vocabulary. Traditional textual representations, such as tf-idf, have
difficulty grasping the semantic meaning of such texts, which is important in
applications such as event detection, opinion mining, news recommendation, etc.
We constructed a method based on semantic word embeddings and frequency
information to arrive at low-dimensional representations for short texts
designed to capture semantic similarity. For this purpose we designed a
weight-based model and a learning procedure based on a novel median-based loss
function. This paper discusses the details of our model and the optimization
methods, together with the experimental results on both Wikipedia and Twitter
data. We find that our method outperforms the baseline approaches in the
experiments, and that it generalizes well on different word embeddings without
retraining. Our method is therefore capable of retaining most of the semantic
information in the text, and is applicable out-of-the-box.
Comment: 8 pages, 3 figures, 2 tables, appears in Pattern Recognition Letters
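The core of such weight-based aggregation can be sketched as a weighted mean of word vectors, where rarer (more informative) words count more; `weight` is a hypothetical word-importance function (e.g. idf-style), not the authors' learned weighting:

```python
def weighted_embedding(tokens, vectors, weight):
    """Frequency-aware aggregation: a weighted mean of the word
    vectors of a short text, with `weight(word)` as importance."""
    pairs = [(weight(t), vectors[t]) for t in tokens if t in vectors]
    if not pairs:
        return []
    total = sum(w for w, _ in pairs)
    dim = len(pairs[0][1])
    return [sum(w * v[i] for w, v in pairs) / total for i in range(dim)]
```

Because the weighting is separate from the embeddings themselves, swapping in different pretrained word vectors requires no retraining, matching the out-of-the-box claim above.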
A Semi-automatic Method for Efficient Detection of Stories on Social Media
Twitter has become one of the main sources of news for many people. As
real-world events and emergencies unfold, Twitter is abuzz with hundreds of
thousands of stories about the events. Some of these stories are harmless,
while others could potentially be life-saving or sources of malicious rumors.
Thus, it is critically important to be able to efficiently track stories that
spread on Twitter during these events. In this paper, we present a novel
semi-automatic tool that enables users to efficiently identify and track
stories about real-world events on Twitter. We ran a user study with 25
participants, demonstrating that compared to more conventional methods, our
tool can increase the speed and the accuracy with which users can track stories
about real-world events.
Comment: ICWSM'16, May 17-20, Cologne, Germany. In Proceedings of the 10th
International AAAI Conference on Weblogs and Social Media (ICWSM 2016),
Cologne, Germany.
ParaPhraser: Russian paraphrase corpus and shared task
The paper describes the results of the First Russian Paraphrase Detection Shared Task, held in St. Petersburg, Russia, in October 2016. Research on paraphrase extraction, detection, and generation has been developing successfully for a long time, but interest in the problem within the Russian computational linguistics community has surged only recently. We try to close this gap by introducing the ParaPhraser.ru project, dedicated to collecting a Russian paraphrase corpus, and by organizing a Paraphrase Detection Shared Task that uses the corpus as its training data. The participants applied a wide variety of techniques to the problem of paraphrase detection, from rule-based approaches to deep learning, and the results reflect the following tendencies: the best scores are obtained by traditional classifiers combined with fine-grained linguistic features; however, complex neural networks, shallow methods, and purely technical methods also demonstrate competitive results.
Peer reviewed