Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric Learning
The goal of a Question Paraphrase Retrieval (QPR) system is to retrieve
equivalent questions that result in the same answer as the original question.
Such a system can be used to understand and answer rare and noisy
reformulations of common questions by mapping them to a set of canonical forms.
This has large-scale applications for community Question Answering (cQA) and
open-domain spoken language question answering systems. In this paper we
describe a new QPR system implemented as a Neural Information Retrieval (NIR)
system consisting of a neural network sentence encoder and an approximate
k-Nearest Neighbour index for efficient vector retrieval. We also describe our
mechanism to generate an annotated dataset for question paraphrase retrieval
experiments automatically from question-answer logs via distant supervision. We
show that the standard loss function in NIR, triplet loss, does not perform
well with noisy labels. We propose a smoothed deep metric loss (SDML), and our
experiments on two QPR datasets show that it significantly outperforms triplet
loss in the noisy-label setting.
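The contrast between the two losses can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the triplet loss is the standard margin formulation, and the smoothed loss is assumed here to be a softmax cross-entropy over in-batch similarities with label smoothing, which is one common way such a loss is defined; the paper's exact SDML may differ in its details.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the positive closer than the negative by a margin."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

def smoothed_softmax_loss(anchors, candidates, pos_index, epsilon=0.1):
    """Assumed SDML form: cross-entropy over in-batch similarities with label smoothing.

    Spreading epsilon of the target mass over the non-positive candidates makes
    the loss less sensitive to a mislabeled "positive" than a hard triplet margin.
    """
    sims = anchors @ candidates.T                     # (B, B) similarity matrix
    sims = sims - sims.max(axis=1, keepdims=True)     # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    b, n = log_probs.shape
    targets = np.full((b, n), epsilon / (n - 1))      # smoothed negatives
    targets[np.arange(b), pos_index] = 1.0 - epsilon  # softened positive
    return -(targets * log_probs).sum(axis=1).mean()
```

With clean labels both losses behave similarly; the smoothing matters when some "paraphrase" pairs in the distantly supervised data are wrong, since no single noisy pair can dominate the gradient.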
Predicting the Quality of Short Narratives from Social Media
An important and difficult challenge in building computational models for
narratives is the automatic evaluation of narrative quality. Quality evaluation
connects narrative understanding and generation as generation systems need to
evaluate their own products. To circumvent difficulties in acquiring
annotations, we employ upvotes in social media as an approximate measure for
story quality. We collected 54,484 answers from a crowd-powered
question-and-answer website, Quora, and then used active learning to build a
classifier that labeled 28,320 answers as stories. To predict the number of
upvotes without the use of social network features, we create neural networks
that model textual regions and the interdependence among regions, which serve
as strong benchmarks for future research. To the best of our knowledge, this is
the first large-scale study of automatic evaluation of narrative quality.
Comment: 7 pages, 2 figures. Accepted at the 2017 IJCAI conference.
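The active-learning step used to label answers as stories can be sketched with pool-based uncertainty sampling, one standard strategy for this kind of labeling loop. This is a hedged illustration, not the paper's procedure: the function and the selection rule are assumptions.

```python
import numpy as np

def uncertainty_sample(probs, k):
    """Pool-based active learning: pick the k unlabeled items the current
    classifier is least certain about (probability closest to 0.5), so a human
    annotator labels the most informative examples next.
    """
    uncertainty = 1.0 - np.abs(probs - 0.5) * 2.0  # 1.0 at p=0.5, 0.0 at p in {0, 1}
    return np.argsort(-uncertainty)[:k]
```

Each round, the classifier scores the unlabeled pool, the top-k uncertain answers are annotated as story / not-story, and the classifier is retrained, which is how a small annotation budget can yield tens of thousands of labeled answers.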
Language Use Matters: Analysis of the Linguistic Structure of Question Texts Can Characterize Answerability in Quora
Quora is one of the most popular community Q&A sites of recent times.
However, many question posts on this Q&A site do not get answered. In
this paper, we quantify various linguistic activities that discriminate an
answered question from an unanswered one. Our central finding is that the way
users use language while writing the question text can be a very effective
means to characterize answerability. This characterization helps us predict
early whether a question that has remained unanswered for a specific time
period t will eventually be answered, achieving an accuracy of 76.26% (t = 1
month) and 68.33% (t = 3 months). Notably, features representing the language use
patterns of the users are most discriminative and alone account for an accuracy
of 74.18%. We also compare our method with some of the similar works (Dror et
al., Yang et al.), achieving a maximum improvement of ~39% in terms of accuracy.
Comment: 1 figure, 3 tables, ICWSM 2017 as poster.
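The prediction setup can be sketched as a linear classifier over linguistic features of the question text. This is a minimal sketch only: the three features below (length, question marks, politeness cues) are hypothetical stand-ins, not the paper's actual feature set, and the logistic model is an assumption about the classifier family.

```python
import numpy as np

def extract_features(question):
    """Hypothetical linguistic features; the paper's feature set is richer."""
    words = question.split()
    politeness = {"please", "thanks", "kindly"}
    return np.array([
        float(len(words)),                                   # question length
        float(question.count("?")),                          # question marks
        float(sum(w.lower().strip(".,") in politeness        # politeness cues
                  for w in words)),
    ])

def predict_answerable(question, weights, bias):
    """Logistic score: estimated probability the question gets answered."""
    z = extract_features(question) @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))
```

With weights fit on questions observed at time t, the same score can be read either as "will be answered eventually" or thresholded for the binary prediction the abstract reports.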
Adversarial Domain Adaptation for Duplicate Question Detection
We address the problem of detecting duplicate questions in forums, which is
an important step towards automating the process of answering new questions. As
finding and annotating such potential duplicates manually is very tedious and
costly, automatic methods based on machine learning are a viable alternative.
However, many forums do not have annotated data, i.e., questions labeled by
experts as duplicates, and thus a promising solution is to use domain
adaptation from another forum that has such annotations. Here we focus on
adversarial domain adaptation, deriving important findings about when it
performs well and what properties of the domains are important in this regard.
Our experiments with StackExchange data show an average improvement of 5.6%
over the best baseline across multiple pairs of domains.
Comment: EMNLP 2018 short paper - camera ready. 8 pages.
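Adversarial domain adaptation is commonly built around a gradient reversal layer: the feature encoder is trained so a domain discriminator cannot tell source-forum from target-forum questions apart, which encourages domain-invariant features. The minimal mechanics, written here without an autograd framework as a hedged sketch (the paper does not specify this exact construction):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips (and scales) gradients in the backward
    pass, so minimizing the domain-discriminator loss downstream *maximizes* it
    with respect to the feature encoder upstream.
    """

    def __init__(self, lambda_=1.0):
        self.lambda_ = lambda_

    def forward(self, features):
        # Features flow through unchanged to the domain discriminator.
        return features

    def backward(self, grad_from_discriminator):
        # The encoder receives the negated gradient: it learns features the
        # discriminator finds hard to classify by domain.
        return -self.lambda_ * grad_from_discriminator
```

In training, the duplicate-detection loss on the labeled source forum and the (reversed) domain loss on both forums update the shared encoder together; `lambda_` trades off task accuracy against domain invariance.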