Multitask Learning with Deep Neural Networks for Community Question Answering
In this paper, we develop a deep neural network (DNN) that learns to solve simultaneously the three tasks of the cQA challenge proposed by SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity, and new question-comment similarity. The latter is the main task, which can exploit the previous two to achieve better results. Our DNN is trained jointly on all three cQA tasks and learns to encode questions and comments into a single vector representation shared across the tasks. The results on the official challenge test set show that our approach produces higher accuracy and faster convergence than the individual single-task neural networks. Additionally, our method, which does not use any manual feature engineering, approaches the state of the art established by methods that make heavy use of it.
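As a rough illustration of the shared-representation idea described in this abstract, the sketch below shows a single encoder feeding three task-specific classification heads for the SemEval-2016 Task 3 subtasks. The encoder architecture, layer sizes, and head names are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch (assumed architecture): one shared pair encoder, three task heads.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Encodes a (question, comment) pair into a single shared vector."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.proj = nn.Linear(2 * emb_dim, hid_dim)

    def forward(self, q_ids, c_ids):
        q = self.emb(q_ids).mean(dim=1)   # average word embeddings (illustrative choice)
        c = self.emb(c_ids).mean(dim=1)
        return torch.relu(self.proj(torch.cat([q, c], dim=-1)))

class MultitaskCQA(nn.Module):
    """One shared representation, three task-specific classifiers."""
    def __init__(self, vocab_size, hid_dim=128):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, hid_dim=hid_dim)
        self.heads = nn.ModuleDict({
            "qc_sim": nn.Linear(hid_dim, 2),   # question-comment similarity
            "qq_sim": nn.Linear(hid_dim, 2),   # question-question similarity
            "new_qc": nn.Linear(hid_dim, 2),   # new question-comment similarity
        })

    def forward(self, task, q_ids, c_ids):
        return self.heads[task](self.encoder(q_ids, c_ids))

# Joint training alternates (or mixes) batches from the three tasks and sums the
# per-task cross-entropy losses, so all gradients flow into the shared encoder.
```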
Adversarial Domain Adaptation for Duplicate Question Detection
We address the problem of detecting duplicate questions in forums, which is
an important step towards automating the process of answering new questions. As
finding and annotating such potential duplicates manually is very tedious and
costly, automatic methods based on machine learning are a viable alternative.
However, many forums do not have annotated data, i.e., questions labeled by
experts as duplicates, and thus a promising solution is to use domain
adaptation from another forum that has such annotations. Here we focus on
adversarial domain adaptation, deriving important findings about when it
performs well and what properties of the domains are important in this regard.
Our experiments with StackExchange data show an average improvement of 5.6%
over the best baseline across multiple pairs of domains. (EMNLP 2018 short paper, camera ready, 8 pages.)
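A standard way to realise adversarial domain adaptation in a setup like this is a gradient-reversal (DANN-style) objective: the encoder is trained to help the duplicate classifier on the labelled source forum while confusing a domain classifier that tries to tell source from target. The sketch below assumes that formulation and pre-computed question-pair features; the paper's actual architecture may differ.

```python
# Minimal DANN-style sketch (assumed setup, not the paper's exact model).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DuplicateDetectorDA(nn.Module):
    def __init__(self, in_dim=300, hid_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.dup_clf = nn.Linear(hid_dim, 2)   # duplicate / not duplicate (source labels only)
        self.dom_clf = nn.Linear(hid_dim, 2)   # source forum / target forum

    def forward(self, pair_feats, lambd=1.0):
        h = self.encoder(pair_feats)
        dup_logits = self.dup_clf(h)                              # supervised loss
        dom_logits = self.dom_clf(GradReverse.apply(h, lambd))    # adversarial loss
        return dup_logits, dom_logits
```

The gradient reversal makes the encoder minimise the duplicate-detection loss while maximising the domain classifier's loss, pushing it toward forum-invariant representations.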
Neural Skill Transfer from Supervised Language Tasks to Reading Comprehension
Reading comprehension is a challenging task in natural language processing
and requires a set of skills to be solved. While current approaches focus on
solving the task as a whole, in this paper, we propose to use a neural network
'skill' transfer approach. We transfer knowledge from several lower-level
language tasks (skills) including textual entailment, named entity recognition,
paraphrase detection and question type classification into the reading
comprehension model.
We conduct an empirical evaluation and show that transferring language skill knowledge leads to significant improvements for the task, with far fewer training steps than the baseline model. We also show that the skill transfer approach is effective even with small amounts of training data. Another finding of this work is that using token-wise deep label supervision for text classification improves the performance of transfer learning.
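One common way to realise this kind of skill transfer is to pretrain an encoder on a token-wise supervised task and reuse its weights to initialise the reading comprehension model. The sketch below assumes that scheme, with NER as the example skill and a BiLSTM encoder; it is an illustration, not the paper's exact architecture.

```python
# Minimal transfer sketch (assumed scheme): pretrain on a skill task, reuse the encoder.
import torch.nn as nn

class TokenEncoder(nn.Module):
    """BiLSTM encoder shared between the skill task and the target task."""
    def __init__(self, emb_dim=100, hid_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, embeddings):
        out, _ = self.lstm(embeddings)
        return out                         # one contextual vector per token

# Skill task: token-wise supervision (e.g., BIO tags for NER).
skill_encoder = TokenEncoder()
ner_head = nn.Linear(2 * 128, 9)           # 9 BIO tags, illustrative
# ... train skill_encoder + ner_head on the NER corpus ...

# Target task: reading comprehension span prediction, initialised from the
# pretrained skill encoder instead of random weights.
rc_encoder = TokenEncoder()
rc_encoder.load_state_dict(skill_encoder.state_dict())   # transfer the learned skill
span_head = nn.Linear(2 * 128, 2)           # start / end logits over passage tokens
```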
Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric Learning
The goal of a Question Paraphrase Retrieval (QPR) system is to retrieve
equivalent questions that result in the same answer as the original question.
Such a system can be used to understand and answer rare and noisy
reformulations of common questions by mapping them to a set of canonical forms.
This has large-scale applications for community Question Answering (cQA) and
open-domain spoken language question answering systems. In this paper we
describe a new QPR system implemented as a Neural Information Retrieval (NIR)
system consisting of a neural network sentence encoder and an approximate
k-Nearest Neighbour index for efficient vector retrieval. We also describe our
mechanism for automatically generating an annotated dataset for question paraphrase retrieval experiments from question-answer logs via distant supervision. We show that the standard loss function in NIR, the triplet loss, does not perform well with noisy labels. We propose a smoothed deep metric loss (SDML) and show, through experiments on two QPR datasets, that it significantly outperforms the triplet loss in the noisy-label setting.
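For reference, the sketch below contrasts the standard triplet loss with one plausible smoothed alternative: an in-batch softmax over similarities with label smoothing. The exact form of the paper's SDML is not reproduced here, so this formulation is an assumption, shown only to illustrate how smoothing can soften the effect of noisy positives; encoders are assumed to return L2-normalised sentence vectors.

```python
# Triplet loss vs. a smoothed in-batch softmax loss (the latter is an assumed stand-in).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard NIR loss: pull the paraphrase closer than the non-paraphrase by a margin."""
    pos = (anchor - positive).pow(2).sum(dim=-1)
    neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(pos - neg + margin).mean()

def smoothed_metric_loss(anchors, positives, temperature=0.1, smoothing=0.1):
    """In-batch softmax over similarities with label smoothing (illustrative only)."""
    sims = anchors @ positives.t() / temperature              # batch x batch similarity matrix
    targets = torch.arange(sims.size(0), device=sims.device)  # i-th positive matches i-th anchor
    return F.cross_entropy(sims, targets, label_smoothing=smoothing)
```

The intuition is that a hard margin treats every labelled positive as fully reliable, whereas the smoothed target distribution spreads a small amount of probability over the other candidates, limiting the damage from mislabelled pairs.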