Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering
In this paper, the answer selection problem in community question answering
(CQA) is regarded as an answer sequence labeling task, and a novel approach
based on a recurrent architecture is proposed for this problem. Our approach
first applies convolutional neural networks (CNNs) to learn the joint
representation of each question-answer pair, and then uses the joint
representation as input to a long short-term memory (LSTM) network to learn the
answer sequence of a question, labeling the matching quality of each answer.
Experiments conducted on the SemEval 2015 CQA dataset show the effectiveness of
our approach.
Comment: 6 pages
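The CNN-then-LSTM pipeline described above can be sketched roughly as follows. This is a toy NumPy version with random weights, not the paper's trained model; the filter widths, pooling choice, and labeling head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_max_pool(embeddings, filters):
    """1-D convolution over a token-embedding sequence followed by
    max-over-time pooling, as in a basic sentence CNN."""
    seq_len, dim = embeddings.shape
    n_filters, width, _ = filters.shape
    feats = np.full(n_filters, -np.inf)
    for i in range(seq_len - width + 1):
        window = embeddings[i:i + width]                  # (width, dim)
        scores = np.tanh(np.einsum('fwd,wd->f', filters, window))
        feats = np.maximum(feats, scores)
    return feats                                          # (n_filters,)

def lstm_step(x, h, c, W, U, b):
    """Single LSTM step; gates stacked as [input, forget, output, candidate]."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i, f, o = (1 / (1 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy setup: one question with 3 candidate answers, 8-dim word embeddings.
dim, width, n_filters, hidden = 8, 3, 6, 5
filters = rng.normal(size=(n_filters, width, dim))
W = rng.normal(size=(4 * hidden, 2 * n_filters))   # LSTM input: [q_feats; a_feats]
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(size=hidden)                    # labeling head

question = rng.normal(size=(7, dim))
answers = [rng.normal(size=(rng.integers(4, 9), dim)) for _ in range(3)]

q_feats = conv1d_max_pool(question, filters)
h, c = np.zeros(hidden), np.zeros(hidden)
labels = []
for ans in answers:             # the answers form a sequence under the question
    joint = np.concatenate([q_feats, conv1d_max_pool(ans, filters)])
    h, c = lstm_step(joint, h, c, W, U, b)
    labels.append(1 / (1 + np.exp(-w_out @ h)))    # matching quality per answer
print([round(float(p), 3) for p in labels])
```

Because the LSTM state carries over across answers, the label for each answer depends on the answers that preceded it, which is the point of treating selection as sequence labeling.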
Fact Checking in Community Forums
Community Question Answering (cQA) forums are very popular nowadays, as they
represent effective means for communities around particular topics to share
information. Unfortunately, this information is not always factual. Thus, here
we explore a new dimension in the context of cQA, which has been ignored so
far: checking the veracity of answers to particular questions in cQA forums. As
this is a new problem, we create a specialized dataset for it. We further
propose a novel multi-faceted model, which captures information from the answer
content (what is said and how), from the author profile (who says it), from the
rest of the community forum (where it is said), and from external authoritative
sources of information (external support). Evaluation results show a MAP value
of 86.54, which is 21 points absolute above the baseline.
Comment: AAAI-2018; Fact-Checking; Veracity; Community Question Answering;
Neural Networks; Distributed Representation
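In the simplest reading, the four information facets described above could be combined by concatenating per-facet feature vectors before a veracity classifier. The sketch below uses made-up feature dimensions and random values in place of real extractors; it only illustrates the multi-faceted structure, not the paper's actual features or weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-facet feature vectors; in practice these would come from
# text encoders, user metadata, thread context, and retrieved web evidence.
facets = {
    "answer_content":   rng.normal(size=16),  # what is said and how
    "author_profile":   rng.normal(size=4),   # who says it
    "forum_context":    rng.normal(size=8),   # where it is said
    "external_support": rng.normal(size=6),   # agreement with external sources
}

x = np.concatenate(list(facets.values()))      # multi-faceted representation
w = rng.normal(size=x.shape[0])                # stand-in classifier weights
veracity = 1 / (1 + np.exp(-(w @ x)))          # P(answer is factual)
print(round(float(veracity), 3))
```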
Fully Automated Fact Checking Using External Sources
Given the constantly growing proliferation of false claims online in recent
years, there has also been growing research interest in automatically
distinguishing false rumors from factually true claims. Here, we propose a
general-purpose framework for fully-automatic fact checking using external
sources, tapping the potential of the entire Web as a knowledge source to
confirm or reject a claim. Our framework uses a deep neural network with LSTM
text encoding to combine semantic kernels with task-specific embeddings that
encode a claim together with pieces of potentially-relevant text fragments from
the Web, taking the source reliability into account. The evaluation results
show good performance on two different tasks and datasets: (i) rumor detection
and (ii) fact checking of the answers to a question in community question
answering forums.
Comment: RANLP-201
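One illustrative reading of scoring a claim against retrieved evidence while taking source reliability into account is a reliability-weighted similarity between the encoded claim and each encoded snippet. The encoder below is a hashing stand-in, not the paper's LSTM, and the snippets and reliability priors are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(tokens, dim=8):
    """Stand-in for the LSTM text encoder: mean of hashed token features,
    normalized to unit length."""
    vecs = [np.sin(np.arange(1, dim + 1) * (hash(t) % 97 + 1)) for t in tokens]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

claim = encode("the claim under verification".split())

# Retrieved web snippets, each tagged with a source-reliability prior.
evidence = [
    ("snippet from a reliable outlet discussing the claim".split(), 0.9),
    ("snippet from a low-reliability blog repeating the claim".split(), 0.2),
]

# Reliability-weighted agreement between the claim and each snippet.
score = sum(r * float(claim @ encode(snippet)) for snippet, r in evidence)
score /= sum(r for _, r in evidence)
print(round(score, 3))
```

Because both vectors are unit length, each term is a cosine similarity in [-1, 1], and the weighting lets high-reliability sources dominate the verdict.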
Enhancing Answer Selection in Community Question Answering with Pre-trained and Large Language Models
Community Question Answering (CQA) has become increasingly prevalent in recent
years. However, questions often attract a large number of answers, which makes
it difficult for users to select the relevant ones. Therefore, answer selection
is a significant subtask of CQA. In this paper, we first propose the
Question-Answer cross attention networks (QAN) with pre-trained models for
answer selection, and then utilize a large language model (LLM) to perform
answer selection with knowledge augmentation. Specifically, we apply the BERT
model as the encoder layer to pre-train question subjects, question bodies and
answers, respectively; the cross attention mechanism then selects the most
relevant answer for each question. Experiments show that the QAN model achieves
state-of-the-art performance on two datasets, SemEval2015 and SemEval2017.
Moreover, we use the LLM to generate external knowledge from questions and
correct answers, achieving knowledge augmentation for the answer selection
task, while optimizing the LLM prompt in different aspects. The results show
that introducing external knowledge improves the correct-answer selection rate
of the LLM on the SemEval2015 and SemEval2017 datasets. Meanwhile, with an
optimized prompt, the LLM can also select the correct answer on more questions.
Comment: 24 pages, 4 figures, 14 tables
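The question-to-answer cross attention at the heart of QAN can be sketched as follows. Random vectors stand in for BERT token encodings, and the mean pooling and linear scoring head are assumptions made for the example, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, a_tokens):
    """Each question token attends over all answer tokens; the pooled
    result represents the question-answer pair."""
    attn = softmax(q_tokens @ a_tokens.T, axis=-1)   # (n_q, n_a)
    attended = attn @ a_tokens                       # (n_q, d)
    return np.concatenate([q_tokens.mean(0), attended.mean(0)])

d = 8
question = rng.normal(size=(5, d))   # stand-in for BERT token encodings
answers = [rng.normal(size=(rng.integers(4, 10), d)) for _ in range(4)]

w = rng.normal(size=2 * d)           # stand-in scoring head
scores = [float(w @ cross_attention(question, a)) for a in answers]
best = int(np.argmax(scores))        # most relevant answer for this question
print(best, [round(s, 2) for s in scores])
```

Ranking candidates by the pair score and picking the argmax is the answer-selection step; in the full model the encodings and head would be trained end to end.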