Answer Ranking for Product-Related Questions via Multiple Semantic Relations Modeling
Many E-commerce sites now offer product-specific question answering platforms
for users to communicate with each other by posting and answering questions
during online shopping. However, the multiple answers provided by ordinary
users usually vary widely in quality and thus need to be appropriately
ranked for each question to improve user satisfaction. It can be
observed that product reviews usually provide useful information for a given
question, and thus can assist the ranking process. In this paper, we
investigate the answer ranking problem for product-related questions, with the
relevant reviews treated as auxiliary information that can be exploited for
facilitating the ranking. We propose an answer ranking model named MUSE which
carefully models multiple semantic relations among the question, answers, and
relevant reviews. Specifically, MUSE constructs a multi-semantic relation graph
with the question, each answer, and each review snippet as nodes. Then a
customized graph convolutional neural network is designed for explicitly
modeling the semantic relevance between the question and answers, the content
consistency among answers, and the textual entailment between answers and
reviews. Extensive experiments on real-world E-commerce datasets across three
product categories show that our proposed model achieves superior performance
on the concerned answer ranking task.
Comment: Accepted by SIGIR 202
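The graph construction described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the function names, the one-hop typed adjacency matrices, and the mean-aggregation GCN step are all assumptions chosen to show the idea of one adjacency matrix per semantic relation (question-answer relevance, answer-answer consistency, answer-review entailment) over a node set of the question, the answers, and the review snippets.

```python
import numpy as np

def build_relation_graph(n_answers, n_reviews):
    """Build one adjacency matrix per relation type over the node set
    [question] + answers + reviews. Node 0 is the question.
    (Illustrative layout; the paper's exact graph may differ.)"""
    n = 1 + n_answers + n_reviews
    qa = np.zeros((n, n))  # question <-> answer relevance edges
    aa = np.zeros((n, n))  # answer <-> answer consistency edges
    ar = np.zeros((n, n))  # answer <-> review entailment edges
    answers = range(1, 1 + n_answers)
    reviews = range(1 + n_answers, n)
    for a in answers:
        qa[0, a] = qa[a, 0] = 1.0
        for b in answers:
            if a != b:
                aa[a, b] = 1.0
        for r in reviews:
            ar[a, r] = ar[r, a] = 1.0
    return qa, aa, ar

def gcn_layer(h, adjs, weights):
    """One relation-aware graph-convolution step: mean-aggregate the
    neighbours separately per relation, apply a relation-specific weight
    matrix, sum the results, and apply a ReLU."""
    out = np.zeros((h.shape[0], weights[0].shape[1]))
    for adj, w in zip(adjs, weights):
        deg = adj.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0              # isolated node: avoid divide-by-zero
        out += (adj / deg) @ h @ w       # mean-aggregate, then transform
    return np.maximum(out, 0.0)          # ReLU
```

For example, with two answers and two review snippets, `build_relation_graph(2, 2)` yields 5 nodes, and stacking a few `gcn_layer` calls propagates review evidence into the answer representations that a final scoring head would rank.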
Length-adaptive Neural Network for Answer Selection
Answer selection focuses on selecting the correct answer for a question. Most previous work on answer selection achieves good performance by employing an RNN, which processes all question and answer sentences with the same feature extractor regardless of sentence length. These methods often encounter the problem of long-term dependencies. To address this issue, we propose a Length-adaptive Neural Network (LaNN) for answer selection that can auto-select a neural feature extractor according to the length of the input sentence. In particular, we propose a flexible neural structure that applies a BiLSTM-based feature extractor for short sentences and a Transformer-based feature extractor for long sentences. To the best of our knowledge, LaNN is the first neural network structure that can auto-select the feature extraction mechanism based on the input. We evaluate LaNN against several competitive baselines on the public WikiQA dataset, showing significant improvements over the state-of-the-art.
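The length-based routing idea can be sketched as follows. This is a hedged illustration, not LaNN itself: the length threshold, the cumulative-mean stand-in for a BiLSTM, and the single-head self-attention stand-in for a Transformer block are all assumptions; only the dispatch-by-length structure mirrors the abstract.

```python
import numpy as np

def recurrent_features(emb):
    """Stand-in for a BiLSTM-based extractor: a cumulative mean gives a
    cheap left-to-right sequential summary at each position."""
    return np.cumsum(emb, axis=0) / np.arange(1, len(emb) + 1)[:, None]

def attention_features(emb):
    """Stand-in for a Transformer-based extractor: scaled dot-product
    self-attention with Q = K = V = emb."""
    scores = emb @ emb.T / np.sqrt(emb.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    att = np.exp(scores)
    att /= att.sum(axis=1, keepdims=True)
    return att @ emb

def length_adaptive_encode(emb, threshold=20):
    """Route by sentence length: short sentences (<= threshold tokens) go
    to the recurrent-style extractor, long ones to the attention-style
    extractor. The threshold value here is an arbitrary assumption."""
    if len(emb) <= threshold:
        return recurrent_features(emb)
    return attention_features(emb)
```

Given a token-embedding matrix `emb` of shape `(sentence_length, dim)`, `length_adaptive_encode(emb)` returns contextual features of the same shape from whichever extractor the length gate selects; in the actual model both branches would be trained jointly.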