Answer Ranking for Product-Related Questions via Multiple Semantic Relations Modeling
Many E-commerce sites now offer product-specific question answering platforms
for users to communicate with each other by posting and answering questions
during online shopping. However, the multiple answers provided by ordinary
users usually vary diversely in their qualities and thus need to be
appropriately ranked for each question to improve user satisfaction. It can be
observed that product reviews usually provide useful information for a given
question, and thus can assist the ranking process. In this paper, we
investigate the answer ranking problem for product-related questions, with the
relevant reviews treated as auxiliary information that can be exploited for
facilitating the ranking. We propose an answer ranking model named MUSE which
carefully models multiple semantic relations among the question, answers, and
relevant reviews. Specifically, MUSE constructs a multi-semantic relation graph
with the question, each answer, and each review snippet as nodes. Then a
customized graph convolutional neural network is designed for explicitly
modeling the semantic relevance between the question and answers, the content
consistency among answers, and the textual entailment between answers and
reviews. Extensive experiments on real-world E-commerce datasets across three
product categories show that our proposed model achieves superior performance
on the concerned answer ranking task.
Comment: Accepted by SIGIR 202
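The graph construction described above can be sketched in a minimal, illustrative form: one adjacency matrix per semantic relation (question-answer relevance, answer-answer consistency, answer-review entailment), followed by a relation-specific graph-convolution step. All node counts, dimensions, and weights below are toy assumptions, not the paper's actual MUSE implementation.

```python
import numpy as np

# Toy node set: 1 question (node 0), 2 answers (nodes 1-2), 2 review
# snippets (nodes 3-4). Embeddings stand in for a text encoder's output.
n_nodes, dim = 5, 8
rng = np.random.default_rng(0)
X = rng.standard_normal((n_nodes, dim))  # node embeddings (illustrative)

def adj(edges, n):
    """Build a symmetric, row-normalized adjacency with self-loops."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    A += np.eye(n)                 # self-loops so each node keeps its own signal
    return A / A.sum(axis=1, keepdims=True)

# One adjacency per semantic relation, as in a relational GCN.
A_qa = adj([(0, 1), (0, 2)], n_nodes)   # question-answer relevance edges
A_aa = adj([(1, 2)], n_nodes)           # answer-answer consistency edges
A_ar = adj([(1, 3), (2, 4)], n_nodes)   # answer-review entailment edges

# Relation-specific weights; messages from all relations are summed.
W = {r: rng.standard_normal((dim, dim)) * 0.1 for r in ("qa", "aa", "ar")}
H = np.tanh(A_qa @ X @ W["qa"] + A_aa @ X @ W["aa"] + A_ar @ X @ W["ar"])

# Rank answers by similarity of their updated node states to the question's.
scores = H[1:3] @ H[0]
ranking = np.argsort(-scores)  # answer indices ordered best-first
```

In a trained model the weights would be learned and the edges derived from actual text; here they only show how the three relation types can be kept separate in the message passing.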
Answer Identification from Product Reviews for User Questions by Multi-Task Attentive Networks
Online shopping has become a part of our daily routine, but it still cannot offer an experience as intuitive as shopping in a store. Nowadays, most e-commerce websites offer a Question Answering (QA) system that allows users to consult other users who have purchased the product. However, askers still need to wait patiently for others' replies. In this paper, we investigate how to provide a quick response to the asker by identifying plausible answers from product reviews. By analyzing the similarity and discrepancy between explicit answers and reviews that can serve as answers, we develop a novel multi-task deep learning method with carefully designed attention mechanisms. The method can exploit large amounts of user-generated QA data together with a small set of manually labeled review data to address the problem. Experiments on data collected from Amazon demonstrate its effectiveness and superiority over competitive baselines.
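The multi-task idea above, a shared question-aware representation feeding two task heads (one trained on abundant QA pairs, one on the few labeled reviews), can be sketched as follows. The attention form, dimensions, and head names are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16
q = rng.standard_normal(dim)            # question vector from a shared encoder
cand = rng.standard_normal((6, dim))    # candidate sentence vectors (answers or reviews)

def attend(query, keys):
    """Question-to-candidate attention: softmax over dot-product scores."""
    logits = keys @ query
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w /= w.sum()
    return w @ keys                     # attention-weighted candidate summary

shared = attend(q, cand)                # representation shared across both tasks

# Two task-specific scoring heads on top of the shared features:
# the QA head benefits from large amounts of user-generated QA data,
# while the review head uses the small labeled review set.
W_qa = rng.standard_normal(dim) * 0.1
W_rev = rng.standard_normal(dim) * 0.1
score_qa = float(shared @ W_qa)
score_rev = float(shared @ W_rev)
```

The design choice being illustrated is parameter sharing: gradients from the data-rich QA task shape the shared attention layer, which the data-poor review task then reuses.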