SemEval-2016 Task 3: Community Question Answering
This paper describes the SemEval-2016 Task 3 on Community Question
Answering, which we offered in English and Arabic. For English, we had three
subtasks: Question-Comment Similarity (subtask A), Question-Question
Similarity (subtask B), and Question-External Comment Similarity (subtask C). For Arabic, we
had one additional subtask: reranking the correct answers for a new question (subtask D).
Eighteen teams participated in the task, submitting a total of 95 runs (38
primary and 57 contrastive) for the four subtasks. The participating systems
used a variety of approaches and features to address the different subtasks;
we summarize these approaches in this paper. The best systems achieved an
official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and
D, respectively. These scores are significantly better than those for the
baselines that we provided. For subtask A, the best system improved over the
2015 winner by 3 points absolute in terms of Accuracy.

Comment: community question answering, question-question similarity,
question-comment similarity, answer reranking, English, Arabic. arXiv admin
note: substantial text overlap with arXiv:1912.0073
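As context for the scores reported above, the following is a minimal Python sketch of how Mean Average Precision (MAP) is typically computed over ranked answer lists. It assumes binary relevance labels and that every relevant answer appears in the ranked list; the task's official scorer may normalize differently, and all names here are illustrative, not from the paper.

```python
from typing import List

def average_precision(relevance: List[int]) -> float:
    """AP for one ranked list: relevance[i] is 1 if the item
    at rank i+1 is a relevant (Good) answer, else 0."""
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(rankings: List[List[int]]) -> float:
    """MAP: mean of AP over all queries (here, questions)."""
    return sum(average_precision(r) for r in rankings) / len(rankings)

# Example: relevance labels for the ranked comments of two questions
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))  # ≈ 0.708
```

Scores such as the 79.19 MAP reported for subtask A correspond to this quantity multiplied by 100.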