8 research outputs found

    Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering

    In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach based on a recurrent architecture is proposed for this problem. Our approach first applies convolutional neural networks (CNNs) to learn a joint representation of each question-answer pair, and then feeds the joint representations into a long short-term memory (LSTM) network, which models the answer sequence of a question and labels the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset show the effectiveness of our approach.
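The CNN-to-LSTM pipeline described in the abstract can be sketched as below. This is a minimal illustration with NumPy and toy random weights, not the paper's implementation: the dimensions are arbitrary, the "CNN" is a width-1 convolution with max-over-time pooling, and the recurrence is a single tanh cell rather than a fully gated LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, FILTERS, HIDDEN = 8, 6, 5  # toy dimensions (assumed, not from the paper)

def cnn_joint_repr(q_emb, a_emb, W):
    """Max-over-time convolution over concatenated Q+A word embeddings,
    yielding one joint representation vector per question-answer pair."""
    seq = np.vstack([q_emb, a_emb])          # (len_q + len_a, EMB)
    conv = np.tanh(seq @ W)                  # width-1 filters for simplicity
    return conv.max(axis=0)                  # (FILTERS,)

def label_answer_sequence(joint_reprs, Wx, Wh, b):
    """Run a simplified recurrent cell (stand-in for the LSTM) over the
    answer sequence; emit a matching-quality score in (0, 1) per answer."""
    h = np.zeros(HIDDEN)
    scores = []
    for x in joint_reprs:
        h = np.tanh(x @ Wx + h @ Wh + b)     # simplified, ungated recurrence
        scores.append(1.0 / (1.0 + np.exp(-h.sum())))
    return scores

W  = rng.normal(size=(EMB, FILTERS))
Wx = rng.normal(size=(FILTERS, HIDDEN))
Wh = rng.normal(size=(HIDDEN, HIDDEN))
b  = np.zeros(HIDDEN)

q = rng.normal(size=(4, EMB))                            # a 4-word question
answers = [rng.normal(size=(n, EMB)) for n in (3, 5, 2)]  # 3 candidate answers

reprs = [cnn_joint_repr(q, a, W) for a in answers]
scores = label_answer_sequence(reprs, Wx, Wh, b)
print([round(s, 3) for s in scores])
```

Because the recurrence carries hidden state across answers, each answer's score depends on the answers labeled before it, which is the sequence-labeling view the paper takes.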

    Health conversational system based on contextual matching of community-driven question-answer pairs

    More and more people are turning to the World Wide Web for learning and sharing information about their health using search engines, forums and question answering systems. In this demonstration, we look at a new way of delivering health information to the end-users via coherent conversations. The proposed conversational system allows the end-users to vaguely express and gradually refine their information needs using only natural language questions or statements as input. We provide example scenarios in this demonstration to illustrate the inadequacies of current delivery mechanisms and highlight the innovative aspects of the proposed conversational system.

    Using Word Embeddings to Retrieve Semantically Similar Questions in Community Question Answering

    This paper focuses on question retrieval, which is a crucial and tricky task in Community Question Answering (cQA). Question retrieval aims at finding historical questions that are semantically equivalent to the queried ones, assuming that the answers to the similar questions should also answer the new ones. The major challenges are the lexical gap problem as well as the verboseness of natural language. Most existing methods measure the similarity between questions based on the bag-of-words (BOW) representation, which captures no semantics between words. In this paper, we rely on word embeddings and TF-IDF for a meaningful vector representation of the questions. The similarity between questions is measured using cosine similarity based on their vector-based word representations. Experiments carried out on a real world data set from Yahoo! Answers show that our method is competitive.
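The retrieval scheme in the abstract can be sketched as follows: represent each question as a TF-IDF-weighted average of its word vectors, then rank historical questions by cosine similarity to the query. The corpus, the embeddings (random placeholders for pretrained vectors such as word2vec), and the IDF formula are all toy assumptions here, so the resulting ranking only illustrates the pipeline, not the paper's results.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus of historical questions (stand-ins for a Yahoo! Answers archive).
questions = [
    "how do i reset my router password",
    "best way to learn python fast",
    "router keeps dropping wifi connection",
]
query = "forgot the password of my router"

# Toy word embeddings; a real system would load pretrained vectors.
vocab = sorted({w for q in questions + [query] for w in q.split()})
emb = {w: rng.normal(size=16) for w in vocab}

# Smoothed IDF computed over the historical questions.
n_docs = len(questions)
idf = {w: math.log((1 + n_docs) / (1 + sum(w in q.split() for q in questions))) + 1
       for w in vocab}

def tfidf_embed(text):
    """TF-IDF-weighted average of the word vectors of `text`."""
    words = text.split()
    return sum((words.count(w) / len(words)) * idf[w] * emb[w]
               for w in set(words))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

qv = tfidf_embed(query)
ranked = sorted(questions, key=lambda q: cosine(qv, tfidf_embed(q)), reverse=True)
print(ranked[0])
```

The TF-IDF weighting downplays frequent, uninformative words in the average, which is how the method narrows the lexical gap that plain bag-of-words matching suffers from.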

    Enhanced lexicon based models for extracting question-answer pairs from web forum

    A Web forum is an online community that brings together people in different geographical locations. Members of the forum exchange ideas and expertise, and as a result a huge amount of content on different topics is generated on a daily basis. This human-generated content can be mined as question-answer pairs (Q&A). One of the major challenges in mining Q&A from web forums is establishing a good relationship between a question and its candidate answers. This problem is compounded by the noisy nature of forums' human-generated content. Unfortunately, the existing methods used to mine knowledge from web forums ignore the effect of noise on the mining tools, making the lexical content less effective. This study proposes lexicon-based models that can automatically mine question-answer pairs from web forums with higher accuracy scores. The first phase of the research produced a question mining model. It was implemented using features generated from unigrams, bigrams, forum metadata and simple rules. These features were screened using both chi-square and wrapper techniques; the wrapper-generated features were then used by a Multinomial Naïve Bayes classifier to build the final model. The second phase produced a normalized lexical model for answer mining. It was implemented using 13 lexical features that cut across four quality dimensions. The performance of the features was enhanced by noise normalization, a process that fixed orthographic, phonetic and acronym noise. The third phase of the research produced a hybridized model of lexical and non-lexical features. The average performances of the question mining model, the normalized lexical model and the hybridized model for answer mining were 90.3%, 97.5%, and 99.5% respectively on the three data sets used. They outperformed all previous works in the domain.
The first major contribution of the study is the development of an improved question mining model characterized by higher accuracy, better specificity, lower complexity and the ability to achieve good accuracy across different forum genres. The second contribution is the development of a normalized lexical model capable of establishing a good relationship between a question and its corresponding answer. The third contribution is the development of a hybridized model that integrates lexical features, which guarantee relevance, with non-lexical features, which guarantee quality, to mine web forum answers. The fourth contribution is a novel integration of the question and answer mining models to automatically generate question-answer pairs from web forums.
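The first phase's classifier can be illustrated with a small, self-contained sketch: unigram and bigram features fed to a Multinomial Naïve Bayes model with add-one smoothing. The posts, labels, and feature set here are invented for illustration; the study's actual model also uses forum metadata, simple rules, and wrapper-based feature selection, which this sketch omits.

```python
import math
from collections import Counter

def featurize(post):
    """Unigram + bigram features, mirroring the lexical cues used for question mining."""
    toks = post.lower().split()
    return toks + [f"{a}_{b}" for a, b in zip(toks, toks[1:])]

def train_mnb(posts, labels):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    classes = set(labels)
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for post, y in zip(posts, labels):
        counts[y].update(featurize(post))
    vocab = {f for c in classes for f in counts[c]}
    loglik = {
        c: {f: math.log((counts[c][f] + 1) / (sum(counts[c].values()) + len(vocab)))
            for f in vocab}
        for c in classes
    }
    return prior, loglik, vocab

def predict(model, post):
    prior, loglik, vocab = model
    scores = {
        c: prior[c] + sum(loglik[c][f] for f in featurize(post) if f in vocab)
        for c in prior
    }
    return max(scores, key=scores.get)

# Tiny invented training set: forum posts labeled as questions or not.
posts = [
    "how do i configure apache on ubuntu",
    "what is the best linux distro for servers",
    "thanks that worked perfectly",
    "i agree with the post above",
]
labels = ["question", "question", "non-question", "non-question"]

model = train_mnb(posts, labels)
print(predict(model, "how can i install nginx"))  # → question
```

Once posts are labeled as questions, the later phases pair each question with candidate replies and score them with the (normalized) lexical features, which is where the noise normalization step pays off.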