
    Cultures in Community Question Answering

    CQA services are collaborative platforms where users ask and answer questions. We investigate the influence of national culture on people's online questioning and answering behavior. For this, we analyzed a sample of 200 thousand users in Yahoo Answers from 67 countries. We empirically measure a set of cultural metrics derived from Geert Hofstede's cultural dimensions and Robert Levine's Pace of Life, and show that behavioral cultural differences exist in community question answering platforms. We find that national cultures differ in Yahoo Answers along a number of dimensions, such as temporal predictability of activities, contribution-related behavioral patterns, privacy concerns, and power inequality. Published in the proceedings of the 26th ACM Conference on Hypertext and Social Media (HT '15).
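
    One metric the abstract mentions, temporal predictability of activities, can be illustrated with a small sketch: the Shannon entropy of a population's hour-of-day activity histogram, where lower entropy means more predictable timing. This is a hypothetical illustration, not the paper's actual pipeline; the toy activity data is invented.

```python
# Hypothetical sketch: temporal predictability as Shannon entropy of an
# hour-of-day activity histogram (lower entropy = more predictable timing).
# The input data structure is an assumption, not the paper's actual pipeline.
from collections import Counter
from math import log2

def hour_entropy(activity_hours):
    """Shannon entropy (bits) of the hour-of-day distribution, 0..log2(24)."""
    counts = Counter(activity_hours)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Toy example: two invented populations' posting hours (0-23).
predictable = [9, 9, 10, 10, 10, 11, 9, 10]   # activity clustered mid-morning
dispersed = [0, 3, 7, 11, 14, 18, 21, 23]     # activity spread across the day
print(hour_entropy(predictable))  # lower entropy -> more temporally predictable
print(hour_entropy(dispersed))    # higher entropy -> less predictable
```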

    The Social World of Content Abusers in Community Question Answering

    Community-based question answering platforms can be rich sources of information on a variety of specialized topics, from finance to cooking. The usefulness of such platforms depends heavily on user contributions (questions and answers), but also on respecting the community rules. As a crowd-sourced service, such platforms rely on their users for monitoring and flagging content that violates community rules. Common wisdom is to eliminate the users who receive many flags. Our analysis of a year of traces from a mature Q&A site shows that the number of flags does not tell the full story: on the one hand, users with many flags may still contribute positively to the community; on the other hand, users who never get flagged are found to violate community rules and get their accounts suspended. This analysis, however, also shows that abusive users are betrayed by their network properties: we find strong evidence of homophilous behavior and use this finding to detect abusive users who go under the community radar. Based on our empirical observations, we build a classifier that is able to detect abusive users with an accuracy as high as 83%. Published in the proceedings of the 24th International World Wide Web Conference (WWW 2015).
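
    The homophily finding suggests a simple network feature: the fraction of a user's neighbors who are themselves known abusers. The sketch below shows how such a feature could feed a classifier; the toy graph, labels, and single-feature logistic regression are assumptions for illustration, not the paper's actual feature set or model.

```python
# Hypothetical sketch of a homophily-based feature for abuse detection:
# the fraction of a user's graph neighbors who are known content abusers.
import networkx as nx
from sklearn.linear_model import LogisticRegression

def abusive_neighbor_fraction(G, user, abusive):
    """Fraction of `user`'s neighbors that are in the `abusive` set."""
    nbrs = list(G.neighbors(user))
    if not nbrs:
        return 0.0
    return sum(n in abusive for n in nbrs) / len(nbrs)

# Toy interaction graph (edges might mean: answered each other's questions).
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])
abusive = {"b", "c"}  # invented ground-truth labels

X = [[abusive_neighbor_fraction(G, u, abusive - {u})] for u in G.nodes]
y = [int(u in abusive) for u in G.nodes]

# A real system would combine many network and activity features.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])  # abuse scores for each user
```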

    Identifying Impact Factors of Question Quality in Online Health Q&A Communities: an Empirical Analysis on MedHelp

    Online health Q&A communities help patients, doctors, and other users conveniently search for and share healthcare information online, and have gained popularity all over the world. Good-quality questions that spark extensive discussion can drive user engagement, which benefits platform operation. However, little attention has been paid to the antecedents of question quality in online health Q&A communities. To investigate healthcare question quality in depth, this research examines impact factors from two aspects neglected in previous research: the asker's structural influence and the question's sentiment. Using a dataset collected from MedHelp, one of the largest online health Q&A communities, we find that users with high structural influence and questions with negative sentiment are positively associated with the number of answers a question receives. Our research offers meaningful suggestions to platform managers and users.
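
    A minimal sketch of the kind of analysis this implies: a count regression of a question's answer number on the asker's structural influence and the question's sentiment. The synthetic data, variable names, and Poisson specification below are assumptions; the study's actual measures and model may differ.

```python
# Hypothetical sketch: regressing a question's answer count on the asker's
# structural influence and the question's sentiment score. Synthetic data;
# the paper's actual variables and model specification may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
influence = rng.gamma(2.0, 1.0, n)    # e.g. asker's network centrality (assumed)
sentiment = rng.uniform(-1, 1, n)     # question sentiment, -1 = most negative
answers = rng.poisson(np.exp(0.3 + 0.4 * influence - 0.5 * sentiment))

X = sm.add_constant(np.column_stack([influence, sentiment]))
model = sm.GLM(answers, X, family=sm.families.Poisson()).fit()
print(model.summary())  # a positive influence coefficient and a negative
                        # sentiment coefficient would mirror the reported findings
```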

    Predicting best answerers for new questions: An approach leveraging topic modeling and collaborative voting

    Workshop on Quality, Motivation and Coordination of Open Collaboration.
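
    One plausible reading of the approach named in the title, sketched below with invented data: rank candidate answerers by blending LDA topic similarity between the new question and each user's answering history with the user's normalized vote score. The data, weighting scheme, and similarity choice are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch of ranking answerers for a new question by combining
# (1) topic similarity to a user's answering history (via LDA) and
# (2) the user's community voting score. Not the paper's actual method.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs_by_user = {                      # toy answering histories
    "u1": "python list sort function error",
    "u2": "bake oven cake recipe flour",
}
votes = {"u1": 40, "u2": 5}           # invented accumulated upvotes

vec = CountVectorizer()
X = vec.fit_transform(list(docs_by_user.values()))
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
user_topics = lda.transform(X)        # one topic mixture per user

def score(new_question, alpha=0.7):
    """Blend cosine topic similarity with normalized vote reputation."""
    q = lda.transform(vec.transform([new_question]))[0]
    sims = user_topics @ q / (np.linalg.norm(user_topics, axis=1)
                              * np.linalg.norm(q))
    rep = np.array([votes[u] for u in docs_by_user], dtype=float)
    rep /= rep.max()                  # normalize votes to [0, 1]
    return alpha * sims + (1 - alpha) * rep

print(dict(zip(docs_by_user, score("how to sort a python list"))))
```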

    AUTOMATED QUESTION TRIAGE FOR SOCIAL REFERENCE: A STUDY OF ADOPTING DECISION FACTORS FROM DIGITAL REFERENCE

    The increasing popularity of Social Reference (SR) services has enabled a corresponding growth in the number of users engaging with them, as well as in the number of questions submitted to the services. However, the efficiency and quality of the services are being challenged because many questions remain unanswered, or unsatisfactorily answered, for long periods. In this dissertation project, I propose using expert finding techniques to construct an automated Question Triage (QT) approach to address this problem. QT has been established in Digital Reference (DR) for some time, but it is not available in SR, so designing an automated QT mechanism for SR is a novel contribution. In this project, I first examined important factors affecting triage decisions in DR, and then extended the investigation to factors affecting QT decisions in the SR setting. The study was conducted using question-answer pairs collected from Ask Metafilter, a popular SR site. For the evaluation, logistic regression analyses were conducted to examine which factors significantly affect the performance of predicting relevant answerers to questions. The results showed that a user's answering activity is the most important factor affecting the triage decision in SR, followed by the user's general record of providing good answers and the degree of their interest in the question topic. The proposed algorithm, which implements these factors to identify appropriate answerers for a given question, outperformed the baseline at identifying relevant answerers. The results have important implications for research and practice in automated QT for SR, and offer insights into designing user-participatory DR systems.
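
    A minimal sketch of the triage decision as a logistic regression over the three factors the study found predictive: answering activity, a user's record of good answers, and topic interest. The feature values and labels below are synthetic illustrations, not data from the study.

```python
# Hypothetical sketch of question triage as logistic regression over the
# three factors the study found predictive. Synthetic feature values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [answering_activity, good_answer_rate, topic_interest]
X = np.array([
    [120, 0.40, 0.8],   # very active, moderately good, interested in topic
    [  5, 0.90, 0.1],   # rarely answers, but high quality when they do
    [ 60, 0.20, 0.9],
    [  2, 0.10, 0.2],
    [ 90, 0.60, 0.7],
    [ 10, 0.30, 0.1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = user provided a relevant answer

clf = LogisticRegression().fit(X, y)
candidate = np.array([[80, 0.5, 0.6]])
print(clf.predict_proba(candidate)[0, 1])  # triage score for one candidate
```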

    Understanding and exploiting user intent in community question answering

    A number of Community Question Answering (CQA) services have emerged and proliferated in the last decade. Typical examples include Yahoo! Answers, WikiAnswers, and domain-specific forums like StackOverflow. These services help users obtain information from a community - a user can post questions which may then be answered by other users. Such a paradigm of information seeking is particularly appealing when the question cannot be answered directly by Web search engines due to the unavailability of relevant online content. However, questions submitted to a CQA service are often colloquial and ambiguous. An accurate understanding of the intent behind a question is important for satisfying the user's information need more effectively and efficiently. In this thesis, we analyse the intent of each question in CQA by classifying it along five dimensions, namely: subjectivity, locality, navigationality, procedurality, and causality. By making use of advanced machine learning techniques, such as Co-Training and PU-Learning, we attain consistent and significant classification improvements over the state of the art in this area. In addition to textual features, a variety of metadata features (such as the category to which the question was posted) are used to model a user's intent, which in turn helps the CQA service to perform better at finding similar questions, identifying relevant answers, and recommending the most relevant answerers. We validate the usefulness of user intent in two different CQA tasks. Our first application is question retrieval, where we present a hybrid approach which blends several language modelling techniques, namely: the classic (query-likelihood) language model, the state-of-the-art translation-based language model, and our proposed intent-based language model. Our second application is answer validation, where we present a two-stage model which first ranks similar questions using our proposed hybrid approach, and then validates whether the answer to the top candidate can serve as an answer to the new question by leveraging sentiment analysis, query quality assessment, and search list validation.
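
    The hybrid retrieval approach suggests a linear interpolation of the three language-model scores. The sketch below stubs out the translation-based and intent-based components and uses placeholder weights; all scoring functions and the weighting scheme are assumptions, not the thesis's actual formulation.

```python
# Hypothetical sketch of blending three language-model scores for question
# retrieval: classic query-likelihood, translation-based, and intent-based.
# Two component scorers are stubs; weights and formulas are assumptions.
import math

def query_likelihood(query, question):
    """Log P(query | question) under a unigram LM with add-one smoothing."""
    words = question.lower().split()
    vocab = set(words) | set(query.lower().split())
    return sum(math.log((words.count(w) + 1) / (len(words) + len(vocab)))
               for w in query.lower().split())

def translation_score(query, question):
    return 0.0  # stub for a translation-based LM, e.g. word-to-word probabilities

def intent_score(query, question):
    return 0.0  # stub: agreement between the predicted intents of both questions

def hybrid_score(query, question, lambdas=(0.5, 0.3, 0.2)):
    l1, l2, l3 = lambdas  # interpolation weights, assumed to sum to 1
    return (l1 * query_likelihood(query, question)
            + l2 * translation_score(query, question)
            + l3 * intent_score(query, question))

print(hybrid_score("fix python import error",
                   "how do I fix an import error in python"))
```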