QDEE: Question Difficulty and Expertise Estimation in Community Question Answering Sites
In this paper, we present a framework for Question Difficulty and Expertise
Estimation (QDEE) in Community Question Answering sites (CQAs) such as Yahoo!
Answers and Stack Overflow, which tackles a fundamental challenge in
crowdsourcing: how to appropriately route and assign questions to users with
the suitable expertise. This problem domain has been the subject of much
research and includes both language-agnostic as well as language conscious
solutions. We bring to bear a key language-agnostic insight: that users gain
expertise and therefore tend to ask as well as answer more difficult questions
over time. We use this insight within the popular competition (directed) graph
model to estimate question difficulty and user expertise by identifying key
hierarchical structure within said model. An important and novel contribution
here is the application of "social agony" to this problem domain. Difficulty
levels of newly posted questions (the cold-start problem) are estimated by
using our QDEE framework and additional textual features. We also propose a
model to route newly posted questions to appropriate users based on the
difficulty level of the question and the expertise of the user. Extensive
experiments on real world CQAs such as Yahoo! Answers and Stack Overflow data
demonstrate the improved efficacy of our approach over contemporary
state-of-the-art models. The QDEE framework also allows us to characterize user
expertise in novel ways by identifying interesting patterns and roles played by
different users in such CQAs.
Comment: Accepted in the Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM 2018). June 2018. Stanford, CA, US
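The competition-graph idea sketched in the abstract above can be illustrated in a few lines: when a best answer is selected, the answerer is assumed to outrank the asker, and expertise levels emerge from the resulting hierarchy. The toy below uses a simple longest-chain levelling as a stand-in for the paper's agony-based hierarchy extraction; all user names and the levelling rule are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

# Each event: (asker, best_answerer) -- the chosen answerer "outranks" the asker.
events = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]

beats = defaultdict(set)
for asker, answerer in events:
    beats[answerer].add(asker)

def level(user, _path=frozenset()):
    # Expertise level = longest chain of outranked users below this user.
    # Cycles (which do occur in real CQA data) are simply cut off here.
    if user in _path:
        return 0
    deeper = [level(v, _path | {user}) for v in beats[user]]
    return 1 + max(deeper) if deeper else 0

ranks = {u: level(u) for u in ("alice", "bob", "carol")}
```

On this toy data carol, who outranks both others, lands at the top of the hierarchy; a question asked by a high-level user would then be estimated as more difficult.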
Report on the Second International Workshop on the Evaluation on Collaborative Information Seeking and Retrieval (ECol'2017 @ CHIIR)
The 2nd workshop on the evaluation of collaborative information retrieval and seeking (ECol) was held in conjunction with the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR) in Oslo, Norway. The workshop focused on discussing the challenges and difficulties of researching and studying collaborative information retrieval and seeking (CIS/CIR). After an introductory and scene-setting overview of developments in CIR/CIS, participants were challenged with devising a range of possible CIR/CIS tasks that could be used for evaluation purposes. Through the brainstorming and discussions, valuable insights regarding the evaluation of CIR/CIS tasks became apparent: for particular tasks, efficiency and/or effectiveness are most important, but for the majority of tasks the success and quality of outcomes, along with knowledge sharing and sense-making, matter most; these latter attributes are much more difficult to measure and evaluate. Thus the major challenge for CIR/CIS research is to develop methods, measures and methodologies to evaluate these higher-order attributes.
Cultures in Community Question Answering
CQA services are collaborative platforms where users ask and answer
questions. We investigate the influence of national culture on people's online
questioning and answering behavior. For this, we analyzed a sample of 200
thousand users in Yahoo Answers from 67 countries. We measure empirically a set
of cultural metrics defined in Geert Hofstede's cultural dimensions and Robert
Levine's Pace of Life and show that behavioral cultural differences exist in
community question answering platforms. We find that national cultures differ
in Yahoo Answers along a number of dimensions such as temporal predictability
of activities, contribution-related behavioral patterns, privacy concerns, and
power inequality.
Comment: Published in the proceedings of the 26th ACM Conference on Hypertext and Social Media (HT'15).
Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration
In this paper, we specifically consider the challenging task of solving a question posted on Twitter. The latter generally remains unanswered, and most of the replies, if any, come only from members of the questioner's neighborhood. As outlined in previous work on community Q&A, we believe that question-answering is a collaborative process and that the relevant answer to a question post is an aggregation of answer nuggets posted by a group of relevant users. Thus, the problem of identifying the relevant answer turns into the problem of identifying the right group of users who would provide useful answers and would possibly be willing to collaborate together in the long term. Accordingly, we present a novel method, called CRAQ, that is built on the collaboration paradigm and formulated as a group entropy optimization problem. To optimize the quality of the group, an information gain measure is used to select the most likely "informative" users according to topical and collaboration-likelihood predictive features. Crowd-based experiments performed on two crisis-related Twitter datasets demonstrate the effectiveness of our collaborative-based answering approach.
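The greedy intuition behind selecting an "informative" group, as described in the abstract above, can be illustrated with marginal coverage: repeatedly add the candidate whose answer nuggets contribute the most new information. This is a crude stand-in for CRAQ's entropy/information-gain criterion; all names and data below are invented for the example.

```python
def pick_group(candidates, k=2):
    """Greedily pick k users maximizing marginal coverage of answer nuggets."""
    covered, group = set(), []
    pool = dict(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda u: len(pool[u] - covered))  # marginal gain
        group.append(best)
        covered |= pool.pop(best)
    return group

# Hypothetical candidates and the topics their past answers cover.
candidates = {
    "u1": {"shelter", "water"},
    "u2": {"water", "power"},
    "u3": {"shelter"},
}
group = pick_group(candidates)
```

Here the second pick is u2 rather than u3: although u3 is individually relevant, u2 adds a topic ("power") the group does not yet cover, which is the essence of a gain-based criterion over an independent relevance ranking.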
Simplifying Sparse Expert Recommendation by Revisiting Graph Diffusion
Community Question Answering (CQA) websites have become valuable knowledge
repositories where individuals exchange information by asking and answering
questions. With an ever-increasing number of questions and high migration of
users in and out of communities, a key challenge is to design effective
strategies for recommending experts for new questions. In this paper, we
propose a simple graph-diffusion expert recommendation model for CQA, that can
outperform state-of-the art deep learning representatives and collaborative
models. Our proposed method learns users' expertise in the context of both
semantic and temporal information to capture their changing interest and
activity levels with time. Experiments on five real-world datasets from the
Stack Exchange network demonstrate that our approach outperforms competitive
baseline methods. Further, experiments on cold-start users (users with a
limited historical record) show our model achieves an average of ~30%
performance gain compared to the best baseline method.
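A minimal sketch of the graph-diffusion idea from the abstract above: score candidate answerers for a new question by diffusing relevance mass from the question's tags over a user–tag graph via a random walk with restart. The real model also folds in semantic and temporal signals; the graph, names, and weights below are invented for illustration.

```python
from collections import defaultdict

# Toy bipartite user-tag graph, weighted by answering activity.
edges = {("ann", "python"): 3, ("ann", "git"): 1,
         ("ben", "git"): 4, ("cat", "python"): 2}
neigh = defaultdict(dict)
for (u, t), w in edges.items():
    neigh[u][t] = w
    neigh[t][u] = w

def diffuse(seed_tags, steps=2, restart=0.3):
    """Random walk with restart, seeded at the new question's tags."""
    p = {t: 1.0 / len(seed_tags) for t in seed_tags}
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, mass in p.items():
            total = sum(neigh[node].values())
            for nb, w in neigh[node].items():
                nxt[nb] += (1 - restart) * mass * w / total
        for t in seed_tags:                 # restart mass back to the seeds
            nxt[t] += restart / len(seed_tags)
        p = nxt
    return p

p = diffuse(["python"])
scores = {u: p.get(u, 0.0) for u in ("ann", "ben", "cat")}
```

For a "python" question, ann (heavy python activity) outscores cat, and ben, who only answered git questions, receives no mass within two steps; longer walks would let relevance reach such users through shared tags, which is one appeal of diffusion for sparse, cold-start settings.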
Predicting Answering Behaviour in Online Question Answering Communities
The value of Question Answering (Q&A) communities is dependent on members of the community finding the questions they are most willing and able to answer. This can be difficult in communities with a high volume of questions. Much previous work has attempted to address this problem by recommending questions similar to those already answered. However, this approach disregards the question selection behaviour of the answerers and how it is affected by factors such as question recency and reputation. In this paper, we identify the parameters that correlate with such behaviour by analysing users' answering patterns in a Q&A community. We then generate a model to predict which question a user is most likely to answer next. We train Learning to Rank (LTR) models to predict question selections using various user, question and thread feature sets. We show that answering behaviour can be predicted with a high level of success, and highlight the particular features that influence users' question selections.
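A pointwise toy of the ranking setup described in the abstract above: score each candidate question for a user with a weighted combination of features such as recency and asker reputation, then sort. The feature names and hand-set weights are illustrative assumptions; the paper trains LTR models to learn such weightings from answering behaviour instead.

```python
# Hypothetical per-question features for one user, each normalized to [0, 1].
questions = [
    {"id": "q1", "recency": 0.9, "asker_rep": 0.2, "topic_match": 0.8},
    {"id": "q2", "recency": 0.4, "asker_rep": 0.9, "topic_match": 0.3},
    {"id": "q3", "recency": 0.7, "asker_rep": 0.5, "topic_match": 0.9},
]
weights = {"recency": 0.5, "asker_rep": 0.2, "topic_match": 0.3}

def score(q):
    # Linear scoring function; an LTR model would learn this from selections.
    return sum(weights[f] * q[f] for f in weights)

ranked = sorted(questions, key=score, reverse=True)
```

With recency weighted heavily, the freshest on-topic question rises to the top even when another question has a higher-reputation asker, mirroring the abstract's point that recency and reputation shape question selection beyond topical similarity alone.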
Social Search: retrieving information in Online Social Platforms -- A Survey
Social Search research deals with studying methodologies exploiting social
information to better satisfy user information needs in Online Social Media
while simplifying the search effort and consequently reducing the time spent
and the computational resources utilized. Starting from previous studies, in
this work, we analyze the current state of the art of the Social Search area,
proposing a new taxonomy and highlighting current limitations and open research
directions. We divide the Social Search area into three subcategories, where
the social aspect plays a pivotal role: Social Question&Answering, Social
Content Search, and Social Collaborative Search. For each subcategory, we
present the key concepts and selected representative approaches in the
literature in greater detail. We found that, up to now, a large body of studies
model users' preferences and their relations by simply combining social
features made available by social platforms. This paves the way for
significant research exploiting more structured information about users'
social profiles and behaviors (as inferred from data available on social
platforms) to better satisfy their information needs.
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms
Question categorization and expert retrieval methods have been crucial for
information organization and accessibility in community question & answering
(CQA) platforms. Research in this area, however, has dealt with only the text
modality. With the increasing multimodal nature of web content, we focus on
extending these methods for CQA questions accompanied by images. Specifically,
we leverage the success of representation learning for text and images in the
visual question answering (VQA) domain, and adapt the underlying concept and
architecture for automated category classification and expert retrieval on
image-based questions posted on Yahoo! Chiebukuro, the Japanese counterpart of
Yahoo! Answers.
To the best of our knowledge, this is the first work to tackle the
multimodality challenge in CQA, and to adapt VQA models for tasks on a more
ecologically valid source of visual questions. Our analysis of the differences
between visual QA and community QA data drives our proposal of novel
augmentations of an attention method tailored for CQA, and use of auxiliary
tasks for learning better grounding features. Our final model markedly
outperforms the text-only and VQA model baselines for both tasks of
classification and expert retrieval on real-world multimodal CQA data.Comment: Submitted for review at CIKM 201