Assessing User Expertise in Spoken Dialog System Interactions
Identifying the level of expertise of its users is important for a system, since it can lead to better interaction through adaptation techniques. Furthermore, this information can be used in offline processes such as root cause analysis. However, little effort has been put into automatically identifying the level of expertise of a user, especially in dialog-based interactions. In this paper we present an approach based on a specific set of task-related features. Based on the distribution of these features between the two classes, Novice and Expert, we used Random Forests as a classification approach. We also used a Support Vector Machine classifier in order to compare results. By applying these approaches to data from a real system, Let's Go, we obtained preliminary results that we consider positive, given the difficulty of the task and the lack of competing approaches for comparison.
Comment: 10 pages
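As a rough illustration of the setup this abstract describes, here is a minimal sketch of the Random Forest vs. SVM comparison, assuming a generic matrix of task-related features; the feature values and labels below are synthetic placeholders, not the paper's data:

```python
# Sketch: Novice vs. Expert classification with Random Forests and an SVM.
# Feature values are synthetic placeholders; the paper's actual
# task-related features are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # hypothetical task-related features
y = rng.integers(0, 2, size=200)   # 0 = Novice, 1 = Expert (synthetic labels)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

for name, clf in [("Random Forest", rf), ("SVM", svm)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```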
QDEE: Question Difficulty and Expertise Estimation in Community Question Answering Sites
In this paper, we present a framework for Question Difficulty and Expertise Estimation (QDEE) in Community Question Answering sites (CQAs) such as Yahoo! Answers and Stack Overflow, which tackles a fundamental challenge in crowdsourcing: how to appropriately route and assign questions to users with suitable expertise. This problem domain has been the subject of much research and includes both language-agnostic and language-conscious solutions. We bring to bear a key language-agnostic insight: users gain expertise and therefore tend to ask as well as answer more difficult questions over time. We use this insight within the popular competition (directed) graph model to estimate question difficulty and user expertise by identifying key hierarchical structure within that model. An important and novel contribution here is the application of "social agony" to this problem domain. Difficulty levels of newly posted questions (the cold-start problem) are estimated using our QDEE framework and additional textual features. We also propose a model to route newly posted questions to appropriate users based on the difficulty level of the question and the expertise of the user. Extensive experiments on real-world CQAs such as Yahoo! Answers and Stack Overflow demonstrate the improved efficacy of our approach over contemporary state-of-the-art models. The QDEE framework also allows us to characterize user expertise in novel ways by identifying interesting patterns and roles played by different users in such CQAs.
Comment: Accepted in the Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM 2018), June 2018, Stanford, CA, USA
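To make the competition-graph idea concrete, here is a toy sketch: each answered question induces a directed "asker loses to answerer" edge. The paper extracts the hierarchy by minimizing social agony; since that algorithm is more involved, an Elo-style rating update is substituted here purely as a stand-in for ranking users in such a graph:

```python
# Toy stand-in for hierarchy extraction on a competition graph. The paper
# uses social-agony minimization; this sketch uses Elo updates instead.
from collections import defaultdict

K = 32.0  # Elo step size; an assumption for illustration, not from the paper

def expected(r_a, r_b):
    """Expected score of a player rated r_a against one rated r_b (Elo model)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def estimate_expertise(interactions):
    """interactions: iterable of (asker, answerer) pairs; the answerer is
    treated as the winner of the pairwise contest."""
    rating = defaultdict(lambda: 1500.0)
    for asker, answerer in interactions:
        e = expected(rating[answerer], rating[asker])
        rating[answerer] += K * (1.0 - e)  # winner moves up
        rating[asker] -= K * (1.0 - e)     # loser moves down symmetrically
    return dict(rating)

# carol answers questions from both alice and bob, so she should rank highest:
pairs = [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]
print(estimate_expertise(pairs))
```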
The role of human factors in stereotyping behavior and perception of digital library users: A robust clustering approach
To deliver effective personalization for digital library users, it is necessary to identify which human factors are most relevant in determining the behavior and perception of these users. This paper examines three key human factors: cognitive styles, levels of expertise and gender differences, and utilizes three individual clustering techniques: k-means, hierarchical clustering and fuzzy clustering, to understand user behavior and perception. Moreover, robust clustering, capable of correcting the bias of individual clustering techniques, is used to obtain a deeper understanding. The robust clustering approach produced results that highlighted the relevance of cognitive style for user behavior, i.e., cognitive style dominates and justifies each of the robust clusters created. We also found that perception was mainly determined by a user's level of expertise. We conclude that robust clustering is an effective technique for analyzing user behavior and perception.
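One common way to realize this kind of robust (consensus) clustering is via a co-association matrix over several base clusterings. The sketch below follows that generic ensemble recipe rather than the paper's exact procedure, and uses a Gaussian mixture as a stand-in for fuzzy c-means to stay within standard scikit-learn:

```python
# Sketch of consensus ("robust") clustering over several base clusterings via
# a co-association matrix. Generic recipe, not necessarily the paper's method.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
k = 3

labelings = [
    KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
    AgglomerativeClustering(n_clusters=k).fit_predict(X),
    GaussianMixture(n_components=k, random_state=0).fit(X).predict(X),
]

# Co-association: fraction of base clusterings placing points i and j together.
n = X.shape[0]
coassoc = np.zeros((n, n))
for labels in labelings:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(labelings)

# Final consensus clusters: treat (1 - coassoc) as a precomputed distance.
consensus = AgglomerativeClustering(
    n_clusters=k, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))
```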
Collaborative assessment of information provider's reliability and expertise using subjective logic
Q&A social media have gained a lot of attention in recent years. People rely on these sites to obtain information due to a number of advantages they offer as compared to conventional sources of knowledge (e.g., asynchronous and convenient access). However, for the same question one may find highly contradicting answers, causing ambiguity with respect to the correct information. This can be attributed to the presence of unreliable and/or non-expert users. These two attributes (reliability and expertise) significantly affect the quality of the answer/information provided. We present a novel approach for estimating these user characteristics relying on human cognitive traits. In brief, we propose that each user monitor the activity of her peers (on the basis of responses to questions asked by her) and observe their compliance with predefined cognitive models. These observations lead to local assessments that can be further fused to obtain a reliability and expertise consensus for every other user in the social network (SN). For the aggregation part we use subjective logic. To the best of our knowledge this is the first study of this kind in the context of Q&A SNs. Our proposed approach is highly distributed; each user can individually estimate the expertise and the reliability of her peers using her direct interactions with them and our framework. The online SN (OSN), which can be considered a distributed database, performs continuous data aggregation for user expertise and reliability assessment in order to reach a consensus. We emulate a Q&A SN to examine various performance aspects of our algorithm (e.g., convergence time, responsiveness, etc.). Our evaluations indicate that it can accurately assess the reliability and the expertise of a user from a small number of samples and can successfully react to changes in the latter's behavior, provided that the cognitive traits hold in practice.
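For readers unfamiliar with subjective logic, the sketch below shows binomial opinions and the standard cumulative fusion operator that can combine independent local assessments into a consensus; how the paper derives opinions from Q&A interactions and cognitive models is not reproduced here:

```python
# Binomial subjective-logic opinions and the standard cumulative fusion
# operator. The mapping from Q&A interactions to opinions is the paper's
# contribution and is omitted here.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # evidence for "reliable/expert"
    disbelief: float    # evidence against
    uncertainty: float  # lack of evidence; belief + disbelief + uncertainty = 1
    base_rate: float    # prior probability in the absence of evidence

    def expectation(self) -> float:
        """Probability expectation E = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fuse(o1: Opinion, o2: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions (assumes equal base
    rates and at least one opinion with nonzero uncertainty)."""
    k = o1.uncertainty + o2.uncertainty - o1.uncertainty * o2.uncertainty
    return Opinion(
        belief=(o1.belief * o2.uncertainty + o2.belief * o1.uncertainty) / k,
        disbelief=(o1.disbelief * o2.uncertainty + o2.disbelief * o1.uncertainty) / k,
        uncertainty=(o1.uncertainty * o2.uncertainty) / k,
        base_rate=o1.base_rate,
    )

# Two peers' local assessments of the same user, fused into a consensus:
consensus = cumulative_fuse(Opinion(0.7, 0.1, 0.2, 0.5), Opinion(0.5, 0.3, 0.2, 0.5))
print(consensus, round(consensus.expectation(), 3))
```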
A Cognitive-based scheme for user reliability and expertise assessment in Q&A social networks
Q&A social media have gained a great deal of attention in recent years. People rely on these sites to obtain information due to a number of advantages they offer as compared to conventional sources of knowledge (e.g., asynchronous and convenient access). However, for the same question one may find highly contradictory answers, causing ambiguity with respect to the correct information. This can be attributed to the presence of unreliable and/or non-expert users. In this work, we propose a novel approach for estimating the reliability and expertise of a user based on human cognitive traits. Every user can individually estimate these values based on local pairwise interactions. We examine the convergence performance of our algorithm and find that it can accurately assess the reliability and the expertise of a user and can successfully react to changes in the latter's behavior.
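A minimal sketch of the local pairwise estimation loop, using a discounted Beta-reputation update as a generic stand-in for the paper's cognitive-model compliance checks (the discount factor and counters are assumptions, not the paper's parameters):

```python
# Local, pairwise reliability estimation with exponential discounting: a
# Beta-reputation stand-in for the paper's cognitive-trait checks. Each user
# counts a peer's interactions that did / did not comply with the expected
# behavior; discounting lets the estimate track a peer whose behavior changes.
class PeerEstimator:
    def __init__(self, discount: float = 0.95):
        self.r = 0.0  # discounted count of compliant interactions
        self.s = 0.0  # discounted count of non-compliant interactions
        self.discount = discount

    def observe(self, compliant: bool) -> None:
        """Record one direct interaction with the peer."""
        self.r *= self.discount
        self.s *= self.discount
        if compliant:
            self.r += 1.0
        else:
            self.s += 1.0

    def reliability(self) -> float:
        """Expected reliability under a Beta(r + 1, s + 1) posterior."""
        return (self.r + 1.0) / (self.r + self.s + 2.0)

est = PeerEstimator()
for outcome in [True] * 10 + [False] * 10:  # peer's behavior changes midway
    est.observe(outcome)
print(round(est.reliability(), 2))          # estimate drifts toward unreliable
```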
Sticks, balls or a ribbon? Results of a formative user study with bioinformaticians
User interfaces in modern bioinformatics tools are designed for experts. They are too complicated for novice users such as bench biologists. This report presents the full results of a formative user study as part of a domain and requirements analysis to enhance user interfaces and collaborative environments for multidisciplinary teamwork. Contextual field observations, questionnaires and interviews with bioinformatics researchers of different levels of expertise and various backgrounds were performed in order to gain insight into their needs and working practices. The analysed results are presented as a user profile description and user requirements for designing user interfaces that support the collaboration of multidisciplinary research teams in scientific collaborative environments. Although the number of participants limits the generalisability of the findings, the combination of recurrent observations with other user analysis techniques in real-life settings makes the contribution of this user study novel.