
    A Perception Based, Domain Specific Expert System for Question-Answering Support

    Current search engine technologies mostly use keyword-based search mechanisms, which have no deductive abilities. There is an urgent need for a more intelligent question-answering system that provides a more intuitive natural language interface and more accurate, direct search results. The introduction of Computing with Words (CwW) provides a new theoretical basis for developing frameworks that support handling information in natural language. This paper proposes a domain-specific question-answering system based on Fuzzy Expert Systems using CwW. To translate natural language information into a standard format for use with CwW, a Probabilistic Context-Free Grammar is used.
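
    As an illustration of the translation step, the short sketch below parses a question with a Probabilistic Context-Free Grammar using NLTK's ViterbiParser. The toy grammar and the example question are assumptions made for demonstration, not the grammar the paper actually uses.

        # Minimal PCFG parsing sketch (illustrative toy grammar, not the
        # paper's grammar). Requires: pip install nltk
        import nltk

        # Rule probabilities in brackets must sum to 1.0 per left-hand side.
        grammar = nltk.PCFG.fromstring("""
            S   -> WH VBZ NP   [1.0]
            WH  -> 'what'      [1.0]
            VBZ -> 'is'        [1.0]
            NP  -> DT NN PP    [0.4]
            NP  -> DT NN       [0.3]
            NP  -> NN          [0.3]
            PP  -> IN NP       [1.0]
            DT  -> 'the'       [1.0]
            IN  -> 'of'        [1.0]
            NN  -> 'capital'   [0.5]
            NN  -> 'france'    [0.5]
        """)

        # The Viterbi parser yields the most probable parse tree, whose
        # structure could then be mapped to a standard form for CwW.
        parser = nltk.ViterbiParser(grammar)
        for tree in parser.parse("what is the capital of france".split()):
            tree.pretty_print()
            print("parse probability:", tree.prob())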

    Supporting Answerers with Feedback in Social Q&A

    Prior research has examined the use of Social Question and Answer (Q&A) websites for answer and help seeking. However, the potential for these websites to support domain learning has not yet been realized. Helping users write effective answers can benefit subject-area learning for both answerers and the recipients of answers. In this study, we examine the utility of crowdsourced, criteria-based feedback for answerers on a student-centered Q&A website, Brainly.com. In an experiment with 55 users, we compared perceptions of the current rating system against two feedback designs with explicit criteria (Appropriate, Understandable, and Generalizable). Contrary to our hypotheses, answerers disagreed with and rejected the criteria-based feedback. Although the criteria aligned with answerers' goals, and the crowdsourced ratings were found to be objectively accurate, the norms and expectations for answers on Brainly conflicted with our design. We conclude with implications for the design of feedback in social Q&A.
    Comment: Published in Proceedings of the Fifth Annual ACM Conference on Learning at Scale, Article No. 10, London, United Kingdom, June 26-28, 2018.

    Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    Machine learning has been a big success story of the AI resurgence. One particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.
    Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770

    How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow

    Context: The success of Stack Overflow and other community-based question-and-answer (Q&A) sites depends mainly on the willingness of their members to answer others' questions. In fact, when formulating requests on Q&A sites, we are not simply seeking information; we are also asking for other people's help and feedback. Understanding the dynamics of participation in Q&A communities is essential to improving the value of crowdsourced knowledge. Objective: In this paper, we investigate how information seekers can increase the chance of eliciting a successful answer to their questions on Stack Overflow by focusing on the following actionable factors: affect, presentation quality, and time. Method: We develop a conceptual framework of factors potentially influencing the success of questions on Stack Overflow. We quantitatively analyze a set of over 87K questions from the official Stack Overflow dump to assess the impact of actionable factors on the success of technical requests. The information seeker's reputation is included as a control factor. Furthermore, to understand the role played by affective states in the success of questions, we qualitatively analyze questions containing positive and negative emotions. Finally, a survey is conducted to understand how Stack Overflow users perceive the guideline suggestions for writing questions. Results: We found that, regardless of user reputation, successful questions are short, contain code snippets, and do not overuse uppercase characters. As regards affect, successful questions adopt a neutral emotional style. Conclusion: We provide evidence-based guidelines for writing effective questions on Stack Overflow that software engineers can follow to increase the chance of getting technical help. As for the role of affect, we empirically confirmed community guidelines that suggest avoiding rudeness in question writing.
    Comment: Preprint, to appear in Information and Software Technology.
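
    As a rough illustration of the presentation-quality factors the study measures (question length, presence of code snippets, and overuse of uppercase), the Python sketch below extracts those features from a question. The function name and feature names are hypothetical, not the authors' implementation; only the <code>/<pre> markers reflect how the official Stack Overflow dump marks up code in question bodies.

        # Hypothetical feature-extraction sketch for the actionable
        # 'presentation quality' factors; not the authors' implementation.
        import re

        def question_features(title: str, body: str) -> dict:
            letters = [c for c in title + body if c.isalpha()]
            return {
                "title_len_words": len(title.split()),
                "body_len_words": len(body.split()),
                # The Stack Overflow dump wraps code in <code>/<pre> tags.
                "has_code_snippet": bool(re.search(r"<code>|<pre>", body)),
                # Share of alphabetic characters that are uppercase.
                "uppercase_ratio": sum(c.isupper() for c in letters)
                                   / max(len(letters), 1),
            }

        print(question_features(
            "How do I reverse a list in Python?",
            "I tried <code>list.reverse()</code> but it returns None.",
        ))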