
    Asking the Right Question at the Right Time: Human and Model Uncertainty Guidance to Ask Clarification Questions

    Clarification questions are an essential dialogue tool to signal misunderstanding, ambiguity, and under-specification in language use. While humans learn from childhood to resolve uncertainty by asking questions, modern dialogue systems struggle to generate effective questions. To make progress in this direction, in this work we take a collaborative dialogue task as a testbed and study how model uncertainty relates to human uncertainty -- an as yet under-explored problem. We show that model uncertainty does not mirror human clarification-seeking behavior, which suggests that using human clarification questions as supervision for deciding when to ask may not be the most effective way to resolve model uncertainty. To address this issue, we propose an approach to generating clarification questions based on model uncertainty estimation, compare it to several alternatives, and show that it leads to significant improvements in terms of task success. Our findings highlight the importance of equipping dialogue systems with the ability to assess their own uncertainty and exploit it in interaction. Comment: Accepted at EACL 202
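The abstract does not specify how uncertainty estimation is operationalized; a common and minimal way to decide *when* to ask, assuming the model produces a probability distribution over candidate interpretations, is an entropy threshold (the threshold value here is illustrative, not from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a categorical distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_ask_clarification(candidate_probs, threshold=1.0):
    """Ask a clarification question when the model's predictive
    distribution over candidate interpretations is too flat."""
    return entropy(candidate_probs) > threshold

# One interpretation dominates: answer directly.
print(should_ask_clarification([0.9, 0.05, 0.05]))  # False
# Near-uniform over interpretations: ask for clarification.
print(should_ask_clarification([0.4, 0.3, 0.3]))   # True
```

The paper's point that model uncertainty diverges from human asking behavior suggests the threshold should be tuned against task success rather than against a corpus of human questions.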

    Incorporating Learner Emotions through Sentiment Analysis in Adaptive E-learning Systems: A Pilot Study

    This research delves into the incorporation of learner emotions into adaptive E-learning systems through sentiment analysis techniques. In a pilot study with 40 undergraduate computer science students, we investigated the ability of an adaptive system to detect boredom and frustration in learner forum posts and subsequently personalize content or offer support based on these emotional states. This approach proved successful: learners in the experimental group who received emotion-based adaptation exhibited both increased engagement (reflected in higher time spent on tasks) and improved learning outcomes (evidenced by higher post-test scores). Furthermore, qualitative feedback revealed positive responses to the personalized interventions, indicating that learners appreciated the tailored support provided by the system. While acknowledging limitations such as the small sample size and single subject area, this study establishes the promising potential of emotion-aware adaptive systems. By addressing the emotional dynamics of the learning process, such systems can pave the way for truly personalized and responsive E-learning environments that cater to individual learner needs and foster deeper engagement, positive learning experiences, and ultimately, success for all students.
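The abstract does not describe the sentiment-analysis pipeline itself; as a minimal sketch of the detect-then-adapt loop, assuming a simple cue-lexicon classifier (the cue words and intervention texts below are hypothetical, not from the study):

```python
# Hypothetical cue lexicons for illustration only.
BOREDOM_CUES = {"boring", "bored", "tedious", "dull"}
FRUSTRATION_CUES = {"frustrated", "stuck", "confusing", "annoying"}

def detect_emotion(post: str) -> str:
    """Classify a forum post as frustration, boredom, or neutral
    by matching cue words (a stand-in for a trained classifier)."""
    text = post.lower()
    if any(cue in text for cue in FRUSTRATION_CUES):
        return "frustration"
    if any(cue in text for cue in BOREDOM_CUES):
        return "boredom"
    return "neutral"

def adapt_content(emotion: str) -> str:
    """Map a detected emotional state to an adaptive intervention."""
    return {
        "frustration": "offer a worked example and a hint",
        "boredom": "raise task difficulty or switch activity type",
        "neutral": "continue with the current learning path",
    }[emotion]

post = "This module is so confusing, I'm stuck on exercise 3."
print(adapt_content(detect_emotion(post)))
```

In practice such systems typically replace the lexicon with a trained sentiment model; the mapping from emotional state to intervention is the part the pilot study evaluates.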

    Follow-up question handling in the IMIX and Ritel systems: A comparative study

    One of the basic issues for question answering (QA) dialogue systems is how follow-up questions should be interpreted by a QA system. In this paper, we discuss our experience with the IMIX and Ritel systems, for both of which a follow-up question handling scheme has been developed and corpora have been collected. These two systems are each other's opposites in many respects: IMIX is multimodal, non-factoid, black-box QA, while Ritel is speech-based, factoid, keyword-based QA. Nevertheless, we show that they are quite comparable, and that it is fruitful to examine their similarities and differences. We look at how the systems are composed and how real, non-expert users interact with them. We also provide comparisons with systems from the literature where possible, and indicate where open issues lie and in what areas existing systems may be improved. We conclude that most systems have a common architecture with a set of common subtasks, in particular detecting follow-up questions and finding referents for them. We characterise these tasks using the typical techniques used for performing them, and data from our corpora. We also identify a special type of follow-up question, the discourse question, which is asked when the user is trying to understand an answer, and propose some basic methods for handling it.
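The follow-up detection subtask the paper identifies is often approached with shallow cues; as an illustrative sketch (the cue list and threshold are assumptions, not the IMIX or Ritel implementation), a question that is very short or contains anaphoric pronouns likely depends on dialogue context:

```python
# Hypothetical anaphoric cue list for illustration.
ANAPHORIC_CUES = {"it", "he", "she", "they", "that", "this", "there", "those"}

def is_follow_up(question: str) -> bool:
    """Heuristic follow-up detector: very short questions and
    questions containing anaphoric pronouns usually cannot be
    answered without the preceding dialogue context."""
    tokens = question.lower().rstrip("?!.").split()
    if len(tokens) <= 3:          # elliptical, e.g. "Why?" or "And then?"
        return True
    return any(tok in ANAPHORIC_CUES for tok in tokens)

print(is_follow_up("What is the capital of France?"))  # False
print(is_follow_up("When was it founded?"))            # True
print(is_follow_up("Why?"))                            # True
```

A detected follow-up would then be passed to a referent-resolution step that substitutes entities from the previous answer, the second common subtask the paper describes.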