
    Steering Conversational Large Language Models for Long Emotional Support Conversations

    In this study, we address the challenge of consistently following emotional support strategies in long conversations by large language models (LLMs). We introduce the Strategy-Relevant Attention (SRA) metric, a model-agnostic measure designed to evaluate the effectiveness of LLMs in adhering to strategic prompts in emotional support contexts. By analyzing conversations within the Emotional Support Conversations dataset (ESConv) using LLaMA models, we demonstrate that SRA is significantly correlated with a model's ability to sustain the outlined strategy throughout the interaction. Our findings reveal that SRA-informed prompts lead to enhanced strategic adherence, resulting in conversations that more reliably exhibit the desired emotional support strategies over longer exchanges. Furthermore, we contribute a comprehensive, multi-branch synthetic conversation dataset for ESConv, featuring a variety of strategy continuations informed by our optimized prompting method. The code and data are publicly available on our GitHub.
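    The abstract does not spell out how SRA is computed, but one plausible reading is the share of attention mass that generated tokens place on the strategy portion of the prompt. A minimal sketch under that assumption (the function name, tensor layout, and averaging scheme are hypothetical illustrations, not the paper's definition):

```python
# Hypothetical sketch of an SRA-style score: the fraction of attention
# mass that falls on the strategy instruction's tokens. The paper's
# actual definition may aggregate layers/heads differently.
import torch

def sra_score(attentions: torch.Tensor, strategy_span: tuple) -> float:
    """attentions: [layers, heads, tgt_len, src_len] decoder attention
    weights (each row sums to 1); strategy_span: (start, end) token
    indices of the strategy prompt within the source sequence."""
    start, end = strategy_span
    mass = attentions[..., start:end].sum(dim=-1)  # mass on strategy tokens
    return mass.mean().item()  # average over layers, heads, positions
```

    On this reading, a prompt whose strategy instruction keeps attracting attention deep into the conversation would score higher, which is what "SRA-informed" prompting would then optimize for.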

    The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings

    We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, in which a learner needs to learn invented visual attribute words (such as "burchak" for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e., tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g., 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings from the BURCHAK corpus. The learned policy shows performance comparable to a previously built rule-based system. Comment: 10 pages, The 6th Workshop on Vision and Language (VL'17).
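    The abstract leaves the n-gram framework unspecified; as a rough illustration of the general idea, here is a toy word-level n-gram tutor simulator trained on example turns. The data format and all names are assumptions, and the actual framework operates on incremental, character-by-character data rather than whole words:

```python
# Toy word-level n-gram user (tutor) simulator: train on tutor turns,
# then sample new turns from the learned context -> next-word table.
import random
from collections import defaultdict

def train_ngram(turns, n=3):
    model = defaultdict(list)
    for turn in turns:
        tokens = ["<s>"] * (n - 1) + turn.split() + ["</s>"]
        for i in range(len(tokens) - n + 1):
            context, nxt = tuple(tokens[i:i + n - 1]), tokens[i + n - 1]
            model[context].append(nxt)  # duplicates encode frequency
    return model

def sample(model, n=3, max_len=20):
    context, out = ("<s>",) * (n - 1), []
    while len(out) < max_len:
        nxt = random.choice(model.get(context, ["</s>"]))
        if nxt == "</s>":
            break
        out.append(nxt)
        context = context[1:] + (nxt,)
    return " ".join(out)

tutor_turns = ["this shape is a burchak", "no that is not a burchak"]
print(sample(train_ngram(tutor_turns)))
```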

    From Events to Reactions: A Progress Report

    Syndicate is a new coordinated, concurrent programming language. It occupies a novel point on the spectrum between the shared-everything paradigm of threads and the shared-nothing approach of actors. Syndicate actors exchange messages and share common knowledge via a carefully controlled database that clearly scopes conversations, which simplifies the coordination of concurrent activities. Experience in programming with Syndicate, however, suggests a need to raise the level of linguistic abstraction: in addition to writing event handlers and managing event subscriptions directly, the language will have to support a reactive style of programming. This paper presents event-oriented Syndicate programming and then describes a preliminary design for augmenting it with new reactive programming constructs. Comment: In Proceedings PLACES 2016, arXiv:1606.0540
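    To make the event-handler vs. reactive contrast concrete, here is a toy illustration in plain Python (this is not Syndicate syntax): a minimal "reactive cell" whose dependents recompute automatically whenever the source value changes, instead of the programmer wiring each update inside an event handler:

```python
# Toy illustration only, NOT Syndicate syntax: a reactive cell whose
# dependents recompute when the source changes, so the dependency is
# declared once rather than re-managed in every event handler.
class Cell:
    def __init__(self, value=None):
        self.value = value
        self.observers = []  # callbacks to run on change

    def set(self, value):
        self.value = value
        for recompute in self.observers:
            recompute()

celsius = Cell()
fahrenheit = Cell()
# Declare the dependency once; updates then propagate automatically.
celsius.observers.append(lambda: fahrenheit.set(celsius.value * 9 / 5 + 32))
celsius.set(21.0)
print(fahrenheit.value)  # 69.8
```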

    From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

    Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task. Comment: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (2017).
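    The phrase "probability is spread more evenly across consistent programs" suggests a smoothed version of the MML gradient, where the model's posterior over correct-executing programs is flattened before weighting the update. A hedged sketch of that idea (the smoothing scheme and names are illustrative assumptions, not necessarily the paper's exact update rule):

```python
# Sketch of a smoothed-posterior update over "consistent" programs, i.e.
# programs found by search that execute to the correct answer. beta = 1
# recovers the standard MML gradient weighting; beta < 1 flattens the
# posterior so high-probability spurious programs dominate less.
import torch

def smoothed_mml_loss(logprobs: torch.Tensor, beta: float = 0.5):
    """logprobs: model log-probabilities of the consistent programs."""
    with torch.no_grad():
        weights = torch.softmax(beta * logprobs, dim=0)  # smoothed posterior
    return -(weights * logprobs).sum()

# Usage: logprobs come from scoring each consistent program found by
# systematic beam search plus randomized exploration.
logprobs = torch.tensor([-1.2, -2.3, -4.0], requires_grad=True)
smoothed_mml_loss(logprobs).backward()
print(logprobs.grad)
```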

    Probleembesprekingen met samenwerkende kleuters [Problem conversations with young children during group work]

    Problem conversations with intervening teachers may enhance children's discourse during group work. The few studies focusing on these conversations, however, are normative and experimental in nature, neglecting the joint and sequential nature of these conversations. This paper's aim is therefore to give insight into how teachers and a group of children constitute and continue problem conversations. Detailed analysis informed by Conversation Analysis shows that problems are constructed in three patterns, distinguished by the teacher's reaction to the problem initiation and resulting in three different continuations. The teacher's actions are found to be decisive for how children may contribute to the developing interaction. In particular, the teacher's reaction to both problem initiations and solution proposals determines whether problem conversations are jointly continued. Theoretical and practical implications are discussed.

    Retrospective turn continuations in Mandarin Chinese conversation

    How the status of further talk past the point of a turn's possible completion should be described, and what functions different kinds of turn continuation might serve, are questions that have engaged many scholars since Sacks, Schegloff and Jefferson's turn-taking model (1974). In this paper, a general scheme is proposed with which one can tease out four interlocking strands in analyzing different kinds of turn continuation: syntactic continuity vs. discontinuity, main vs. subordinate intonation, retrospective vs. prospective orientation, and information focus vs. non-focus. These parameters combine to form different configurations and interact in interesting ways, accounting for different kinds of turn continuation. The scheme is tested on, and illustrated with, a body of naturally occurring conversational data in Chinese.

    Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation

    Corpus analyses have shown that turn-taking in conversation is much faster than laboratory studies of speech planning would predict. To explain fast turn-taking, Levinson and Torreira (2015) proposed that speakers are highly proactive: they begin to plan a response to their interlocutor's turn as soon as they have understood its gist, and launch this planned response when the turn-end is imminent. Thus, fast turn-taking is possible because speakers use the time while their partner is talking to plan their own utterance. In the present study, we asked how much time upcoming speakers actually have to plan their utterances. Following earlier psycholinguistic work, we used transcripts of spoken conversations in Dutch, German, and English. These transcripts consisted of segments, which are continuous stretches of speech by one speaker. In the psycholinguistic and phonetic literature, such segments have often been used as proxies for turns. We found that in all three corpora, a large proportion of the segments consisted of only one or two words, which by our estimate does not give the next speaker enough time to fully plan a response. Further analyses showed that speakers indeed often did not respond to the immediately preceding segment of their partner, but continued an earlier segment of their own. More generally, our findings suggest that speech segments derived from transcribed corpora do not necessarily correspond to turns, and the gaps between speech segments therefore provide only limited information about the planning and timing of turns.
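    As a concrete illustration of the kind of counting this analysis involves, here is a toy version over a miniature transcript; the transcript format is an assumption for illustration:

```python
# Toy segment analysis: how many segments are only one or two words,
# and how often the next segment continues the same speaker's talk
# (a continuation rather than a response by the other party).
transcript = [
    ("A", "so I was thinking"),
    ("A", "yeah"),
    ("B", "mm"),
    ("B", "about the meeting right"),
    ("A", "exactly"),
]

short = sum(1 for _, text in transcript if len(text.split()) <= 2)
same_speaker = sum(
    1 for (s1, _), (s2, _) in zip(transcript, transcript[1:]) if s1 == s2
)
print(f"{short / len(transcript):.0%} of segments are one or two words")
print(f"{same_speaker / (len(transcript) - 1):.0%} of transitions continue "
      "the same speaker's talk")
```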