
    On the role of dialogue models in the age of large language models.

    We argue that machine learning, in particular the currently prevalent generation of Large Language Models (LLMs), can work constructively with existing normative models of dialogue as exemplified by dialogue games, specifically their computational applications within, for example, inter-agent communication and automated dialogue management. Furthermore, we argue that this relationship is bi-directional: some uses of dialogue games benefit from increased functionality due to the specific capabilities of LLMs, whilst LLMs benefit from externalised models of, variously, problematic, normative, or idealised behaviour.

    Machine Learning (ML) approaches, especially LLMs, appear to be making great advances against long-standing Artificial Intelligence challenges. In particular, LLMs are increasingly achieving successes in areas both adjacent to, and overlapping with, those of interest to the Computational Models of Natural Argument community. A prevalent opinion within the ML research community, not without some basis, is that many, if not all, AI challenges will eventually be solved by ML models of increasing power and utility, negating the need for alternative or traditional approaches. An exemplar of this position is the study of distinct models of dialogue for inter-agent communication at a time when LLM-based chatbots are increasingly able to surpass their performance in specific contexts. The trajectory of increased LLM capabilities suggests no reason that this trend will not continue, at least for some time. However, it is not the case that only one approach or the other is necessary. Despite a tendency for LLMs towards feature creep, appearing to subsume additional areas of study, there are very good reasons to consider three modes of study of dialogue: firstly, LLMs as their own individual field within ML; secondly, dialogue both in terms of actual human behaviour, which can vary widely in quality, and in terms of normative and idealised models; and thirdly, the fertile area in which the two overlap and can operate collaboratively. It is this third aspect with which this paper is concerned, for the first will occur anyway as researchers seek to map out the boundaries of what LLMs, as AI models, can actually achieve, and the second will continue because the study of how people interact naturally through argument and dialogue will remain both fascinating and of objective value regardless of advances made in LLMs. However, where LLMs, dialogue models, and, for completeness, people come together, there is fertile ground for the development of principled models of interaction that are well-founded, well-regulated, and supportive of mixed-initiative interactions between humans and intelligent software agents.
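    As a rough illustration of the interplay described above, the sketch below is not taken from the paper: the toy protocol, all names, and the LLM call (a stub here) are hypothetical. It shows a normative dialogue game constraining which move types are legal at each point, while an LLM-style generator supplies the natural-language content of the chosen move.

        from __future__ import annotations
        from dataclasses import dataclass

        # Toy protocol (illustrative only): which move types may reply to which.
        PROTOCOL = {
            "claim":   ["why", "concede"],
            "why":     ["ground", "retract"],
            "ground":  ["why", "concede"],
            "retract": [],
            "concede": [],
        }

        @dataclass
        class Move:
            speaker: str
            kind: str
            content: str

        def legal_replies(last_move: Move) -> list[str]:
            """Return the move types the protocol permits in reply to the last move."""
            return PROTOCOL[last_move.kind]

        def llm_generate(prompt: str) -> str:
            """Stand-in for a call to any LLM; returns canned text here."""
            return f"[generated text for: {prompt}]"

        def next_move(speaker: str, last_move: Move) -> Move | None:
            """Ask the generator for content, but only for a move type the game allows."""
            options = legal_replies(last_move)
            if not options:
                return None  # the dialogue has terminated under the protocol
            kind = options[0]  # a real agent would choose strategically
            content = llm_generate(f"Reply to '{last_move.content}' with a '{kind}' move")
            return Move(speaker, kind, content)

        if __name__ == "__main__":
            move = Move("Alice", "claim", "We should adopt the proposal.")
            for _ in range(4):  # cap the toy dialogue at a few turns
                print(f"{move.speaker} ({move.kind}): {move.content}")
                reply = next_move("Bob" if move.speaker == "Alice" else "Alice", move)
                if reply is None:
                    break
                move = reply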

    Agents United: An open platform for multi-agent conversational systems

    The development of applications with intelligent virtual agents (IVAs) often requires the integration of multiple complex components. In this article we present the Agents United Platform: an open-source platform that researchers and developers can use as a starting point to set up their own multi-IVA applications. The platform provides developers with a set of integrated components in a sense-remember-think-act architecture: a sensor framework, a memory component, a Topic Selection Engine, an interaction manager (Flipper), two dialogue execution engines, and two behaviour realisers (ASAP and GRETA), allowing the agents to interact seamlessly with each other. This article discusses the platform and its individual components. It also highlights some of the novelties that arise from the integration of these components and elaborates on directions for future work.
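    As an informal illustration of the sense-remember-think-act pattern mentioned above, the sketch below is hypothetical, not code from the Agents United Platform; the class and method names are invented. It shows how a sensor, a memory, a dialogue manager, and a behaviour realiser might be chained in a single turn.

        class Sensor:
            def sense(self) -> dict:
                # Stand-in for real sensor input (speech, vision, etc.).
                return {"user_utterance": "Hello there"}

        class Memory:
            def __init__(self) -> None:
                self.history = []  # observations from previous turns

            def remember(self, observation: dict) -> None:
                self.history.append(observation)

        class DialogueManager:
            def think(self, memory: Memory) -> str:
                last = memory.history[-1]["user_utterance"]
                return f"Greet the user in response to: {last}"  # an intent, not surface text

        class BehaviourRealiser:
            def act(self, intent: str) -> None:
                # A real realiser would drive an embodied agent's speech and gestures.
                print(f"Agent performs: {intent}")

        def run_turn(sensor, memory, dm, realiser) -> None:
            observation = sensor.sense()   # sense
            memory.remember(observation)   # remember
            intent = dm.think(memory)      # think
            realiser.act(intent)           # act

        if __name__ == "__main__":
            run_turn(Sensor(), Memory(), DialogueManager(), BehaviourRealiser())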

    A Critical Discussion Game for Prohibiting Fallacies

    The study of fallacies is at the heart of argumentation studies. In response to Hamblin’s devastating critique of the state of the theory of fallacies in 1970, both formal dialectical and informal approaches to fallacies developed. In the current paper, we focus on an influential informal approach to fallacies, part of the pragma-dialectical theory of argumentation. Central to the pragma-dialectical method for analysing and evaluating argumentative discourse is the ideal model of a critical discussion. In this discussion model, a dialectical perspective on argumentation is combined with a pragmatic take on communicative interaction. By formalising and computationally implementing the model of a critical discussion, we take a first step in the development of software to computationally model argumentative dialogue in which fallacies are prohibited according to the pragma-dialectical norms. We do this by defining the Critical Discussion Game, a formal dialogue game based on the pragma-dialectical discussion model, executable via an online user interface that is part of a larger infrastructure of argumentation software.
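    To make the idea of rule-governed prohibition concrete, the fragment below is a minimal sketch, not the authors' Critical Discussion Game; the move types and the single rule shown are invented for illustration. It shows a dialogue game that rejects a move up front when it violates a discussion rule, rather than criticising the fallacy after the fact.

        from __future__ import annotations
        from dataclasses import dataclass, field

        @dataclass
        class Move:
            speaker: str
            kind: str               # e.g. "standpoint", "doubt", "argument", "personal_attack"
            target: str | None = None

        @dataclass
        class Discussion:
            moves: list[Move] = field(default_factory=list)

        def freedom_rule(move: Move, discussion: Discussion) -> bool:
            # Parties may not prevent each other from advancing or doubting standpoints;
            # here a "personal_attack" move is treated as such an attempt (ad hominem).
            return move.kind != "personal_attack"

        RULES = [freedom_rule]  # further rules of the discussion model would be added here

        def submit(move: Move, discussion: Discussion) -> bool:
            """Accept the move only if every rule permits it."""
            if all(rule(move, discussion) for rule in RULES):
                discussion.moves.append(move)
                return True
            return False

        if __name__ == "__main__":
            d = Discussion()
            print(submit(Move("Protagonist", "standpoint"), d))                           # True
            print(submit(Move("Antagonist", "personal_attack", target="Protagonist"), d))  # False: prohibited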

    Debating Technology for Dialogical Argument: Sensemaking, Engagement and Analytics

    Debating technologies, a newly emerging strand of research into computational technologies to support human debating, offer a powerful way of providing naturalistic, dialogue-based interaction with complex information spaces. The full potential of debating technologies for dialogical argument can, however, only be realised once key technical and engineering challenges are overcome, namely data structure, data availability, and interoperability between components. Our aim in this article is to show that the Argument Web, a vision for integrated, reusable, semantically rich resources connecting views, opinions, arguments, and debates online, offers a solution to these challenges. Through the use of a running example taken from the domain of citizen dialogue, we demonstrate for the first time that different Argument Web components focusing on sensemaking, engagement, and analytics can work in concert as a suite of debating technologies for rich, complex, dialogical argument.
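    As a loose illustration of the kind of shared, reusable representation the Argument Web vision depends on, the sketch below is hypothetical code, only loosely modelled on AIF-style argument graphs; the node kinds and the query shown are invented. It shows one graph structure that sensemaking and analytics components could both read.

        from __future__ import annotations
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            node_id: str
            kind: str    # "I" for information (claims), "RA" for inference, "CA" for conflict
            text: str = ""

        @dataclass
        class ArgumentGraph:
            nodes: dict[str, Node] = field(default_factory=dict)
            edges: list[tuple[str, str]] = field(default_factory=list)  # (from_id, to_id)

            def add_node(self, node: Node) -> None:
                self.nodes[node.node_id] = node

            def add_edge(self, from_id: str, to_id: str) -> None:
                self.edges.append((from_id, to_id))

            def supporters_of(self, claim_id: str) -> list[Node]:
                # Analytics-style query: claims feeding an inference node that targets claim_id.
                ra_ids = {src for src, dst in self.edges
                          if dst == claim_id and self.nodes[src].kind == "RA"}
                return [self.nodes[src] for src, dst in self.edges
                        if dst in ra_ids and self.nodes[src].kind == "I"]

        if __name__ == "__main__":
            g = ArgumentGraph()
            g.add_node(Node("i1", "I", "Cycling infrastructure should be expanded"))
            g.add_node(Node("i2", "I", "It reduces congestion"))
            g.add_node(Node("ra1", "RA"))
            g.add_edge("i2", "ra1")   # premise supports the inference
            g.add_edge("ra1", "i1")   # inference supports the conclusion
            print([n.text for n in g.supporters_of("i1")])  # ['It reduces congestion']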