
    Contributions of formal language theory to the study of dialogues

    For more than 30 years, the problem of providing a formal framework for modeling dialogues has been a topic of great interest in Linguistics, Philosophy, Cognitive Science, Formal Languages, Software Engineering and Artificial Intelligence. Initially the goal was to develop a "conversational computer", an automated system that could engage in conversation the same way humans do. After studies showed the difficulty of achieving this goal, Formal Language Theory and Artificial Intelligence contributed to Dialogue Theory with the study and simulation of machine-to-machine and human-to-machine dialogues inspired by linguistic studies of human interaction. The aim of our thesis is to propose a formal approach to the study of dialogues. Our work is interdisciplinary: it connects theories and results in Dialogue Theory mainly from Formal Language Theory, but also from other areas such as Artificial Intelligence, Linguistics and Multiprogramming. We contribute to Dialogue Theory by introducing a hierarchy of formal frameworks for the definition of protocols for dialogue interaction. Each framework defines a transition system in which dialogue protocols can be uniformly expressed and compared. The frameworks we propose are based on finite-state transition systems and Grammar systems from Formal Language Theory, and on a multi-agent language for the specification of dialogue protocols from Artificial Intelligence. Grammar System Theory is a subfield of Formal Language Theory that studies how a finite number of language-defining devices (language processors, or grammars) jointly develop a common symbolic environment (a string or a finite set of strings) by applying language operations (for instance, rewriting rules).
For the frameworks we propose, we study some of their formal properties, compare their expressiveness, investigate their practical application in Dialogue Theory, and analyze their connection with theories of human-like conversation from Linguistics. In addition, we contribute to Grammar System Theory by proposing a new approach to the verification and derivation of Grammar systems. We analyze possible advantages of interpreting grammars as multiprograms that are amenable to verification and derivation using the Owicki-Gries logic, a Hoare-style logic from the Multiprogramming field.
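The core idea of a Grammar system described above, several rule sets jointly rewriting a common string, can be sketched informally. The following toy example (our own illustration, not a framework from the thesis) has two components take turns applying rewriting rules to a shared sentential form, with uppercase letters standing for nonterminals:

```python
# A minimal sketch of a cooperating grammar system: two rule sets
# ("components") take turns rewriting a shared string until only
# terminal (lowercase) symbols remain.

def apply_once(rules, s):
    """Apply the first applicable rewriting rule to s; None if none applies."""
    for lhs, rhs in rules:
        if lhs in s:
            return s.replace(lhs, rhs, 1)
    return None

def derive(components, start, max_steps=50):
    """Components work on the common string in round-robin, one rule per turn."""
    s = start
    for _ in range(max_steps):
        if s.islower():          # only terminals left: derivation finished
            return s
        progressed = False
        for rules in components:
            t = apply_once(rules, s)
            if t is not None:
                s, progressed = t, True
        if not progressed:
            break                # no component can move: derivation is stuck
    return s

# Component 1 expands the start symbol; component 2 resolves A and B.
comp1 = [("S", "AB")]
comp2 = [("A", "a"), ("B", "b")]
print(derive([comp1, comp2], "S"))   # -> "ab"
```

Neither component alone can finish the derivation, which is the point of the cooperation: comp1 alone gets stuck at "AB".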

    Toward an Argumentation-based Dialogue framework for Human-Robot Collaboration

    Successful human-robot collaboration with a common goal requires peer interaction in which humans and robots cooperate and complement each other's expertise. Formal human-robot dialogue with peer interaction is still in its infancy, though. My research recognizes three aspects of human-robot collaboration that call for dialogue: responding to discovery, pre-empting failure, and recovering from failure. In these scenarios the partners need the ability to challenge, persuade, exchange, and expand beliefs about a joint action in order to collaborate through dialogue. My research identifies three argumentation-based dialogues: a persuasion dialogue to resolve disagreement, an information-seeking dialogue to expand individual knowledge, and an inquiry dialogue to share knowledge. A theoretical logic-based framework, a formalized dialogue protocol based on argumentation theory, and argumentation-based dialogue games were developed to provide dialogue support for peer interaction. The work presented in this thesis is the first to apply argumentation theory and three different logic-based argumentation dialogues in human-robot collaboration. It demonstrates a practical, real-time implementation in which persuasion, inquiry, and information-seeking dialogues are applied to shared decision making for human-robot collaboration in a treasure hunt game domain. My research investigates whether adding peer interaction, enabled through argumentation-based dialogue, to an HRI system improves system performance and user experience during a collaborative task compared to an HRI system capable of only supervisory interaction with minimal dialogue. Results from user studies in physical and simulated human-robot collaborative environments, involving 108 human participants who interacted with a robot as peer and supervisor, are presented in this thesis.
My research contributes to both the human-robot interaction (HRI) and the argumentation communities. First, it brings into HRI a structured method for a robot to maintain its beliefs, reason using those beliefs, and interact with a human as a peer via argumentation-based dialogues. The structured method allows the human-robot collaborators to share beliefs, respond to discovery, expand beliefs to recover from failure, challenge beliefs, or resolve conflicts by persuasion. Second, it allows a robot to challenge a human, or a human to challenge a robot, to prevent human or robot errors. Third, my research provides a comprehensive subjective and objective analysis of the effectiveness of an HRI system with peer interaction enabled through argumentation-based dialogue, compared to a system capable of only supervisory interaction with minimal dialogue. My research addresses the harder questions of human-robot collaboration: What kind of human-robot dialogue support can enhance peer interaction? How can we develop models to formalize those features? How can we ensure that those features really help, and how do they help? Human-robot dialogue that can aid shared decision making, support the expansion of individual or shared knowledge, and resolve disagreements between collaborative human-robot teams will be much sought after as human society transitions from a world of robot-as-a-tool to robot-as-a-partner. My research presents a version of peer interaction enabled through argumentation-based dialogue that allows humans and robots to work together as partners.
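A formalized dialogue protocol of the kind described above constrains which reply each participant may make to the previous move. The sketch below is a hypothetical protocol table using conventional argumentation-dialogue move names (claim, why, argue, concede, retract); it illustrates the mechanism, not the thesis's actual protocol:

```python
# A dialogue-game protocol as a transition table: each move type
# determines the set of legal replies by the other participant.

LEGAL_REPLIES = {
    "claim":   {"why", "concede"},     # challenge or accept the claim
    "why":     {"argue", "retract"},   # defend with an argument, or give up
    "argue":   {"why", "concede"},     # the argument can itself be challenged
    "concede": set(),                  # agreement reached: dialogue ends
    "retract": set(),                  # claim withdrawn: dialogue ends
}

def is_legal_dialogue(moves):
    """A dialogue is legal if it opens with a claim and every move
    is a legal reply to the one before it."""
    if not moves or moves[0] != "claim":
        return False
    return all(cur in LEGAL_REPLIES[prev]
               for prev, cur in zip(moves, moves[1:]))

print(is_legal_dialogue(["claim", "why", "argue", "concede"]))  # True
print(is_legal_dialogue(["claim", "argue"]))                    # False
```

Persuasion, inquiry, and information-seeking dialogues would differ in their tables and in the commitments each move imposes, but the legality check has this same shape.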

    Argumentation-based methods for multi-perspective cooperative planning

    Through cooperation, agents can transcend their individual capabilities and achieve goals that would otherwise be unattainable. Existing multiagent planning work considers each agent’s action capabilities, but does not account for distributed knowledge and the incompatible views agents may have of the planning domain. These divergent views can result from faulty sensors, local and incomplete knowledge, and outdated information, or simply because each agent has drawn different inferences and their beliefs are not aligned. This thesis is concerned with Multi-Perspective Cooperative Planning (MPCP), the problem of synthesising a plan for multiple agents that share a goal but hold different views about the state of the environment and the specification of the actions they can perform to affect it. Reaching agreement on a mutually acceptable plan is important, since cautious autonomous agents will not subscribe to plans that they individually believe to be inappropriate or even potentially hazardous. We specify the MPCP problem by adapting standard set-theoretic planning notation. Based on argumentation theory, we define a new notion of plan acceptability, and we introduce a novel formalism combining defeasible logic programming and situation calculus that enables the succinct axiomatisation of contradictory planning theories and allows deductive argumentation-based inference. Our work bridges research in argumentation, reasoning about action, and classical planning. We present practical methods for reasoning and planning with MPCP problems that exploit the inherent structure of planning domains and efficient planning heuristics. Finally, to allow the distribution of tasks, we introduce a family of argumentation-based dialogue protocols that enable the agents to reach agreement on plans in a decentralised manner.
On the concrete foundation of deductive argumentation, we analytically investigate important properties of our methods, establishing the correctness of the proposed planning mechanisms. We also empirically evaluate the efficiency of our algorithms in benchmark planning domains. Our results show that our methods can synthesise acceptable plans within reasonable time in large-scale domains, while maintaining a level of expressiveness comparable to that of modern automated planning.
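The standard set-theoretic planning notation the thesis adapts can be illustrated concretely. In the sketch below (our own toy example, with an acceptance criterion of our choosing, not the thesis's argumentation-based one) an action is a triple of preconditions, add list, and delete list, and a plan counts as acceptable only if it is executable and reaches the goal under every agent's belief state:

```python
# STRIPS-style set-theoretic planning: states and goals are sets of
# facts, and each action is (preconditions, add-list, delete-list).

def execute(plan, state, actions):
    """Apply a plan to a state; return the final state, or None on failure."""
    state = set(state)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:
            return None                 # precondition unmet: plan fails
        state = (state - delete) | add
    return state

def acceptable(plan, views, goal, actions):
    """A cautious agent subscribes only to a plan that is executable and
    goal-achieving under its own view of the world, so we require this
    of every agent's view."""
    results = [execute(plan, view, actions) for view in views]
    return all(r is not None and goal <= r for r in results)

actions = {"open": ({"closed"}, {"open"}, {"closed"})}
views = [{"closed"}, {"closed", "rusty"}]   # agents disagree on details
print(acceptable(["open"], views, {"open"}, actions))  # True
```

Here the agents' divergent views happen to be compatible with the same plan; the MPCP problem arises precisely when they are not, which is where the thesis's argumentation-based notion of acceptability and its dialogue protocols come in.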

    Argumentation-based dialogues for deliberation

    This paper presents an argumentation-based approach to deliberation, the process by which two or more agents reach a consensus on a course of action. The kind of deliberation we are interested in combines the selection of an overall goal, the reduction of this goal into sub-goals, and the formation of a plan to achieve the overall goal. We develop a mechanism for doing this and then describe how it can be integrated into a system of argumentation to provide a sound and complete deliberation system, before showing how the same process can be achieved through a multi-agent dialogue.
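The reduction step this abstract mentions, turning an overall goal into sub-goals and ultimately into a plan, can be sketched as a recursive expansion. This is our own toy illustration, not the paper's mechanism; in the paper each reduction would itself be subject to argumentation among the agents:

```python
# Goal reduction for deliberation: expand an overall goal via a table
# of reductions; goals with no reduction are taken to be primitive
# actions, and flattening the expansion left-to-right yields a plan.

REDUCTIONS = {
    "host_dinner": ["get_food", "set_table"],   # goal -> sub-goals
    "get_food":    ["shop", "cook"],
}

def reduce_goal(goal):
    """Recursively expand a goal into a sequence of primitive actions."""
    if goal not in REDUCTIONS:
        return [goal]                           # primitive: no reduction
    plan = []
    for sub in REDUCTIONS[goal]:
        plan.extend(reduce_goal(sub))
    return plan

print(reduce_goal("host_dinner"))  # ['shop', 'cook', 'set_table']
```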