
    Strategic Responsibility Under Imperfect Information

    A central issue in the specification and verification of autonomous agents and multiagent systems is the ascription of responsibility to individual agents and groups of agents. When designing a (multi)agent system, we must specify which agents or groups of agents are responsible for bringing about a particular state of affairs. Similarly, when verifying a multiagent system, we may wish to determine the responsibility of agents or groups of agents for a particular state of affairs, and the contribution of each agent to bringing about that state of affairs. In this paper, we discuss several aspects of responsibility, including the strategic ability of agents, their epistemic properties, and their relationship to the evolution of the system behaviour. We introduce a formal framework for reasoning about the responsibility of individual agents and agent groups in terms of the agents' strategies and epistemic properties, and state some properties of the framework.
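    The strategic-ability ingredient of such a framework can be illustrated with a minimal sketch (our own simplification, not the paper's formal framework): in a one-shot model, a coalition has the strategic ability to bring about a state of affairs if some joint choice of its members guarantees that state against every choice of the remaining agents.

```python
from itertools import product

# Hypothetical one-shot model: each agent picks an action, and an
# outcome predicate says whether the target state of affairs holds.
agents = ["a1", "a2", "a3"]
actions = {"a1": ["wait", "act"], "a2": ["wait", "act"], "a3": ["wait", "act"]}

def outcome(joint):
    # The state of affairs holds iff at least two agents act.
    return sum(1 for a in joint.values() if a == "act") >= 2

def can_ensure(coalition):
    """True iff some joint choice of the coalition guarantees the
    outcome no matter what the remaining agents do."""
    rest = [a for a in agents if a not in coalition]
    for own in product(*(actions[a] for a in coalition)):
        if all(outcome({**dict(zip(coalition, own)),
                        **dict(zip(rest, other))})
               for other in product(*(actions[a] for a in rest))):
            return True
    return False

print(can_ensure(["a1"]))        # False: one agent alone cannot ensure it
print(can_ensure(["a1", "a2"]))  # True: the pair, both acting, can
```

    On this toy reading, no single agent is (strategically) responsible for the outcome, but the pair {a1, a2} is, which is the kind of coalition-level ascription the abstract discusses.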

    Failure Handling in BDI Plans via Runtime Enforcement

    The project CONVINCE has received funding from the European Union's Horizon research and innovation programme, G.A. n. 101070227. This publication is funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission (the granting authority). Neither the European Union nor the granting authority can be held responsible for them.

    Agent programming in the cognitive era

    It is claimed that, in the nascent ‘Cognitive Era’, intelligent systems will be trained using machine learning techniques rather than programmed by software developers. A contrary point of view argues that machine learning has limitations and, taken in isolation, cannot form the basis of autonomous systems capable of intelligent behaviour in complex environments. In this paper, we explore the contributions that agent-oriented programming can make to the development of future intelligent systems. We briefly review the state of the art in agent programming, focussing particularly on BDI-based agent programming languages, and discuss previous work on integrating AI techniques (including machine learning) into agent-oriented programming. We argue that the unique strengths of BDI agent languages provide an ideal framework for integrating the wide range of AI capabilities necessary for progress towards the next generation of intelligent systems. We identify a range of possible approaches to integrating AI into a BDI agent architecture. Some of these approaches, e.g., ‘AI as a service’, exploit immediate synergies between rapidly maturing AI techniques and agent programming, while others, e.g., ‘AI embedded into agents’, raise more fundamental research questions, and we sketch a programme of research directed towards identifying the most appropriate ways of integrating AI capabilities into agent programs.
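    The ‘AI as a service’ idea can be sketched in a few lines (an illustrative simplification in Python, not any particular agent language): a BDI-style deliberation cycle selects an applicable plan for each event, and the plan body calls out to a learned model as an external service.

```python
# Minimal BDI-style deliberation cycle (names and structure are our
# own illustration, not a specific agent programming language).
beliefs = {"battery_low": False}
events = ["obstacle_detected"]

def classify(percept):
    # Stand-in for an 'AI as a service' call: e.g. a learned
    # perception model queried from within a plan body.
    return "avoid" if percept == "obstacle_detected" else "ignore"

plans = {
    # triggering event -> (context condition, plan body)
    "obstacle_detected": (
        lambda b: not b["battery_low"],          # applicable only if charged
        lambda b: ["steer_" + classify("obstacle_detected")],
    ),
}

def deliberate(beliefs, events):
    """For each event, adopt the body of an applicable plan as an intention."""
    intentions = []
    for event in events:
        context, body = plans.get(event, (None, None))
        if context and context(beliefs):
            intentions.extend(body(beliefs))
    return intentions

print(deliberate(beliefs, events))  # -> ['steer_avoid']
```

    The ‘AI embedded into agents’ alternatives the abstract mentions would instead place learning inside the cycle itself, e.g. learning which plan to select, which is where the harder research questions arise.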

    AI for Social Impact: Learning and Planning in the Data-to-Deployment Pipeline

    With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. In pursuit of this goal of AI for Social Impact, we as AI researchers must go beyond improvements in computational methodology; it is important to step out into the field to demonstrate social impact. To this end, we focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. We present case studies from our deployments around the world as well as lessons learned that we hope are of use to researchers who are interested in AI for Social Impact. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.
    Comment: To appear, AI Magazine.

    Bang: a system for training and visualization in multi-agent team formation

    In this demo, participants will explore Bang, a system for multiagent team formation. Bang automatically selects exercises for training agents and allows an operator to visualize the expected performance of possible teams, guiding the agent selection process. Bang is used in the context of programming competitions, a real-world challenge that involves human teams, and it significantly improved the performance of the teams of CEFET-MG University.

    MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue Systems

    This paper introduces a framework for programming highly sophisticated multi-agent dialogue systems. The framework is based on a multi-part agent belief base consisting of three components: (i) the main component is an extension of an agent-oriented programming belief base for representing defeasible knowledge and, in particular, argumentation schemes; (ii) an ontology component where existing OWL ontologies can be instantiated; and (iii) a theory of mind component where agents keep track of mental attitudes they ascribe to other agents. The paper formalises a structured argumentation-based dialogue game where agents can “digress” from the main dialogue into subdialogues to discuss ontological or theory of mind issues. We provide an example of a dialogue with an ontological digression involving humans and agents, including a chatbot that we developed to support bed allocation in a hospital; we also comment on the initial evaluation of that chatbot carried out by domain experts. That example is also used to show that our framework supports all features of recent desiderata for future dialogue systems.
    This research was partially funded by CNPq, CAPES, FCT CEECIND/01997/2017 and UIDB/00057/2020.
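    The three-part belief base can be pictured with a small sketch (our own simplification in Python, not the MAIDS API): own defeasible beliefs, ontology-derived facts, and a theory-of-mind store of beliefs ascribed to other agents, where a detected disagreement is the cue to open a subdialogue.

```python
# Illustrative three-part belief base in the spirit of the abstract:
# (i) own beliefs, (ii) ontology facts, (iii) theory-of-mind store.
# All names and facts here are hypothetical examples.
belief_base = {
    "own": {("bed_available", "icu_3")},
    "ontology": {("is_a", "icu_3", "IntensiveCareBed")},
    "theory_of_mind": {
        "nurse": {("bed_available", "icu_3"): False},  # ascribed belief
    },
}

def believes(bb, fact):
    # The agent's own view, drawing on both its beliefs and the ontology.
    return fact in bb["own"] or fact in bb["ontology"]

def ascribes(bb, agent, fact):
    """What do we think another agent believes about this fact?
    Returns True/False if an ascription exists, None otherwise."""
    return bb["theory_of_mind"].get(agent, {}).get(fact)

fact = ("bed_available", "icu_3")
print(believes(belief_base, fact))           # True: the agent believes it
print(ascribes(belief_base, "nurse", fact))  # False: the nurse is thought to disagree
```

    When `believes` and `ascribes` disagree, as here, a dialogue system in the abstract's sense would digress into a theory-of-mind subdialogue to resolve the mismatch before continuing the main dialogue.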