53 research outputs found

    Intentional dialogues in multi-agent systems based on ontologies and argumentation

    Some areas of application, for example healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues that support human decision-making. Based on this investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems that can be used in different domains. Our framework is modular, so it can be used in its entirety or through just the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system agree or strongly agree that the dialogues in which they participated fulfil Cohen's desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they maintain semantic representations as well as representations of the mental states of the dialogue participants, allowing them to formulate coherent justifications in natural language that are therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as of systems supporting hybrid intelligence.
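
    The dialogue-subdialogue structure described above lends itself to a stack-based control regime: a clarification subdialogue suspends the current topic and, once closed, control returns to the main subject. The following is a minimal, hypothetical Python sketch of that idea; the class and method names are invented for illustration and are not MAIDS's actual API.

        from dataclasses import dataclass, field

        @dataclass
        class Dialogue:
            topic: str
            moves: list = field(default_factory=list)

        class DialogueManager:
            """Stack-based control: a subdialogue (e.g. an ontological
            clarification) suspends the current topic; closing it returns
            control to the suspended dialogue."""
            def __init__(self, main_topic):
                self.stack = [Dialogue(main_topic)]

            @property
            def current(self):
                return self.stack[-1]

            def utter(self, speaker, move):
                self.current.moves.append((speaker, move))

            def open_subdialogue(self, topic):
                self.stack.append(Dialogue(topic))

            def close_subdialogue(self):
                return self.stack.pop()  # control reverts to the enclosing dialogue

        dm = DialogueManager("bed allocation for patient P-17")
        dm.utter("agent", "I recommend ward B, bed 4.")
        dm.open_subdialogue("clarify ontology term: 'isolation bed'")
        dm.utter("agent", "An isolation bed is a subclass of bed with negative-pressure ventilation.")
        dm.close_subdialogue()
        print(dm.current.topic)  # back to: bed allocation for patient P-17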

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    At a time when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer-scientists and much of the non-technical media to identify them with AI as a whole, symbolic approaches are attracting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revival of technologies often tagged as “classical AI”, in particular logic-based ones, will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-standing connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims to provide a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: that of MAS and that of logic-based technologies.

    Plan Acquisition Through Intentional Learning in BDI Multi-Agent Systems

    Multi-Agent Systems (MAS), a technique stemming from Distributed Artificial Intelligence, are well suited to studying complex systems. They make it possible to represent and simulate both the elements and the interrelations of systems in a variety of domains. The most commonly used approach to developing the individual components (agents) within a MAS is reactive agency. However, other architectures, such as cognitive agents, enable richer behaviours and interactions to be captured and modelled. The well-known Belief-Desire-Intention (BDI) architecture is a robust approach to developing cognitive agents; it can emulate aspects of autonomous behaviour and is thus a promising tool for simulating social systems. Machine Learning has been applied to improve the behaviour of agents both individually and collectively. However, the original BDI model of agency lacks learning as part of its core functionality. To cope with learning, BDI agency has been extended by Intentional Learning (IL) operating at three levels: belief adjustment, plan selection, and plan acquisition. The latter makes it possible to increase the agent's catalogue of skills by generating new procedural knowledge for later use. The main contributions of this thesis are: a) the development of IL in a fully-fledged BDI framework at the plan-acquisition level; b) the extension of IL from the single-agent case to the collective perspective; and c) a novel framework that blends reactive and BDI agents by integrating the MAS and Agent-Based Modelling approaches, allowing the configuration of diverse domains and environments. Learning is demonstrated in a test-bed environment by acquiring a set of plans that drive the agent to exhibit behaviours such as target-searching and left-handed wall-following. Learning in both decision strata, individual and collective, is then tested in a more challenging and socially relevant environment: the Disaster-Rescue problem.
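
    As an illustration of plan acquisition, the toy Python sketch below shows a BDI-style deliberation step that, on finding no applicable plan for a goal, synthesises one and adds it to the plan library for later reuse. All names are assumptions made for the example; the thesis's actual IL mechanism generalises new procedural knowledge from experience rather than using a stub learner.

        class Plan:
            def __init__(self, goal, context, body):
                self.goal, self.context, self.body = goal, context, body

        class BDIAgent:
            def __init__(self):
                self.beliefs = set()
                self.plan_library = []

            def applicable(self, goal):
                # plans whose triggering goal matches and whose context holds
                return [p for p in self.plan_library
                        if p.goal == goal and p.context(self.beliefs)]

            def achieve(self, goal):
                options = self.applicable(goal)
                if not options:
                    # plan acquisition: no known plan covers this situation,
                    # so synthesise one and keep it for future reuse
                    new_plan = self.learn_plan(goal)
                    self.plan_library.append(new_plan)
                    options = [new_plan]
                for step in options[0].body:
                    step(self)

            def learn_plan(self, goal):
                # stub learner; real intentional learning would build the
                # plan body from exploration of the environment
                return Plan(goal, lambda beliefs: True,
                            [lambda agent: agent.beliefs.add(goal)])

        agent = BDIAgent()
        agent.achieve("at_target")
        print(agent.beliefs)             # {'at_target'}
        print(len(agent.plan_library))   # 1 (the newly acquired plan)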

    Hierarchical planning in BDI agent programming languages: A formal approach

    This paper provides a general mechanism and a solid theoretical basis for performing planning within Belief-Desire-Intention (BDI) agents. BDI agent systems have emerged as one of the most widely used approaches to implementing intelligent behaviour in complex dynamic domains, and they also have a strong theoretical background. However, these systems either do not include any built-in capacity for "lookahead" planning, or they provide it only at the implementation level without any precisely defined semantics. In some situations, the ability to plan ahead is clearly desirable, or even mandatory for ensuring success; a precise definition of how planning can be integrated into a BDI system is therefore highly desirable. By building on the underlying similarities between BDI systems and Hierarchical Task Network (HTN) planners, we present a formal semantics for a BDI agent programming language that cleanly incorporates HTN-style planning as a built-in feature. We argue that the resulting integrated agent programming language combines the advantages of both BDI agent systems and hierarchical offline planners.
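
    The key idea, HTN-style lookahead over a BDI plan library, can be illustrated in a few lines of Python: instead of committing to the first applicable plan and recovering on failure, the agent checks that a complete decomposition down to primitive actions exists before acting. This is only an illustrative sketch with invented names, not the paper's formal semantics, which among other things must account for a changing environment.

        PLAN_LIBRARY = {
            # goal -> list of plans; each plan is (context, subgoal/action sequence)
            "deliver": [
                (lambda s: "have_item" in s, ["goto_dest", "drop"]),
                (lambda s: True, ["fetch", "goto_dest", "drop"]),
            ],
            "fetch": [(lambda s: True, ["goto_item", "pickup"])],
        }
        PRIMITIVE = {"goto_dest", "goto_item", "pickup", "drop"}

        def lookahead(goal, state):
            """Return a complete primitive action sequence for `goal`, or None.
            Unlike default BDI execution, this commits only after verifying
            that an entire decomposition exists (HTN-style lookahead)."""
            if goal in PRIMITIVE:
                return [goal]
            for context, subgoals in PLAN_LIBRARY.get(goal, []):
                if not context(state):
                    continue
                steps = []
                for sg in subgoals:
                    sub = lookahead(sg, state)
                    if sub is None:
                        break          # this decomposition fails; try the next plan
                    steps += sub
                else:
                    return steps       # every subgoal decomposed successfully
            return None

        print(lookahead("deliver", {"have_item"}))  # ['goto_dest', 'drop']
        print(lookahead("deliver", set()))          # ['goto_item', 'pickup', 'goto_dest', 'drop']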

    Rational Agents: Prioritized Goals, Goal Dynamics, and Agent Programming Languages with Declarative Goals

    I introduce a specification language for modeling an agent's prioritized goals and their dynamics. I use the situation calculus, along with Reiter's solution to the frame problem and predicates for describing agents' knowledge, as my base formalism. I further enhance this language by introducing a new sort of infinite paths. Within this language, I discuss how to systematically specify prioritized goals and how to precisely describe the effects of actions on these goals. These actions include the adoption and dropping of goals and subgoals. In this framework, an agent's intentions are formally specified as the prioritized intersection of her goals. The "prioritized" qualifier means that the specification must respect the priority ordering of goals when choosing between two incompatible goals. I ensure that the agent's intentions are always consistent with each other and with her knowledge. I investigate two variants with different commitment strategies. Agents specified using the "optimizing" agent framework always try to optimize their intentions, while those specified in the "committed" agent framework will stick to their intentions even when opportunities arise to commit to higher-priority goals that are incompatible with their current intentions. For both variants, I study properties of prioritized goals and goal change. I also give a definition of subgoals and prove properties about the goal-subgoal relationship. As an application, I develop a model for a Simple Rational Agent Programming Language (SR-APL) with declarative goals. SR-APL is based on the "committed agent" variant of this theory and combines elements from Belief-Desire-Intention (BDI) APLs and the situation-calculus-based ConGolog APL. Thus SR-APL supports prioritized goals and is grounded in a formal theory of goal change. It ensures that the agent's declarative goals and adopted plans are consistent with each other and with her knowledge. In doing so, I try to bridge the gap between agent theories and practical agent programming languages by providing a model and specification of an idealized BDI agent whose behavior is closer to that of a rational agent. I show that agents programmed in SR-APL satisfy some key rationality requirements.
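
    The "prioritized intersection" of goals can be illustrated with a small Python sketch: goals are visited from highest to lowest priority, and each is adopted only if it is jointly consistent with the agent's knowledge and with all higher-priority goals already adopted. Consistency here is reduced to a propositional literal clash, a deliberate simplification of the thesis's situation-calculus formalisation.

        def neg(lit):
            # negation of a propositional literal, written with a '~' prefix
            return lit[1:] if lit.startswith("~") else "~" + lit

        def consistent(lits):
            return not any(neg(l) in lits for l in lits)

        def prioritized_intentions(goals_by_priority, knowledge):
            """Adopt each goal, highest priority first, only if it is
            consistent with knowledge and with goals already adopted."""
            adopted = set(knowledge)
            intentions = []
            for goal in goals_by_priority:
                if consistent(adopted | {goal}):
                    adopted.add(goal)
                    intentions.append(goal)
            return intentions

        goals = ["finish_thesis", "~work_weekends", "work_weekends"]
        print(prioritized_intentions(goals, knowledge={"employed"}))
        # ['finish_thesis', '~work_weekends']: the lower-priority
        # 'work_weekends' is dropped as incompatible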

    Social Reasoning in Multi-Agent Systems with the Expectation-Strategy-Behaviour Framework

    Multi-agent systems (MAS) are an increasingly relevant field of research due to their many applications in modelling real-world situations where the behaviour of many individual, self-motivated agents must be reasoned about and controlled. The problem of agent social reasoning, in which an agent reasons about its actions and interactions with other agents, is central to MAS: it is the interactions, cooperation, and competition between agents that make MAS a powerful approach to tackling many complex problems. Existing work focuses either on specific types of social reasoning or on general-purpose agent practical reasoning, that is, reasoning directed toward action. This thesis argues that social reasoning should be considered separately from practical reasoning. This separation has many potential benefits over existing approaches; principally, it allows general algorithms for agent implementation, analysis, and bounded reasoning. The viewpoint is motivated by the desire to implement social-reasoning agents and to allow for a more general theory of social reasoning in agents. This thesis presents the novel Expectation-Strategy-Behaviour (ESB) framework for social reasoning, which provides a generic way to specify and execute agent reasoning approaches. ESB is a powerful tool that allows an agent designer to write expressive social-reasoning specifications and have a computational model generated automatically. Through a formalism and a description of an implemented reasoner based on this theory, it is shown that it is possible and beneficial to implement a social reasoning engine as a complementary component to practical reasoning. By using ESB to specify, and then implement, existing social reasoning schemes for joint commitment and normative reasoning, the framework is shown to be a suitable general reasoner. Examples are provided of how reasoning can be bounded in an ESB agent, and the mechanism that allows analysis of agent designs is discussed. Finally, the merits of the ESB solution and possible future work are discussed.
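
    A rough intuition for the Expectation-Strategy-Behaviour decomposition is that expectations are monitored conditions about other agents, and strategies map their fulfilment or violation to behaviour. The Python sketch below is a hypothetical mini-version of such a rule cycle; the names and structure are assumptions made for illustration, not ESB's actual specification language.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Expectation:
            description: str
            test: Callable[[set], bool]   # does it hold in the observed state?

        @dataclass
        class Strategy:
            expectation: Expectation
            on_fulfilled: Callable[[], str]
            on_violated: Callable[[], str]

        def esb_step(strategies, observations):
            """One reasoning step: evaluate each expectation against the
            observations and collect the behaviours the strategies prescribe."""
            behaviours = []
            for s in strategies:
                if s.expectation.test(observations):
                    behaviours.append(s.on_fulfilled())
                else:
                    behaviours.append(s.on_violated())
            return behaviours

        helps = Expectation("teammate helps with the joint task",
                            lambda obs: "teammate_helping" in obs)
        strategies = [Strategy(helps,
                               on_fulfilled=lambda: "continue joint plan",
                               on_violated=lambda: "renegotiate commitment")]

        print(esb_step(strategies, {"teammate_helping"}))  # ['continue joint plan']
        print(esb_step(strategies, set()))                 # ['renegotiate commitment']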