10 research outputs found

    Detecting and avoiding interference between goals in intelligent agents

    Pro-active agents typically have multiple simultaneous goals. These may interact with each other both positively and negatively. In this paper we provide a mechanism allowing agents to detect and avoid a particular kind of negative interaction, where the effects of one goal undo conditions that must be protected for the successful pursuit of another goal. In order to detect such interactions, we maintain summary information about the definite and potential conditional requirements and resulting effects of goals and their associated plans. We use these summaries to guard protected conditions by scheduling the execution of goals and plan steps. The algorithms and data structures developed allow agents to act rationally instead of blindly pursuing goals that will conflict.
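
    As a rough illustration of the kind of summary information described above (not the paper's actual data structures: the set-based representation, the literal encoding and the function names below are made up, and the paper's conditional requirements are collapsed into flat sets), a scheduler could guard protected conditions like this:

        from dataclasses import dataclass, field

        @dataclass
        class GoalSummary:
            # Summary of a goal/plan: conditions it needs kept true (protected)
            # and effects it may bring about. Literals are strings; "~p" denotes
            # the negation of "p".
            name: str
            protected_conditions: set = field(default_factory=set)
            definite_effects: set = field(default_factory=set)
            potential_effects: set = field(default_factory=set)

        def negate(literal: str) -> str:
            return literal[1:] if literal.startswith("~") else "~" + literal

        def undoes_protected(candidate: GoalSummary, running: GoalSummary) -> bool:
            # True if any definite or potential effect of `candidate` would undo a
            # condition that `running` still needs protected.
            effects = candidate.definite_effects | candidate.potential_effects
            return any(negate(e) in running.protected_conditions for e in effects)

        def safe_to_progress(candidate, running_goals):
            # A simple guard a scheduler could apply before interleaving `candidate`
            # with the goals already being pursued.
            return all(not undoes_protected(candidate, g) for g in running_goals)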

    Agents with a Moral Dimension (Doctoral Consortium)

    Based on [9, 13] we argue that moral emotions are complex emotions involving cognitive processes. Given [9], we identify the following moral emotions: Pride, Self-reproach, Reproach, Admiration, Gratification, Gratitude, Anger and Remorse. In the field of Artificial Intelligence, we have observed an increased interest in studying computational models of emotions; many computational models have been modelled…
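
    The eight moral emotions named in the abstract can be written down as a simple enumeration; a trivial sketch (any grouping or appraisal logic is not given in the abstract and is omitted here):

        from enum import Enum, auto

        class MoralEmotion(Enum):
            # The eight moral emotions listed in the abstract.
            PRIDE = auto()
            SELF_REPROACH = auto()
            REPROACH = auto()
            ADMIRATION = auto()
            GRATIFICATION = auto()
            GRATITUDE = auto()
            ANGER = auto()
            REMORSE = auto()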

    Action-level intention selection for BDI agents

    Belief-Desire-Intention agents typically pursue multiple goals in parallel. However, the interleaving of steps in different intentions may result in conflicts, e.g., where the execution of a step in one plan makes the execution of a step in another concurrently executing plan impossible. Previous approaches to avoiding conflicts between concurrently executing intentions treat plans as atomic units, and attempt to interleave plans in different intentions so as to minimise conflicts. However, some conflicts cannot be resolved by appropriate ordering of plans and can only be resolved by appropriate interleaving of steps within plans. In this paper, we present SA, an approach to intention selection based on Single-Player Monte Carlo Tree Search that selects which intention to progress at the current cycle at the level of individual plan steps. We evaluate the performance of our approach in a range of scenarios of increasing difficulty in both static and dynamic environments. The results suggest SA outperforms existing approaches to intention selection, both in terms of goals achieved and the variance in goal achievement time.
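
    A minimal sketch of step-level intention selection in the spirit of Single-Player MCTS follows. This is not the paper's SA implementation; the simulation interface assumed here, with clone(), legal_intentions(), step() and is_terminal(), is an invented placeholder for a model of the agent's intentions and environment, and step() is assumed to return a numeric reward.

        import math, random

        class Node:
            def __init__(self, parent=None, action=None):
                self.parent, self.action = parent, action
                self.children, self.visits, self.value = {}, 0, 0.0

        def uct_select_step(model, iterations=200, c=1.4):
            """Return the intention to progress by one step this cycle."""
            root = Node()
            for _ in range(iterations):
                state, node, total = model.clone(), root, 0.0
                # Selection / expansion: walk down the tree, adding one new child
                # per iteration, simulating each chosen step on the cloned state.
                while not state.is_terminal():
                    actions = state.legal_intentions()
                    untried = [a for a in actions if a not in node.children]
                    if untried:
                        a = random.choice(untried)
                        node.children[a] = Node(node, a)
                        node = node.children[a]
                        total += state.step(a)
                        break
                    a = max(actions, key=lambda x: node.children[x].value / node.children[x].visits
                            + c * math.sqrt(math.log(node.visits) / node.children[x].visits))
                    node = node.children[a]
                    total += state.step(a)
                # Rollout: progress randomly chosen intentions until the episode ends.
                while not state.is_terminal():
                    total += state.step(random.choice(state.legal_intentions()))
                # Backpropagation.
                while node is not None:
                    node.visits += 1
                    node.value += total
                    node = node.parent
            if not root.children:
                return None
            return max(root.children.values(), key=lambda n: n.visits).action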

    Programming Deliberation Strategies in Meta-APL

    A key advantage of BDI-based agent programming is that agents can deliberate about which course of action to adopt to achieve a goal or respond to an event. However, while state-of-the-art BDI-based agent programming languages provide flexible support for expressing plans, they are typically limited to a single, hard-coded deliberation strategy (perhaps with some parameterisation) for all task environments. In this paper, we present an alternative approach. We show how both agent programs and the agent's deliberation strategy can be encoded in the agent programming language meta-APL. Key steps in the execution cycle of meta-APL are reflected in the state of the agent and can be queried and updated by meta-APL rules, allowing BDI deliberation strategies to be programmed with ease. To illustrate the flexibility of meta-APL, we show how three typical BDI deliberation strategies can be programmed using meta-APL rules. We then show how meta-APL can be used to program a novel adaptive deliberation strategy that avoids interference between intentions.
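
    The central idea, that the deliberation strategy is itself programmable rather than a fixed part of the interpreter, can be illustrated outside meta-APL with a plain Python sketch (the record fields and the FIFO and round-robin strategies below are illustrative assumptions; meta-APL itself expresses such strategies as rules over the reflected agent state):

        from dataclasses import dataclass

        @dataclass
        class IntentionRecord:
            # A reflected view of one intention, as a deliberation strategy sees it.
            id: int
            active: bool = True
            steps_done: int = 0

        def fifo(intentions):
            """Finish the earliest-adopted active intention before starting the next."""
            active = [i for i in intentions if i.active]
            return min(active, key=lambda i: i.id) if active else None

        def round_robin(intentions):
            """Give each active intention one step in turn."""
            active = [i for i in intentions if i.active]
            return min(active, key=lambda i: i.steps_done) if active else None

        def deliberate(intentions, strategy=round_robin):
            # The agent's cycle calls whatever strategy is currently installed, so
            # swapping strategies requires no change to the agent program itself.
            chosen = strategy(intentions)
            if chosen is not None:
                chosen.steps_done += 1
            return chosen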

    Agent programming in the cognitive era

    It is claimed that, in the nascent ‘Cognitive Era’, intelligent systems will be trained using machine learning techniques rather than programmed by software developers. A contrary point of view argues that machine learning has limitations and, taken in isolation, cannot form the basis of autonomous systems capable of intelligent behaviour in complex environments. In this paper, we explore the contributions that agent-oriented programming can make to the development of future intelligent systems. We briefly review the state of the art in agent programming, focussing particularly on BDI-based agent programming languages, and discuss previous work on integrating AI techniques (including machine learning) into agent-oriented programming. We argue that the unique strengths of BDI agent languages provide an ideal framework for integrating the wide range of AI capabilities necessary for progress towards the next generation of intelligent systems. We identify a range of possible approaches to integrating AI into a BDI agent architecture. Some of these approaches, e.g., ‘AI as a service’, exploit immediate synergies between rapidly maturing AI techniques and agent programming, while others, e.g., ‘AI embedded into agents’, raise more fundamental research questions, and we sketch a programme of research directed towards identifying the most appropriate ways of integrating AI capabilities into agent programs.
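
    A hedged sketch of the ‘AI as a service’ style of integration mentioned above: the agent's symbolic deliberation is unchanged, and only a perception sub-task is delegated to an external learned model. The endpoint, payload format and plan names are invented for illustration.

        import json
        import urllib.request

        def classify_scene(image_bytes: bytes) -> str:
            # Call a hypothetical external ML classification service.
            req = urllib.request.Request(
                "http://localhost:8000/classify",   # assumed service endpoint
                data=image_bytes,
                headers={"Content-Type": "application/octet-stream"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["label"]

        def select_plan(percepts: dict) -> str:
            # Deliberation stays symbolic; only perception is delegated to the
            # learned model behind the service.
            label = classify_scene(percepts["camera"])
            if label == "obstacle":
                return "avoid_obstacle_plan"
            return "continue_route_plan"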

    2APL: a practical agent programming language

    Design and implementation of a Multi-Agent Planning System

    This work introduces the design and implementation of a Multi-Agent Planning framework, in which a set of agents work jointly in order to devise a course of action to solve a certain planning problem. Torreño Lerma, A. (2011). Design and implementation of a Multi-Agent Planning System. http://hdl.handle.net/10251/15358

    A BDI agent programming language with failure handling, declarative goals, and planning

    Constrained Rationality: Formal Value-Driven Enterprise Knowledge Management Modelling and Analysis Framework for Strategic Business, Technology and Public Policy Decision Making & Conflict Resolution

    The complexity of the strategic decision making environments in which businesses and governments operate makes such decisions more and more difficult to make. People and organizations with access to the best known decision support modelling and analysis tools and methods cannot seem to benefit from such resources. We argue that the reason behind the failure of most current decision and game theoretic methods is that these methods are made to deal with operational and tactical decisions, not strategic decisions. While operational and tactical decisions are clear and concise with limited scope and short-term implications, allowing them to be easily formalized and reasoned about, strategic decisions tend to be more general, ill-structured, complex, with broader scope and long-term implications. This research work starts with a review of the current dominant modelling and analysis approaches, their strengths and shortcomings, and a look at how pioneers in the field criticize these approaches as restrictive and impractical. Then, the work goes on to propose a new paradigm shift in how strategic decisions and conflicts should be modelled and analyzed.

    Constrained Rationality is a formal qualitative framework, with a robust methodological approach, to model and analyze ill-structured strategic single- and multi-agent decision making situations and conflicts. The framework brings the strategic decision making problem back to its roots: from being an optimization/efficiency problem about evaluating predetermined alternatives to satisfy predetermined preferences or utility functions, as most current decision and game theoretic approaches treat it, to being an effectiveness problem of: 1) identifying and modelling explicitly the strategic and conflicting goals of the involved agents (also called players and decision makers in our work) and the decision making context (the external and internal constraints, including the agents' priorities, emotions and attitudes); 2) finding, uncovering and/or creating the right set of alternatives to consider; and then 3) reasoning about the ability of each of these alternatives to satisfy the stated strategic goals the agents have, given their constraints. Instead of assuming that the agents' alternatives and preferences are well known, as most current decision and game theoretic approaches do, the Constrained Rationality framework starts by capturing and modelling clearly the context of the strategic decision making situation, and then uses this contextual knowledge to guide the process of finding the agents' alternatives, analyzing them, and choosing the most effective one.

    The Constrained Rationality framework, at its heart, provides a novel set of modelling facilities to capture the contextual knowledge of decision making situations. These modelling facilities are based on the Viewpoint-based Value-Driven Enterprise Knowledge Management (ViVD-EKM) conceptual modelling framework proposed by Al-Shawa (2006b), and include facilities: to capture and model the goals and constraints of the different agents involved in the decision making situation in complex graphs within viewpoint models; and to model the complex cause-effect interrelationships among these goals and constraints.

    The framework provides a set of robust, extensible and formal Goal-to-Goal and Constraint-to-Goal relationships, through which qualitative linguistic value labels about the goals' operationalization, achievement and prevention propagate until they are finalized to reflect the state of the goals' achievement at any single point of time during the situation. The framework also provides sufficient, but extensible, representation facilities to model the agents' priorities, emotional valences and attitudes as value properties with qualitative linguistic value labels. All of these goals and constraints, and the value labels of their respective value properties (operationalization, achievement, prevention, importance, emotional valence, etc.), are used to evaluate the different alternatives (options, plans, products, product/design features, etc.) agents have, and to generate cardinal and ordinal preferences for the agents over their respective alternatives. For analysts and decision makers alike, these preferences can easily be verified, validated and traced back to how much each of these alternatives contributes to each agent's strategic goals, given his constraints, priorities, emotions and attitudes.

    The Constrained Rationality framework offers a detailed process to model and analyze decision making situations, with special paths and steps to satisfy the specific needs of: 1) single-agent decision making situations, or multi-agent situations in which agents act in an individualistic manner with no regard to others' current or future options and decisions; 2) collaborative multi-agent decision making situations, where agents disclose their goals and constraints, and choose from a set of shared alternatives one that best satisfies the collective goals of the group; and 3) adversarial competitive multi-agent decision making situations (called Games in the game theory literature, or Conflicts in the broader management science literature). The framework's modelling and analysis process also covers three types of conflicts/games: a) non-cooperative games, where agents can take unilateral moves among the game's states; b) cooperative games with no coalitions allowed, where agents still act individually (not as groups/coalitions), taking both unilateral moves and cooperative single-step moves when it benefits them; and c) cooperative games with coalitions allowed, where the games include, in addition to individual agents, agents who are grouped in formal alliances/coalitions, giving themselves the ability to take multi-step group moves to advance their collective position in the game. ...
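
    A much-reduced sketch of the label-propagation idea described above. This is not the ViVD-EKM formalism: the ordinal label scale, the pessimistic combination rule and the example goal graph are simplifying assumptions, and importance, emotional valence and attitude weightings are omitted.

        # Goals carry qualitative achievement labels on an ordinal scale; an
        # alternative supplies labels for leaf goals, labels propagate up the
        # goal graph, and alternatives are ranked by the label they yield on
        # the root strategic goal.
        LABELS = ["denied", "weakly_denied", "unknown", "weakly_satisfied", "satisfied"]

        def combine(child_labels):
            # Pessimistic propagation: a parent goal is only as satisfied as its weakest child.
            return min(child_labels, key=LABELS.index) if child_labels else "unknown"

        def evaluate(goal, graph, leaf_labels):
            children = graph.get(goal, [])
            if not children:                  # leaf goal: label supplied by the alternative
                return leaf_labels.get(goal, "unknown")
            return combine([evaluate(c, graph, leaf_labels) for c in children])

        def rank_alternatives(root, graph, alternatives):
            # Higher label on the root goal => preferred alternative.
            return sorted(alternatives,
                          key=lambda a: LABELS.index(evaluate(root, graph, alternatives[a])),
                          reverse=True)

        # Tiny invented example with two alternatives and a three-goal graph.
        graph = {"win_market": ["cut_cost", "keep_quality"]}
        alternatives = {
            "outsource": {"cut_cost": "satisfied", "keep_quality": "weakly_denied"},
            "automate":  {"cut_cost": "weakly_satisfied", "keep_quality": "satisfied"},
        }
        print(rank_alternatives("win_market", graph, alternatives))  # ['automate', 'outsource']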