12 research outputs found

    Avoiding resource conflicts in intelligent agents

    An intelligent agent should be rational; in particular, it should at least avoid pursuing goals that are definitely conflicting. In this paper we focus on resource conflict in agents that use a plan library organised around goals. We characterise different types of resources and define resource requirements summaries. We give algorithms for deriving resource requirements, using resource requirements to detect conflict, and maintaining dynamic updates of resource requirements. We also discuss ways of resolving resource conflict. Our approach does not represent time; rather, it keeps resource summaries current. This enables an agent's decisions to be made on the basis of up-to-date information and allows us to develop efficient runtime (online) algorithms.
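    A minimal sketch of the summary idea described above, assuming a goal-plan tree where a goal is achieved by one of several alternative plans and a plan may post subgoals. All class and function names here are illustrative, not the paper's own data structures:

    ```python
    # Resource-requirement summaries over a goal-plan tree (illustrative).
    class Plan:
        def __init__(self, direct, subgoals=()):
            self.direct = direct            # resources this plan itself consumes
            self.subgoals = list(subgoals)  # subgoals it posts (all must succeed)

    class Goal:
        def __init__(self, plans):
            self.plans = list(plans)        # alternative plans (agent picks one)

    def summarise(goal):
        """Return (necessary, possible) resource consumption maps:
        necessary = best case over alternative plans, possible = worst case."""
        per_plan = []
        for plan in goal.plans:
            nec, pos = dict(plan.direct), dict(plan.direct)
            for sub in plan.subgoals:       # subgoal usage is additive
                sn, sp = summarise(sub)
                for r, v in sn.items():
                    nec[r] = nec.get(r, 0) + v
                for r, v in sp.items():
                    pos[r] = pos.get(r, 0) + v
            per_plan.append((nec, pos))
        resources = {r for nec, pos in per_plan for r in list(nec) + list(pos)}
        necessary = {r: min(nec.get(r, 0) for nec, _ in per_plan) for r in resources}
        possible = {r: max(pos.get(r, 0) for _, pos in per_plan) for r in resources}
        return necessary, possible

    def definite_conflict(goals, available):
        """Definite conflict: even best-case combined needs exceed availability."""
        total = {}
        for g in goals:
            nec, _ = summarise(g)
            for r, v in nec.items():
                total[r] = total.get(r, 0) + v
        return any(v > available.get(r, 0) for r, v in total.items())
    ```

    Because the summaries are recomputed as plans succeed or fail, conflict checks of this shape can run against current information rather than a time-indexed schedule.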

    Reasoning about preferences in BDI agent systems

    BDI agents often have to make decisions about which plan is used to achieve a goal, and in which order goals are to be achieved. In this paper we describe how to incorporate preferences (based on the LPP language) into the BDI execution model.

    Towards quantifying the completeness of BDI goals

    Often, such as in the presence of conflicts, an agent must choose between multiple intentions. The level of completeness of the intentions can be a factor in this deliberation. We sketch a pragmatic but principled mechanism for quantifying the level of completeness of goals in a Belief-Desire-Intention-like agent. Our approach leverages previous work on resource and effects summarization, but we go beyond it by accommodating both dynamic resource summaries and goal effects, while also allowing a non-binary quantification of goal completeness.
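    As a toy illustration of what a non-binary completeness score might look like, the fraction of intended effects achieved can be blended with the fraction of the summarised resource requirement already spent. The equal weighting and inputs here are assumptions, not the paper's actual measure:

    ```python
    # Toy non-binary goal-completeness score (weights are an assumption).
    def completeness(effects_achieved, effects_total, spent, required):
        effect_part = effects_achieved / effects_total if effects_total else 1.0
        resource_part = min(spent / required, 1.0) if required else 1.0
        return 0.5 * effect_part + 0.5 * resource_part
    ```

    A deliberation step could then prefer, say, the intention with the higher score when two intentions conflict, rather than treating both as equally "incomplete".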

    Detecting and avoiding interference between goals in intelligent agents

    Pro-active agents typically have multiple simultaneous goals. These may interact with each other both positively and negatively. In this paper we provide a mechanism allowing agents to detect and avoid a particular kind of negative interaction where the effects of one goal undo conditions that must be protected for successful pursuit of another goal. In order to detect such interactions we maintain summary information about the definite and potential conditional requirements and resulting effects of goals and their associated plans. We use these summaries to guard protected conditions by scheduling the execution of goals and plan steps. The algorithms and data structures developed allow agents to act rationally instead of blindly pursuing goals that will conflict.
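    The core check described above can be sketched very simply, assuming effects and protected conditions are modelled as atom-to-truth-value maps (an illustrative encoding, not the paper's):

    ```python
    # Detecting effect/protected-condition interference (illustrative).
    def interferes(effects, protected):
        """True if some effect sets an atom to the opposite of a value that
        must stay protected for another goal's successful pursuit."""
        return any(atom in protected and protected[atom] != value
                   for atom, value in effects.items())
    ```

    A scheduler in this style would delay any plan step whose summarised effects interfere with another goal's currently protected conditions, releasing it once the protection is no longer needed.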

    An AgentSpeak meta-interpreter and its applications

    A meta-interpreter for a language can provide an easy way of experimenting with modifications or extensions to a language. We give a meta-interpreter for the AgentSpeak language, prove its correctness, and show how the meta-interpreter can be used to extend the AgentSpeak language and to add features to the implementation.

    Tracking reliability and helpfulness in agent interactions

    A critical aspect of open systems such as the Internet is the interactions amongst the component agents of the system. Often this interaction is organised around social principles, in that one agent may request the help of another, and in turn may make a commitment to assist another when requested. In this paper we investigate two measures of the social responsibility of an agent, known as reliability and helpfulness. Intuitively, reliability measures how good an agent is at keeping its commitments, and helpfulness measures how willing an agent is to make a commitment when requested for help. We discuss these notions in the context of FIPA protocols. It is important to note that these measures depend only on the messages exchanged between the agents, and do not make any assumptions about the internal organisation of the agents. This means that these measures are both applicable to any variety of software agent and externally verifiable, i.e. able to be calculated by anyone with access to the messages exchanged.
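    A sketch of how such measures could be computed purely from a message log, assuming FIPA-style performatives grouped by conversation identifier (the field names and exact counting scheme are assumptions for illustration):

    ```python
    # Reliability and helpfulness from an exchanged-message log (illustrative).
    from collections import namedtuple

    Msg = namedtuple("Msg", "conv perf sender receiver")

    def helpfulness(log, agent):
        """Fraction of requests made to the agent that it agreed to."""
        asked = {m.conv for m in log if m.perf == "request" and m.receiver == agent}
        agreed = {m.conv for m in log if m.perf == "agree" and m.sender == agent}
        return len(agreed & asked) / len(asked) if asked else None

    def reliability(log, agent):
        """Fraction of the agent's commitments that it went on to fulfil."""
        agreed = {m.conv for m in log if m.perf == "agree" and m.sender == agent}
        done = {m.conv for m in log if m.perf == "inform-done" and m.sender == agent}
        return len(done & agreed) / len(agreed) if agreed else None
    ```

    Note that nothing here inspects the agents themselves: any observer with access to the message traffic could compute the same numbers, which is the external-verifiability property the abstract emphasises.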

    Reasoning about Goal-Plan Trees in Autonomous Agents: Development of Petri net and Constraint-Based Approaches with Resulting Performance Comparisons

    Multi-agent systems and autonomous agents are becoming increasingly important in current computing technology. In many applications, the agents are often asked to achieve multiple goals individually or within teams, where the distribution of these goals may be negotiated among the agents. It is expected that agents should be capable of working towards achieving all of their currently adopted goals concurrently. However, in doing so, the goals can interact both constructively and destructively with each other, so a rational agent must be able to reason about these interactions and any other constraints that may be imposed on them, such as the limited availability of resources, which could affect its ability to achieve all adopted goals when pursuing them concurrently. Currently, agent development languages require the developer to manually identify and handle these circumstances. In this thesis, we develop two approaches for reasoning about the interactions between the goals of an individual agent. The first of these employs Petri nets to represent and reason about the goals, while the second uses constraint satisfaction techniques to find efficient ways of achieving the goals. Three types of reasoning are incorporated into these models: reasoning about consumable resources where the availability of the resources is limited; the constructive interaction of goals whereby a single plan can be used to achieve multiple goals; and the interleaving of steps for achieving different goals that could cause one or more goals to fail. Experimental evaluation of the two approaches under various circumstances highlights the benefits of the reasoning developed here, whilst also identifying areas where one approach provides better results than the other. These results can then be used to suggest which underlying reasoning technique an agent should employ, based on the goals it has been assigned.

    Maintenance goals in intelligent agents

    One popular software development strategy is that of intelligent agent systems. Agents are often programmed by goals; a programmer or user defines a set of goals for an agent, and then the agent is left to determine how best to complete the goals assigned to it. Popular types of goals are achievement and maintenance goals. An achievement goal describes some particular state the agent would like to bring about, for example, being in a particular location or having a particular bank balance. Given an achievement goal, an agent will perform actions that it believes will lead it to having the achievement goal realised. In current agent systems, maintenance goals tell an agent to ensure that some condition is always kept satisfied, for example, ensuring that a vehicle stays below a certain speed, or that it has sufficient fuel in its fuel tank. Currently, maintenance goals are reactive, in that they are not considered until after the maintenance condition has been violated. Only then does the agent begin to perform actions to restore the maintenance condition. In this thesis, we discuss methods by which maintenance goals can be made proactive. Proactive maintenance goals may cause an agent to perform actions before a maintenance condition is violated, when it can predict that a maintenance condition will be violated in the future. This can be due to changes to the environment, or more interestingly, when the agent itself is performing actions that will cause the violation of the maintenance condition. Operational semantics that clearly demonstrate the functionality and operation of proactive maintenance goals are developed in this thesis. We show experimentally that agents with proactive maintenance goals reduce the amount of resources consumed in a variety of error-prone environments. This includes scenarios where the agent's beliefs are less than the true values, as well as when the beliefs are in excess of the true values.
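    The reactive-versus-proactive distinction can be sketched with the fuel example above: the agent simulates the effect of its own planned actions on the maintained quantity and recovers before the violation occurs. The function names, scenario, and numbers are illustrative, not the thesis's semantics:

    ```python
    # Proactive maintenance-goal check (illustrative fuel scenario).
    def predict_violation(level, planned_costs, minimum):
        """Index of the first planned step that would break the maintenance
        condition level >= minimum, or None if the plan is safe."""
        for i, cost in enumerate(planned_costs):
            level -= cost
            if level < minimum:
                return i
        return None

    def proactive_step(level, planned_costs, minimum, recovery):
        """Act in advance: top up the maintained quantity if a future
        violation is predicted, instead of waiting for it to occur."""
        if predict_violation(level, planned_costs, minimum) is not None:
            level += recovery            # recovery action taken early
        return level
    ```

    A purely reactive agent would run the same recovery action only after `level` had already dropped below `minimum`, by which point the resources consumed getting there may be wasted.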

    Real-time guarantees in high-level agent programming languages

    In this thesis we present a new approach to providing soft real-time guarantees for Belief-Desire-Intention (BDI) agents. We analyse real-time guarantees for BDI agents and show how these can be achieved within a generic BDI programming framework. As an illustration of our approach, we develop a new agent architecture, called AgentSpeak(RT), and its associated programming language, which allows the development of real-time BDI agents. AgentSpeak(RT) extends AgentSpeak(L) [28] intentions with deadlines, which specify the time by which the agent should respond to an event, and priorities, which specify the relative importance of responding to a particular event. The AgentSpeak(RT) interpreter commits to a priority-maximal set of intentions: a set of intentions that is maximally feasible while preferring higher-priority intentions. Real-time tasks can be freely mixed with tasks for which no deadline and/or priority has been specified, and if no deadlines and priorities are specified, the behaviour of the agent defaults to that of a non-real-time BDI agent. We perform a detailed case study of the use of AgentSpeak(RT) to demonstrate its advantages. This case study involves the development of an intelligent control system for a simple model of a nuclear power plant. We also prove some properties of the AgentSpeak(RT) architecture, such as guaranteed reactivity delay of the AgentSpeak(RT) interpreter and probabilistic guarantees of successful execution of intentions by their deadlines. We extend the AgentSpeak(RT) architecture to allow the parallel execution of intentions. We present a multitasking approach to the parallel execution of intentions in the AgentSpeak(RT) architecture. We demonstrate the advantages of parallel execution of intentions in AgentSpeak(RT) by showing how it improves the behaviour of the intelligent control system for the nuclear power plant. We prove real-time guarantees of the extended AgentSpeak(RT) architecture.
    We present a characterisation of real-time task environments for an agent, and describe how it relates to AgentSpeak(RT) execution time profiles for a plan and an action. We also show a relationship between the estimated execution time of a plan in a particular environment and the syntactic complexity of an agent program.
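    A sketch in the spirit of the priority-maximal set described above: admit intentions in decreasing priority order, keeping each only if the resulting set is still schedulable. The dictionary fields and the earliest-deadline-first feasibility test are assumptions for illustration, not AgentSpeak(RT)'s actual machinery:

    ```python
    # Priority-maximal intention selection (illustrative).
    def feasible(intentions):
        """Can every intention finish by its deadline if run earliest-
        deadline-first, one at a time?"""
        t = 0
        for it in sorted(intentions, key=lambda i: i["deadline"]):
            t += it["time"]
            if t > it["deadline"]:
                return False
        return True

    def priority_maximal(intentions):
        """Greedily keep higher-priority intentions while the set
        remains feasible; lower-priority ones fill any remaining slack."""
        chosen = []
        for it in sorted(intentions, key=lambda i: -i["priority"]):
            if feasible(chosen + [it]):
                chosen.append(it)
        return chosen
    ```

    For example, if a priority-2 intention cannot meet its deadline alongside a priority-3 one, it is dropped, yet a cheap priority-1 intention with a loose deadline may still be admitted, matching the "maximally feasible while preferring higher priority" reading above.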