
    Logic-Based Specification Languages for Intelligent Software Agents

    The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logic foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping. Comment: 67 pages, 1 table, 1 figure. Accepted for publication by the Journal "Theory and Practice of Logic Programming", volume 4, Maurice Bruynooghe, Editor-in-Chief.

    Application of Hybrid Agents to Smart Energy Management of a Prosumer Node

    We outline a solution to the problem of intelligent control of energy consumption in a smart building system by a prosumer planning agent that acts on the basis of knowledge of the system state and of a prediction of future states. Predictions are obtained by using a synthetic model of the system, obtained with a machine learning approach. We present case-study simulations implementing different instantiations of agents that control an air conditioner according to temperature set points dynamically chosen by the user. The agents are able to save energy while trying to keep the indoor temperature within a given comfort interval.
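    The control loop described in this abstract can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the linear model stands in for the learned synthetic model, and all names, constants and dynamics are assumptions.

    ```python
    # Hypothetical sketch of the planning loop: a learned one-step model
    # predicts the next indoor temperature, and the agent switches the air
    # conditioner on only when the prediction would leave the user's
    # comfort interval. The linear dynamics stand in for the ML model.

    def predict_next_temp(temp, outdoor, ac_on):
        # Stand-in for the machine-learned system model (assumed, simplistic).
        drift = 0.1 * (outdoor - temp)       # heat exchange with outside
        cooling = -1.5 if ac_on else 0.0     # effect of the air conditioner
        return temp + drift + cooling

    def plan_ac(temp, outdoor, setpoint, comfort=1.0):
        """Turn the AC on only if staying off would exceed the comfort band."""
        predicted_off = predict_next_temp(temp, outdoor, ac_on=False)
        return predicted_off > setpoint + comfort

    def simulate(temp, outdoor, setpoint, steps=20):
        history = []
        for _ in range(steps):
            ac_on = plan_ac(temp, outdoor, setpoint)
            temp = predict_next_temp(temp, outdoor, ac_on)
            history.append((round(temp, 2), ac_on))
        return history

    trace = simulate(temp=28.0, outdoor=32.0, setpoint=24.0)
    ```

    The design choice illustrated is that the agent consults the predictive model before acting, rather than reacting to the current reading alone, which is how prediction enables energy saving: the AC stays off whenever the model says comfort will hold anyway.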

    Rational Agents: Prioritized Goals, Goal Dynamics, and Agent Programming Languages with Declarative Goals

    I introduce a specification language for modeling an agent's prioritized goals and their dynamics. I use the situation calculus, along with Reiter's solution to the frame problem and predicates for describing agents' knowledge, as my base formalism. I further enhance this language by introducing a new sort of infinite paths. Within this language, I discuss how to systematically specify prioritized goals and how to precisely describe the effects of actions on these goals. These actions include the adoption and dropping of goals and subgoals. In this framework, an agent's intentions are formally specified as the prioritized intersection of her goals. The "prioritized" qualifier means that the specification must respect the priority ordering of goals when choosing between two incompatible goals. I ensure that the agent's intentions are always consistent with each other and with her knowledge. I investigate two variants with different commitment strategies. Agents specified using the "optimizing" agent framework always try to optimize their intentions, while those specified in the "committed" agent framework will stick to their intentions even if opportunities to commit to higher-priority goals arise when these goals are incompatible with their current intentions. For these frameworks, I study properties of prioritized goals and goal change. I also give a definition of subgoals, and prove properties about the goal-subgoal relationship. As an application, I develop a model for a Simple Rational Agent Programming Language (SR-APL) with declarative goals. SR-APL is based on the "committed agent" variant of this rich theory, and combines elements from Belief-Desire-Intention (BDI) APLs and the situation-calculus-based ConGolog APL. Thus SR-APL supports prioritized goals and is grounded on a formal theory of goal change. It ensures that the agent's declarative goals and adopted plans are consistent with each other and with her knowledge.
    In doing this, I try to bridge the gap between agent theories and practical agent programming languages by providing a model and specification of an idealized BDI agent whose behavior is closer to what a rational agent does. I show that agents programmed in SR-APL satisfy some key rationality requirements.
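    The "prioritized intersection" idea above can be illustrated with a toy sketch. This is not SR-APL or the situation calculus formalism, only an assumed propositional simplification: goals are sets of literals, a goal is adopted only if it is consistent with every higher-priority goal already adopted, and inconsistency means one goal contains the negation of a literal in another.

    ```python
    # Toy model of prioritized intersection (an assumption, not SR-APL):
    # scan goals in priority order and keep each one only if it does not
    # contradict a goal already adopted. Literals are strings; "-p" is the
    # negation of "p".

    def negate(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def consistent(goal, adopted):
        # A goal conflicts with the adopted set if some adopted goal
        # contains the negation of one of its literals.
        return all(negate(lit) not in other for other in adopted for lit in goal)

    def intentions(prioritized_goals):
        """Return the maximal consistent subset, respecting priority order."""
        adopted = []
        for goal in prioritized_goals:
            if consistent(goal, adopted):
                adopted.append(goal)
        return adopted

    goals = [
        {"at_work"},           # highest priority
        {"-at_work", "rest"},  # contradicts the first goal: dropped
        {"coffee"},            # compatible with everything kept: adopted
    ]
    ```

    The key property this mirrors from the abstract is that lower-priority goals never displace higher-priority ones: consistency is always checked against what was adopted earlier in the ordering.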

    A SURVEY OF THE PROPERTIES OF AGENTS

    In the past decade, agent systems have come to be considered one of the major fields of study in Artificial Intelligence (AI). Many different definitions of agents have been presented, and several different approaches to describing agency can be distinguished. While some authors have tried to define what an agent really is, others have tried to identify agents by means of properties which they should possess. Most authors agree on these properties (at least a basic set of them) as intrinsic to agents. Since definitions of agents are not consistent, we give an overview and list the properties intrinsic to an agent. Many different adjectives have been attached to the term agent as well, and many different kinds of agents and different architectures have emerged too. The aim of this paper is to give an overview of what has been going on in the field, taking the main streams and projects into consideration. We also present some guidelines important when modelling agent systems and touch on security issues. Some existing problems which restrict the wider usage of agents are mentioned as well.

    Practical Verification of Decision-Making in Agent-Based Autonomous Systems

    We present a verification methodology for analysing the decision-making component in agent-based hybrid systems. Traditionally, hybrid automata have been used both to implement and to verify such systems, but hybrid-automata-based modelling, programming and verification techniques scale poorly as the complexity of discrete decision-making increases, making them unattractive in situations where complex logical reasoning is required. In the programming of complex systems it has, therefore, become common to separate logical decision-making into a distinct, discrete component. However, verification techniques have failed to keep pace with this development. We are exploring agent-based logical components and have developed a model-checking technique for such components, which can then be composed with a separate analysis of the continuous part of the hybrid system. Among other things, this allows program model checkers to be used to verify the actual implementation of the decision-making in hybrid autonomous systems.

    Mutation for Multi-Agent Systems

    Although much progress has been made in engineering multi-agent systems (MAS), many issues remain to be resolved. One issue is that there is a lack of techniques that can adequately evaluate the effectiveness (fault detection ability) of tests or testing techniques for MAS. Another is that there are no systematic approaches to evaluating the impact of possible semantic changes (changes in the interpretation of agent programs) on agents' behaviour and performance. This thesis introduces syntactic and semantic mutation to address these two issues. Syntactic mutation is a technique that systematically generates variants ("syntactic mutants") of a description (usually a program) following a set of rules ("syntactic mutation operators"). Each mutant is expected to simulate a real description fault; the effectiveness of a test set can therefore be evaluated by checking whether it can detect each simulated fault, in other words, distinguish the original description from each mutant. Although syntactic mutation is widely considered very effective, only limited work has been done to introduce it into MAS. This thesis extends syntactic mutation to MAS by proposing a set of syntactic mutation operators for the Jason agent language and showing that they can be used to generate real faults in Jason agent programs. By contrast, semantic mutation systematically generates variant interpretations ("semantic mutants") of a description following a set of rules ("semantic mutation operators"). Semantic mutation has two uses: to evaluate the effectiveness of a test set by simulating faults caused by misunderstandings of how the description is interpreted, and to evaluate the impact of possible semantic changes on agents' behaviour and performance. This thesis, for the first time, proposes semantic mutation for MAS, more specifically for three logic-based agent languages, namely Jason, GOAL and 2APL. It proposes semantic mutation operators for these languages and shows that the operators for Jason can represent real misunderstandings and are practically useful.
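    To make the syntactic-mutation idea concrete, here is a hypothetical operator in the spirit of those described, applied to a Jason-style plan as text. The operator name and rule are assumptions of this sketch, not taken from the thesis: it replaces an achievement-goal trigger "+!" with a belief-addition trigger "+", simulating a plausible programmer slip.

    ```python
    # Illustrative syntactic mutation operator (assumed, not from the thesis):
    # for each occurrence of the achievement-goal trigger "+!" in a Jason
    # plan, emit one mutant in which it is weakened to the belief-addition
    # trigger "+". Each mutant simulates one fault.
    import re

    def mutate_trigger(plan_src):
        """Generate one mutant per occurrence of the '+!' trigger."""
        mutants = []
        for m in re.finditer(r"\+!", plan_src):
            mutants.append(plan_src[:m.start()] + "+" + plan_src[m.end():])
        return mutants

    # A Jason plan: triggering event, context condition, body.
    plan = "+!clean(Room) : dirty(Room) <- go(Room); vacuum(Room)."
    ```

    A test set is then judged effective if, for every such mutant, some test produces behaviour that differs from the original program's.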

    Coordination Of Hierarchical Command And Control Services

    The purpose of this program is to show that emerging information technologies can significantly improve key areas of tactical operations, resulting in the transition of software developed under the ATO to existing battlefield systems. One such key area is Information Dissemination and Management (ID&M). The key software to be developed under the ID&M portion requires a collection of agent-based software services that will collaborate during tactical mission planning and execution.

    Learning plan selection for BDI agent systems

    Belief-Desire-Intention (BDI) is a popular agent-oriented programming approach for developing robust computer programs that operate in dynamic environments. These programs contain pre-programmed abstract procedures that capture domain know-how, and work by dynamically applying these procedures, or plans, to the different situations that they encounter. Agent programs built using the BDI paradigm, however, do not traditionally learn, a capability that becomes important if a deployed agent is to adapt to changing situations over time. Our vision is to allow the programming of agent systems that are capable of adjusting to ongoing changes in the environment's dynamics in a robust and effective manner. To this end, in this thesis we develop a framework that can be used by programmers to build adaptable BDI agents that improve plan selection over time by learning from their experiences. These learning agents can dynamically adjust their choice of which plan to select in which situation, based on a growing understanding of what works and a sense of how reliable this understanding is. This reliability is given by a perceived measure of confidence, which tries to capture how well-informed the agent's most recent decisions were and how well it knows the most recent situations that it encountered. An important focus of this work is to make this approach practical. Our framework allows learning to be integrated into BDI programs of reasonable complexity, including those that use recursion and failure-recovery mechanisms. We show the usability of the framework in two complete programs: an implementation of the Towers of Hanoi game where recursive solutions must be learnt, and a modular battery system controller where the environment dynamics change in ways that may require many learning and relearning phases.
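    The interplay of learned success rates and confidence described above can be sketched as follows. This is an assumed minimal illustration, not the thesis framework: each applicable plan keeps a running success rate, confidence grows with the number of trials, and low confidence makes the agent explore rather than exploit its estimates.

    ```python
    # Minimal sketch (assumed names, not the thesis framework) of
    # confidence-aware plan selection for a BDI agent: exploit the learned
    # success rates only in proportion to how confident the agent is in them.
    import random

    class PlanStats:
        def __init__(self):
            self.tries = 0
            self.successes = 0

        def success_rate(self):
            # Unknown plans get a neutral prior of 0.5.
            return self.successes / self.tries if self.tries else 0.5

        def confidence(self, k=5):
            # Grows towards 1 as the plan accumulates experience.
            return self.tries / (self.tries + k)

    def select_plan(stats, rng=random.random):
        # With low average confidence, explore uniformly among applicable
        # plans; otherwise pick the plan with the best learned success rate.
        avg_conf = sum(s.confidence() for s in stats.values()) / len(stats)
        if rng() > avg_conf:
            return random.choice(list(stats))
        return max(stats, key=lambda p: stats[p].success_rate())
    ```

    The point the sketch makes is the one in the abstract: selection is driven not only by what the agent believes works, but by how reliable that belief is, so an agent facing a changed environment falls back to exploration until its confidence recovers.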