    Bounded Situation Calculus Action Theories

    In this paper, we investigate bounded action theories in the situation calculus. A bounded action theory is one which entails that, in every situation, the number of object tuples in the extension of any fluent is bounded by a given constant, although such extensions are in general different across the infinitely many situations. We argue that such theories are common in applications, either because facts do not persist indefinitely or because the agent eventually forgets some facts as new ones are learnt. We discuss various classes of bounded action theories. We then show that verification of a powerful first-order variant of the mu-calculus is decidable for such theories. Notably, this variant supports a controlled form of quantification across situations. We also show that, through verification, we can check whether an arbitrary action theory maintains boundedness.
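    As a rough illustration (our notation, not necessarily the paper's own), boundedness by a constant b says that in every executable situation each fluent's extension contains at most b object tuples:

        \forall s.\ \mathit{Executable}(s) \rightarrow \bigl|\{\, \vec{x} \mid F(\vec{x}, s) \,\}\bigr| \le b \qquad \text{for every fluent } F

    The extensions themselves may differ from situation to situation; only their size is uniformly bounded, which is what makes verification over the infinite tree of situations decidable.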

    Agents and Robots for Reliable Engineered Autonomy

    This book contains the contributions of the Special Issue entitled "Agents and Robots for Reliable Engineered Autonomy". The Special Issue was based on the successful first edition of the "Workshop on Agents and Robots for reliable Engineered Autonomy" (AREA 2020), co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020). The aim was to bring together researchers from the autonomous agents, software engineering, and robotics communities, as combining knowledge from these three research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems.

    Rational Agents: Prioritized Goals, Goal Dynamics, and Agent Programming Languages with Declarative Goals

    I introduce a specification language for modeling an agent's prioritized goals and their dynamics. I use the situation calculus, along with Reiter's solution to the frame problem and predicates for describing agents' knowledge, as my base formalism. I further enhance this language by introducing a new sort of infinite paths. Within this language, I discuss how to systematically specify prioritized goals and how to precisely describe the effects of actions on these goals. These actions include the adoption and dropping of goals and subgoals. In this framework, an agent's intentions are formally specified as the prioritized intersection of her goals. The "prioritized" qualifier means that the specification must respect the priority ordering of goals when choosing between two incompatible goals. I ensure that the agent's intentions are always consistent with each other and with her knowledge. I investigate two variants with different commitment strategies. Agents specified using the "optimizing" agent framework always try to optimize their intentions, while those specified in the "committed" agent framework will stick to their intentions even if opportunities to commit to higher-priority goals arise while these goals are incompatible with their current intentions. For both variants, I study properties of prioritized goals and goal change. I also give a definition of subgoals, and prove properties about the goal-subgoal relationship. As an application, I develop a model for a Simple Rational Agent Programming Language (SR-APL) with declarative goals. SR-APL is based on the "committed agent" variant of this rich theory, and combines elements from Belief-Desire-Intention (BDI) APLs and the situation calculus based ConGolog APL. Thus SR-APL supports prioritized goals and is grounded in a formal theory of goal change. It ensures that the agent's declarative goals and adopted plans are consistent with each other and with her knowledge. In doing this, I try to bridge the gap between agent theories and practical agent programming languages by providing a model and specification of an idealized BDI agent whose behavior is closer to what a rational agent does. I show that agents programmed in SR-APL satisfy some key rationality requirements.
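    To make the "prioritized intersection" concrete, here is a minimal sketch of the idea in Python, assuming goals are given in priority order and that a joint-consistency oracle is available; the function names are ours and hypothetical, not part of SR-APL:

        # Sketch of the prioritized-intersection idea described above.
        # Goals are processed in priority order; a goal is adopted only if it
        # is jointly consistent with the agent's knowledge and with the goals
        # adopted so far. `consistent` is a hypothetical oracle (in practice a
        # theorem-prover call); none of these names come from SR-APL itself.
        from typing import Callable, List, Set

        def prioritized_intersection(
            goals: List[str],                       # highest priority first
            knowledge: Set[str],                    # the agent's knowledge
            consistent: Callable[[Set[str]], bool], # joint-consistency oracle
        ) -> Set[str]:
            intentions: Set[str] = set()
            for goal in goals:
                candidate = intentions | {goal}
                # adopt the goal only if intentions stay mutually consistent
                # and consistent with what the agent knows
                if consistent(candidate | knowledge):
                    intentions.add(goal)
            return intentions

        # Toy usage: "not-X" conflicts with "X".
        def toy_consistent(s: Set[str]) -> bool:
            return not any(("not-" + f) in s for f in s)

        print(prioritized_intersection(
            ["deliver", "not-move", "move"],  # "move" loses to "not-move"
            knowledge=set(),
            consistent=toy_consistent,
        ))  # -> {'deliver', 'not-move'}

    Note how the lower-priority goal "move" is dropped rather than breaking consistency, mirroring the requirement that intentions respect the priority ordering of goals.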

    Behavioural state machines

    Two-stage agent program verification

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revamp of technologies often tagged as “classical AI” – in particular, logic-based ones – will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS perspective and the logic-based one.

    Verifying requirements for resource-bounded agents

    This thesis presents frameworks for the modelling and verification of resource-bounded reasoning agents. The resources considered include the time, memory, and communication bandwidth required by agents to achieve a goal. The scalability and expressiveness of standard model checking techniques are investigated using two typical multi-agent reasoning problems which can be easily parameterised to increase or decrease the problem size. Both a complexity analysis and experimental results suggest that reasonably sized problem instances are unlikely to be tractable for a standard model checker without steps to reduce the branching factor of the state space. We propose two approaches to address this problem: the use of abstract specifications to model the behaviour of some of the agents in the system, and the exploitation of information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulae which describe the external behaviour of the agents, allowing their temporal behaviour to be modelled compactly. Conversely, reasoning strategies allow the detailed specification of the ordering of steps in the agent's reasoning process. Both approaches have been combined in an automated verification tool, TVRBA, for rule-based multi-agent systems, which allows the designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The TVRBA tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified. The scalability of the new approach is illustrated using three case studies.
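    To give a flavour of such an abstract specification (an illustrative formula of ours, not one of the thesis's case studies), a query-answering agent could be modelled purely by its external behaviour with an LTL formula such as

        \square \bigl( \mathit{ask}(q) \rightarrow \lozenge\, \mathit{tell}(\mathit{ans}(q)) \bigr)

    read as "whenever the agent is asked q, it eventually communicates an answer", abstracting away the internal reasoning steps, and hence much of the branching, that produce the reply.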

    Model checking multi-agent systems

    A multi-agent system (MAS) is usually understood as a system composed of interacting autonomous agents. In this sense, MAS have been employed successfully as a modelling paradigm in a number of scenarios, especially in Computer Science. However, the process of modelling complex and heterogeneous systems is intrinsically prone to errors: for this reason, computer scientists are typically concerned with the issue of verifying that a system actually behaves as it is supposed to, especially when a system is complex. Techniques have been developed to perform this task: testing is the most common, but in many circumstances a formal proof of correctness is needed. Techniques for formal verification include theorem proving and model checking. Model checking techniques, in particular, have been employed successfully in the formal verification of distributed systems, including hardware components, communication protocols, and security protocols. In contrast to traditional distributed systems, formal verification techniques for MAS are still in their infancy, due to the more complex nature of agents, their autonomy, and the richer language used in the specification of properties. This thesis aims at making a contribution to the formal verification of properties of MAS via model checking. In particular, the following points are addressed:
    • Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents, together with complexity results for model checking these logics (and their symbolic representations).
    • Development of a software tool (MCMAS) that permits the specification and verification of MAS described in the formalism of interpreted systems.
    • Examples of application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results and a comparison with other available tools.
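    As a small illustration of the interpreted-systems semantics that MCMAS implements symbolically, here is an explicit-state sketch (ours, not MCMAS's OBDD-based algorithm) of evaluating the knowledge modality: agent i knows phi at a global state iff phi holds at every reachable state that i cannot distinguish from it, i.e. every state with the same local state for i:

        # Explicit-state sketch of evaluating K_i(phi) in an interpreted system.
        # Agent i knows phi at state g iff phi holds in every reachable global
        # state whose local component for i matches g's. MCMAS itself works
        # symbolically with OBDDs; this is only the naive set-based semantics.
        from typing import Callable, Hashable, Iterable

        State = Hashable

        def knows(
            states: Iterable[State],             # reachable global states
            local: Callable[[State], Hashable],  # agent i's local projection
            phi: Callable[[State], bool],        # the property being checked
            g: State,                            # the state being checked
        ) -> bool:
            return all(phi(s) for s in states if local(s) == local(g))

        # Toy usage: global states are (observable, secret) pairs; the agent
        # sees only the first component.
        states = [("a", 0), ("a", 1), ("b", 1)]
        see = lambda s: s[0]
        secret_is_1 = lambda s: s[1] == 1
        print(knows(states, see, secret_is_1, g=("a", 1)))  # False: "a" is ambiguous
        print(knows(states, see, secret_is_1, g=("b", 1)))  # True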