
    Automatic Verification of Communicative Commitments using Reduction

    Although modeling and verification of Multi-Agent Systems (MASs) have long been under study, several related challenges remain to be addressed. Several frameworks have been established for modeling and verifying MASs with regard to communicative commitments, and a large body of research has been devoted to defining the semantics of these systems. However, formal verification of these systems remains an unresolved research problem. Within this context, this paper presents CTLcom, a reformulation of CTLC, the temporal logic of commitments, that enables reasoning about commitments and their fulfillment. Moreover, the paper introduces a fully automated verification method for this logic that reduces the problem of model checking CTLcom to the problem of model checking GCTL*, a generalized version of CTL* with action formulae. In so doing, we take advantage of the CWB-NC automata-based model checker as a verification tool. Lastly, the paper presents a case study drawn from the business domain, the NetBill protocol, illustrates its implementation, and discusses the associated experimental results in order to demonstrate the efficiency and effectiveness of the proposed technique. Keywords: Multi-Agent Systems, Model Checking, Communicative Commitments, Reduction
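
    As a purely illustrative sketch of the reduction idea described above, the following Python fragment shows a syntax-directed translation that rewrites commitment operators into action-labelled formulas of a GCTL*-like target syntax. The formula constructors, action names, and rule shapes are hypothetical placeholders introduced here for illustration; they are not the paper's actual CTLcom-to-GCTL* reduction rules.

        from dataclasses import dataclass

        @dataclass
        class Commit:             # C(i, j, phi): agent i commits towards agent j that phi holds
            debtor: str
            creditor: str
            content: object

        @dataclass
        class Fulfill:            # Fu(C): commitment C has been fulfilled
            commitment: Commit

        def translate(phi):
            """Recursively rewrite commitment operators into action-labelled formulas
            of a hypothetical GCTL*-like target syntax (atoms pass through unchanged)."""
            if isinstance(phi, Commit):
                # hypothetical rule: a commitment is witnessed by a 'commit' action
                # after which the translated content must hold
                return f"<commit_{phi.debtor}_{phi.creditor}> {translate(phi.content)}"
            if isinstance(phi, Fulfill):
                c = phi.commitment
                # hypothetical rule: fulfillment is witnessed by a 'fulfill' action
                return f"<fulfill_{c.debtor}_{c.creditor}> {translate(c.content)}"
            return str(phi)

        print(translate(Fulfill(Commit("merchant", "customer", "deliver_goods"))))
        # -> <fulfill_merchant_customer> deliver_goods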

    Model Checking Trust-based Multi-Agent Systems

    Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets and information retrieval). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, they still face many challenges in the formal modeling and verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Hence, trust is considered a belief of an agent (the truster) about the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave interactions at any time. This means MASs provide no guarantee about the behavior of their agents, which makes the ability to reason about trust and to check for the existence of untrusted computations highly desirable. This thesis aims to address the problem of modeling and verifying trust in MASs at design time by (1) adopting a cognitive-independent view of trust in which trust ingredients are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates to explore its capabilities; (3) proposing a new accessibility relation needed to define the semantics of the trust modal operators, defined so that it captures the intuition of trust while being easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of memory consumption, efficiency, and scalability with regard to the number of agents considered; (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity. The approach has been applied to different application domains to evaluate its computational performance and scalability. The obtained results reveal the effectiveness of the proposed approach, making it a promising methodology in practice.
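
    The trust-set computation mentioned in point (4) rests on the standard labeling and fixpoint machinery of branching-time model checking. The fragment below is a minimal sketch of that machinery for a plain reachability property over a toy Kripke structure; the states, transitions, and property are invented for illustration and this is not the thesis's TCTL algorithm or its trust accessibility relation.

        def states_reaching(transitions, targets):
            """Backward fixpoint: all states from which some path reaches `targets` (EF targets)."""
            reach = set(targets)
            changed = True
            while changed:
                changed = False
                for s, succs in transitions.items():
                    if s not in reach and any(t in reach for t in succs):
                        reach.add(s)
                        changed = True
            return reach

        # toy Kripke structure: each state mapped to its set of successors
        transitions = {"s0": {"s1", "s2"}, "s1": {"s1"}, "s2": {"s3"}, "s3": {"s3"}}
        print(states_reaching(transitions, {"s3"}))   # {'s0', 's2', 's3'}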

    Mobile agent path planning under uncertain environment using reinforcement learning and probabilistic model checking

    The major challenge in mobile agent path planning within an uncertain environment is to determine an effective control model for reaching the target location as quickly as possible and to evaluate the control system's reliability. To address this challenge, we introduce a learning-verification integrated mobile agent path planning method that achieves both effectiveness and reliability. More specifically, we first propose a modified Q-learning algorithm (a popular reinforcement learning algorithm), called the QEA-learning algorithm, to find the best Q-table for the environment. We then determine the location transition probability matrix and establish a probability model under the assumption that the agent prefers locations with higher Q-values. Second, the learnt behaviour of the mobile agent based on the QEA-learning algorithm is formalized as a Discrete-Time Markov Chain (DTMC) model. Third, the required reliability requirements of the mobile agent control system are specified using Probabilistic Computation Tree Logic (PCTL). The DTMC model and the specified properties are then taken as the input of the probabilistic model checker PRISM for automatic verification, which is performed to evaluate and verify the control system's reliability. Finally, a case study of a mobile agent walking in a grid map is used to illustrate the proposed learning algorithm, with a special focus on the modelling approach, demonstrating how PRISM can be used to analyse and evaluate the reliability of the mobile agent control system learnt via the proposed algorithm. The results show that the path identified using the proposed integrated method yields the largest expected reward.
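
    The following fragment is a hedged sketch of the learning-then-verification pipeline described above: tabular Q-learning on a toy grid, extraction of a DTMC whose transition probabilities favour successors with higher Q-values, and a PCTL reachability property that PRISM could check. The grid size, reward scheme, softmax extraction, and property are illustrative assumptions, not the paper's exact QEA-learning algorithm or model.

        import random, math

        SIZE, GOAL, ACTIONS = 4, (3, 3), [(0, 1), (0, -1), (1, 0), (-1, 0)]
        Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}

        def step(state, action):
            """Move within the grid (clipped at the borders) and return (next_state, reward)."""
            r, c = state
            nr = min(max(r + action[0], 0), SIZE - 1)
            nc = min(max(c + action[1], 0), SIZE - 1)
            return (nr, nc), (1.0 if (nr, nc) == GOAL else -0.01)

        # standard tabular Q-learning with epsilon-greedy exploration
        for _ in range(2000):
            s = (0, 0)
            while s != GOAL:
                a = random.choice(ACTIONS) if random.random() < 0.2 else max(ACTIONS, key=lambda x: Q[s, x])
                s2, rew = step(s, a)
                Q[s, a] += 0.5 * (rew + 0.95 * max(Q[s2, b] for b in ACTIONS) - Q[s, a])
                s = s2

        def dtmc_row(s):
            """Transition distribution from s, biased toward higher-Q actions (softmax over Q-values)."""
            weights = {a: math.exp(Q[s, a]) for a in ACTIONS}
            z = sum(weights.values())
            row = {}
            for a in ACTIONS:
                s2, _ = step(s, a)
                row[s2] = row.get(s2, 0.0) + weights[a] / z
            return row

        print(dtmc_row((2, 3)))                                                # distribution over successor cells
        print('example PCTL reliability property for PRISM: P>=0.9 [ F "goal" ]')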

    Modeling and Verifying Probabilistic Social Commitments in Multi-Agent Systems

    Interaction among autonomous agents in Multi-Agent Systems (MASs) is the key aspect of solving complex problems that an individual agent cannot handle alone. In this context, social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social commitments to develop a verifiable formal semantics by which communication protocols can be specified. However, existing approaches for defining social commitments tend to assume an absolute guarantee of correctness, so that systems run in a certain manner; that is, social commitments have always been modeled under the assumption of certainty. Moreover, the widespread use of MASs increases the interest in exploring the interactions between different aspects of the participating agents, such as the interaction between agents' knowledge and social commitments in the presence of uncertainty. This leaves a gap in the agent communication literature on modeling and verifying social commitments in probabilistic settings. In this thesis, we aim to address the above-mentioned problems by presenting a practical formal framework capable of handling the problem of uncertainty in social commitments. First, we develop an approach for representing, reasoning about, and verifying probabilistic social commitments in MASs. This includes defining a new logic called the probabilistic logic of commitments (PCTLC) and a reduction-based model checking procedure for verifying the proposed logic. In the reduction technique, the problem of model checking PCTLC is transformed into the problem of model checking PCTL, so that the probabilistic symbolic model checker PRISM can be used. Formulae of PCTLC are interpreted over an extended version of the probabilistic interpreted systems formalism. Second, we extend the work on probabilistic social commitments to capture and verify the interactions between knowledge and commitments. Properties representing the interactions between the two aspects are expressed in a newly developed logic called the probabilistic logic of knowledge and commitment (PCTLkc). Third, we develop an adequate semantics for group social commitments, for the first time in the literature, and integrate it into the framework. We then introduce an improved version of PCTLkc and extend it with operators for group knowledge and group social commitments; the new refined logic is called PCTLkc+. At each of these stages, we respectively develop a new version of the probabilistic interpreted systems over which the presented logic is interpreted and introduce a new reduction-based verification technique to verify the proposed logic. To evaluate the proposed work, we implement the verification techniques on top of the PRISM model checker and apply them to several case studies. The results demonstrate the usefulness and effectiveness of the proposed framework.
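
    To make the reduction idea concrete, the sketch below shows one way a tiny commitment-formula AST could be mapped to PCTL strings that PRISM accepts, assuming commitments have been encoded as labels on an augmented model. The operator names, the encoding, and the resulting formulas are illustrative assumptions, not the thesis's actual PCTLC reduction rules.

        def to_pctl(phi):
            """Map a tiny commitment-formula AST to a PRISM-readable PCTL string."""
            kind = phi[0]
            if kind == "atom":                       # ("atom", "delivered")
                return f'"{phi[1]}"'
            if kind == "commit":                     # ("commit", debtor, creditor, content, prob)
                _, debtor, creditor, content, prob = phi
                # hypothetical encoding: the commitment holds when the content is
                # eventually satisfied with probability at least `prob` (debtor and
                # creditor would appear as labels of the augmented model)
                return f"P>={prob} [ F {to_pctl(content)} ]"
            if kind == "and":
                return f"({to_pctl(phi[1])} & {to_pctl(phi[2])})"
            raise ValueError(f"unknown operator: {kind}")

        print(to_pctl(("commit", "merchant", "customer", ("atom", "delivered"), 0.95)))
        # -> P>=0.95 [ F "delivered" ]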

    Logic-Based Specification Languages for Intelligent Software Agents

    The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies, and toolkits for modeling, verifying, validating, and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM, and Ehhf. For each executable language, the logical foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations on the advantages of using logic-based languages in MAS modeling and prototyping. Comment: 67 pages, 1 table, 1 figure. Accepted for publication in the journal Theory and Practice of Logic Programming, volume 4, Maurice Bruynooghe Editor-in-Chief.

    08361 Abstracts Collection -- Programming Multi-Agent Systems

    From 31st August to 5th September, the Dagstuhl Seminar 08361 "Programming Multi-Agent Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    An Unexpected Journey: Towards Runtime Verification of Multiagent Systems and Beyond

    The Trace Expression formalism derives from work started in 2012 and is mainly used to specify and verify interaction protocols at runtime, but other applications have been devised. More specifically, this thesis describes how to extend and apply this formalism in the engineering process of distributed artificial intelligence systems (such as multiagent systems). The thesis extends the state of the art through four different contributions: 1. Theoretical: the thesis extends the original formalism in order to also represent parametric and probabilistic specifications (parametric trace expressions and probabilistic trace expressions, respectively). 2. Algorithmic: the thesis proposes algorithms for verifying trace expressions at runtime in a decentralized way. The algorithms have been designed to be as general as possible, but their implementation and experimentation address scenarios where the modelled and observed events are communicative events (interactions) inside a multiagent system. 3. Application: the thesis analyzes the relations between runtime and static verification (e.g., model checking), proposing hybrid integrations in both directions. First, the thesis proposes a trace expression model checking approach, showing how to statically verify an LTL property on a trace expression specification. After that, the thesis presents a novel approach for supporting static verification through the addition of monitors at runtime (post-process). 4. Implementation: the thesis presents RIVERtools, a tool supporting the writing, syntactic analysis, and decentralization of trace expressions.
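
    As a minimal illustration of the runtime-verification setting, the fragment below replays a trace of communicative events against a finite-state protocol and reports the first violation. The protocol is a plain state machine standing in for a trace expression; the event names and transitions are invented for illustration and are unrelated to RIVERtools' actual specification language.

        PROTOCOL = {                       # state -> {observed event: next state}
            "start":     {"request": "requested"},
            "requested": {"offer": "offered", "refuse": "done"},
            "offered":   {"accept": "done", "reject": "done"},
            "done":      {},
        }

        def monitor(trace, protocol, state="start"):
            """Return (ok, index_of_violation) after replaying the trace on the protocol."""
            for i, event in enumerate(trace):
                if event not in protocol[state]:
                    return False, i        # event not allowed in the current state
                state = protocol[state][event]
            return True, None

        print(monitor(["request", "offer", "accept"], PROTOCOL))   # (True, None)
        print(monitor(["request", "accept"], PROTOCOL))            # (False, 1)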

    Analyzing the Interaction between Knowledge and Social Commitments in Multi-Agent Systems

    Both knowledge and social commitments in Multi-Agent Systems (MASs) have long been researched independently, especially for agent communication, and plenty of work has been carried out to define their semantics. However, in concrete applications such as business settings and web-based applications, agents should reason about their knowledge and their social commitments at the same time, particularly when they are engaged in conversations. The study of the interaction between knowledge and social commitments is still in its beginnings. Therefore, in this thesis, we aim to provide a practical and formal framework that analyzes the interaction between knowledge and communicative social commitments in MASs from the semantics, model checking, complexity, soundness, and completeness perspectives. To investigate such an interaction, we first combine CTLK (an extension of Computation Tree Logic (CTL) with a modality for reasoning about knowledge) and CTLC (an extension of CTL with modalities for reasoning about commitments and their fulfillment) into one new logic named CTLKC. In doing so, we identify some paradoxes in the new logic, showing that simply combining current versions of commitment and knowledge logics results in a logic that violates some fundamental intuitions. Consequently, we propose CTLKC+, a new consistent logic of knowledge and commitments that fixes the identified paradoxes and allows us to reason about social commitments and knowledge simultaneously in a consistent manner. Second, we use correspondence theory for modal logics to prove the soundness and completeness of CTLKC+. To do so, we develop a set of reasoning postulates in CTLKC+ and establish their correspondence with certain classes of frames. This correspondence allows us to prove that the logic generated by any subset of these postulates is sound and complete with respect to the models that are based on the corresponding frames. Third, we address the problem of model checking CTLKC+ by transforming it into the problem of model checking GCTL* (a generalized version of CTL* with action formulas) and ARCTL (the combination of CTL with action formulas), in order to use, respectively, the CWB-NC automata-based model checker and the extended NuSMV symbolic model checker. Moreover, we prove that the transformation techniques are sound. Fourth, we analyze the complexity of the proposed model checking techniques. The results of this analysis reveal that the complexity of our transformation procedures is PSPACE-complete for local concurrent programs with respect to the size of these programs and the length of the formula being checked. From the time perspective, we prove that the complexity of the proposed approaches is P-complete with regard to the size of the model and the length of the formula. Finally, we implement our model checking approaches and report some experimental results from verifying the well-known NetBill payment protocol against some desirable properties.
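
    The epistemic half of logics such as CTLKC+ is interpreted over interpreted systems, where an agent knows a fact at a global state iff the fact holds at every global state sharing that agent's local state. The fragment below is a toy sketch of that satisfaction clause; the states, valuation, and agent index are assumptions made for illustration and do not reproduce the thesis's models.

        from itertools import product

        AGENT = 0                                            # index of the agent's local-state component
        STATES = list(product(["l0", "l1"], ["e0", "e1"]))   # global states = (local state, environment)
        VAL = {s: {"paid"} if s[1] == "e0" else set() for s in STATES}   # toy valuation

        def knows(agent, prop, state, states=STATES, val=VAL):
            """K_agent prop: prop holds at every global state the agent cannot distinguish
            from `state`, i.e. every state sharing the agent's local component."""
            indistinguishable = [s for s in states if s[agent] == state[agent]]
            return all(prop in val[s] for s in indistinguishable)

        print(knows(AGENT, "paid", ("l0", "e0")))   # False: ("l0", "e1") is indistinguishable but not labelled "paid"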