    Using proof failures to help debugging MAS

    For several years, we have worked on the use of theorem-proving techniques to validate Multi-Agent Systems. In this article, we present a preliminary case study that is part of a larger work whose long-term goal is to determine how proof tools can be used to help develop error-free Multi-Agent Systems. This article describes how an error caused by a synchronisation problem between several agents can be identified by a proof failure. We also show that analysing proof failures can help to find bugs that may occur only in a very particular context, which makes them difficult to analyse with standard debugging techniques.

    1 Introduction

    This article takes place in the general context of the validation of Multi-Agent Systems, and more specifically in the tuning stage. For several years now, we have worked on the validation of MAS using proof techniques. To this end, the GDT4MAS model (Mermet and Simon, 2009) has been designed: it provides both formal tools to specify Multi-Agent Systems and a proof system that automatically generates, from a formal specification, a set of Proof Obligations that must be proven to guarantee the correctness of the system. At the same time, we have begun to study the following question: what happens if the theorem prover does not manage to carry out the proof? More precisely, is it possible to learn anything from these failures (called proof failures in the sequel) in order to debug the MAS? Answering this question in a general context is tricky. Indeed, a first remark is that a proof failure may occur in three different cases:
    • first case: a true theorem is not provable (Gödel's incompleteness theorem). Indeed, theorems generated by GDT4MAS are first-order logic formulae with arithmetic, which is precisely the context in which Gödel established that there are unprovable true theorems;
    • second case: a true theorem cannot be proven automatically because first-order logic is only semi-decidable: there is no automatic strategy that can prove all provable theorems, so an ad hoc strategy must be provided by an expert;
    • third case: an error in the MAS specification has led to the generation of a false theorem that, hence, cannot be proven.
    So, when a proof failure is considered, the first problem is to determine which case it corresponds to. It would be rather long and off-topic to give complete explanations here, but it is important to know that the proof system has been designed to generate theorems that have a good chance of being proven by the standard strategies of provers, without requiring human expertise. Moreover, unprovable true theorems generally do not correspond to real cases. Thus, in most cases, a proof failure corresponds to a mistake in the specification, and this is the context considered in the sequel. The subject of our study is then the following: if some generated proof obligations are not proven automatically, can we learn from that in order to help correct the specification of the MAS? The main idea is thus to check whether proof failures can be used to detect, and even correct, bugs in the specification of the MAS. Indeed, contrary to what is presented in (Dastani and Meyer, 2010), where the authors consider…
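
    To make the third case concrete, here is a hedged sketch of the kind of proof obligation involved; the shape and all predicate names (i, pre, post, lock) are hypothetical and not taken from the paper. Proof systems of this kind typically emit invariant-preservation obligations, and a synchronisation bug appears as a missing hypothesis that makes the implication false:

```latex
% Hypothetical invariant-preservation obligation (illustrative only).
% i: shared invariant, pre/post: pre- and postcondition of one agent's
% action, x: shared state before the action, x': state after it.
\forall x, x'.\; i(x) \land \mathit{pre}(x) \land \mathit{post}(x, x')
  \Rightarrow i(x')

% A correct specification of the synchronised system would add the
% hypothesis that no other agent touches x during the action:
\forall x, x'.\; i(x) \land \mathit{pre}(x) \land \mathit{lock}(x)
  \land \mathit{post}(x, x') \Rightarrow i(x')
% Without lock(x), an interleaved update of x by another agent yields
% a counter-model, so the first obligation is false and the proof fails.
```

    Read this way, the prover's failure is informative: the unprovable obligation points at the interleaving the specification forgot to rule out.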

    How Variables Graphs May Help to Correct Erroneous MAS Specifications

    This paper is situated in the context of multi-agent systems validation using theorem-proving techniques. It presents a preliminary case study, part of a broader investigation exploring whether such techniques can help developers detect and characterise errors in MAS specifications. Indeed, regardless of the verification technique used (model checking or theorem proving), understanding the reason for a failure in order to correct the specification is usually rather difficult. In this article, we propose a method that may help with this task. The method relies on the variables (and their dependencies) that appear in proof obligations generated by GDT4MAS, a specification and verification method dedicated to Multi-Agent Systems. Graphs built from the dependencies between variables occurring in an unproved theorem may indeed help to identify certain types of mistakes, providing a way to correct the specification.
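
    To illustrate the idea (this is a sketch, not the paper's actual algorithm; the data, function names, and example variables are all hypothetical), one can build a directed graph from the variable dependencies induced by a proof obligation, then compute the backward slice of the variables mentioned by the unproved predicate:

```python
from collections import defaultdict

# Hypothetical encoding of the dependencies a proof obligation induces:
# each pair (target, sources) says the target variable is constrained
# by an expression over the source variables.
obligation_deps = [
    ("pos_a", ["speed_a", "pos_a"]),       # pos_a' = pos_a + speed_a
    ("pos_b", ["speed_b", "pos_b"]),       # pos_b' = pos_b + speed_b
    ("no_collision", ["pos_a", "pos_b"]),  # the unproved invariant
]

def dependency_graph(deps):
    """Directed graph with an edge source -> target for each dependency."""
    graph = defaultdict(set)
    for target, sources in deps:
        for source in sources:
            graph[source].add(target)
    return graph

def suspects(graph, unproved_vars):
    """Variables that transitively feed the unproved predicate."""
    reverse = defaultdict(set)
    for source, targets in graph.items():
        for target in targets:
            reverse[target].add(source)
    seen, stack = set(), list(unproved_vars)
    while stack:
        var = stack.pop()
        if var not in seen:
            seen.add(var)
            stack.extend(reverse[var])
    return seen

graph = dependency_graph(obligation_deps)
print(sorted(suspects(graph, ["no_collision"])))
# ['no_collision', 'pos_a', 'pos_b', 'speed_a', 'speed_b']
```

    Restricting the developer's attention to this slice is the kind of error localisation the paper's variable graphs aim to support.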

    Formal Verification of Ethical Properties in Multiagent Systems

    The increasing use of autonomous artificial agents in hospitals or in transport control systems leads us to consider whether the moral rules shared by many of us are followed by these agents. This is a particularly hard problem because these moral rules are often not mutually compatible. In such cases, humans usually follow ethical rules to promote one moral rule over another. Using formal verification to ensure that an agent follows a given ethical rule could help increase confidence in artificial agents. In this article, we show how a set of formal properties can be obtained from an ethical rule that orders conflicting moral rules. If the behaviour of an agent entails these properties (which can be proven using our existing proof framework), it means that this agent follows this ethical rule.
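
    A hedged sketch of what such derived properties could look like; the notation below is assumed for illustration and is not the paper's exact formalisation:

```latex
% Hypothetical notation: moral rules m_1 and m_2 are formalised as
% properties \varphi_1 and \varphi_2 over an agent behaviour b, and the
% ethical rule states the priority m_1 \succ m_2 ("promote m_1").
% The derived property set could then contain:
b \models \varphi_1 \land \varphi_2
  \quad\text{whenever } \varphi_1 \land \varphi_2 \text{ is satisfiable;}
\\
b \models \varphi_1
  \quad\text{whenever } \varphi_1 \text{ and } \varphi_2 \text{ conflict.}
% Discharging both obligations in a proof framework would show that the
% behaviour b follows the ethical rule m_1 \succ m_2.
```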

    Using GDT4MAS as a Formal Support for Engineering Multi-Agents Systems

    This paper focuses on the engineering process of multi-agent systems. An assessment of current needs in this domain, based on an analysis of systems already developed, is performed. This assessment shows that the formal verification of MAS is one of these needs. It is then shown how the formal approach GDT4MAS answers many of the other needs. This approach is based on a formal MAS specification associated with a proof process that establishes the correctness of properties of the system. The main purpose of this paper is to show that, unlike most other formal approaches for MAS, GDT4MAS can at the same time offer the formal features that make a proof possible and contribute to more general aspects of agent-oriented software engineering, even when formal verification is not a concern.

    Spécifier des agents composés d'agents avec les GDTs

    In this article, we formalize the notion of an agent made of agents by extending the Goal Decomposition Tree formalism. This notion corresponds to a particular form of goal decomposition that introduces dedicated agents in charge of solving the sub-goals of the goal decomposed this way. We give a formal semantics to this decomposition and define several operators that provide various ways of recursively defining agents. We also present design patterns showing typical use cases of such meta-agents. Finally, to preserve the essential GDT property of allowing agent behaviours to be proven, we give proof schemas that allow the correctness of a meta-agent to be established.
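
    As a minimal sketch of the agents-made-of-agents idea (the class names, operator labels, and example tree are hypothetical, not the GDT syntax), a decomposition node can delegate each sub-goal to a dedicated sub-agent:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One node of a toy goal decomposition tree."""
    name: str
    operator: str = "LEAF"            # "LEAF", "AND", "OR", "AGENT_DECOMP"
    subgoals: list["Goal"] = field(default_factory=list)
    agent: str | None = None          # sub-agent in charge (AGENT_DECOMP)

def satisfied(goal: Goal, achieved: set[str]) -> bool:
    """A leaf holds if achieved; OR needs one sub-goal; AND and
    AGENT_DECOMP need every sub-goal (each one handled, under
    AGENT_DECOMP, by its own sub-agent)."""
    if goal.operator == "LEAF":
        return goal.name in achieved
    if goal.operator == "OR":
        return any(satisfied(g, achieved) for g in goal.subgoals)
    return all(satisfied(g, achieved) for g in goal.subgoals)

# A meta-agent whose goal is decomposed onto two sub-agents:
tree = Goal("deliver", "AGENT_DECOMP", [
    Goal("pick_up", agent="carrier_1"),
    Goal("drop_off", agent="carrier_2"),
])
print(satisfied(tree, {"pick_up", "drop_off"}))  # True
```

    The paper's proof schemas play, on the proof side, the role that satisfied plays here operationally: they let the correctness of the meta-agent be derived compositionally from proofs about each sub-agent.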

    Specifying and Verifying Holonic Agents with GDT4MAS

    This paper describes how specific holonic multi-agent systems can be specified and how their correctness can be proven with an extended version of the GDT4MAS model. This model allows the specification of multi-agent systems and the verification of their correctness with theorem-proving techniques. Introducing holonic agents into the model enhances its expressiveness. Moreover, the proof system associated with the model can easily be extended to prove the correctness of multi-agent systems using such agents. The paper first describes the initial GDT4MAS model. Then the need for some kinds of holonic agents and their proposed specification, based on specific decomposition operators, are presented, followed by a focus on how the proof system can be adapted to prove the correctness of the behaviour of these new agents. Last but not least, all these proposals are illustrated on a case study.
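
    A hedged sketch of the compositional proof idea behind such decomposition operators (the rule below illustrates the general shape only; it is not a schema taken from the paper):

```latex
% Hypothetical proof schema. A holon H decomposes its goal G into
% sub-goals G_1, ..., G_n handled by member agents a_1, ..., a_n.
% If every member establishes its sub-goal, and the decomposition
% operator guarantees that the sub-goals together entail G, then
% the holon's behaviour is correct with respect to G:
\frac{\forall k \in \{1, \dots, n\}:\; a_k \models G_k
      \qquad G_1 \land \cdots \land G_n \Rightarrow G}
     {H \models G}
```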

    GDT4MAS: an extension of the GDT model to specify and to verify MultiAgent Systems

    The Goal Decomposition Tree model was introduced in 2005 by Mermet et al. [9] to specify and verify the behaviour of an agent evolving in a dynamic environment. This model presents many interesting characteristics, such as its compositional aspect and the definition of proven proof schemas that make the proof mechanism reliable. Being interested in specifying and verifying multiagent systems, we have decided to extend the GDT model to the specification of multiagent systems, and the purpose of this article is to present this extension. After a brief description of the initial GDT model, we show how we extend it by introducing the specification of the whole MAS. We also introduce the notions of agent type and agent, and we show how external goals allow collaborative agents to be specified and the correctness of their collaboration to be proven. These notions are illustrated on a toy example from the literature.

    GDT4MAS: a formal model and language to specify and verify agent-based complex systems

    In this article, we briefly present the GDT4MAS model, a formal specification model dedicated to Multi-Agent Systems. In particular, we explain why we conceived a dedicated model and method, and which essential characteristics we associated with this method. We also explain why this model is particularly well suited to complex systems, and we present the proof process provided by the model. Finally, we illustrate the proof process of the GDT4MAS model on a toy example and show how automatic verification can be performed with a theorem prover such as PVS.