
    Model checking multi-agent systems

    A multi-agent system (MAS) is usually understood as a system composed of interacting autonomous agents. In this sense, MAS have been employed successfully as a modelling paradigm in a number of scenarios, especially in Computer Science. However, modelling complex and heterogeneous systems is intrinsically error-prone; for this reason, computer scientists are typically concerned with verifying that a system actually behaves as intended, especially when the system is complex. Several techniques exist for this task: testing is the most common, but in many circumstances a formal proof of correctness is needed. Formal verification techniques include theorem proving and model checking. Model checking, in particular, has been employed successfully in the formal verification of distributed systems, including hardware components, communication protocols, and security protocols. In contrast to traditional distributed systems, formal verification techniques for MAS are still in their infancy, owing to the more complex nature of agents, their autonomy, and the richer languages used to specify properties. This thesis aims to contribute to the formal verification of properties of MAS via model checking. In particular, the following points are addressed:
    • Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents, together with complexity results for model checking these logics (and their symbolic representations). A toy illustration of the underlying computation follows this abstract.
    • Development of a software tool (MCMAS) that permits the specification and verification of MAS described in the formalism of interpreted systems.
    • Examples of the application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results and a comparison with other available tools.
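    As a rough illustration of what such a checker computes, the sketch below evaluates a reachability property (EF) and an epistemic property (K) on a toy two-agent interpreted system via explicit fixpoint computation over state sets. This is a minimal sketch only: MCMAS represents these sets symbolically with OBDDs rather than enumerating states, and every name and transition here (the `wait`/`sent` protocol, the `sent` proposition) is invented for the example.

```python
# Toy explicit-state check of EF and K on a tiny two-agent interpreted
# system. Real OBDD-based tools manipulate these sets symbolically.
from itertools import product

# Global states: (local state of agent 1, local state of agent 2).
states = set(product(["wait", "sent"], ["empty", "received"]))

def successors(s):
    """Transitions: agent 1 may send; once sent, agent 2 may receive."""
    l1, l2 = s
    succ = {s}                       # stuttering is always allowed
    if l1 == "wait":
        succ.add(("sent", l2))
    if l1 == "sent":
        succ.add((l1, "received"))
    return succ

def indistinguishable_2(s, t):
    """Epistemic accessibility for agent 2: same local state."""
    return s[1] == t[1]

# Atomic proposition: "the message has been sent".
sent = {s for s in states if s[0] == "sent"}

def ef(target):
    """EF(target): least fixpoint Z = target ∪ pre(Z)."""
    z = set(target)
    while True:
        pre = {s for s in states if successors(s) & z}
        if pre <= z:
            return z
        z |= pre

def k2(phi):
    """K_2(phi): agent 2 knows phi in s iff phi holds in every state
    agent 2 cannot distinguish from s."""
    return {s for s in states
            if all(t in phi for t in states if indistinguishable_2(s, t))}

print("EF sent holds in:", sorted(ef(sent)))
print("K_2 sent holds in:", sorted(k2(sent)))
```

    Running it shows EF sent holding everywhere (sending is always reachable) while K_2 sent holds nowhere until agent 2's local state rules out the alternative, which is exactly the kind of gap between temporal and epistemic properties the thesis addresses.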

    Safety monitoring for autonomous systems: interactive elicitation of safety rules

    An active safety monitor is an independent mechanism responsible for keeping the system in a safe state should a hazardous situation occur. It has observations (sensors) and interventions (actuators). Safety rules are synthesized from the results of a hazard analysis, using the SMOF (Safety MOnitoring Framework) tool, in order to identify which interventions to apply when an observation reaches a dangerous value. The safety rules enforce a safety property (the system remains in a safe state) together with permissiveness properties, which ensure that the system can still perform its tasks. This work focuses on solving cases where the synthesis fails to return a set of safe and permissive rules. To assist the user in these cases, three new features are introduced and developed. The first diagnoses why a set of rules fails to fulfil a permissiveness requirement. The second suggests candidate safety interventions to inject into the synthesis process. The third allows the permissiveness requirements to be tuned to a set of essential functionalities that must be preserved. The use of these features is discussed and illustrated on two industrial case studies: a manufacturing robot from KUKA and a maintenance robot from Sterela. A simplified sketch of the synthesis problem follows this abstract.
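    The sketch below gives a hypothetical, brute-force rendering of the synthesis problem: observation states are classified as catastrophic or warning, interventions veto transitions, and a rule assignment is kept only if the catastrophic state is unreachable (safety) while every other state stays reachable (permissiveness). The state space, the `brake` and `lock_arm` interventions, and the enumeration strategy are all invented for illustration; the actual SMOF tool synthesizes rules from a formal model derived from the hazard analysis rather than by naive enumeration.

```python
# Hypothetical toy version of safety-rule synthesis in the spirit of SMOF.
from itertools import product

SPEEDS, ARMS = ["low", "high"], ["folded", "extended"]
states = list(product(SPEEDS, ARMS))     # observations: (speed, arm)
cata = ("high", "extended")              # catastrophic: fast with arm out
init = ("low", "folded")

def moves(s):
    """One observation variable changes per step (continuity assumption)."""
    speed, arm = s
    yield ("high" if speed == "low" else "low", arm)
    yield (speed, "extended" if arm == "folded" else "folded")

def allowed(target, interventions):
    """An intervention vetoes transitions into states it rules out."""
    if "brake" in interventions and target[0] == "high":
        return False                     # brake vetoes high-speed successors
    if "lock_arm" in interventions and target[1] == "extended":
        return False                     # lock_arm vetoes extended-arm successors
    return True

def reachable(rules):
    """States reachable from init under the monitored transition relation."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in moves(s):
            if allowed(t, rules.get(s, set())) and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Warning states: one step away from the catastrophic state.
warning = [s for s in states if s != cata and cata in moves(s)]

# Try every intervention set per warning state; keep the rules that make
# `cata` unreachable (safety) while leaving all other states reachable
# (permissiveness).
options = [set(), {"brake"}, {"lock_arm"}, {"brake", "lock_arm"}]
for choice in product(options, repeat=len(warning)):
    rules = dict(zip(warning, choice))
    r = reachable(rules)
    if cata not in r and all(s in r for s in states if s != cata):
        print({s: sorted(i) for s, i in rules.items()})
```

    Several rule sets pass both checks in this toy model; the diagnosis and tuning features described in the abstract come into play precisely when no assignment does, which this enumeration would report as empty output.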

    On Computer-Aided Methods for Modeling and Analysis of Organizations

    Treur, J. [Promotor]

    Verification-driven design and programming of autonomous robots
