59 research outputs found

    On the Modeling and Verification of Collective and Cooperative Systems

The formal description and verification of networks of cooperative and interacting agents is made difficult by the interplay of several different behavioral patterns, models of communication, and scalability issues. In this paper, we explore the functionalities and the expressiveness of a general-purpose process algebraic framework for the specification and model-checking-based analysis of collective and cooperative systems. The proposed syntactic and semantic schemes are general enough to be adapted, with small modifications, to heterogeneous application domains, like, e.g., crowdsourcing systems, trustworthy networks, and distributed ledger technologies.
    Aldini, Alessandro
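    To give a flavour of what such specifications look like, here is a toy CCS-style composition (generic process-algebra notation, assumed purely for illustration; not the framework's actual syntax): several identical agents synchronise with a coordinator on a shared action, after which the coordinator can decide.

$$\mathit{Agent} \stackrel{\mathrm{def}}{=} \overline{\mathit{vote}}.\mathit{Agent} \qquad \mathit{Coord} \stackrel{\mathrm{def}}{=} \mathit{vote}.\mathit{decide}.\mathit{Coord} \qquad \mathit{Sys} \stackrel{\mathrm{def}}{=} (\mathit{Agent} \mid \cdots \mid \mathit{Agent} \mid \mathit{Coord}) \setminus \{\mathit{vote}\}$$

    A model checker can then be asked collective questions about $\mathit{Sys}$, e.g. whether a $\mathit{decide}$ action remains reachable from every state.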

    Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World

This is the first deliverable of WP5, which covers Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World. As described in the project DOW, in this document we cover the following topics:
    • Metrics definition
    • Identification of limitations of current V&V approaches and exploration of extensions/refinements/new developments
    • Identification of security, privacy and trust models
    The focus of WP5 is on dependability with regard to the peculiar aspects of the project, i.e., the threats deriving from the on-the-fly synthesis of CONNECTors. We explore appropriate means for assessing and guaranteeing that the CONNECTed System yields acceptable levels of non-functional properties, such as reliability (e.g., the CONNECTor will ensure continued communication without interruption), security and privacy (e.g., the transactions do not disclose confidential data), and trust (e.g., Networked Systems are put in communication only with parties they trust). After defining a conceptual framework for metrics definition, we present the approaches to dependability in CONNECT, which cover: i) model-based V&V, ii) security enforcement, and iii) trust management. The approaches are centered around monitoring, to allow for on-line analysis. Monitoring is performed alongside the functionalities of the CONNECTed System and is used to detect conditions that are deemed relevant by its clients (i.e., the other CONNECT Enablers). A unified lifecycle encompassing dependability analysis, security enforcement and trust management is outlined, spanning discovery time, synthesis time and execution time.
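    The monitoring idea lends itself to a small sketch. Below is a minimal, hypothetical illustration of event-based monitoring with client notification; the Event and Monitor names, the event labels and the five-second window are all assumptions for illustration, not part of the CONNECT architecture.

```python
# Minimal, hypothetical sketch of event-based monitoring; all names
# (Event, Monitor, the event labels, the 5 s window) are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Event:
    name: str    # e.g. "msg_sent" or "msg_delivered"
    time: float  # seconds since system start

Condition = Callable[[List[Event]], bool]

class Monitor:
    """Records events alongside the running system and notifies a
    client (e.g. another Enabler) when a registered condition holds."""
    def __init__(self) -> None:
        self._trace: List[Event] = []
        self._subs: List[Tuple[Condition, Callable[[], None]]] = []

    def subscribe(self, cond: Condition, notify: Callable[[], None]) -> None:
        self._subs.append((cond, notify))

    def observe(self, event: Event) -> None:
        self._trace.append(event)
        for cond, notify in self._subs:
            if cond(self._trace):
                notify()

# Toy reliability condition: a message sent more than 5 s ago with no
# later delivery counts as an interruption of communication.
def interrupted(trace: List[Event]) -> bool:
    now = trace[-1].time
    return any(
        not any(d.name == "msg_delivered" and d.time > s.time for d in trace)
        and now - s.time > 5.0
        for s in trace if s.name == "msg_sent"
    )

m = Monitor()
m.subscribe(interrupted, lambda: print("reliability condition violated"))
m.observe(Event("msg_sent", 0.0))
m.observe(Event("msg_delivered", 1.2))  # delivered in time: no violation
```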

    Specification and automatic verification of trust-based multi-agent systems

We present a new logic-based framework for modeling and automatically verifying trust in Multi-Agent Systems (MASs). We start by refining TCTL, a temporal logic of trust that extends the Computation Tree Logic (CTL) to enable reasoning about trust with preconditions. A new vector-based version of interpreted systems is defined to capture the trust relationship between the interacting parties. We introduce a set of reasoning postulates along with formal proofs to support our logic. Moreover, we present new symbolic model checking algorithms to formally and automatically verify the system under consideration against desirable properties expressed in the proposed logic. We fully implemented the proposed algorithms in a model-checker tool called MCMAS-T, built on top of the MCMAS model checker for MASs, together with its new input language VISPL (Vector-extended ISPL). We evaluated the tool and reported experimental results using a real-life scenario from the healthcare field.
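    The vector-based interpreted systems and the exact VISPL syntax are beyond a short example, but the shape of a checkable trust property can be suggested in plain CTL-with-a-trust-modality notation (an illustrative guess, not the paper's exact syntax):

$$\mathsf{AG}\big(\mathit{request}(i,j) \rightarrow T\big(i,\, j,\, \mathsf{AF}\,\mathit{deliver}(j)\big)\big)$$

    read: in every reachable state, whenever agent $i$ issues a request to agent $j$, agent $i$ trusts $j$ to eventually deliver a response.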

    Online Markov Chain Learning for Quality of Service Engineering in Adaptive Computer Systems

Computer systems are increasingly used in applications where the consequences of failure range from financial loss to loss of human life. As a result, significant research has focused on the model-based analysis and verification of the compliance of business-critical and security-critical computer systems with their requirements. Many of the formalisms proposed by this research target the analysis of quality-of-service (QoS) computer system properties such as reliability, performance and cost. However, the effectiveness of such analysis or verification depends on the accuracy of the QoS models they rely upon. Building accurate mathematical models for critical computer systems is a great challenge. This is particularly true for systems used in applications affected by frequent changes in workload, requirements and environment. In these scenarios, QoS models become obsolete unless they are continually updated to reflect the evolving behaviour of the analysed systems. This thesis introduces new techniques for learning the parameters and the structure of discrete-time Markov chains, a class of models that is widely used to establish key reliability, performance and other QoS properties of real-world systems. The new learning techniques use as input run-time observations of system events associated with costs/rewards and transitions between the states of a model. When the model structure is known, they continually update its state transition probabilities and costs/rewards in line with the observed variations in the behaviour of the system. In scenarios where the model structure is unknown, a Markov chain is synthesised from sequences of such observations. The two categories of learning techniques underpin the operation of a new toolset for the engineering of self-adaptive service-based systems, which was developed as part of this research. The thesis introduces this software engineering toolset, and demonstrates its effectiveness in a case study that involves the development of a prototype telehealth service-based system capable of continual self-verification.
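    The known-structure case admits a compact sketch: keep per-state transition counts and re-estimate probabilities after each observed transition. The class name, API and Laplace-smoothing choice below are assumptions for illustration, not the thesis's actual toolset.

```python
# Illustrative online re-estimation of DTMC transition probabilities
# from run-time observations; class name, API and the Laplace-smoothing
# choice are assumptions, not the toolset described in the thesis.
from collections import defaultdict

class OnlineDTMC:
    def __init__(self, states, alpha: float = 1.0):
        self.states = list(states)
        self.alpha = alpha  # smoothing: unseen transitions keep mass > 0
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, src, dst) -> None:
        """Record one observed transition src -> dst."""
        self.counts[src][dst] += 1

    def prob(self, src, dst) -> float:
        """Current smoothed estimate of P(src -> dst)."""
        row = self.counts[src]
        total = sum(row.values()) + self.alpha * len(self.states)
        return (row[dst] + self.alpha) / total

dtmc = OnlineDTMC(["ok", "degraded", "failed"])
for s, t in [("ok", "ok"), ("ok", "degraded"), ("degraded", "ok")]:
    dtmc.observe(s, t)
print(dtmc.prob("ok", "degraded"))  # 0.4 after smoothing these 3 events
```

    The resulting estimates can then be fed to a probabilistic model checker each time they change, which is what makes continual QoS re-verification possible.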

    Computer Aided Verification

This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, industrial applications.

    Using probabilistic model checking to balance games

In this thesis, we consider problem areas in game development and use probabilistic model checking to address them. In particular, we address the problem of multiplayer game balancing and introduce an approach called Chained Strategy Generation (CSG). This technique uses model checking to generate synthetic player data representing a game-playing community moving between effective strategies. The results of CSG mimic the metagame, an ever-evolving state of play describing the players' collective understanding of which strategies are effective. We expand upon CSG with optimality networks, a visualisation that compares game material and can be used to show that a game exhibits certain qualities necessary for balance. We demonstrate our approach using a purpose-built mobile game (RPGLite). We initially balanced RPGLite using our technique and collected data from real-world players via the mobile app. The application and its development are described in detail. The gathered data is then used to show that the model checking did lead to a well-balanced game. We compare the analysis performed by model checking to the gameplay data and refine the baseline qualities of a balanced game that model checking can be used to guarantee. We show how the data collected via the mobile app can be used in conjunction with the prior model checking to calculate action-costs: the difference between the value of the chosen action and the best action available. We use action-costs to evaluate player skill and to consider other factors of the game.
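    The action-cost definition is simple enough to state in a few lines. The dictionary-based interface below is a hypothetical sketch, not RPGLite's actual code: given model-checked values for every action available in a state, the cost of the chosen action is its gap to the best one, and averaging costs over a player's moves gives a crude skill measure.

```python
# Hypothetical sketch of action-costs: the gap between the value of the
# chosen action and the best action available in the same state. The
# dict-based interface is illustrative, not RPGLite's actual code.
from typing import Dict

def action_cost(action_values: Dict[str, float], chosen: str) -> float:
    return max(action_values.values()) - action_values[chosen]

# A perfectly played move costs 0; a blunder costs the full gap.
moves = [
    ({"attack": 0.62, "heal": 0.55, "wait": 0.40}, "attack"),  # optimal
    ({"attack": 0.70, "heal": 0.48, "wait": 0.30}, "wait"),    # blunder
]
avg = sum(action_cost(v, c) for v, c in moves) / len(moves)
print(f"average action-cost (lower = more skilled play): {avg:.3f}")
```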

    Design of Approaches for Dependability and Initial Prototypes

The aim of CONNECT is to achieve universal interoperability between heterogeneous Networked Systems. For this, the non-functional properties required at each side of the connection to be established must be fulfilled. Under the single inclusive term "CONNECTability" we subsume properties belonging to all four non-functional concerns of interest for CONNECT, namely dependability, performance, security and trust. We model such properties in conformance with a meta-model that establishes the relevant concepts and their relations. Then, building on the conceptual models proposed in the first year in Deliverable D5.1, in this document we present the approaches developed for assuring CONNECTability both at synthesis time and at runtime. The contributions include: the Dependability&Performance analysis Enabler, for which we release a modular architecture supporting stochastic verification and state-based analysis; incremental verification and event-based monitoring for runtime analysis; a model-based approach to interoperable trust management; and the Security-by-Contract-with-Trust framework, which guarantees and enforces the expected trust levels and security policies.

    Consolidated dependability framework

The aim of CONNECT is to achieve universal interoperability between heterogeneous Networked Systems. For this, the non-functional properties required at each side of the connection to be established, which we refer to by the single inclusive term "CONNECTability", must be fulfilled. In Deliverable D5.1 we conceived the conceptual models at the foundation of CONNECTability. In D5.2 we then presented a first version of the approaches, and of their respective Enablers, that we developed for assuring CONNECTability both at synthesis time and at run-time. In this deliverable, we present the advancements and contributions achieved in the third year, which include:
    - a refinement of the CONNECT Property Meta-Model, with a preliminary implementation of a Model-to-Code translator;
    - an enhanced implementation of the Dependability&Performance analysis Enabler, supporting stochastic verification and state-based analysis, enriched with mechanisms for providing feedback to the Synthesis Enabler based on the monitor's run-time observations;
    - a fully running version of the Security Enabler, following the Security-by-Contract-with-Trust methodology, for the monitoring and enforcement of CONNECT-related security policies;
    - a complete (XML) definition of the Trust Model Description Language, together with an editor and the corresponding implementation of supporting tools to be integrated into the Trust Management Enabler.

    Checking Trustworthiness of Probabilistic Computations in a Typed Natural Deduction System

In this paper we present the probabilistic typed natural deduction calculus TPTND, designed to reason about and derive trustworthiness properties of probabilistic computational processes, like those underlying current AI applications. Derivability in TPTND is interpreted as the process of extracting n samples of possibly complex outputs with a certain frequency from a given categorical distribution. We formalize trust for such outputs as a form of hypothesis testing on the distance between this frequency and the intended probability. The main advantage of the calculus is that it renders this notion of trustworthiness checkable. We present a computational semantics for the terms over which we reason, and then the semantics of TPTND, where logical operators as well as a Trust operator are defined through introduction and elimination rules. We illustrate structural and metatheoretical properties, with particular focus on the ability to establish under which term evolutions and applications of logical rules the notion of trustworthiness is preserved.
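    A minimal numerical sketch of the trust check so described: compare the observed output frequency over n samples with the intended probability, and withhold trust when the distance exceeds a confidence bound. The Hoeffding-style bound below is an assumed concretisation for illustration, not TPTND's actual derivation rule.

```python
# Illustrative frequency-vs-probability trust check; the Hoeffding-style
# bound is an assumed concretisation, not TPTND's actual derivation rule.
import math

def trusted(successes: int, n: int, intended_p: float,
            confidence: float = 0.95) -> bool:
    """Trust the process if the observed output frequency lies within a
    two-sided Hoeffding bound of the intended probability."""
    freq = successes / n
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))
    return abs(freq - intended_p) <= eps

# A process declared to emit this output with probability 0.7:
print(trusted(68, 100, 0.7))  # True: 0.68 is within the bound
print(trusted(40, 100, 0.7))  # False: 0.40 deviates too far
```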