8 research outputs found

    Verification of Open Interactive Markov Chains

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience (owed to compositionality properties) with effective verification algorithms and tools (owed to Markov properties). Thus far, however, IMC verification has not exploited compositionality, but has been restricted to closed systems. This paper discusses the evaluation of IMC in an open, and thus compositional, interpretation. To this end we embed the IMC into a game that is played with the environment. We devise algorithms that enable us to derive bounds on reachability probabilities that are assured to hold in any composition context.
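
    The model itself is easy to picture: a finite state space with both action-labeled (interactive) and rate-labeled (Markovian) transitions, where some actions remain open for synchronisation with the environment. The following is a minimal Python sketch of that structure under assumed names; it is an illustration only and is not taken from the paper.

    ```python
    # Minimal sketch of an interactive Markov chain (IMC); all names are
    # illustrative assumptions, not taken from the paper.
    from dataclasses import dataclass, field
    from typing import Dict, List, Set, Tuple

    @dataclass
    class IMC:
        states: Set[str]
        # interactive transitions: state -> action -> set of successor states
        interactive: Dict[str, Dict[str, Set[str]]] = field(default_factory=dict)
        # Markovian transitions: state -> list of (rate, successor) pairs
        markovian: Dict[str, List[Tuple[float, str]]] = field(default_factory=dict)

        def exit_rate(self, s: str) -> float:
            """Total exit rate of the Markovian transitions leaving s."""
            return sum(rate for rate, _ in self.markovian.get(s, []))

        def open_actions(self, s: str, env_alphabet: Set[str]) -> Set[str]:
            """Actions of s that must synchronise with the (unknown) environment;
            these are the moves the environment controls in an open interpretation."""
            return {a for a in self.interactive.get(s, {}) if a in env_alphabet}

    # Tiny example: a job is dispatched (action 'go') and then finishes after an
    # exponentially distributed delay with rate 2.0.
    m = IMC(
        states={"idle", "busy", "done"},
        interactive={"idle": {"go": {"busy"}}},
        markovian={"busy": [(2.0, "done")]},
    )
    assert m.exit_rate("busy") == 2.0
    assert m.open_actions("idle", {"go"}) == {"go"}
    ```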

    Verification and Control of Turn-Based Probabilistic Real-Time Games

    Quantitative verification techniques have been developed for the formal analysis of a variety of probabilistic models, such as Markov chains, Markov decision processes and their variants. They can be used to produce guarantees on quantitative aspects of system behaviour, for example safety, reliability and performance, or to help synthesise controllers that ensure such guarantees are met. We propose the model of turn-based probabilistic timed multi-player games, which incorporates probabilistic choice, real-time clocks and nondeterministic behaviour across multiple players. Building on the digital clocks approach for the simpler model of probabilistic timed automata, we show how to compute the key measures that underlie quantitative verification, namely the probability and the expected cumulative price to reach a target. We illustrate this on case studies from computer security and task scheduling.
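
    Once a digital-clocks translation has produced a finite turn-based probabilistic game, the reachability probability can be computed by a standard fixed-point iteration. The Python sketch below shows such a value iteration for a generic finite game under assumed data structures; it is a textbook-style illustration of the underlying computation, not the paper's algorithm.

    ```python
    # Value iteration for sup_P1 inf_P2 Pr(reach target) in a finite turn-based
    # stochastic game. Data layout and names are assumptions for illustration.
    def reach_probability(states, owner, actions, delta, target,
                          maximiser="P1", iterations=1000, eps=1e-9):
        """states: iterable of states; owner[s] in {"P1", "P2"};
        actions[s]: list of actions of s; delta[(s, a)]: list of (prob, successor);
        target: set of goal states."""
        value = {s: (1.0 if s in target else 0.0) for s in states}
        for _ in range(iterations):
            new = {}
            for s in states:
                if s in target:
                    new[s] = 1.0
                    continue
                outcomes = [sum(p * value[t] for p, t in delta[(s, a)])
                            for a in actions[s]]
                if not outcomes:               # deadlocked state never reaches the target
                    new[s] = 0.0
                elif owner[s] == maximiser:
                    new[s] = max(outcomes)
                else:
                    new[s] = min(outcomes)
            if max(abs(new[s] - value[s]) for s in states) < eps:
                return new
            value = new
        return value

    # Tiny example: P1 either flips a biased coin ('flip') that reaches the goal
    # with probability 0.7, or passes control to P2, who can only 'quit'.
    states = {"s0", "s1", "goal", "sink"}
    owner = {"s0": "P1", "s1": "P2", "goal": "P1", "sink": "P1"}
    actions = {"s0": ["flip", "pass"], "s1": ["quit"], "goal": [], "sink": []}
    delta = {("s0", "flip"): [(0.7, "goal"), (0.3, "sink")],
             ("s0", "pass"): [(1.0, "s1")],
             ("s1", "quit"): [(1.0, "sink")]}
    vals = reach_probability(states, owner, actions, delta, {"goal"})
    # vals["s0"] == 0.7: flipping dominates handing control to the opponent.
    ```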

    Compositional Verification and Optimization of Interactive Markov Chains

    Interactive Markov chains (IMC) are compositional behavioural models extending labelled transition systems and continuous-time Markov chains. We provide a framework and algorithms for the compositional verification and optimization of IMC with respect to time-bounded properties. Firstly, we give a specification formalism for IMC. Secondly, given a time-bounded property, an IMC component, and the assumption that its unknown environment satisfies a given specification, we synthesize a scheduler for the component optimizing the probability that the property is satisfied in any such environment.
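
    In the simplest memoryless setting, a scheduler for a finite model can be read off a computed value function by picking an optimal action in every controlled state. The sketch below illustrates only this final extraction step under assumed data structures; the paper's synthesis handles time-bounded properties and unknown environments and is considerably more involved.

    ```python
    # Extracting a memoryless scheduler from a value function: in each state it
    # controls, the scheduler picks an action whose expected value is maximal.
    # Data layout and names are assumptions for illustration.
    def extract_scheduler(states, actions, delta, value, controlled):
        """controlled: set of states the component's scheduler resolves.
        Returns a map state -> chosen action (None for uncontrolled states)."""
        scheduler = {}
        for s in states:
            if s not in controlled or not actions[s]:
                scheduler[s] = None
                continue
            scheduler[s] = max(
                actions[s],
                key=lambda a: sum(p * value[t] for p, t in delta[(s, a)]),
            )
        return scheduler

    # Reusing the shapes of the game example above:
    value = {"s0": 0.7, "s1": 0.0, "goal": 1.0, "sink": 0.0}
    actions = {"s0": ["flip", "pass"], "s1": ["quit"], "goal": [], "sink": []}
    delta = {("s0", "flip"): [(0.7, "goal"), (0.3, "sink")],
             ("s0", "pass"): [(1.0, "s1")],
             ("s1", "quit"): [(1.0, "sink")]}
    chosen = extract_scheduler(["s0", "s1", "goal", "sink"], actions, delta,
                               value, controlled={"s0"})
    # chosen == {"s0": "flip", "s1": None, "goal": None, "sink": None}
    ```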

    A tutorial on interactive Markov chains

    Interactive Markov chains (IMCs) constitute a powerful stochastic model that extends both continuous-time Markov chains and labelled transition systems. IMCs enable a wide range of modelling and analysis techniques and serve as a semantic model for many industrial and scientific formalisms, such as AADL, GSPNs and many more. Applications cover various engineering contexts ranging from industrial system-on-chip manufacturing to satellite designs. We present a survey of the state-of-the-art in modelling and analysis of IMCs. We cover a set of techniques that can be utilised for compositional modelling, state space generation and reduction, and model checking. The significance of the presented material and corresponding tools is highlighted through multiple case studies.

    Choice and chance: model-based testing of stochastic behaviour

    Probability plays an important role in many computer applications. A vast number of algorithms, protocols and computation methods use randomisation to achieve their goals. A crucial question then becomes whether such probabilistic systems work as intended. To investigate this, such systems are often subjected to a large number of well-designed test cases that compare observed behaviour to a requirements specification. Model-based testing is an innovative testing technique rooted in formal methods that aims at automating this labour-intensive and often error-prone manual task. By providing faster and more thorough testing at lower cost, it has rapidly gained popularity in industry and academia alike. However, classic model-based testing methods are insufficient when dealing with inherently stochastic systems. This thesis introduces a rigorous model-based testing framework that is capable of automatically testing such systems. The presented methods can judge functional correctness, discrete probability choices, and hard and soft real-time constraints. The framework is constructed in a clear step-by-step approach. First, the model-based testing landscape is laid out and related work is discussed. Next, we instantiate a model-based testing framework to highlight the purpose of its individual theoretical components, e.g. a conformance relation, test cases, and practical test generation algorithms. This framework is then conservatively extended by introducing discrete probability choices into the specification language. A last step further extends this probabilistic framework by adding hard and soft real-time constraints. Classical functional correctness verdicts are thus extended with goodness-of-fit methods known from statistics. Proofs of the framework’s correctness are presented before its capabilities are exemplified on smaller-scale case studies known from the literature. The framework reconciles nondeterministic and probabilistic choices in a fully fledged way via the use of schedulers. Schedulers then become a subject worthy of study in their own right. This is done in the second part of this thesis; we introduce a most natural equivalence relation based on schedulers for Markov automata, and compare its distinguishing power to notions of trace distributions and bisimulation relations. Lastly, the power of different scheduler classes of stochastic automata is investigated. We compare reachability probabilities of different schedulers by altering the information available to them. A hierarchy of scheduler classes is established, with the intent of reducing the complexity of related problems by obtaining near-optimal results for smaller scheduler classes.
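
    The statistical side of such verdicts can be illustrated with a generic goodness-of-fit check: observed outputs of the implementation are compared against the discrete distribution prescribed by the specification. The Python sketch below uses a chi-square test via scipy (assumed to be available) and only illustrates the general idea; it is not the thesis's conformance relation or test-generation algorithm, and all names are assumptions.

    ```python
    # Generic statistical verdict: compare observed output frequencies against a
    # specified discrete distribution with a chi-square goodness-of-fit test.
    from collections import Counter
    from scipy.stats import chisquare

    def probabilistic_verdict(observations, spec_distribution, alpha=0.05):
        """observations: list of observed output labels;
        spec_distribution: dict label -> specified probability (sums to 1);
        alpha: significance level. Returns 'pass' or 'fail'."""
        counts = Counter(observations)
        if any(lbl not in spec_distribution for lbl in counts):
            return "fail"   # unexpected output: a functional, not statistical, failure
        n = len(observations)
        labels = sorted(spec_distribution)
        observed = [counts.get(lbl, 0) for lbl in labels]
        expected = [spec_distribution[lbl] * n for lbl in labels]
        _, p_value = chisquare(observed, f_exp=expected)
        # Fail only if the observed frequencies deviate significantly from the spec.
        return "pass" if p_value >= alpha else "fail"

    # Example: a specification prescribing a fair coin, tested with 100 samples.
    verdict = probabilistic_verdict(["heads"] * 52 + ["tails"] * 48,
                                    {"heads": 0.5, "tails": 0.5})
    ```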

    Finite horizon analysis of Markov automata

    Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete- and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they can be equipped with reward and resource structures in order to analyse quantitative aspects of systems, like performance metrics, energy consumption, and repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety-critical systems; the Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study the mathematical foundations of Markov automata, since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses, in particular, optimal time-bounded reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem efficiently. We report on an implementation of our approach in a supporting tool and demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.
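
    The kind of fixed-point computation such analyses build on can be sketched for the simplest special case, maximal time-bounded reachability: the horizon is split into small steps, a Markovian state fires within a step with probability 1 - exp(-E(s)·delta), and probabilistic states resolve their nondeterminism immediately. The Python sketch below is a textbook-style discretisation under assumed data structures, not the thesis's algorithm or its error analysis.

    ```python
    # Discretisation sketch for maximal time-bounded reachability in a Markov
    # automaton. Data layout and names are assumptions for illustration.
    import math

    def max_time_bounded_reachability(markovian, probabilistic, goal,
                                      horizon, steps):
        """markovian: state -> (exit_rate, [(prob, successor), ...]);
        probabilistic: state -> {action: [(prob, successor), ...]};
        goal: set of target states; horizon: time bound; steps: number of
        discretisation steps (the approximation error shrinks as steps grows)."""
        delta = horizon / steps
        states = set(markovian) | set(probabilistic) | set(goal)
        for _, branches in markovian.values():
            states |= {t for _, t in branches}
        for dists in probabilistic.values():
            for dist in dists.values():
                states |= {t for _, t in dist}
        # value[s] approximates the maximal probability of reaching the goal
        # within the remaining time budget, starting from a zero budget.
        value = {s: (1.0 if s in goal else 0.0) for s in states}
        for _ in range(steps):
            new = {}
            for s in states:
                if s in goal:
                    new[s] = 1.0
                elif s in markovian:
                    rate, branches = markovian[s]
                    jump = 1.0 - math.exp(-rate * delta)   # P(leave within delta)
                    after = sum(p * value[t] for p, t in branches)
                    new[s] = jump * after + (1.0 - jump) * value[s]
                elif s in probabilistic:
                    # immediate nondeterministic choice; for simplicity it is
                    # charged one discretisation step in this sketch
                    new[s] = max(sum(p * value[t] for p, t in dist)
                                 for dist in probabilistic[s].values())
                else:
                    new[s] = value[s]   # absorbing non-goal state
            value = new
        return value

    # Example: 'wait' fires with rate 3.0, succeeding with probability 0.9 and
    # otherwise returning to an immediate choice between retrying and giving up.
    mark = {"wait": (3.0, [(0.9, "done"), (0.1, "choice")])}
    prob = {"choice": {"retry": [(1.0, "wait")], "quit": [(1.0, "fail")]}}
    vals = max_time_bounded_reachability(mark, prob, {"done"}, horizon=2.0,
                                         steps=2000)
    ```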