8 research outputs found

    Deciding bisimilarities on distributions

    Probabilistic automata (PA) are a prominent compositional concurrency model. As a way to justify property-preserving abstractions, bisimulation relations over probability distributions have been proposed in recent years, in both the strong and the weak setting. Unlike the usual bisimulation relations, which are defined over states, an algorithmic treatment of these relations is inherently hard, as their carrier set is uncountable even for finite PAs. The coarsest of these relations, weak distribution bisimulation, stands out from the others in that no equivalent state-based characterisation has been known so far. This paper presents an equivalent state-based reformulation of weak distribution bisimulation, rendering it amenable to algorithmic treatment. Decision procedures for the probability-distribution-based bisimulation relations are then presented.
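    As background for the algorithmic treatment the abstract alludes to, here is a minimal Python sketch, not the paper's decision procedure: signature-based partition refinement for the classical state-based strong probabilistic bisimulation on a finite PA. The toy automaton and all names are invented for illustration, and exact arithmetic (e.g. fractions) would be preferable to floats in practice.

    # Minimal sketch, not the paper's procedure: partition refinement computing
    # state-based strong probabilistic bisimulation on a finite PA, given as
    # state -> list of (action, successor distribution).
    PA = {
        "s0": [("a", {"s1": 0.5, "s2": 0.5})],
        "s1": [("b", {"s1": 1.0})],
        "s2": [("b", {"s2": 1.0})],
        "t0": [("a", {"t1": 1.0})],
        "t1": [("b", {"t1": 1.0})],
    }

    def bisimulation_blocks(pa):
        """Map each state to a block id; equal ids mean strongly bisimilar states."""
        block = {s: 0 for s in pa}                       # start from the trivial partition
        while True:
            def signature(s):
                # Per transition: the action plus the probability mass sent to each block.
                sig = set()
                for action, dist in pa[s]:
                    mass = {}
                    for succ, p in dist.items():
                        mass[block[succ]] = mass.get(block[succ], 0.0) + p
                    sig.add((action, tuple(sorted(mass.items()))))
                return frozenset(sig)

            ids, new_block = {}, {}
            for s in pa:                                 # split each block by signature
                key = (block[s], signature(s))
                new_block[s] = ids.setdefault(key, len(ids))
            if len(ids) == len(set(block.values())):     # no block was split: fixed point
                return block
            block = new_block

    blocks = bisimulation_blocks(PA)
    print(blocks["s0"] == blocks["t0"])                  # True: both lead into b-loops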

    Time dependent analysis with dynamic counter measure trees

    The success of a security attack crucially depends on time: the more time available to the attacker, the higher the probability of a successful attack. Formalisms such as reliability block diagrams, reliability graphs, and attack countermeasure trees provide quantitative information about attack scenarios, but they are provably insufficient to model dependent actions that involve costs, skills, and time. In this presentation, we extend attack countermeasure trees with a notion of time, inspired by the strong correlation between the amount of resources the attacker invests (in this case, time) and the probability that the attack succeeds. This allows countermeasures to be selected effectively and ranked according to their resource consumption, in terms of the cost and skill of installing them and their effectiveness in preventing an attack.
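    To make the time dependence concrete, the following minimal Python sketch computes the success probability of a hypothetical attack tree as a function of the time the attacker invests. It is deliberately simpler than the formalism in the presentation: steps are independent exponentially distributed durations, and there are no countermeasures, costs, or skills; all step names and rates are invented.

    # Minimal sketch, not the extended ACT formalism: P(attack succeeds within t),
    # assuming independent, exponentially distributed basic attack steps.
    import math

    def exp_cdf(rate):
        """Success probability of a basic step within time t (exponential duration)."""
        return lambda t: 1.0 - math.exp(-rate * t)

    def AND(*children):
        # All child attacks must have succeeded by time t (parallel, independent).
        return lambda t: math.prod(c(t) for c in children)

    def OR(*children):
        # At least one child attack has succeeded by time t.
        return lambda t: 1.0 - math.prod(1.0 - c(t) for c in children)

    # Hypothetical scenario: steal data either by phishing an admin, or by both
    # scanning the network and exploiting an unpatched host.
    steal_data = OR(
        exp_cdf(0.10),                       # phish admin (rate 0.10 per day)
        AND(exp_cdf(0.50), exp_cdf(0.05)),   # scan network AND exploit host
    )

    for t in (1, 7, 30, 90):
        print(f"P(success within {t:3d} days) = {steal_data(t):.3f}")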

    The Role of Diverse Replay for Generalisation in Reinforcement Learning

    In reinforcement learning (RL), key components of many algorithms are the exploration strategy and the replay buffer. These components regulate what environment data is collected and trained on, and they have been extensively studied in the RL literature. In this paper, we investigate their impact in the context of generalisation in multi-task RL. We investigate the hypothesis that collecting and training on more diverse data from the training environment will improve zero-shot generalisation to new environments and tasks. We motivate this mathematically and show empirically that generalisation to states that are "reachable" during training improves as the diversity of transitions in the replay buffer increases. Furthermore, we show empirically that the same strategy also improves generalisation to similar but "unreachable" states, which may be due to improved generalisation of latent representations. Comment: 14 pages, 8 figures
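    As an illustration of the kind of mechanism the hypothesis points at, here is a minimal Python sketch, not the method evaluated in the paper, of a replay buffer that evicts from the most over-represented bin of transitions so that the stored data stays diverse. The binning key and all parameters are placeholders for whatever notion of transition similarity one chooses.

    # Minimal sketch, not the paper's method: a diversity-preserving replay buffer.
    import random
    from collections import defaultdict

    class DiverseReplayBuffer:
        def __init__(self, capacity, bin_key):
            self.capacity = capacity
            self.bin_key = bin_key            # maps a transition to a coarse bin
            self.bins = defaultdict(list)     # bin -> list of stored transitions

        def __len__(self):
            return sum(len(v) for v in self.bins.values())

        def add(self, transition):
            if len(self) >= self.capacity:
                # Evict from the largest bin to keep the buffer's coverage broad.
                largest = max(self.bins, key=lambda k: len(self.bins[k]))
                victims = self.bins[largest]
                victims.pop(random.randrange(len(victims)))
                if not victims:
                    del self.bins[largest]
            self.bins[self.bin_key(transition)].append(transition)

        def sample(self, batch_size):
            everything = [t for v in self.bins.values() for t in v]
            return random.sample(everything, min(batch_size, len(everything)))

    # Usage with toy transitions (s, a, r, s') and coarse binning on the state.
    buf = DiverseReplayBuffer(capacity=1000, bin_key=lambda tr: round(tr[0], 1))
    for step in range(5000):
        s = random.random()
        buf.add((s, 0, 0.0, min(1.0, s + 0.01)))
    print(len(buf), "transitions kept across", len(buf.bins), "bins")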

    Stochastic Parity Games on Lossy Channel Systems

    We give an algorithm for solving stochastic parity games with almost-sure winning conditions on lossy channel systems, for the case where the players are restricted to finite-memory strategies. First, we describe a general framework in which we consider the class of 2.5-player games with almost-sure parity winning conditions on possibly infinite game graphs, assuming that the game contains a finite attractor. An attractor is a set of states (not necessarily absorbing) that is almost surely re-visited regardless of the players' decisions. We present a scheme that characterises the set of winning states for each player. We then instantiate this scheme to obtain an algorithm for stochastic parity games on lossy channel systems. Comment: 19 pages
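    For orientation, the two central notions can be restated in LaTeX, using notation chosen for this note rather than taken from the paper: a 2.5-player game has two adversarial players plus random choice, \Pr_s^{\sigma,\tau} is the probability measure over runs from state s under strategies \sigma and \tau, and the min-even parity convention below is one common choice.

    % Notation chosen here, not the paper's definitions.
    % Finite attractor: a finite set of states that is almost surely visited
    % again and again, whatever the two players do.
    \[
      A \subseteq S \text{ is an attractor iff }
      \Pr_s^{\sigma,\tau}\bigl(\rho(i) \in A \text{ for infinitely many } i\bigr) = 1
      \text{ for all } s \in S \text{ and all strategies } \sigma, \tau .
    \]
    % Almost-sure parity winning, with colouring c : S -> {0,...,d}:
    \[
      s \text{ is almost-surely winning for Player } 0 \text{ iff }
      \exists \sigma\, \forall \tau:\;
      \Pr_s^{\sigma,\tau}\bigl(\min\{c(q) : q \text{ occurs infinitely often}\} \text{ is even}\bigr) = 1 .
    \]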

    Stochastic Invariants for Probabilistic Termination

    Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous work focused on the qualitative problem, which asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach to this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability. A fundamental and conceptual drawback of existing approaches to probabilistic termination is that, even though the supermartingales consider the probabilistic behavior of the programs, the invariants are obtained while completely ignoring the probabilistic aspect. In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We define the notion of stochastic invariants, which are constraints together with a probability bound with which the constraints hold. We introduce the concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in three ways: (1) with a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for the refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. We also present results on related computational problems and an experimental evaluation of our approach on academic examples. Comment: Full version of a paper published at POPL 2017. 20 pages
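    Roughly, the quantitative combination described above can be summarised in LaTeX as follows, paraphrasing the abstract in notation chosen here (s_n denotes the program state after n steps); this is a sketch, not the paper's formal definitions.

    % Notation chosen here, not the paper's formal definitions.
    % A stochastic invariant pairs a constraint I on program states with a
    % bound p on the probability of ever violating it:
    \[
      (I, p) \text{ is a stochastic invariant iff }
      \Pr\bigl(\exists n.\; s_n \notin I\bigr) \le p .
    \]
    % A repulsing supermartingale certifies such a bound p; if a ranking
    % supermartingale additionally shows that runs staying inside I terminate
    % almost surely, then
    \[
      \Pr(\text{termination}) \ge 1 - p .
    \]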