
    Abstract Interpretation for Probabilistic Termination of Biological Systems

    In a previous paper the authors applied the Abstract Interpretation approach to approximating the probabilistic semantics of biological systems, modeled using the Chemical Ground Form calculus. The methodology is based on the idea of representing a set of experiments, which differ only in their initial concentrations, by abstracting the multiplicities of reagents present in a solution using intervals. In this paper, we refine the approach to address probabilistic termination properties. More specifically, we introduce a refinement of the abstract LTS semantics and abstract the probabilistic semantics using a variant of Interval Markov Chains. The abstract probabilistic model safely approximates a set of concrete experiments and reports conservative lower and upper bounds on the probability of termination.
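    The paper's abstraction is not reproduced in this listing, but the following minimal Python sketch illustrates the underlying idea of an Interval Markov Chain: each transition carries a probability interval, and iterating simple lower/upper bound equations gives conservative bounds on the probability of reaching a terminal state. The toy chain, the state names, and the (deliberately coarse) bound computation are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' tool): conservative termination-probability
# bounds for a toy Interval Markov Chain.  Each transition carries a
# probability interval [lo, hi]; the chain and all names are assumptions.

# IMC: state -> list of (successor, lo, hi)
imc = {
    "run":  [("run", 0.2, 0.5), ("fork", 0.1, 0.3), ("done", 0.3, 0.6)],
    "fork": [("run", 0.4, 0.7), ("done", 0.2, 0.5)],
    "done": [],                      # terminal (absorbing) state
}
TARGET = "done"

def termination_bounds(imc, target, iters=200):
    """Iterate simple lower/upper bounds on the probability of reaching `target`.

    The lower bound uses the interval minima, the upper bound the interval
    maxima (capped at 1).  This is sound but coarse; a tighter analysis would
    optimise over admissible distributions inside the intervals.
    """
    lo = {s: (1.0 if s == target else 0.0) for s in imc}
    hi = dict(lo)
    for _ in range(iters):
        for s, succs in imc.items():
            if s == target or not succs:
                continue
            lo[s] = sum(l * lo[t] for t, l, _ in succs)
            hi[s] = min(1.0, sum(h * hi[t] for t, _, h in succs))
    return lo, hi

lo, hi = termination_bounds(imc, TARGET)
print(f"P(terminate from 'run') in [{lo['run']:.3f}, {hi['run']:.3f}]")
```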

    Answer Set Programming for Non-Stationary Markov Decision Processes

    Non-stationary domains, where unforeseen changes happen, present a challenge for agents seeking an optimal policy for a sequential decision-making problem. This work investigates a solution to this problem that combines Markov Decision Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming (ASP) in a method we call ASP(RL). In this method, Answer Set Programming is used to find the possible trajectories of an MDP, from which Reinforcement Learning is applied to learn the optimal policy of the problem. Results show that ASP(RL) is capable of efficiently finding the optimal solution of an MDP representing non-stationary domains.
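    The ASP side of the method is only described abstractly above. The sketch below assumes an answer-set solver has already enumerated the admissible state-action transitions of a tiny MDP and shows how tabular Q-learning could then be restricted to them; the grid of transitions, rewards, and all parameter values are illustrative assumptions, not the paper's ASP(RL) implementation.

```python
import random

# Stand-in for ASP solver output: the admissible (state, action) pairs of a
# toy MDP, mapped to (next_state, reward).  Entirely hypothetical.
valid_transitions = {
    ("s0", "right"): ("s1", 0.0),
    ("s0", "down"):  ("s2", 0.0),
    ("s1", "right"): ("goal", 1.0),
    ("s2", "right"): ("s1", 0.0),
}
TERMINAL = {"goal"}

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning that only ever explores ASP-admissible transitions."""
    q = {sa: 0.0 for sa in valid_transitions}
    for _ in range(episodes):
        state = "s0"
        while state not in TERMINAL:
            actions = [a for (s, a) in valid_transitions if s == state]
            if not actions:            # dead end according to the ASP model
                break
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward = valid_transitions[(state, action)]
            future = max((q[(nxt, a)] for (s, a) in valid_transitions if s == nxt),
                         default=0.0)
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = nxt
    return q

print(q_learning())
```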

    Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems

    This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment into an equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library case study.
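    The YAGA transformation itself is not given in this listing. The sketch below only shows the general flavour of targeting PRISM: a small Python helper that emits a toy DTMC module together with a reward structure counting visits to states that violate a hypothetical invariant. The module name, variable, probabilities, and the invariant are all assumptions, not the paper's translation.

```python
# Minimal sketch (not the YAGA tool): emit a tiny PRISM DTMC whose reward
# structure counts visits to states violating a hypothetical invariant c < c_max.

def prism_model(p_step: float = 0.8, c_max: int = 3) -> str:
    return f"""dtmc

module counter
  c : [0..{c_max}] init 0;
  [] c < {c_max} -> {p_step:g} : (c'=c+1) + {1 - p_step:g} : (c'=0);
  [] c = {c_max} -> 1.0 : (c'=c);
endmodule

// reward structure capturing the invariant "c < {c_max}":
// each state that violates it earns reward 1
rewards "invariant_violations"
  c = {c_max} : 1;
endrewards
"""

if __name__ == "__main__":
    # Paste the output into PRISM and query, e.g., R{{"invariant_violations"}}=? [ C<=100 ]
    print(prism_model())
```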

    A New Decision Support Framework for Managing Foot-and-mouth Disease Epidemics

    Animal disease epidemics such as foot-and-mouth disease (FMD) pose a recurrent threat to countries with intensive livestock production. Efficient FMD control is crucial in limiting the damage of FMD epidemics and securing food production. Decision making in FMD control involves a hierarchy of decisions made at the strategic, tactical, and operational levels. These decisions are interdependent and have to be made under uncertainty about the future development of the epidemic. Addressing this decision problem, this paper presents a new decision-support framework based on multi-level hierarchic Markov processes (MLHMP). The MLHMP model simultaneously optimizes decisions at the strategic, tactical, and operational levels, using Bayesian forecasting methods to model uncertainty and learning about the epidemic. As illustrated by the example, the framework is especially useful in contingency planning for future FMD epidemics.
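    The MLHMP model is not reproduced here; the sketch below illustrates only the Bayesian-learning ingredient mentioned in the abstract: a conjugate Beta-Binomial update of a daily spread probability from observed outbreak data, the kind of estimate a hierarchic decision model could condition its control choice on. The prior, the observations, the threshold, and the toy escalation rule are all assumptions.

```python
# Minimal sketch (assumption, not the MLHMP framework): Bayesian learning about
# an epidemic's spread probability via a Beta-Binomial update, feeding a toy
# tactical decision rule.

def beta_update(alpha, beta, new_infected, exposed):
    """Conjugate update: observe `new_infected` out of `exposed` contacts."""
    return alpha + new_infected, beta + (exposed - new_infected)

# Hypothetical prior and a few (new infections, exposed herds) observations.
alpha, beta = 2.0, 18.0                      # prior mean 0.10
observations = [(3, 40), (5, 42), (2, 35)]

for infected, exposed in observations:
    alpha, beta = beta_update(alpha, beta, infected, exposed)
    mean = alpha / (alpha + beta)
    print(f"posterior mean spread probability: {mean:.3f}")

# Toy tactical rule: escalate the control strategy once the estimated
# spread probability exceeds a threshold.
if alpha / (alpha + beta) > 0.08:
    print("escalate control strategy")
```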

    LTLf/LDLf Non-Markovian Rewards

    In Markov Decision Processes (MDPs), the reward obtained in a state is Markovian, i.e., it depends only on the last state and action. This dependency makes it difficult to reward more interesting long-term behaviors, such as always closing a door after it has been opened, or providing coffee only following a request. Extending MDPs to handle non-Markovian reward functions was the subject of two previous lines of work. Both use LTL variants to specify the reward function and then compile the new model back into a Markovian model. Building on recent progress in temporal logics over finite traces, we adopt LDLf for specifying non-Markovian rewards and provide an elegant automata construction for building a Markovian model, which extends that of previous work and offers strong minimality and compositionality guarantees.
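    The paper's LDLf automata construction is not given in this listing, but the basic compilation idea can be shown on the abstract's own coffee example: a two-state automaton remembers whether a request is pending, and pairing the MDP state with the automaton state makes the reward "coffee is rewarded only after a request" Markovian again. The automaton, the observation labels, and the reward values below are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's LDLf construction): make a
# non-Markovian reward Markovian by tracking a small automaton state alongside
# the MDP state.

DFA = {                                   # (dfa_state, observation) -> dfa_state
    ("idle", "request"):    "pending",
    ("idle", "coffee"):     "idle",
    ("pending", "coffee"):  "idle",
    ("pending", "request"): "pending",
}

def dfa_step(dfa_state, obs):
    return DFA.get((dfa_state, obs), dfa_state)

def reward(dfa_state, obs):
    # Reward is a function of the *product* state, hence Markovian.
    return 1.0 if (dfa_state == "pending" and obs == "coffee") else 0.0

trace = ["coffee", "request", "coffee", "coffee", "request", "request", "coffee"]
dfa_state, total = "idle", 0.0
for obs in trace:
    total += reward(dfa_state, obs)
    dfa_state = dfa_step(dfa_state, obs)
print(f"accumulated reward: {total}")     # only coffees that follow a request pay off
```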

    Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. In this paper we consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature on self-stabilising systems. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method is incremental and exploits both Angluin-style L*-learning and SAT-solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosopher Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols.
    Comment: Full version of CAV'16 paper.
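    The full CEGAR/L*/SAT machinery is not reproduced in this listing. The sketch below only illustrates the game view the abstract mentions: liveness as a two-player reachability game between Scheduler (who tries to avoid the goal) and Process (who tries to reach it), solved on a small finite graph by the classical attractor construction. The toy game graph and node names are assumptions.

```python
# Minimal sketch (assumption, not the paper's method): solve a two-player
# reachability game by the classical attractor construction.

OWNER = {"a": "process", "b": "scheduler", "c": "process", "goal": "process"}
EDGES = {"a": ["b", "goal"], "b": ["a", "b"], "c": ["b", "goal"], "goal": []}
GOAL = {"goal"}

def attractor(goal):
    """Nodes from which Process can force a visit to `goal`."""
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for node, succs in EDGES.items():
            if node in win or not succs:
                continue
            if OWNER[node] == "process":
                ok = any(s in win for s in succs)      # Process picks one good move
            else:
                ok = all(s in win for s in succs)      # Scheduler cannot escape
            if ok:
                win.add(node)
                changed = True
    return win

# Process forces the goal from 'a', 'c' and 'goal'; Scheduler wins from 'b'
# by looping on 'b' forever.
print(attractor(GOAL))
```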