825 research outputs found

    A mass action model of a fibroblast growth factor signaling pathway and its simplification

    We consider a kinetic law of mass action model for Fibroblast Growth Factor (FGF) signaling, focusing on the induction of the RAS-MAP kinase pathway via GRB2 binding. Our biologically simple model suffers a combinatorial explosion in the number of differential equations required to simulate the system. In addition to numerically solving the full model, we show that it can be accurately simplified. This requires combining matched asymptotics, the quasi-steady state hypothesis, and the fact that subsets of the equations decouple asymptotically. Both the full and simplified models reproduce the qualitative dynamics observed experimentally and in previous stochastic models. The simplified model also elucidates both the qualitative features of GRB2 binding and the complex relationship between SHP2 levels, the rate at which SHP2 induces dephosphorylation, and levels of bound GRB2. In addition to providing insight into the important and redundant features of FGF signaling, such work further highlights the usefulness of numerous simplification techniques in the study of mass action models of signal transduction, as also illustrated recently by Borisov and co-workers (Borisov et al. in Biophys. J. 89, 951–66, 2005, and Biosystems 83, 152–66, 2006; Kiyatkin et al. in J. Biol. Chem. 281, 19925–19938, 2006). These developments will facilitate the construction of tractable models of FGF signaling incorporating further biological realism, such as spatial effects or realistic binding stoichiometries, despite the more severe combinatorial explosion associated with the latter.
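    To make the law-of-mass-action setting concrete, the following is a minimal toy sketch (not the paper's FGF/GRB2 model): a single reversible binding reaction R + L <-> C, whose mass-action ODEs are integrated with a simple forward-Euler scheme. All rate constants and concentrations are hypothetical.

    ```python
    # Toy mass-action kinetics: reversible binding R + L <-> C.
    # Each concentration changes at a rate proportional to the product of
    # the concentrations of the reacting species (law of mass action).
    k_on, k_off = 1.0, 0.1   # hypothetical association/dissociation rates
    R, L, C = 1.0, 1.0, 0.0  # initial concentrations
    dt = 1e-3
    for _ in range(100_000):             # integrate to t = 100
        v = k_on * R * L - k_off * C     # net binding flux
        R, L, C = R - v * dt, L - v * dt, C + v * dt

    # At steady state the forward and reverse fluxes balance
    # (k_on * R * L ~ k_off * C), and the totals R + C and L + C are
    # conserved. Each extra binding partner multiplies the number of
    # complexes, which is the combinatorial explosion the paper tackles.
    ```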

    Safety-Aware Apprenticeship Learning

    Apprenticeship learning (AL) is a class of Learning from Demonstration techniques in which the reward function of a Markov Decision Process (MDP) is unknown to the learning agent, and the agent has to derive a good policy by observing an expert's demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting their learning objective. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure safety while retaining performance of the learnt policy. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential. Comment: Accepted by International Conference on Computer Aided Verification (CAV) 201
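    The linear-reward assumption above can be sketched in a few lines: if the reward is w · phi(s) for state features phi, a policy's value is fully determined by its discounted feature expectations. The two-state MDP, features, and weights below are hypothetical, purely to illustrate the idea.

    ```python
    # Feature expectations under a linear reward w . phi(s): the value of a
    # policy equals w dotted with its discounted feature expectations mu.
    # Toy 2-state MDP; all numbers are made up for illustration.
    import random

    random.seed(0)
    gamma = 0.9
    phi = {0: [1.0, 0.0], 1: [0.0, 1.0]}   # state features
    w = [0.8, 0.2]                          # "unknown" true reward weights

    def step(s, a):
        # action 0 stays put; action 1 flips the state with probability 0.9
        if a == 1 and random.random() < 0.9:
            return 1 - s
        return s

    def feature_expectations(policy, episodes=2000, horizon=50):
        mu = [0.0, 0.0]
        for _ in range(episodes):
            s = 0
            for t in range(horizon):
                for i in range(2):
                    mu[i] += (gamma ** t) * phi[s][i] / episodes
                s = step(s, policy(s))
        return mu

    mu = feature_expectations(lambda s: 0)         # "always stay" policy
    value = sum(wi * mi for wi, mi in zip(w, mu))  # policy value = w . mu
    ```

    Matching the expert's feature expectations then guarantees near-expert value for any reward in this linear class, which is what lets the paper layer a PCTL safety check on top without knowing w.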

    Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage

    The applicability of model checking is hindered by the state space explosion problem in combination with limited amounts of main memory. To extend its reach, the large available capacities of secondary storage such as hard disks can be exploited. Due to the specific performance characteristics of secondary storage technologies, specialised algorithms are required. In this paper, we present a technique to use secondary storage for probabilistic model checking of Markov decision processes. It combines state space exploration based on partitioning with a block-iterative variant of value iteration over the same partitions for the analysis of probabilistic reachability and expected-reward properties. A sparse matrix-like representation is used to store partitions on secondary storage in a compact format. All file accesses are sequential, and compression can be used without affecting runtime. The technique has been implemented within the Modest Toolset. We evaluate its performance on several benchmark models of up to 3.5 billion states. In the analysis of time-bounded properties on real-time models, our method neutralises the state space explosion induced by the time bound in its entirety. Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24953-7_1
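    The block-iterative idea can be sketched in memory (the paper's version instead streams each partition from disk in a sparse format): states are grouped into blocks, and each sweep updates one block at a time, so only that block's data need be resident. The toy MDP and partition below are hypothetical.

    ```python
    # Block-iterative value iteration for maximal reachability probability.
    # transitions[s][a] = list of (probability, successor); toy example.
    transitions = {
        0: {0: [(0.5, 0), (0.5, 1)]},
        1: {0: [(1.0, 2)], 1: [(0.6, 0), (0.4, 3)]},
        2: {0: [(1.0, 2)]},            # goal (absorbing)
        3: {0: [(1.0, 3)]},            # sink (absorbing)
    }
    goal = {2}
    blocks = [[0, 1], [2, 3]]          # partition of the state space

    V = {s: (1.0 if s in goal else 0.0) for s in transitions}
    for _ in range(200):               # sweeps until convergence
        for block in blocks:           # process one partition at a time
            for s in block:
                if s in goal:
                    continue
                V[s] = max(sum(p * V[t] for p, t in succ)
                           for succ in transitions[s].values())
    # V[s] now approximates the maximal probability of reaching the goal.
    ```

    Because updates within a sweep reuse fresh values from earlier blocks (Gauss-Seidel style), the iteration stays correct while keeping file accesses sequential per partition.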

    Tableaux for Policy Synthesis for MDPs with PCTL* Constraints

    Markov decision processes (MDPs) are the standard formalism for modelling sequential decision making in stochastic environments. Policy synthesis addresses the problem of how to control or limit the decisions an agent makes so that a given specification is met. In this paper we consider PCTL*, the probabilistic counterpart of CTL*, as the specification language. Because in general the policy synthesis problem for PCTL* is undecidable, we restrict to policies whose execution history memory is finitely bounded a priori. Surprisingly, no algorithm for policy synthesis for this natural and expressive framework has been developed so far. We close this gap and describe a tableau-based algorithm that, given an MDP and a PCTL* specification, derives in a non-deterministic way a system of (possibly nonlinear) equalities and inequalities. The solutions of this system, if any, describe the desired (stochastic) policies. Our main result in this paper is the correctness of our method, i.e., soundness, completeness and termination. Comment: This is a long version of a conference paper published at TABLEAUX 2017. It contains proofs of the main results and fixes a bug. See the footnote on page 1 for details
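    A hypothetical miniature of the kind of constraint system such a method derives (far simpler than actual PCTL* output): a memoryless stochastic policy plays action a with probability x in the single non-absorbing state s, the reachability probability p must satisfy a nonlinear fixed-point equation in x, and a PCTL-style bound requires p >= 0.8. All transition numbers are made up.

    ```python
    # Constraint system: p = x*0.9 + (1 - x)*(0.2 + 0.6*p),  p >= 0.8
    #   action a: goal w.p. 0.9, sink w.p. 0.1
    #   action b: goal w.p. 0.2, sink w.p. 0.2, stay in s w.p. 0.6
    def reach_prob(x):
        # closed-form solution of the fixed-point equation above
        return (0.2 + 0.7 * x) / (0.4 + 0.6 * x)

    # smallest feasible x solves 0.2 + 0.7x = 0.8 * (0.4 + 0.6x)
    x_min = 0.12 / 0.22
    # every x on a coarse grid that satisfies the probability bound
    feasible = [x / 100 for x in range(101) if reach_prob(x / 100) >= 0.8]
    ```

    The solutions of the inequality (here, all x above roughly 0.545) are exactly the stochastic policies meeting the specification, mirroring how solutions of the tableau's system describe the synthesised policies.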

    Fermi surface of an important nano-sized metastable phase: Al₃Li

    Nanoscale particles embedded in a metallic matrix are of considerable interest as a route towards identifying and tailoring material properties. We present a detailed investigation of the electronic structure, and in particular the Fermi surface, of a nanoscale phase (L1₂ Al₃Li) that has so far been inaccessible with conventional techniques, despite playing a key role in determining the favorable material properties of the alloy (Al–9 at.% Li). The ordered precipitates only form within the stabilizing Al matrix and do not exist in the bulk; here, we take advantage of the strong positron affinity of Li to directly probe the Fermi surface of Al₃Li. Through comparison with band structure calculations, we demonstrate that the positron uniquely probes these precipitates, and present a 'tuned' Fermi surface for this elusive phase.

    Equilibria-based Probabilistic Model Checking for Concurrent Stochastic Games

    Probabilistic model checking for stochastic games enables formal verification of systems that comprise competing or collaborating entities operating in a stochastic environment. Despite good progress in the area, existing approaches focus on zero-sum goals and cannot reason about scenarios where entities are endowed with different objectives. In this paper, we propose probabilistic model checking techniques for concurrent stochastic games based on Nash equilibria. We extend the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards) to allow reasoning about players with distinct quantitative goals, which capture either the probability of an event occurring or a reward measure. We present algorithms to synthesise strategies that are subgame perfect social welfare optimal Nash equilibria, i.e., where there is no incentive for any player to unilaterally change their strategy in any state of the game, whilst the combined probabilities or rewards are maximised. We implement our techniques in the PRISM-games tool and apply them to several case studies, including network protocols and robot navigation, showing the benefits compared to existing approaches.
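    The solution concept can be illustrated on a one-shot game (a toy, not PRISM-games itself): enumerate pure-strategy profiles, keep the Nash equilibria (no player gains by deviating unilaterally), and among those pick one maximising social welfare, i.e., the sum of the players' payoffs. The payoff matrix is a hypothetical coordination game.

    ```python
    # Social-welfare-optimal Nash equilibrium of a 2-player one-shot game.
    # payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
    payoffs = {
        (0, 0): (2, 1), (0, 1): (0, 0),
        (1, 0): (0, 0), (1, 1): (1, 2),
    }
    actions = [0, 1]

    def is_nash(a1, a2):
        # no unilateral deviation improves either player's payoff
        u1, u2 = payoffs[(a1, a2)]
        no_dev_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
        no_dev_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
        return no_dev_1 and no_dev_2

    equilibria = [p for p in payoffs if is_nash(*p)]
    best = max(equilibria, key=lambda p: sum(payoffs[p]))
    # Both (0, 0) and (1, 1) are equilibria; each has welfare 3.
    ```

    The paper's algorithms lift this idea to the stochastic, multi-state setting, additionally requiring the equilibrium property in every state (subgame perfection).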

    Value Iteration for Long-run Average Reward in Markov Decision Processes

    Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other properties, such as reachability objectives. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
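    The naive extension of VI mentioned above looks like this on a toy MDP (hypothetical rewards and transitions): iterate V(s) = max over actions of [reward + expected successor value] and read off the gain as V(s)/n. The iterates converge here, but, as the abstract notes, in general there is no known sound criterion for deciding when to stop.

    ```python
    # Naive value iteration for long-run average reward on a toy MDP.
    # model[s][a] = (reward, [(probability, successor), ...])
    model = {
        0: {0: (1.0, [(0.5, 0), (0.5, 1)]),
            1: (0.0, [(1.0, 1)])},
        1: {0: (2.0, [(1.0, 0)])},
    }

    V = {s: 0.0 for s in model}
    n = 10_000
    for _ in range(n):
        # Bellman update: V grows by roughly the optimal gain per step
        V = {s: max(r + sum(p * V[t] for p, t in succ)
                    for r, succ in model[s].values())
             for s in model}

    gain = V[0] / n   # estimate of the optimal long-run average reward
    ```

    Here the optimal policy mixes a self-loop with a 2-reward excursion and achieves gain 4/3; V[0]/n approaches that value, but without a stopping criterion one cannot certify the current error, which motivates the paper's MEC-based decomposition.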

    A Study of the PDGF Signaling Pathway with PRISM

    In this paper, we apply the probabilistic model checker PRISM to the analysis of a biological system -- the Platelet-Derived Growth Factor (PDGF) signaling pathway, demonstrating in detail how this pathway can be analyzed in PRISM. We show that quantitative verification can yield a better understanding of the PDGF signaling pathway. Comment: In Proceedings CompMod 2011, arXiv:1109.104