
    A flexible architecture for modeling and simulation of diffusional association

    Up to now, it has not been possible to obtain analytical solutions for complex molecular association processes (e.g. molecular recognition in signaling or catalysis). Instead, Brownian Dynamics (BD) simulations are commonly used to estimate the rate of diffusional association, e.g. for later use in mesoscopic simulations. Meanwhile, a portfolio of diffusional association (DA) methods has been developed that exploit BD. However, DA methods do not clearly distinguish between modeling, simulation, and experiment settings. This hampers classifying and comparing the existing methods with respect to, for instance, model assumptions, simulation approximations, or specific optimization strategies for steering the computation of trajectories. To address this deficiency we propose FADA (Flexible Architecture for Diffusional Association), an architecture that allows the flexible definition of the experiment, comprising a formal description of the model in SpacePi, different simulators, as well as validation and analysis methods. Based on the NAM (Northrup-Allison-McCammon) method, which forms the basis of many existing DA methods, we illustrate the structure and functioning of FADA. A discussion of future validation experiments illuminates how FADA can be exploited to estimate reaction rates and how validation techniques may be applied to validate additional features of the model.
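    The NAM scheme named in the abstract above estimates an association rate from the fraction of BD trajectories, started on a sphere of radius b, that reach the reaction distance a before escaping past a truncation radius q. A minimal Python sketch of that idea follows; the geometry and diffusion parameters (a, b, q, dt, d_coef) are illustrative assumptions, not values from the paper, and the random walk is a bare-bones free-diffusion surrogate for a real BD simulator.

```python
import math
import random

def bd_hit_fraction(n_traj, a=1.0, b=5.0, q=20.0, dt=0.05, d_coef=1.0,
                    max_steps=5000, seed=42):
    """Estimate beta: the fraction of Brownian trajectories started at
    radius b that reach the reaction radius a before escaping to radius q.
    Trajectories that exhaust max_steps are counted as escapes."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d_coef * dt)  # per-axis displacement scale
    hits = 0
    for _ in range(n_traj):
        # Start on the b-sphere; by symmetry the starting direction is arbitrary.
        x, y, z = b, 0.0, 0.0
        for _ in range(max_steps):
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            z += rng.gauss(0.0, sigma)
            r = math.sqrt(x * x + y * y + z * z)
            if r <= a:
                hits += 1
                break
            if r >= q:
                break
    return hits / n_traj

def nam_rate(beta, b, q, d_coef):
    """NAM rate formula: k = k(b) * beta / (1 - (1 - beta) * k(b) / k(q)),
    where k(r) = 4*pi*D*r is the diffusion-limited rate to the r-sphere."""
    kb = 4.0 * math.pi * d_coef * b
    kq = 4.0 * math.pi * d_coef * q
    return kb * beta / (1.0 - (1.0 - beta) * kb / kq)

beta = bd_hit_fraction(300)
k_assoc = nam_rate(beta, b=5.0, q=20.0, d_coef=1.0)
```

    The second factor in nam_rate corrects for trajectories that escape the q-sphere but would eventually have returned, which is exactly the bookkeeping the NAM derivation supplies.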

    Discrete diffusion models to study the effects of Mg2+ concentration on the PhoPQ signal transduction system

    Background: The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduce the computational overhead. This objective motivates the use of new methods that can transform the problem from energy- and affinity-based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like a probability distribution. This allows us to use the "in silico" stochastic event based modeling approach to find the molecular dynamics of the system.
    Results: In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extracellular Mg2+ concentration in the two-component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of the inter-arrival time between molecules/ions arriving at a cell receptor as an external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to obtain the distribution of the Mg2+ departure events. Using these molecular transport models, we then study the in silico effects of this external trigger on the PhoPQ system.
    Conclusions: Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. The proposed simulation framework can also incorporate the stochasticity in cellular environments to a certain degree of accuracy. We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics.
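    The core move in the abstract above is to replace explicit transport dynamics with the statistics of inter-arrival times at a receptor. A minimal sketch of that idea, assuming (as the simplest case, not taken from the paper) that arrivals form a Poisson stream, so the gaps between arrivals are i.i.d. exponential and the arrival rate has the closed-form maximum-likelihood estimate 1 / (mean gap):

```python
import random
import statistics

def simulate_interarrivals(rate, n, seed=0):
    """Draw n inter-arrival times for a Poisson arrival stream of the
    given rate; for a Poisson process these gaps are i.i.d. Exponential(rate)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def estimate_rate(gaps):
    """Maximum-likelihood estimate of the arrival rate: 1 / mean gap."""
    return 1.0 / statistics.fmean(gaps)

# Recover the rate of a hypothetical ion stream from its gap statistics.
gaps = simulate_interarrivals(rate=2.0, n=50000)
rate_hat = estimate_rate(gaps)
```

    In the paper's terms, the fitted gap distribution is the "information domain measure" of the transport process; any heavier-tailed gap model could be slotted in the same way.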

    Approximation of event probabilities in noisy cellular processes

    Molecular noise, which arises from the randomness of discrete events in the cell, significantly influences fundamental biological processes. Discrete-state continuous-time Markov chains (CTMCs) can be used to describe such effects, but the calculation of the probabilities of certain events is computationally expensive. We present a comparison of two analysis approaches for CTMCs. On the one hand, we estimate the probabilities of interest using repeated Gillespie simulation and determine the statistical accuracy that we obtain. On the other hand, we apply a numerical reachability analysis that approximates the probability distributions of the system at several time instances. We use examples of cellular processes to demonstrate the superiority of the reachability analysis when accurate results are required.
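    The first approach in the abstract above, repeated Gillespie simulation with a statistical accuracy estimate, can be sketched on a toy one-channel system (pure degradation X -> 0, chosen here for illustration; it is not a system from the paper):

```python
import math
import random

def gillespie_extinct_by(t_end, x0=5, c=1.0, rng=None):
    """One Gillespie (SSA) run of the degradation system X -> 0 with
    per-molecule rate c; returns True if the population hits 0 by t_end."""
    rng = rng or random
    t, x = 0.0, x0
    while x > 0:
        a_total = c * x                # total propensity
        t += rng.expovariate(a_total)  # exponential time to next event
        if t > t_end:
            return False
        x -= 1                         # only one reaction channel here
    return True

def estimate_event_probability(n_runs, t_end=3.0, seed=7):
    """Repeated-simulation estimate of P(extinct by t_end), plus a 95%
    normal-approximation confidence half-width as the accuracy measure."""
    rng = random.Random(seed)
    hits = sum(gillespie_extinct_by(t_end, rng=rng) for _ in range(n_runs))
    p = hits / n_runs
    half_width = 1.96 * math.sqrt(p * (1.0 - p) / n_runs)
    return p, half_width

p_hat, hw = estimate_event_probability(5000)
```

    For this toy system the exact answer is (1 - exp(-c*t_end))**x0, which makes the half-width's behaviour easy to check; the abstract's point is that shrinking that half-width costs quadratically many runs, which is where reachability analysis wins.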

    A model checking approach to the parameter estimation of biochemical pathways

    Model checking has historically been an important tool to verify models of a wide variety of systems. Typically a model has to exhibit certain properties to be classed 'acceptable'. In this work we use model checking in a new setting: parameter estimation. We characterise the desired behaviour of a model in a temporal logic property and alter the model to make it conform to the property (determined through model checking). We have implemented a computational system called MC2(GA) which pairs a model checker with a genetic algorithm. To drive parameter estimation, the fitness of a set of parameters in a model is the inverse of the distance between its actual behaviour and the desired behaviour. The model checker used is the simulation-based Monte Carlo Model Checker for Probabilistic Linear-time Temporal Logic with numerical constraints, MC2(PLTLc). Numerical constraints as well as the overall probability of the behaviour expressed in temporal logic are used to minimise the behavioural distance. We define the theory underlying our parameter estimation approach in both the stochastic and continuous worlds. We apply our approach to biochemical systems and present an illustrative example where we estimate the kinetic rate constants in a continuous model of a signalling pathway.
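    The fitness idea in the abstract above, fitness as the inverse of the distance between actual and desired behaviour, can be sketched on a toy continuous model. Everything here is an illustrative stand-in: a one-variable decay ODE replaces the signalling pathway, a scalar end-state target replaces the temporal logic property and model checker, and plain random search replaces the genetic algorithm.

```python
import random

def simulate_decay(k, t_end=2.0, x0=1.0, dt=0.01):
    """Euler integration of the toy continuous model dx/dt = -k*x."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-k * x)
    return x

def behavioural_distance(k, target=0.3679):
    """Distance between the model's actual end state and the desired
    behaviour (here: x(2) close to exp(-1), the value k = 0.5 gives)."""
    return abs(simulate_decay(k) - target)

def fitness(k):
    """As in MC2(GA): fitness is the inverse of the behavioural distance."""
    return 1.0 / (1.0 + behavioural_distance(k))

def estimate_rate_constant(n_candidates=400, seed=3):
    """Random search over k, standing in for the genetic algorithm."""
    rng = random.Random(seed)
    return max((rng.uniform(0.0, 2.0) for _ in range(n_candidates)),
               key=fitness)

k_best = estimate_rate_constant()
```

    The 1/(1+d) form keeps fitness bounded and maximal exactly when the behavioural distance is zero, which is the property the optimiser needs regardless of whether the search is genetic or, as here, random.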

    Learning stable and predictive structures in kinetic systems: Benefits of a causal approach

    Learning kinetic systems from data is one of the core challenges in many fields. Identifying stable models is essential for the generalization capabilities of data-driven inference. We introduce a computationally efficient framework, called CausalKinetiX, that identifies structure from discrete-time, noisy observations generated from heterogeneous experiments. The algorithm assumes the existence of an underlying, invariant kinetic model, a key criterion for reproducible research. Results on both simulated and real-world examples suggest that learning the structure of kinetic systems benefits from a causal perspective. The identified variables and models allow for a concise description of the dynamics across multiple experimental settings and can be used for prediction in unseen experiments. We observe significant improvements compared to well-established approaches that focus solely on predictive performance, especially for out-of-sample generalization.

    Aerospace Medicine and Biology: A continuing bibliography, supplement 191

    A bibliographical list of 182 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1979 is presented.

    Techniques for automated parameter estimation in computational models of probabilistic systems

    The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems: systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC). The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
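    The SPRT that the first algorithm above relies on can be sketched for the simplest case, a Bernoulli parameter, where each sample (e.g. "did this simulation run satisfy the specification?") shifts a log-likelihood ratio until it crosses a Wald decision threshold. The hypotheses and error bounds below are illustrative defaults, not values from the dissertation.

```python
import math

def sprt(outcomes, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli parameter:
    H0: p = p0 versus H1: p = p1. Consumes outcomes one at a time and
    stops as soon as the log-likelihood ratio crosses a threshold set by
    the desired error probabilities alpha and beta."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    llr = 0.0
    for x in outcomes:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "inconclusive"  # ran out of samples before a decision
```

    The sequential stopping rule is what makes SPRT attractive for parameter synthesis: clearly good or clearly bad parameter points are resolved after a handful of simulation runs, so the simulated-annealing loop spends its budget near the decision boundary.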