444 research outputs found
A model checking approach to the parameter estimation of biochemical pathways
Model checking has historically been an important tool to
verify models of a wide variety of systems. Typically a model has to exhibit
certain properties to be classed “acceptable”. In this work we use
model checking in a new setting: parameter estimation. We characterise
the desired behaviour of a model in a temporal logic property and alter
the model to make it conform to the property (determined through
model checking). We have implemented a computational system called
MC2(GA) which pairs a model checker with a genetic algorithm. To
drive parameter estimation, the fitness of a set of parameters in a model is
the inverse of the distance between its actual behaviour and the desired
behaviour. The model checker used is the simulation-based Monte Carlo
Model Checker for Probabilistic Linear-time Temporal Logic with numerical
constraints, MC2(PLTLc). Numerical constraints as well as the
overall probability of the behaviour expressed in temporal logic are used
to minimise the behavioural distance. We define the theory underlying
our parameter estimation approach in both the stochastic and continuous
worlds. We apply our approach to biochemical systems and present
an illustrative example where we estimate the kinetic rate constants in
a continuous model of a signalling pathway.
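To make the pairing of a model checker with a genetic algorithm more concrete, the following minimal Python sketch mimics the fitness idea described above on a toy one-parameter model. The Euler-simulated reaction, the threshold-crossing distance standing in for a PLTLc numerical constraint, and all function names are illustrative assumptions, not the MC2(GA) implementation.

    import random, math

    # Toy continuous model: A -k-> B, integrated with explicit Euler (illustrative only).
    def simulate(k, a0=10.0, dt=0.01, t_end=20.0):
        a, b, t, trace = a0, 0.0, 0.0, []
        while t <= t_end:
            trace.append((t, b))
            da = -k * a * dt
            a, b, t = a + da, b - da, t + dt
        return trace

    # Behavioural distance: how far the time at which B first exceeds a threshold
    # lies from the desired crossing time (a stand-in for a numerical constraint).
    def distance(trace, threshold=5.0, desired_time=3.0):
        for t, b in trace:
            if b >= threshold:
                return abs(t - desired_time)
        return float("inf")

    def fitness(k):
        # Fitness is the inverse of the behavioural distance, as in the abstract.
        return 1.0 / (1.0 + distance(simulate(k)))

    # Minimal genetic algorithm over the single rate constant k.
    def ga(pop_size=20, generations=30):
        pop = [random.uniform(0.01, 2.0) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = [max(1e-4, p * math.exp(random.gauss(0, 0.2))) for p in parents]
            pop = parents + children
        return max(pop, key=fitness)

    if __name__ == "__main__":
        best = ga()
        print("estimated k:", best, "fitness:", fitness(best))

In MC2(GA) itself the distance would instead be computed by model checking simulated traces against the PLTLc property, using its numerical constraints and the overall probability of the behaviour.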
Morpho-granulometric and structural study of durum wheat semolina: hydration and agglomeration properties
The hydration and agglomeration properties of durum wheat semolina depend on the characteristics of the material to be granulated, of the granulating liquid and of the granulating tool. In the couscous grain manufacturing process, the formation, growth and densification of semolina grains are carried out by adding water, mixing and rolling. A characterisation study of durum wheat semolina was undertaken at different scales of observation (macroscopic, mesoscopic and molecular) in order to understand the hydration and agglomeration mechanisms. Durum wheat semolina is a population of particles that is heterogeneous in particle size and in biochemical composition. While this heterogeneity translates into modified hydration properties, further studies are needed to better understand its influence on agglomeration properties.
Hybrid Rules with Well-Founded Semantics
A general framework is proposed for integration of rules and external first
order theories. It is based on the well-founded semantics of normal logic
programs and inspired by ideas of Constraint Logic Programming (CLP) and
constructive negation for logic programs. Hybrid rules are normal clauses
extended with constraints in the bodies; constraints are certain formulae in
the language of the external theory. A hybrid program is a pair of a set of
hybrid rules and an external theory. Instances of the framework are obtained by
specifying the class of external theories, and the class of constraints. An
example instance is integration of (non-disjunctive) Datalog with ontologies
formalized as description logics.
The paper defines a declarative semantics of hybrid programs and a
goal-driven formal operational semantics. The latter can be seen as a
generalization of SLS-resolution. It provides a basis for hybrid
implementations combining Prolog with constraint solvers. Soundness of the
operational semantics is proven. Sufficient conditions for decidability of the
declarative semantics, and for completeness of the operational semantics are
given.
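To make the shape of these objects concrete, a hybrid rule as described above can be written schematically (a notational sketch based on the abstract, not the paper's exact syntax) as

\[
H \;\leftarrow\; B_1,\ \ldots,\ B_n,\ \mathit{not}\,B_{n+1},\ \ldots,\ \mathit{not}\,B_m,\ C
\]

where the $B_i$ are ordinary literals of the rule language and $C$ is a constraint, i.e. a formula in the language of the external theory $\mathcal{T}$; a hybrid program is then the pair $(\mathcal{P}, \mathcal{T})$ of a set $\mathcal{P}$ of such rules together with $\mathcal{T}$. In the Datalog-with-description-logics instance mentioned above, $C$ would be a description logic formula over the ontology's predicates.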
Trend-based analysis of a population model of the AKAP scaffold protein
We formalise a model of the AKAP scaffold protein, acting as a crosstalk mediator between two biochemical signalling pathways, as a continuous-time Markov chain with a multi-dimensional discrete state space. Analysing the AKAP model with temporal properties requires reasoning about whether the counts of individuals of the same type (species) are increasing or decreasing. For this purpose we propose the concept of stochastic trends, based on formulating the probabilities of transitions that increase (resp. decrease) the counts of individuals of the same type, and we express these probabilities as formulae such that the state space of the model is not altered. We define a number of stochastic trend formulae (e.g. weakly increasing, strictly increasing, weakly decreasing) and use them to extend the set of state formulae of Continuous Stochastic Logic. We show how stochastic trends can be implemented in a guarded-command style specification language for transition systems. We illustrate the application of stochastic trends with numerous small examples and then analyse the AKAP model in order to characterise and demonstrate causality and pulsating behaviours in this biochemical system.
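Since the abstract does not spell the trend formulae out, the Python sketch below illustrates one plausible reading (an assumption, not the authors' definition): in a given state of the Markov chain, the probability that the next transition increases a given species is the summed rate of the count-increasing transitions divided by the total exit rate. The two-species toy model and all names are invented for illustration.

    # Guarded-command style transitions: (guard, rate function, stoichiometric change).
    TRANSITIONS = [
        (lambda s: s["A"] > 0, lambda s: 1.0 * s["A"], {"A": -1, "B": +1}),
        (lambda s: s["B"] > 0, lambda s: 0.5 * s["B"], {"A": +1, "B": -1}),
        (lambda s: s["A"] > 0 and s["B"] > 0, lambda s: 0.1 * s["A"] * s["B"], {"A": -1, "B": -1}),
    ]

    def trend_probability(state, species, direction=+1):
        """Probability that the next jump changes `species` in `direction`."""
        total = moving = 0.0
        for guard, rate, change in TRANSITIONS:
            if not guard(state):
                continue
            r = rate(state)
            total += r
            if direction * change.get(species, 0) > 0:
                moving += r
        return moving / total if total > 0 else 0.0

    state = {"A": 10, "B": 2}
    p_inc = trend_probability(state, "B", +1)
    p_dec = trend_probability(state, "B", -1)
    # Under this reading, "weakly increasing" for B could mean p_inc >= p_dec.
    print("P(next transition increases B) =", p_inc, " decreases B =", p_dec)

Because these probabilities are computed from the rates available in each state, such formulae can be evaluated without adding any bookkeeping variables, which matches the abstract's requirement that the state space is not altered.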
On Robustness Computation and Optimization in BIOCHAM-4
BIOCHAM-4 is a tool for modeling, analyzing and synthesizing biochemical reaction networks with respect to some formal, yet possibly imprecise, specification of their behavior. We focus here on a new capability of this tool: optimizing the robustness of a parametric model with respect to a specification of its dynamics in quantitative temporal logic. More precisely, we present two complementary notions of robustness: the statistical notion of model robustness to parameter perturbations, defined as its mean functionality, and a metric notion of formula satisfaction robustness, defined as the penetration depth in the validity domain of the temporal logic constraints. We show how the formula robustness can be used in BIOCHAM-4 with no extra cost as an objective function in the parameter optimization procedure, to actually improve the model robustness. We illustrate these unique features with a classical example from the hybrid systems community and provide some performance figures on a model of MAPK signalling with 37 parameters.
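The abstract does not reproduce the definitions; as a hedged reading of the statistical notion (mean functionality under perturbations), the robustness of a model $M$ with respect to a temporal specification $\varphi$ and a distribution $\pi$ of parameter perturbations can be written, and estimated by sampling, as

\[
R_{M,\varphi} \;=\; \int_{p} \mathrm{sd}\big(\varphi, M(p)\big)\,\pi(p)\,dp
\;\;\approx\;\; \frac{1}{N}\sum_{i=1}^{N} \mathrm{sd}\big(\varphi, M(p_i)\big), \qquad p_i \sim \pi,
\]

where $\mathrm{sd}$ is a continuous satisfaction degree of $\varphi$ on the behaviour of the perturbed model $M(p)$. The formula satisfaction robustness discussed above plays the role of $\mathrm{sd}$ when it measures penetration depth into the validity domain of the constraints; the precise definitions are those of the paper, not this sketch.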
Average Case Analysis of Unification Algorithms
Unification in first-order languages is a central operation in symbolic computation and logic programming. Many unification algorithms have been proposed in the past, yet there is no consensus on which algorithm is the best to use in practice. While Paterson and Wegman's linear unification algorithm has the lowest worst-case time complexity, it carries a significant implementation overhead. This also holds, although to a lesser extent, for Martelli and Montanari's algorithm [MM82], and Robinson's algorithm [Rob71] is the one ultimately retained in many applications despite its exponential worst-case time complexity. There are several explanations for this situation: one important argument is that in practice unification subproblems are not independent, and linear unification algorithms do not perform well on sequences of unify-deunify operations [MU86]. In this paper we present average-case complexity-theoretic arguments. We first show that the family of unifiable pairs of binary trees is exponentially negligible with respect to the family of arbitrary pairs of binary trees formed over l binary function symbols, c constants and v variables. We analyze the different reasons for failure and obtain asymptotic and numerical evaluations. We then extend the previous results of [DL89] to these families of trees and show that a slight modification of Herbrand-Robinson's algorithm has a constant average cost on random pairs of trees. On the other hand, we show that various variants of Martelli and Montanari's algorithm all have a linear average cost on random pairs of trees. The point is that failures by clash are not sufficient to yield a constant average cost; an efficient occur check (i.e. one that does not require a complete traversal of subterms) is necessary. In the last section we extend the results on the probability of the occur check to the presence of an unbounded number of variables.
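For readers who want the baseline operation being analysed, the sketch below is a plain textbook Robinson-style unification with an explicit occur check, written in Python; its term representation and names are assumptions, and it is a generic illustration rather than the specific variants or the random-tree model studied in the paper.

    # Terms: variables are strings starting with an uppercase letter,
    # compound terms and constants are tuples (functor, arg1, ..., argn).
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t, subst):
        t = walk(t, subst)
        if t == v:
            return True
        if isinstance(t, tuple):
            return any(occurs(v, arg, subst) for arg in t[1:])
        return False

    def unify(t1, t2, subst=None):
        """Return a most general unifier as a dict, or None on failure."""
        if subst is None:
            subst = {}
        t1, t2 = walk(t1, subst), walk(t2, subst)
        if t1 == t2:
            return subst
        if is_var(t1):
            return None if occurs(t1, t2, subst) else {**subst, t1: t2}
        if is_var(t2):
            return None if occurs(t2, t1, subst) else {**subst, t2: t1}
        if isinstance(t1, tuple) and isinstance(t2, tuple) and \
           t1[0] == t2[0] and len(t1) == len(t2):
            for a, b in zip(t1[1:], t2[1:]):
                subst = unify(a, b, subst)
                if subst is None:
                    return None      # failure by clash in a subterm
            return subst
        return None                  # failure by clash at the root

    # Example: unifying f(X, g(a)) with f(g(Y), g(Y)) binds X to g(Y) and Y to a.
    print(unify(("f", "X", ("g", ("a",))), ("f", ("g", "Y"), ("g", "Y"))))

The occur check here traverses the whole subterm, which is exactly the cost the paper argues must be avoided to reach a constant average cost.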
Model Revision from Temporal Logic Properties in Computational Systems Biology
Systems biologists build models of bio-molecular processes from knowledge acquired both at the gene and protein levels, and at the phenotype level through experiments done in wild-type and mutant organisms. In this chapter, we present qualitative and quantitative logic learning tools, and illustrate how they can be useful to the modeler. We focus on biochemical reaction models written in the Systems Biology Markup Language SBML, and interpreted in the Biochemical Abstract Machine BIOCHAM. We first present a model revision algorithm for inferring reaction rules from biological properties expressed in temporal logic. Then we discuss the representations of kinetic models with ordinary differential equations (ODEs) and with stochastic logic programs (SLPs), and describe a parameter search algorithm for finding parameter values satisfying quantitative temporal properties. These methods are illustrated by a simple model of cell cycle control, and by an application to modelling the conditions of synchronization in period of the cell cycle by the circadian cycle.
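As a standard illustration of the ODE representation of kinetic models mentioned above (a textbook example, not a model from the chapter), a mass-action reaction $A + B \xrightarrow{k} C$ gives

\[
\frac{d[C]}{dt} = k\,[A]\,[B], \qquad \frac{d[A]}{dt} = \frac{d[B]}{dt} = -k\,[A]\,[B],
\]

while the same rule read stochastically contributes a transition of propensity $k \cdot \#A \cdot \#B$ that decrements the counts of $A$ and $B$ and increments that of $C$.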
Efficient Parallel Statistical Model Checking of Biochemical Networks
We consider the problem of verifying stochastic models of biochemical
networks against behavioral properties expressed in temporal logic terms. Exact
probabilistic verification approaches, such as CSL/PCTL model checking, are
undermined by a huge computational demand that rules them out for
most real case studies. Less demanding approaches, such as statistical model
checking, estimate the likelihood that a property is satisfied by sampling
executions out of the stochastic model. We propose a methodology for
efficiently estimating the likelihood that an LTL property P holds for a
stochastic model of a biochemical network. As with other statistical
verification techniques, the methodology we propose uses a stochastic
simulation algorithm for generating execution samples; however, three
key aspects improve its efficiency. First, the sample generation is driven
by on-the-fly verification of P, which results in optimal overall simulation
time. Second, the confidence interval estimation for the probability of P to
hold is based on an efficient variant of the Wilson method which ensures a
faster convergence. Third, the whole methodology is designed in a parallel
fashion, and a prototype software tool has been implemented that performs the
sampling/verification process in parallel over an HPC architecture.
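As an illustration of the estimation scheme described above, the following Python sketch combines Bernoulli sampling with the Wilson score interval and stops once the interval is tight enough. Here sample_satisfies is a stand-in (an assumption) for simulating one stochastic trace and checking the LTL property on the fly; the biased coin, batch size and stopping width are invented for the example.

    import math, random

    def wilson_interval(successes, n, z=1.96):
        """Wilson score confidence interval for a Bernoulli proportion."""
        p_hat = successes / n
        denom = 1.0 + z * z / n
        centre = (p_hat + z * z / (2 * n)) / denom
        half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
        return centre - half, centre + half

    def sample_satisfies():
        # Stand-in for "simulate one trace and check the LTL property on the fly".
        return random.random() < 0.3

    def estimate_probability(width=0.02, batch=100, max_samples=1_000_000):
        successes = n = 0
        while n < max_samples:
            successes += sum(sample_satisfies() for _ in range(batch))
            n += batch
            lo, hi = wilson_interval(successes, n)
            if hi - lo <= width:          # stop once the interval is tight enough
                break
        return successes / n, (lo, hi), n

    if __name__ == "__main__":
        p, (lo, hi), n = estimate_probability()
        print(f"P(property) ~ {p:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), samples {n}")

In the setting of the paper the batches of samples would be generated and checked in parallel over an HPC architecture; the serial loop here only illustrates the statistics.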
A general computational method for robustness analysis with applications to synthetic gene networks
Motivation: Robustness is the capacity of a system to maintain a function in the face of perturbations. It is essential for the correct functioning of natural and engineered biological systems. Robustness is generally defined in an ad hoc, problem-dependent manner, thus hampering the fruitful development of a theory of biological robustness, recently advocated by Kitano
- …