
    Analysis of signalling pathways using the PRISM model checker

    We describe a new modelling and analysis approach for signal transduction networks in the presence of incomplete data. We illustrate the approach with an example, the RKIP-inhibited ERK pathway [1]. Our models are based on high-level descriptions of continuous-time Markov chains: reactions are modelled as synchronous processes and concentrations are modelled by discrete, abstract quantities. The main advantage of our approach is that, using a (continuous-time) stochastic logic and the PRISM model checker, we can perform quantitative analysis of queries such as "if a concentration reaches a certain level, will it remain at that level thereafter?" We also perform standard simulations and compare our results with a traditional ordinary differential equation model. An interesting result is that, for the example pathway, only a small number of discrete data values is required to render the simulations practically indistinguishable.
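    To make the abstraction concrete, the following is a minimal Python sketch (not the authors' PRISM model) of the idea: concentrations are discretised into a handful of abstract levels, the resulting continuous-time Markov chain is simulated with Gillespie's algorithm, and a probability of the kind a CSL query would ask for is estimated by Monte Carlo. The two-reaction system, the number of levels and the rate constants are illustrative assumptions.

```python
import random

# Sketch: two abstract species A and B whose "concentrations" are coarse
# discrete levels 0..N_LEVELS-1; reactions A -> B and B -> A form a CTMC.
N_LEVELS = 5                         # number of abstract concentration levels
K_FWD, K_BACK = 1.0, 0.4             # hypothetical rate constants

def gillespie(t_end=50.0, seed=None):
    rng = random.Random(seed)
    a, b = N_LEVELS - 1, 0           # initial abstract levels of A and B
    t = 0.0
    while True:
        r1, r2 = K_FWD * a, K_BACK * b   # propensities of A->B and B->A
        total = r1 + r2
        if total == 0:
            break
        t += rng.expovariate(total)      # time to the next reaction
        if t >= t_end:
            break                        # no further reaction before t_end
        if rng.random() < r1 / total:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

# Monte-Carlo estimate in the spirit of a CSL query:
# "what is the probability that A is at level 0 at time t_end?"
runs = 2000
hits = sum(1 for i in range(runs) if gillespie(seed=i)[0] == 0)
print(f"estimated P(A at level 0 at t=50) = {hits / runs:.3f}")
```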

    A two-loop optimization strategy for multi-objective optimal experimental design

    A new strategy for optimal experimental design (OED) is proposed for a kinetically controlled synthesis system, considering both observation design and input design. The observation design, which combines sampling scheduling and measurement set selection, is treated as a single optimization problem solved in the inner loop, while the optimization of input intensity is carried out in the outer loop. This multi-objective dynamic optimization problem is solved by integrating a particle swarm algorithm (for the outer loop) with an interior-point method (for the inner loop). Numerical studies demonstrate the efficiency of this optimization strategy and show the effectiveness of the integrated OED in reducing parameter estimation uncertainties. In addition, process optimization of the case-study enzyme reaction system is investigated with the aim of obtaining the maximum production rate while taking the experimental cost into account.
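    The following Python sketch illustrates the two-loop structure under stated assumptions: a toy particle swarm in the outer loop searches over a scalar input intensity, and for each candidate the inner loop allocates sampling weights over candidate time points to minimise a D-optimality-style criterion. The model, its sensitivities, the criterion and the projected-gradient inner solver (a stand-in for the paper's interior-point method) are all illustrative, not the paper's implementation.

```python
import numpy as np

TIMES = np.linspace(0.5, 10.0, 8)              # candidate sampling times

def sensitivities(u):
    # hypothetical 2-parameter sensitivity vectors at each time point
    return np.stack([TIMES * np.exp(-u * TIMES), np.exp(-u * TIMES)], axis=1)

def inner_design(u, iters=200, lr=0.05):
    """Inner loop: weight allocation minimising -log det of the information matrix."""
    S = sensitivities(u)
    w = np.full(len(TIMES), 1.0 / len(TIMES))  # start from uniform weights
    for _ in range(iters):
        M = S.T @ (w[:, None] * S)             # Fisher information matrix
        Minv = np.linalg.inv(M + 1e-9 * np.eye(2))
        grad = -np.einsum('ij,jk,ik->i', S, Minv, S)   # d(-log det M)/dw
        w = np.clip(w - lr * grad, 0.0, None)
        w /= w.sum()                           # project back onto the simplex
    M = S.T @ (w[:, None] * S)
    return -np.log(np.linalg.det(M + 1e-9 * np.eye(2)))

def outer_pso(n_particles=10, iters=30, seed=0):
    """Outer loop: toy particle swarm over the scalar input intensity u."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.1, 2.0, n_particles)
    v = np.zeros(n_particles)
    best_u = u.copy()
    best_f = np.array([inner_design(x) for x in u])
    g_u, g_f = best_u[best_f.argmin()], best_f.min()
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (best_u - u) + 1.5 * r2 * (g_u - u)
        u = np.clip(u + v, 0.1, 2.0)
        f = np.array([inner_design(x) for x in u])
        improved = f < best_f
        best_u[improved], best_f[improved] = u[improved], f[improved]
        if f.min() < g_f:
            g_u, g_f = u[f.argmin()], f.min()
    return g_u, g_f

print("best input intensity and criterion value:", outer_pso())
```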

    Simulation of a Petri net-based Model of the Terpenoid Biosynthesis Pathway

    Background: The development and simulation of dynamic models of terpenoid biosynthesis has yielded a systems perspective that provides new insights into how the structure of this biochemical pathway affects compound synthesis. These insights may eventually help identify reactions that could be experimentally manipulated to amplify terpenoid production. In this study, a dynamic model of the terpenoid biosynthesis pathway was constructed based on the Hybrid Functional Petri Net (HFPN) technique, a fusion of three other extended Petri net techniques, namely Hybrid Petri Net (HPN), Hybrid Dynamic Net (HDN) and Functional Petri Net (FPN). Results: The biological data needed to construct the terpenoid metabolic model were gathered from the literature and from biological databases. These data were used as building blocks to create an HFPNe model and to generate parameters that govern the global behaviour of the model. The dynamic model was simulated and validated against known experimental data obtained from extensive literature searches. The model successfully simulated metabolite concentration changes over time (pt), and the observations correlated with known data. Interactions between the intermediates that affect the production of terpenes could be observed through the introduction of inhibitors that established feedback loops within, and crosstalk between, the pathways. Conclusions: Although this metabolic model is only preliminary, it will provide a platform for analysing various high-throughput data, and it should lead to a more holistic understanding of terpenoid biosynthesis.
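    The following Python sketch illustrates the simulation principle of such a hybrid functional net under simplifying assumptions: continuous places hold metabolite amounts, each transition carries a speed function of the current marking, and the marking is advanced by Euler steps. The two-step toy pathway, species names and rate constants are invented for illustration and are not taken from the published model.

```python
# Continuous places hold metabolite amounts (toy values).
places = {"IPP": 10.0, "GPP": 0.0, "monoterpene": 0.0}

transitions = [
    # (input arcs, output arcs, speed function of the current marking)
    ({"IPP": 2}, {"GPP": 1},          lambda m: 0.3 * m["IPP"]),
    ({"GPP": 1}, {"monoterpene": 1},  lambda m: 0.2 * m["GPP"]),
]

def simulate(m, dt=0.01, t_end=20.0):
    """Advance the continuous marking with simple Euler steps."""
    t, trace = 0.0, []
    while t <= t_end:
        trace.append((round(t, 2), dict(m)))
        for inputs, outputs, speed in transitions:
            v = speed(m) * dt
            # fire continuously only while every input place holds enough
            if all(m[p] >= w * v for p, w in inputs.items()):
                for p, w in inputs.items():
                    m[p] -= w * v
                for p, w in outputs.items():
                    m[p] += w * v
        t += dt
    return trace

final_time, final_marking = simulate(dict(places))[-1]
print(final_time, final_marking)
```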

    A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems

    Background: Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we combine two important aspects: 1) process scalability, achieved through a relational database implementation, and 2) process correctness, ensured by process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through simple end-user interfaces.
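    As an illustration of the kind of control-flow specification involved, the Python sketch below defines ACP-like sequential, alternative and interleaving composition operators and enumerates the admissible execution orders of a toy testing workflow. The step names and the workflow itself are hypothetical and not taken from the CEGH system.

```python
class Step:                      # an atomic action
    def __init__(self, name): self.name = name
    def traces(self): return [[self.name]]

class Seq:                       # sequential composition (ACP '.')
    def __init__(self, *ps): self.ps = ps
    def traces(self):
        out = [[]]
        for p in self.ps:
            out = [a + b for a in out for b in p.traces()]
        return out

class Alt:                       # alternative composition (ACP '+')
    def __init__(self, *ps): self.ps = ps
    def traces(self): return [t for p in self.ps for t in p.traces()]

class Par:                       # interleaving merge (ACP '||'), two branches
    def __init__(self, p, q): self.p, self.q = p, q
    def traces(self):
        return [t for a in self.p.traces()
                  for b in self.q.traces()
                  for t in interleave(a, b)]

def interleave(a, b):
    if not a: return [b]
    if not b: return [a]
    return [[a[0]] + t for t in interleave(a[1:], b)] + \
           [[b[0]] + t for t in interleave(a, b[1:])]

workflow = Seq(Step("receive_sample"),
               Par(Step("dna_extraction"), Step("consent_check")),
               Alt(Step("sanger_sequencing"), Step("mlpa_assay")),
               Step("report_result"))

valid = workflow.traces()
print(len(valid), "admissible execution orders")
print(valid[0])
```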

    Smart Environments for Collaborative Design, Implementation, and Interpretation of Scientific Experiments

    Ambient intelligence promises to enable humans to smoothly interact with their environment, mediated by computer technology. In the literature on ambient intelligence, empirical scientists are not often mentioned, yet they form an interesting target group for this technology. In this position paper, we describe a project aimed at realising an ambient intelligence environment for face-to-face meetings of researchers with different academic backgrounds involved in molecular biology “omics” experiments. In particular, microarray experiments are a focus of attention because these experiments require multidisciplinary collaboration for their design, analysis, and interpretation. Such an environment is characterised by a high degree of complexity that has to be mitigated by ambient intelligence technology. By experimenting in a real-life setting, we will learn more about life scientists as a user group.

    Ontology-based instance data validation for high-quality curated biological pathways

    Background: Modeling in systems biology is vital for understanding the complexity of biological systems across scales and predicting system-level behaviors. To obtain high-quality pathway databases, it is essential to improve the efficiency of model validation and model update based on appropriate feedback. Results: We have developed a new method to guide the creation of novel high-quality biological pathways, using rule-based validation. Rules are defined to validate models against biological semantics and to improve models for dynamic simulation. In this work, we have defined 40 rules that constrain event-specific participants and their related features and add missing processes based on biological events. This approach is applied to data in the Cell System Ontology, a comprehensive ontology that represents complex biological pathways with dynamics and visualization. The experimental results show that these relatively simple rules can efficiently detect errors made during curation, such as misassignment and misuse of ontology concepts and terms in curated models. Conclusions: A new rule-based approach has been developed to facilitate model validation and model complementation. Our rule-based validation, which embeds biological semantics, enables us to provide high-quality curated biological pathways. This approach can serve as a preprocessing step for model integration, exchange, data extraction, and simulation.
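    The Python sketch below illustrates the general shape of such rule-based validation under stated assumptions: each rule inspects one event in a model instance and reports violations such as a missing participant or a missing product needed for simulation. The event structure and the two example rules are illustrative and are not among the paper's 40 Cell System Ontology rules.

```python
# Toy curated model: a list of event instances with typed roles.
events = [
    {"type": "phosphorylation", "substrate": "ERK", "kinase": None,
     "product": "ERK-P"},
    {"type": "binding", "participants": ["RKIP", "Raf-1"], "product": None},
]

def rule_kinase_required(e):
    if e["type"] == "phosphorylation" and not e.get("kinase"):
        return "phosphorylation event lacks a kinase participant"

def rule_binding_has_product(e):
    if e["type"] == "binding" and not e.get("product"):
        return "binding event lacks a complex product (needed for simulation)"

RULES = [rule_kinase_required, rule_binding_has_product]

def validate(model_events):
    """Apply every rule to every event and collect the violations."""
    report = []
    for i, e in enumerate(model_events):
        for rule in RULES:
            msg = rule(e)
            if msg:
                report.append((i, rule.__name__, msg))
    return report

for index, rule_name, message in validate(events):
    print(f"event {index}: [{rule_name}] {message}")
```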

    Putative cold acclimation pathways in Arabidopsis thaliana identified by a combined analysis of mRNA co-expression patterns, promoter motifs and transcription factors

    Background: With the advent of microarray technology, it has become feasible to identify virtually all genes in an organism that are induced by developmental or environmental changes. However, relying solely on gene expression data may be of limited value if the aim is to infer the underlying genetic networks. Development of computational methods that combine microarray data with other information sources is therefore necessary. Here we describe one such method. Results: By means of our method, previously published Arabidopsis microarray data from cold-acclimated plants at six different time points, promoter motif sequence data extracted from ~24,000 Arabidopsis promoters, and known transcription factor binding sites were combined to construct a putative genetic regulatory interaction network. The inferred network includes both previously characterised and hitherto undescribed regulatory interactions between transcription factor (TF) genes and genes that encode other TFs or other proteins. Part of the obtained transcription factor regulatory network is presented here; more detailed information is available in the additional files. Conclusion: The rule-based method described here can be used to infer genetic networks by combining data from microarrays, promoter sequences and known promoter binding sites, and should in principle be applicable to any biological system. We tested the method on the cold acclimation process in Arabidopsis and identified a more complex putative genetic regulatory network than previously described. However, it should be noted that information on specific binding sites for individual TFs was in most cases not available; thus, gene targets were predicted for entire TF gene families. In addition, the networks were built solely by a bioinformatics approach, and experimental verification will be necessary for their final validation. On the other hand, since our method highlights putative novel interactions, more directed experiments can now be performed.
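    A minimal Python sketch of the combination idea, under illustrative assumptions: an edge from a TF to a gene is proposed only if their expression profiles across the time course are strongly correlated and a motif known for that TF family occurs in the gene's promoter. The expression values, motif assignments and threshold are toy numbers, not the published data or rules.

```python
import numpy as np

# Toy inputs: expression over a six-point cold time course, motifs found in
# each promoter, and motifs known to be bound by a TF family.
expr = {
    "CBF3":   np.array([0.1, 2.5, 3.0, 2.8, 1.5, 0.8]),   # TF gene
    "COR15A": np.array([0.0, 2.0, 2.8, 2.6, 1.2, 0.6]),   # putative target
    "RAB18":  np.array([0.2, 0.3, 1.8, 2.9, 2.7, 2.2]),
}
promoter_motifs = {"COR15A": {"CRT/DRE"}, "RAB18": {"ABRE"}}
tf_family_motifs = {"CBF3": {"CRT/DRE"}}     # binding sites per TF family

def infer_edges(corr_threshold=0.8):
    """Propose TF -> gene edges supported by both data sources."""
    edges = []
    for tf, motifs in tf_family_motifs.items():
        for gene in expr:
            if gene == tf:
                continue
            r = np.corrcoef(expr[tf], expr[gene])[0, 1]   # co-expression
            if r >= corr_threshold and motifs & promoter_motifs.get(gene, set()):
                edges.append((tf, gene, round(float(r), 2)))
    return edges

# With these toy numbers only CBF3 -> COR15A passes both filters.
print(infer_edges())
```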

    Petri Net computational modelling of Langerhans cell Interferon Regulatory Factor Network predicts their role in T cell activation

    Langerhans cells (LCs) are able to orchestrate adaptive immune responses in the skin by interpreting the microenvironmental context in which they encounter foreign substances, but the regulatory basis for this has not been established. Utilising systems immunology approaches that combine in silico modelling of a reconstructed gene regulatory network (GRN) with in vitro validation of the predictions, we sought to determine the mechanisms of regulation of immune responses in human primary LCs. The key role of interferon regulatory factors (IRFs) as controllers of the human Langerhans cell response to epidermal cytokines was revealed by whole-transcriptome analysis. Applying Boolean logic, we assembled a Petri net-based model of the IRF-GRN which provides molecular pathway predictions for the induction of different transcriptional programmes in LCs. In silico simulations performed after model parameterisation with transcription factor expression values predicted that human LC activation of antigen-specific CD8 T cells would be differentially regulated by epidermal cytokine induction of specific IRF-controlled pathways. This was confirmed by in vitro measurement of IFN-γ production by activated T cells. As a proof of concept, this approach shows that stochastic modelling of a specific immune network renders transcriptome data valuable for the prediction of functional outcomes of immune responses.
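    The Python sketch below illustrates the style of Boolean-logic GRN simulation described, under stated assumptions: each node is ON or OFF, update rules are Boolean functions of the current state, and stochastic asynchronous updates with a small noise term stand in for the Petri net execution semantics. The node set, rules and noise level are illustrative and do not reproduce the reconstructed LC IRF network.

```python
import random

# Toy Boolean update rules: each node's next value is a function of the state.
RULES = {
    "TNF_signal":  lambda s: s["TNF_signal"],            # fixed input cytokine
    "IRF1":        lambda s: s["TNF_signal"],
    "IRF4":        lambda s: not s["TNF_signal"],
    "CD8_priming": lambda s: s["IRF1"] and not s["IRF4"],
}

def simulate(tnf_on, steps=200, flip_prob=0.02, seed=0):
    """Asynchronous stochastic simulation; returns how often priming is ON."""
    rng = random.Random(seed)
    state = {"TNF_signal": tnf_on, "IRF1": False, "IRF4": False,
             "CD8_priming": False}
    active = 0
    for _ in range(steps):
        node = rng.choice(list(RULES))          # pick one node to update
        value = RULES[node](state)
        if rng.random() < flip_prob:            # small stochastic noise
            value = not value
        state[node] = value
        active += state["CD8_priming"]
    return active / steps

print("CD8 priming activity with cytokine:   ", simulate(True))
print("CD8 priming activity without cytokine:", simulate(False))
```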

    Modelling and simulation of molecular biology processes based on Petri nets: a literature review

    Petri nets are a discrete-event simulation technique developed for representing systems and, in particular, their concurrency and synchronisation properties. Various extensions of the original theory, namely stochastic, coloured, hybrid and functional extensions, have been used to model molecular biology processes and metabolic networks. This document provides a first review of the different approaches that have been employed and of the biological systems that have been modelled with them. In addition, the application context and modelling objectives of each approach are discussed.