
    Two-Dimensional Electronic Spectroscopy of Chlorophyll a: Solvent Dependent Spectral Evolution

    The interaction of the monomeric chlorophyll Q-band electronic transition with solvents of differing physical-chemical properties is investigated through two-dimensional electronic spectroscopy (2DES). Chlorophyll constitutes the key chromophore molecule in light harvesting complexes. It is well known that the surrounding protein in the light harvesting complex fine-tunes chlorophyll electronic transitions to optimize energy transfer. Therefore, an understanding of the influence of the environment on the monomeric chlorophyll electronic transitions is important. The Q-band 2DES is inhomogeneous at early times, particularly in hydrogen-bonding polar solvents, but also in nonpolar solvents like cyclohexane. Interestingly, this inhomogeneity persists for long times, even up to the nanosecond time scale in some solvents. The reshaping of the 2DES occurs over multiple time scales and was assigned mainly to spectral diffusion. At early times the reshaping is Gaussian-like, hinting at a strong solvent reorganization effect. The temporal evolution of the 2DES response was analyzed in terms of a Brownian oscillator model. The spectral densities underpinning the Brownian oscillator fitting were recovered for the different solvents. The absorption spectra and Stokes shift were also properly described by this model. The extent and nature of inhomogeneous broadening were a strong function of solvent, being larger in H-bonding and viscous media and smaller in nonpolar solvents. The fastest spectral reshaping components were assigned to solvent dynamics, modified by interactions with the solute.
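
    For orientation, the standard overdamped Brownian oscillator expressions commonly used in this kind of analysis are sketched below; the symbols (reorganization energy λ, inverse correlation time Λ, electronic gap frequency ω_eg) are generic textbook notation, not the paper's fitted parameters.

```latex
% Standard overdamped Brownian oscillator forms (textbook expressions, not the
% paper's fitted parameters): spectral density, line-broadening function, and
% the resulting absorption lineshape.
\[
  C''(\omega) = \frac{2\lambda\,\omega\Lambda}{\omega^{2} + \Lambda^{2}},
  \qquad
  g(t) = \int_{0}^{t}\!\mathrm{d}\tau_{1}\int_{0}^{\tau_{1}}\!\mathrm{d}\tau_{2}\,
         C(\tau_{2}),
  \qquad
  \sigma_{A}(\omega) \propto
  \operatorname{Re}\!\int_{0}^{\infty}\!\mathrm{d}t\,
  e^{\,i(\omega-\omega_{eg})t - g(t)} .
\]
```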

    ALC: automated reduction of rule-based models

    Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximative but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is greatly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.

    Rule-based modeling of biochemical systems with BioNetGen

    Rule-based modeling involves the representation of molecules as structured objects and molecular interactions as rules for transforming the attributes of these objects. The approach is notable in that it allows one to systematically incorporate site-specific details about protein-protein interactions into a model for the dynamics of a signal-transduction system, but the method has other applications as well, such as following the fates of individual carbon atoms in metabolic reactions. The consequences of protein-protein interactions are difficult to specify and track with a conventional modeling approach because of the large number of protein phosphoforms and protein complexes that these interactions potentially generate. Here, we focus on how a rule-based model is specified in the BioNetGen language (BNGL) and how a model specification is analyzed using the BioNetGen software tool. We also discuss new developments in rule-based modeling that should enable the construction and analysis of comprehensive models for signal transduction pathways and similarly large-scale models for other biochemical systems. Key Words: Computational systems biology; mathematical modeling; combinatorial complexity; software; formal languages; stochastic simulation; ordinary differential equations; protein-protein interactions; signal transduction; metabolic networks.
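
    As a concrete illustration of the rule-based style described above, the sketch below writes a minimal BNGL model for reversible ligand-receptor binding from Python; the file name, species names, and rate constants are placeholders, and the resulting file would then be processed with the BioNetGen tool itself.

```python
# A minimal, illustrative BNGL model (reversible ligand-receptor binding),
# written to disk from Python. Names and rate constants are placeholders;
# the file can then be processed with the BioNetGen software.
from pathlib import Path

BNGL_MODEL = """\
begin model
begin parameters
  kon   0.01   # association rate constant (placeholder)
  koff  0.1    # dissociation rate constant (placeholder)
end parameters
begin molecule types
  L(r)         # ligand with one receptor-binding site
  R(l)         # receptor with one ligand-binding site
end molecule types
begin seed species
  L(r)  1000
  R(l)  500
end seed species
begin observables
  Molecules LR_bound L(r!1).R(l!1)   # count of bound complexes
end observables
begin reaction rules
  # One rule stands in for every molecular context in which L and R can bind.
  L(r) + R(l) <-> L(r!1).R(l!1)  kon, koff
end reaction rules
end model

generate_network({overwrite=>1})
simulate({method=>"ode", t_end=>10, n_steps=>100})
"""

Path("ligand_receptor.bngl").write_text(BNGL_MODEL)
print("wrote ligand_receptor.bngl")
```

    The point of the single rule is that it applies in every context in which the two sites can meet, which is what spares the modeler from enumerating complexes by hand.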

    Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run-time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility. © 2014 Hogg et al.
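
    The toy sketch below (in Python, with made-up rates and counts) illustrates the hybrid idea in its simplest form rather than the paper's partial-network-expansion algorithm: an abundant, unstructured ligand pool is tracked as a single population count, while a handful of receptors are tracked as explicit particles carrying state.

```python
# Toy sketch of hybrid particle/population bookkeeping (not the HPP algorithm
# from the paper): an abundant ligand pool is a single integer count, while a
# small number of receptors are explicit particle objects with state.
import random

random.seed(0)

K_BIND = 1e-4    # per ligand-receptor pair, per unit time (assumed)
K_UNBIND = 0.1   # per bound receptor, per unit time (assumed)

ligand_count = 10_000                               # population variable
receptors = [{"bound": False} for _ in range(50)]   # particles with state

t, t_end = 0.0, 100.0
while t < t_end:
    free = [r for r in receptors if not r["bound"]]
    bound = [r for r in receptors if r["bound"]]
    a_bind = K_BIND * ligand_count * len(free)
    a_unbind = K_UNBIND * len(bound)
    a_total = a_bind + a_unbind
    if a_total == 0:
        break
    # Gillespie step: exponential waiting time, then pick a reaction channel.
    t += random.expovariate(a_total)
    if random.random() * a_total < a_bind:
        random.choice(free)["bound"] = True
        ligand_count -= 1           # population update, no particle created
    else:
        random.choice(bound)["bound"] = False
        ligand_count += 1

print(f"t={t:.1f}: bound receptors =", sum(r["bound"] for r in receptors),
      ", free ligand =", ligand_count)
```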

    Efficient Syntax-Driven Lumping of Differential Equations

    We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations (ODEs). Our approach extends Paige and Tarjan’s seminal solution to the coarsest refinement problem by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because (i) it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and (ii) we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in O(r·s·log s) time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.
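
    A minimal numerical illustration of exact ODE lumping, using a toy symmetric system rather than the paper's partition-refinement algorithm: two variables with identical dynamics are aggregated into their sum, and the lumped equation reproduces that sum.

```python
# Toy illustration of exact ODE lumping (not the paper's algorithm): two
# variables with symmetric dynamics are aggregated into their sum, and the
# lumped ODE tracks that sum exactly. Rate constants are arbitrary.
K, B, DT, STEPS = 0.5, 1.0, 1e-3, 5000

# Original system: dx1/dt = -K*x1 + B,  dx2/dt = -K*x2 + B
x1, x2 = 3.0, 1.0
# Lumped system: y = x1 + x2  =>  dy/dt = -K*y + 2*B
y = x1 + x2

for _ in range(STEPS):  # forward Euler with the same step size for both
    x1, x2 = x1 + DT * (-K * x1 + B), x2 + DT * (-K * x2 + B)
    y = y + DT * (-K * y + 2 * B)

print(f"x1 + x2 = {x1 + x2:.6f}, lumped y = {y:.6f}")  # agree to round-off
```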

    Exact model reduction of combinatorial reaction networks

    Receptors and scaffold proteins usually possess a high number of distinct binding domains, inducing the formation of large multiprotein signaling complexes. For combinatorial reasons, the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even when only a limited number of components and binding domains is included, the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models.
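
    To make the growth concrete, a back-of-the-envelope count (not taken from the paper) for a scaffold with independent binding domains:

```latex
% Illustrative count (not from the paper): a scaffold with $n$ independent
% binding domains, each either empty or occupied by one of $m$ partners,
% admits $(m+1)^n$ distinguishable species.
\[
  N_{\text{species}} = (m+1)^{n}, \qquad
  \text{e.g. } m = 3,\; n = 6 \;\Rightarrow\; N_{\text{species}} = 4^{6} = 4096 .
\]
```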

    A Role for Rebinding in Rapid and Reliable T Cell Responses to Antigen

    Experimental work has shown that T cells of the immune system rapidly and specifically respond to antigenic molecules presented on the surface of antigen-presenting cells and are able to discriminate between potential stimuli based on the kinetic parameters of the T cell receptor-antigen bond. These antigenic molecules are presented among thousands of chemically similar endogenous peptides, raising the question of how T cells can reliably make a decision to respond to certain antigens but not others within minutes of encountering an antigen-presenting cell. In this theoretical study, we investigate the role of localized rebinding between a T cell receptor and an antigen. We show that by allowing the signaling state of individual receptors to persist during brief unbinding events, T cells are able to discriminate antigens based on both their unbinding and rebinding rates. We demonstrate that T cell receptor coreceptors, but not receptor clustering, are important in promoting localized rebinding, and show that requiring rebinding for productive signaling reduces signals from a high concentration of endogenous pMHC. In developing our main results, we use a relatively simple model based on kinetic proofreading. However, we additionally show that all our results are recapitulated when we use a detailed T cell receptor signaling model. We discuss our results in the context of existing models and recent experimental work and propose new experiments to test our findings.
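
    The discrimination mechanism referred to here, kinetic proofreading, can be sketched generically (this is not the paper's model, and the step count and rates below are arbitrary placeholders): a bound receptor must complete several modification steps before it signals, and dissociation at any step aborts the chain, so modest differences in off-rate are amplified.

```python
# Generic kinetic-proofreading illustration (not the paper's specific model):
# a bound receptor must complete N modification steps, each at rate K_P,
# before signaling; dissociation (rate koff) at any step aborts the chain.
# The chance that one binding event signals is (K_P/(K_P+koff))**N, so small
# differences in koff are amplified through the exponent.
K_P = 1.0      # modification rate per step (assumed units: 1/s)
N = 5          # number of proofreading steps (assumed)

def p_signal(koff, k_p=K_P, n=N):
    """Probability that a single binding event completes all n steps."""
    return (k_p / (k_p + koff)) ** n

agonist, self_peptide = 0.1, 1.0        # off-rates (1/s), placeholders
ratio_binding = self_peptide / agonist  # 10-fold difference in koff
ratio_signal = p_signal(agonist) / p_signal(self_peptide)
print(f"off-rate ratio: {ratio_binding:.0f}x, "
      f"signalling ratio: {ratio_signal:.0f}x")
```

    In this toy setting, with five steps a 10-fold difference in off-rate becomes roughly a 20-fold difference in the per-binding signalling probability.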

    Mathematical models for immunology: current state of the art and future research directions

    The advances in genetics and biochemistry that have taken place over the last 10 years have led to significant advances in experimental and clinical immunology. In turn, this has led to the development of new mathematical models to investigate qualitatively and quantitatively various open questions in immunology. In this study we present a review of some research areas in mathematical immunology that have evolved over the last 10 years. To this end, we take a step-by-step approach in discussing a range of models derived to study the dynamics of both the innate and adaptive immune responses at the molecular, cellular and tissue scales. To emphasise the use of mathematics in modelling in this area, we also review some of the mathematical tools used to investigate these models. Finally, we discuss some future trends in both experimental immunology and mathematical immunology for the upcoming years.

    SBML Level 3: an extensible format for the exchange and reuse of biological models

    Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction-based models and packages that extend the core with features suited to other model types, including constraint-based models, reaction-diffusion models, logical network models, and rule-based models. The format leverages two decades of SBML and a rich software ecosystem that transformed how systems biologists build and interact with models. More recently, the rise of multiscale models of whole cells and organs, and new data sources such as single-cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.
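
    For readers unfamiliar with the format, the hand-written skeleton below sketches the reaction-based structure of an SBML Level 3 core model for a single reversible binding reaction; identifiers are placeholders, attribute details should be checked against the Level 3 specification, and real models are normally produced with a library such as libSBML rather than by hand.

```python
# A minimal, hand-written SBML Level 3 core skeleton for one reversible
# reaction, written to disk from Python. This is an illustrative sketch only:
# identifiers are placeholders and attributes should be validated against the
# SBML Level 3 specification (real models are usually generated via libSBML).
SBML_SKELETON = """\
<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      level="3" version="1">
  <model id="ligand_receptor_binding">
    <listOfCompartments>
      <compartment id="cell" constant="true"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="L"  compartment="cell" hasOnlySubstanceUnits="false"
               boundaryCondition="false" constant="false"/>
      <species id="R"  compartment="cell" hasOnlySubstanceUnits="false"
               boundaryCondition="false" constant="false"/>
      <species id="LR" compartment="cell" hasOnlySubstanceUnits="false"
               boundaryCondition="false" constant="false"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="binding" reversible="true">
        <listOfReactants>
          <speciesReference species="L" stoichiometry="1" constant="true"/>
          <speciesReference species="R" stoichiometry="1" constant="true"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="LR" stoichiometry="1" constant="true"/>
        </listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>
"""

with open("ligand_receptor.sbml.xml", "w", encoding="utf-8") as f:
    f.write(SBML_SKELETON)
```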