
    Evaluation of the Multiplane Method for Efficient Simulations of Reaction Networks

    Reaction networks in the bulk and on surfaces are widespread in physical, chemical and biological systems. In macroscopic systems, which include large populations of reactive species, stochastic fluctuations are negligible and the reaction rates can be evaluated using rate equations. However, many physical systems are partitioned into microscopic domains, where the number of molecules in each domain is small and fluctuations are strong. Under these conditions, the simulation of reaction networks requires stochastic methods such as direct integration of the master equation. However, direct integration of the master equation is infeasible for complex networks, because the number of equations proliferates as the number of reactive species increases. Recently, the multiplane method, which provides a dramatic reduction in the number of equations, was introduced [A. Lipshtat and O. Biham, Phys. Rev. Lett. 93, 170601 (2004)]. The reduction is achieved by breaking the network into a set of maximal fully connected sub-networks (maximal cliques). Lower-dimensional master equations are constructed for the marginal probability distributions associated with the cliques, with suitable couplings between them. In this paper we test the multiplane method and examine its applicability. We show that the method is accurate in the limit of small domains, where fluctuations are strong. It thus provides an efficient framework for the stochastic simulation of complex reaction networks with strong fluctuations, for which rate equations fail and direct integration of the master equation is infeasible. The method also applies in the case of large domains, where it converges to the rate equation results.
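
    As an illustration of the decomposition step described above, the sketch below builds a graph whose nodes are reactive species and whose edges connect species that react with each other, then enumerates its maximal cliques with networkx. The species and reactions are hypothetical placeholders, not taken from the paper; in the multiplane method each resulting clique carries its own lower-dimensional master equation for the marginal distribution of its species, coupled to neighboring cliques through shared species.

```python
# Minimal sketch of the clique-decomposition step of the multiplane method.
# The reaction list is a hypothetical placeholder, not taken from the paper.
import networkx as nx

# Species that appear together in a reaction are connected by an edge.
reactions = [("H", "H"), ("H", "O"), ("H", "OH"), ("O", "O"), ("O", "CO")]

G = nx.Graph()
G.add_nodes_from({s for pair in reactions for s in pair})
G.add_edges_from((a, b) for a, b in reactions if a != b)

# Maximal fully connected sub-networks (maximal cliques); each one would get
# its own low-dimensional master equation for the marginal distribution of
# its species, coupled to neighbouring cliques through the species they share.
cliques = list(nx.find_cliques(G))
print(cliques)   # e.g. [['H', 'O'], ['H', 'OH'], ['O', 'CO']] (order may vary)
```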

    Efficient Stochastic Simulations of Complex Reaction Networks on Surfaces

    Surfaces serve as highly efficient catalysts for a vast variety of chemical reactions. Typically, such surface reactions involve billions of molecules which diffuse and react over macroscopic areas. Therefore, stochastic fluctuations are negligible and the reaction rates can be evaluated using rate equations, which are based on the mean-field approximation. However, if the surface is partitioned into a large number of disconnected microscopic domains, the number of reactants in each domain becomes small and strongly fluctuates. This is, in fact, the situation in the interstellar medium, where some crucial reactions take place on the surfaces of microscopic dust grains. In this case rate equations fail and the simulation of surface reactions requires stochastic methods such as the master equation. However, for complex reaction networks, direct integration of the master equation becomes infeasible because the number of equations proliferates exponentially. To solve this problem, we introduce a stochastic method based on moment equations. In this method the number of equations is dramatically reduced to just one equation for each reactive species and one equation for each reaction. Moreover, the equations can be easily constructed using a diagrammatic approach. We demonstrate the method for a set of astrophysically relevant networks of increasing complexity. It is expected to be applicable in many other contexts that exhibit an analogous structure, such as surface catalysis in nanoscale systems, aerosol chemistry in stratospheric clouds and genetic networks in cells.
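
    To make the idea concrete, here is a minimal sketch of closed moment equations for the simplest surface network, H + H -> H2 on a single grain. The flux F, desorption rate W and sweeping rate A are illustrative values, and the closure <N^3> ~ 3<N^2> - 2<N> (exact when P(N) vanishes for N >= 3) is one simple choice; it is not necessarily the scheme used in the paper.

```python
# Sketch: closed moment equations for H + H -> H2 on a single grain.
# F (flux), W (desorption) and A (sweeping rate) are illustrative values,
# and the closure <N^3> ~= 3<N^2> - 2<N> (exact when P(N)=0 for N>=3) is
# one simple choice, not necessarily the paper's scheme.
from scipy.integrate import solve_ivp

F, W, A = 1e-3, 1e-2, 1.0   # hypothetical rates (s^-1)

def moments(t, y):
    m1, m2 = y                      # <N>, <N^2>
    dm1 = F - W * m1 - 2 * A * (m2 - m1)
    dm2 = F * (2 * m1 + 1) + W * (m1 - 2 * m2) - 4 * A * (m2 - m1)
    return [dm1, dm2]

sol = solve_ivp(moments, (0.0, 1e4), [0.0, 0.0], rtol=1e-8)
m1, m2 = sol.y[:, -1]
print("steady-state <N> ~", m1)
print("H2 production rate R = A<N(N-1)> ~", A * (m2 - m1))
```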

    A Unified Monte Carlo Treatment of Gas-Grain Chemistry for Large Reaction Networks. I. Testing Validity of Rate Equations in Molecular Clouds

    In this study we demonstrate for the first time that the unified Monte Carlo approach can be applied to model gas-grain chemistry in large reaction networks. Specifically, we build a time-dependent gas-grain chemical model of the interstellar medium, involving about 6000 gas-phase and 200 grain surface reactions. This model is used to test the validity of the standard and modified rate equation methods in models of dense and translucent molecular clouds and to specify under which conditions the use of the stochastic approach is desirable. We found that at temperatures of 25--30 K the abundances of H2O, NH3, CO and many other gas-phase and surface species in the stochastic model differ from those in the deterministic models by at least an order of magnitude when tunneling is accounted for and/or diffusion energies are three times lower than the binding energies. In this case, surface reactions involving light species proceed faster than accretion of the same species. In contrast, in the model without tunneling and with high binding energies, when the typical timescale of a surface recombination is greater than the timescale of accretion onto the grain, we obtain almost perfect agreement between the Monte Carlo and deterministic calculations in the same temperature range. At lower temperatures (~10 K) gaseous and, in particular, surface abundances of the most important molecules are not much affected by stochastic processes. Comment: 33 pages, 9 figures, 1 table. Accepted for publication in Ap
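
    For reference, the sketch below shows the standard Gillespie direct method on a toy grain-surface system (accretion, desorption and H + H -> H2) with hypothetical rates. It illustrates the kind of stochastic simulation against which the rate-equation methods are tested, but does not attempt to reproduce the ~6200-reaction model of the paper.

```python
# Gillespie direct-method sketch for a toy grain-surface system with
# hypothetical rates: accretion of H, desorption of H, and H + H -> H2.
import numpy as np

rng = np.random.default_rng(0)
F, W, A = 1e-3, 1e-2, 1.0          # accretion, desorption, recombination (s^-1)

def gillespie(t_end=1e6):
    t, n_H, n_H2 = 0.0, 0, 0
    while t < t_end:
        rates = np.array([F, W * n_H, A * n_H * (n_H - 1)])
        total = rates.sum()        # always > 0 because F > 0
        t += rng.exponential(1.0 / total)
        channel = rng.choice(3, p=rates / total)
        if channel == 0:
            n_H += 1               # an H atom lands on the grain
        elif channel == 1:
            n_H -= 1               # an H atom desorbs
        else:
            n_H -= 2               # two H atoms recombine
            n_H2 += 1
    return n_H, n_H2 / t_end       # surface population, mean H2 production rate

print(gillespie())
```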

    Quantifying the connectivity of a network: The network correlation function method

    Networks are useful for describing systems of interacting objects, where the nodes represent the objects and the edges represent the interactions between them. Applications include chemical and metabolic systems, food webs and social networks. Recently, it was found that many of these networks display common topological features, such as high clustering, a small average path length (small-world networks) and a power-law degree distribution (scale-free networks). The topological features of a network are commonly related to the network's functionality. However, the topology alone does not account for the nature of the interactions in the network or their strength. Here we introduce a method for evaluating the correlations between pairs of nodes in the network. These correlations depend both on the topology and on the functionality of the network. A network with high connectivity displays strong correlations between its interacting nodes and thus features small-world functionality. We quantify the correlations between all pairs of nodes in the network and express them as matrix elements of the correlation matrix. From this information one can plot the correlation function for the network and extract the correlation length. The connectivity of a network is then defined as the ratio between this correlation length and the average path length of the network. Using this method we distinguish between a topological small world and a functional small world, where the latter is characterized by long-range correlations and high connectivity. Clearly, networks that share the same topology may have different connectivities, depending on the nature and strength of their interactions. The method is demonstrated on metabolic networks, but can be readily generalized to other types of networks. Comment: 10 figures
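
    As a sketch of the quantities defined above: given a correlation matrix for all node pairs, one can average the correlations at each topological distance to obtain the correlation function, fit an exponential decay to extract the correlation length, and divide by the average path length to get the connectivity. The example graph and the synthetic distance-decaying correlations below are placeholders; in the paper the correlations are derived from the network's dynamics.

```python
# Sketch: connectivity = correlation length / average path length.
# The example graph and the synthetic, distance-decaying correlation matrix
# are placeholders; in the paper the correlations come from the dynamics.
import numpy as np
import networkx as nx
from scipy.optimize import curve_fit

G = nx.connected_watts_strogatz_graph(100, 4, 0.1, seed=1)
n = G.number_of_nodes()
dist = dict(nx.all_pairs_shortest_path_length(G))

# Placeholder correlation matrix that decays with topological distance.
C = np.zeros((n, n))
for i in G:
    for j, d in dist[i].items():
        C[i, j] = np.exp(-d / 2.0)

# Correlation function g(d): mean |correlation| over all pairs at distance d.
by_d = {}
for i in G:
    for j, d in dist[i].items():
        if j > i:
            by_d.setdefault(d, []).append(abs(C[i, j]))
ds = np.array(sorted(by_d))
g = np.array([np.mean(by_d[d]) for d in ds])

# Fit g(d) ~ a * exp(-d / xi) to extract the correlation length xi.
(a, xi), _ = curve_fit(lambda d, a, xi: a * np.exp(-d / xi), ds, g, p0=(1.0, 1.0))

L = nx.average_shortest_path_length(G)
print("correlation length xi =", xi, " average path length L =", L)
print("connectivity = xi / L =", xi / L)
```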

    Stochastic Analysis of Dimerization Systems

    The process of dimerization, in which two monomers bind to each other and form a dimer, is common in nature. This process can be modeled using rate equations, from which the average copy numbers of the reacting monomers and of the product dimers can then be obtained. However, the rate equations apply only when these copy numbers are large. In the limit of small copy numbers the system becomes dominated by fluctuations, which are not accounted for by the rate equations. In this limit one must use stochastic methods such as direct integration of the master equation or Monte Carlo simulations. These methods are computationally intensive and rarely admit analytical solutions. Here we use the recently introduced moment equations, which provide a highly simplified stochastic treatment of the dimerization process. Using this approach, we obtain an analytical solution for the copy numbers and reaction rates both under steady-state conditions and in the time-dependent case. We analyze three different dimerization processes: dimerization without dissociation, dimerization with dissociation and hetero-dimer formation. To validate the results we compare them with those obtained from the master equation in the stochastic limit and with those obtained from the rate equations in the deterministic limit. Potential applications of the results in different physical contexts are discussed. Comment: 10 figures
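
    For orientation, the deterministic baseline referred to above can be written down in closed form. Under one common convention for dimerization without dissociation, the rate equation is dn/dt = F - W n - 2A n^2, with monomer influx F, monomer degradation rate W and dimerization rate A (all hypothetical symbols here); its steady state follows from a quadratic equation, as in the sketch below. The stochastic (master/moment equation) results deviate from this baseline when copy numbers are small.

```python
# Deterministic (rate-equation) baseline for dimerization without dissociation:
#   dn/dt = F - W*n - 2*A*n**2
# with illustrative influx F, degradation rate W and dimerization rate A.
# At steady state, 2*A*n**2 + W*n - F = 0, solved by the positive root below.
import math

F, W, A = 1e-3, 1e-2, 1.0           # hypothetical parameters

n_ss = (-W + math.sqrt(W**2 + 8 * A * F)) / (4 * A)   # steady-state monomer number
R_ss = A * n_ss**2                                    # dimer formation rate

print("steady-state monomer copy number (rate eq.):", n_ss)
print("dimer formation rate (rate eq.):", R_ss)
```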

    Survival Advantage of Both Human Hepatocyte Xenografts and Genome-Edited Hepatocytes for Treatment of α-1 Antitrypsin Deficiency.

    Hepatocytes represent an important target for gene therapy and editing of single-gene disorders. In α-1 antitrypsin (AAT) deficiency (AATD), one missense mutation results in impaired secretion of AAT. In most patients, lung damage occurs due to a lack of AAT-mediated protection of lung elastin from neutrophil elastase. In some patients, accumulation of misfolded PiZ mutant AAT protein triggers hepatocyte injury, leading to inflammation and cirrhosis. We hypothesized that correcting the Z mutant defect in hepatocytes would confer a selective advantage for repopulation of hepatocytes within an intact liver. A human PiZ allele was crossed onto an immune-deficient (NSG) strain to create a recipient strain (NSG-PiZ) for human hepatocyte xenotransplantation. Results indicate that NSG-PiZ recipients support heightened engraftment of normal human primary hepatocytes as compared with NSG recipients. This model can therefore be used to test hepatocyte cell therapies for AATD, but more broadly it serves as a simple, highly reproducible liver xenograft model. Finally, a promoterless adeno-associated virus (AAV) vector, expressing a wild-type AAT and a synthetic miRNA to silence the endogenous allele, was integrated into the albumin locus. This gene-editing approach confers a selective advantage on edited hepatocytes, by silencing the mutant protein and augmenting normal AAT production, and improves the liver pathology. Mol Ther 2017 Nov 1; 25(11):2477-2489.

    Ice Lines, Planetesimal Composition and Solid Surface Density in the Solar Nebula

    To date, there is no core accretion simulation that can successfully account for the formation of Uranus or Neptune within the observed 2-3 Myr lifetimes of protoplanetary disks. Since the solid accretion rate is directly proportional to the available planetesimal surface density, one way to speed up planet formation is to take a full accounting of all the planetesimal-forming solids present in the solar nebula. By combining a viscously evolving protostellar disk with a kinetic model of ice formation, we calculate the solid surface density in the solar nebula as a function of heliocentric distance and time. We find three effects that strongly favor giant planet formation: (1) a decretion flow that brings mass from the inner solar nebula to the giant planet-forming region, (2) recent lab results (Collings et al. 2004) showing that the ammonia and water ice lines should coincide, and (3) the presence of a substantial amount of methane ice in the trans-Saturnian region. Our results show higher solid surface densities than assumed in the core accretion models of Pollack et al. (1996) by a factor of 3 to 4 throughout the trans-Saturnian region. We also discuss the location of ice lines and their movement through the solar nebula, and provide new constraints on the possible initial disk configurations from gravitational stability arguments. Comment: Version 2 reflects the lead author's name and affiliation change and contains minor changes to the text of version 1. 12 figures, 7 tables, accepted for publication in Icarus.
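
    For orientation only, the snippet below evaluates the classic Hayashi (1981) minimum-mass solar nebula solid surface density, with its factor-of-~4 jump at the water ice line near 2.7 AU. This is not the paper's model (a viscously evolving disk coupled to ice-formation kinetics); it merely illustrates how an ice line raises the budget of planetesimal-forming solids with heliocentric distance.

```python
# Classic Hayashi (1981) minimum-mass solar nebula solid surface density,
# with the jump at the water ice line near 2.7 AU. Illustration only; this is
# not the viscously evolving disk + ice-formation model of the paper.

def sigma_solid_mmsn(r_au):
    """Solid surface density in g/cm^2 at heliocentric distance r_au (in AU)."""
    coeff = 7.1 if r_au < 2.7 else 30.0   # ices condense beyond the ice line
    return coeff * r_au ** -1.5

for r in (1.0, 2.7, 5.2, 9.5, 19.2, 30.1):
    print(f"r = {r:5.1f} AU  ->  sigma_solid = {sigma_solid_mmsn(r):6.2f} g/cm^2")
```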