
    Thermoelectric power in one-dimensional Hubbard model

    The thermoelectric power S is studied within the one-dimensional Hubbard model using linear response theory and numerical exact diagonalization of small systems. While both the diagonal and off-diagonal dynamical correlation functions of the particle and energy currents are singular within the model even at temperature T > 0, S behaves regularly as a function of frequency ω and of T. The dependence on the electron density n below half-filling reveals a change of sign of S at n_0 = 0.73 ± 0.07, attributed to strong correlations, over the whole T range considered. Approaching half-filling, S is hole-like and can become large for U >> t, although it decreases with T.
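    As a rough sketch of the exact-diagonalization step (not of the full linear-response calculation of S), the code below builds the many-body Hamiltonian of a short open Hubbard chain in the occupation-number basis and diagonalizes it. The chain length, hopping amplitude, and interaction strength are illustrative choices; the current-current correlation functions entering the Kubo formula for S would subsequently be assembled from these eigenvalues and eigenvectors.

```python
# Exact diagonalization of a short 1D Hubbard chain (open boundaries).
# Illustrative sketch: L, t_hop and U are arbitrary choices, not values from the paper.
import numpy as np
from itertools import product

L = 4            # number of lattice sites
t_hop = 1.0      # hopping amplitude
U = 8.0          # on-site repulsion

# A basis state is a pair of bit masks (up-spin occupation, down-spin occupation).
states = [(s_up, s_dn) for s_up, s_dn in product(range(1 << L), repeat=2)]
index = {s: k for k, s in enumerate(states)}
dim = len(states)
H = np.zeros((dim, dim))

for k, (s_up, s_dn) in enumerate(states):
    # Interaction term: U * n_up(i) * n_dn(i) summed over doubly occupied sites.
    H[k, k] += U * bin(s_up & s_dn).count("1")
    # Nearest-neighbour hopping; for an open chain the fermionic sign is +1.
    for i in range(L - 1):
        for s, other, spin in ((s_up, s_dn, "up"), (s_dn, s_up, "dn")):
            for a, b in ((i, i + 1), (i + 1, i)):
                if (s >> a) & 1 and not (s >> b) & 1:
                    s_new = s ^ (1 << a) ^ (1 << b)
                    new = (s_new, other) if spin == "up" else (other, s_new)
                    H[index[new], k] += -t_hop

energies, vectors = np.linalg.eigh(H)
print("ground-state energy over the full Fock space:", energies[0])
```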

    Stochastic Simulations of the Repressilator Circuit

    The genetic repressilator circuit consists of three transcription factors, or repressors, which negatively regulate each other in a cyclic manner. This circuit was synthetically constructed on plasmids in Escherichia coli and was found to exhibit oscillations in the concentrations of the three repressors. Since the repressors and their binding sites often appear in low copy numbers, the oscillations are noisy and irregular. Therefore, the repressilator circuit cannot be fully analyzed using deterministic methods such as rate equations. Here we perform a stochastic analysis of the repressilator circuit using the master equation and Monte Carlo simulations. It is found that fluctuations modify the range of conditions in which oscillations appear, as well as their amplitude and period, compared to the deterministic equations. The deterministic and stochastic approaches coincide only in the limit in which all the relevant components, including free proteins, plasmids and bound proteins, appear in high copy numbers. We also find that subtle features such as cooperative binding and bound-repressor degradation strongly affect the existence and properties of the oscillations.
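    As a rough illustration of the Monte Carlo side of such an analysis, the sketch below runs a Gillespie-type stochastic simulation of a heavily simplified three-gene repressor loop (protein production repressed by the upstream protein, plus linear degradation). The rate values, the Hill-type repression term, and the omission of mRNA, plasmid copy number, and bound-repressor states are all simplifying assumptions, not the model used in the paper.

```python
# Gillespie stochastic simulation of a toy three-repressor loop.
# Simplified illustration: rates and the Hill repression term are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma, K, n_hill = 50.0, 1.0, 20.0, 2.0   # production, degradation, threshold, cooperativity
x = np.array([10.0, 0.0, 0.0])                   # copy numbers of the three repressors
t, t_end = 0.0, 100.0
trace = [(t, x.copy())]

while t < t_end:
    # Propensities: protein i is produced at a rate repressed by protein i-1, degraded linearly.
    production = alpha / (1.0 + (x[[2, 0, 1]] / K) ** n_hill)
    degradation = gamma * x
    rates = np.concatenate([production, degradation])
    total = rates.sum()
    t += rng.exponential(1.0 / total)            # waiting time until the next reaction
    r = rng.choice(6, p=rates / total)           # which reaction fires
    x[r % 3] += 1 if r < 3 else -1
    trace.append((t, x.copy()))

print("final copy numbers:", trace[-1][1])
```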

    Basic and applied uses of genome-scale metabolic network reconstructions of Escherichia coli.

    The genome-scale model (GEM) of metabolism in the bacterium Escherichia coli K-12 has been in development for over a decade and is now in wide use. GEM-enabled studies of E. coli have been primarily focused on six applications: (1) metabolic engineering, (2) model-driven discovery, (3) prediction of cellular phenotypes, (4) analysis of biological network properties, (5) studies of evolutionary processes, and (6) models of interspecies interactions. In this review, we provide an overview of these applications along with a critical assessment of their successes and limitations, and a perspective on likely future developments in the field. Taken together, the studies performed over the past decade have established a genome-scale mechanistic understanding of genotype–phenotype relationships in E. coli metabolism that forms the basis for similar efforts for other microbial species. Future challenges include the expansion of GEMs by integrating additional cellular processes beyond metabolism, the identification of key constraints based on emerging data types, and the development of computational methods able to handle such large-scale network models with sufficient accuracy.
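    The review itself does not walk through a specific algorithm, but the workhorse calculation behind most GEM applications is a constraint-based optimization such as flux balance analysis. The sketch below sets one up with scipy for a made-up three-reaction toy network, purely to show the structure of the problem; the stoichiometric matrix, flux bounds, and objective are invented, not taken from an E. coli reconstruction.

```python
# Toy flux balance analysis: maximize a "biomass" flux subject to steady state S v = 0.
# The network below is invented for illustration; real GEMs have thousands of reactions.
import numpy as np
from scipy.optimize import linprog

# Reactions: R0: uptake -> A, R1: A -> B, R2: B -> biomass (drain)
# Metabolites: A, B
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by R0, consumed by R1
    [0.0,  1.0, -1.0],   # B: produced by R1, consumed by R2
])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10 flux units
c = np.array([0.0, 0.0, -1.0])                    # maximize R2 by minimizing its negative

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -result.fun)
print("flux distribution:", result.x)
```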

    Using weak values to experimentally determine "negative probabilities" in a two-photon state with Bell correlations

    Bipartite quantum entangled systems can exhibit measurement correlations that violate Bell inequalities, revealing the profoundly counter-intuitive nature of the physical universe. These correlations reflect the impossibility of constructing a joint probability distribution for all values of all the different properties observed in Bell inequality tests. Physically, the impossibility of measuring such a distribution experimentally, as a set of relative frequencies, is due to the quantum back-action of projective measurements. Weakly coupling to a quantum probe, however, produces minimal back-action, and so enables a weak measurement of the projector of one observable, followed by a projective measurement of a non-commuting observable. By this technique it is possible to empirically measure weak-valued probabilities for all of the values of the observables relevant to a Bell test. The marginals of this joint distribution, which we experimentally determine, reproduce all of the observable quantum statistics, including a violation of the Bell inequality, which we independently measure. This is possible because our distribution, like the weak values for projectors on which it is built, is not constrained to the interval [0, 1]. It was first pointed out by Feynman that, for explaining singlet-state correlations within "a [local] hidden variable view of nature ... everything works fine if we permit negative probabilities". However, there are infinitely many such theories. Our method, involving "weak-valued probabilities", singles out a unique set of probabilities, and moreover does so empirically.
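    A small numerical illustration of the underlying idea is sketched below. For a two-qubit singlet state it evaluates the CHSH combination at the standard measurement angles and a Kirkwood-Dirac-type joint quasiprobability Re<psi|(P_a' P_a) x (P_b' P_b)|psi>, whose entries can lie outside [0, 1] even though its marginals reproduce the ordinary quantum statistics. The choice of angles and this particular quasiprobability construction are illustrative assumptions, not necessarily the exact quantity reported in the paper.

```python
# Singlet-state CHSH value and a joint quasiprobability built from weak values.
# Angles and the Kirkwood-Dirac-type construction are illustrative choices.
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state

def proj(theta, outcome):
    """Projector onto outcome +/-1 of the spin observable along angle theta in the x-z plane."""
    obs = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + outcome * obs) / 2

def corr(ta, tb):
    """Correlation E(ta, tb) = <psi| sigma_ta x sigma_tb |psi>."""
    A = np.cos(ta) * sz + np.sin(ta) * sx
    B = np.cos(tb) * sz + np.sin(tb) * sx
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

# Standard CHSH angles: Alice measures a or a2, Bob measures b or b2.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
chsh = abs(corr(a, b) + corr(a, b2) + corr(a2, b) - corr(a2, b2))
print("CHSH value:", chsh)        # ~2.83 > 2, violating the Bell inequality

# Joint quasiprobability over the outcomes of (a, a2, b, b2): entries can be negative.
for oa, oa2, ob, ob2 in product((+1, -1), repeat=4):
    op = np.kron(proj(a2, oa2) @ proj(a, oa), proj(b2, ob2) @ proj(b, ob))
    q = np.real(psi.conj() @ op @ psi)
    if q < 0:
        print("negative entry", (oa, oa2, ob, ob2), round(q, 4))
```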

    Quantifying the connectivity of a network: The network correlation function method

    Networks are useful for describing systems of interacting objects, where the nodes represent the objects and the edges represent the interactions between them. Applications include chemical and metabolic systems, food webs, and social networks. Recently, many of these networks were found to display common topological features, such as high clustering, small average path length (small-world networks) and a power-law degree distribution (scale-free networks). The topological features of a network are commonly related to the network's functionality. However, the topology alone does not account for the nature of the interactions in the network or their strength. Here we introduce a method for evaluating the correlations between pairs of nodes in the network. These correlations depend both on the topology and on the functionality of the network. A network with high connectivity displays strong correlations between its interacting nodes and thus features small-world functionality. We quantify the correlations between all pairs of nodes in the network and express them as matrix elements in the correlation matrix. From this information one can plot the correlation function for the network and extract the correlation length. The connectivity of a network is then defined as the ratio between this correlation length and the average path length of the network. Using this method we distinguish between a topological small world and a functional small world, where the latter is characterized by long-range correlations and high connectivity. Clearly, networks which share the same topology may have different connectivities, depending on the nature and strength of their interactions. The method is demonstrated on metabolic networks, but can be readily generalized to other types of networks.
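    One possible implementation of the bookkeeping described here is sketched below: given a network and a node-node correlation matrix (which in a real application would come from the dynamics of interest), it averages correlations over all pairs at each shortest-path distance, fits an exponential decay to extract a correlation length, and divides by the average path length to obtain the connectivity. The toy small-world graph, the fabricated correlation matrix, and the exponential form of the fit are assumptions made purely for illustration.

```python
# Correlation function, correlation length, and connectivity of a network,
# given a node-node correlation matrix. Graph and correlations are placeholders.
import numpy as np
import networkx as nx
from scipy.optimize import curve_fit

G = nx.connected_watts_strogatz_graph(50, 4, 0.1, seed=1)
dist = dict(nx.all_pairs_shortest_path_length(G))

# Placeholder correlation matrix: in a real application this would come from the
# dynamics on the network; here it is fabricated to decay with distance.
C = np.array([[np.exp(-dist[i][j] / 2.0) for j in G] for i in G])

# Correlation function: average correlation over all pairs at each distance.
max_d = max(d for row in dist.values() for d in row.values())
d_vals = np.arange(1, max_d + 1)
corr_fn = [np.mean([C[i, j] for i in G for j in G if dist[i][j] == d]) for d in d_vals]

# Fit an exponential decay exp(-d / xi) to extract the correlation length xi.
(xi,), _ = curve_fit(lambda d, xi: np.exp(-d / xi), d_vals, corr_fn, p0=[1.0])

avg_path = nx.average_shortest_path_length(G)
print("correlation length:", xi)
print("connectivity = correlation length / average path length:", xi / avg_path)
```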

    The simplest demonstrations of quantum nonlocality

    We investigate the complexity cost of demonstrating the key types of nonclassical correlations (Bell inequality violation, Einstein-Podolsky-Rosen (EPR) steering, and entanglement) with independent agents, theoretically and in a photonic experiment. We show that the complexity cost exhibits a hierarchy among these three tasks, mirroring the recently discovered hierarchy in how robust they are to noise. For Bell inequality violations, the simplest test is the well-known Clauser-Horne-Shimony-Holt test, but for EPR-steering and entanglement the tests that involve the fewest detection patterns require nonprojective measurements. The simplest EPR-steering test requires a choice of projective measurement for one agent and a single nonprojective measurement for the other, while the simplest entanglement test uses just a single nonprojective measurement for each agent. In both of these cases, we derive our inequalities using the concept of circular two-designs. This leads to the interesting feature that in our photonic demonstrations, the correlation of interest is independent of the angle between the linear polarizers used by the two parties, which thus require no alignment.

    Stochastic Analysis of Dimerization Systems

    The process of dimerization, in which two monomers bind to each other and form a dimer, is common in nature. This process can be modeled using rate equations, from which the average copy numbers of the reacting monomers and of the product dimers can be obtained. However, the rate equations apply only when these copy numbers are large. In the limit of small copy numbers the system becomes dominated by fluctuations, which are not accounted for by the rate equations. In this limit one must use stochastic methods such as direct integration of the master equation or Monte Carlo simulations. These methods are computationally intensive and rarely admit analytical solutions. Here we use the recently introduced moment equations, which provide a highly simplified stochastic treatment of the dimerization process. Using this approach, we obtain an analytical solution for the copy numbers and reaction rates both under steady-state conditions and in the time-dependent case. We analyze three different dimerization processes: dimerization without dissociation, dimerization with dissociation, and hetero-dimer formation. To validate the results we compare them with those obtained from the master equation in the stochastic limit and with those obtained from the rate equations in the deterministic limit. Potential applications of the results in different physical contexts are discussed.
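    For concreteness, the sketch below compares the two limits discussed here for the simplest case, dimerization without dissociation: it solves the master equation for the monomer copy-number distribution directly by constructing the transition-rate matrix and finding its steady state, and compares the resulting mean copy number with the rate-equation prediction. The production, degradation, and dimerization rates are arbitrary illustrative values, and the moment-equation closure used in the paper is not reproduced here.

```python
# Dimerization without dissociation: master-equation steady state vs. rate equation.
# The rates g (production), d (degradation), a (dimerization) are illustrative values.
import numpy as np
from scipy.optimize import brentq

g, d, a = 4.0, 1.0, 0.5
n_max = 60                       # truncate the monomer copy-number distribution at n_max

# Generator matrix M over states n = 0..n_max (columns sum to zero):
#   n -> n+1 at rate g,  n -> n-1 at rate d*n,  n -> n-2 at rate a*n*(n-1)
M = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n + 1 <= n_max:           # production (cut off at the truncation boundary)
        M[n + 1, n] += g
    if n >= 1:                   # degradation of a single monomer
        M[n - 1, n] += d * n
    if n >= 2:                   # a dimerization event removes two monomers
        M[n - 2, n] += a * n * (n - 1)
    M[n, n] = -M[:, n].sum()     # total escape rate out of state n

# Steady-state distribution: the normalized null vector of M.
w, v = np.linalg.eig(M)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()
mean_stochastic = np.dot(np.arange(n_max + 1), p)

# Rate equation 0 = g - d*n - 2*a*n^2 (each dimerization event consumes two monomers).
mean_deterministic = brentq(lambda n: g - d * n - 2 * a * n * n, 0.0, 100.0)

print("master-equation mean monomer number:", mean_stochastic)
print("rate-equation monomer number:       ", mean_deterministic)
```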