    Nonlinear Phenomena in Canonical Stochastic Quantization

    Stochastic quantization provides a connection between quantum field theory and statistical mechanics, with applications especially in gauge field theories. Euclidean quantum field theory is viewed as the equilibrium limit of a statistical system coupled to a thermal reservoir. Nonlinear phenomena in stochastic quantization arise when employing nonlinear Brownian motion as an underlying stochastic process. We discuss a novel formulation of the Higgs mechanism in QED.
    Comment: 8 pages, invited talk at the International Workshop "Critical Phenomena and Diffusion in Complex Systems", Dec. 5-7, 2006, Nizhni Novgorod, Russia

    Agent-based modeling of intracellular transport

    We develop an agent-based model of the motion and pattern formation of vesicles. These intracellular particles can be found in four different modes of (undirected and directed) motion and can fuse with other vesicles. While the size of vesicles follows a log-normal distribution that changes over time due to fusion processes, their spatial distribution gives rise to distinct patterns. Their occurrence depends on the concentration of proteins which are synthesized based on the transcriptional activities of some genes. Hence, differences in these spatio-temporal vesicle patterns allow indirect conclusions about the (unknown) impact of these genes. By means of agent-based computer simulations we are able to reproduce such patterns on real temporal and spatial scales. Our modeling approach is based on Brownian agents with an internal degree of freedom, θ, that represents the different modes of motion. Conditions inside the cell are modeled by an effective potential that differs for agents depending on their value of θ. The agents' motion in this effective potential is modeled by an overdamped Langevin equation, changes of θ are modeled as stochastic transitions with values obtained from experiments, and fusion events are modeled as space-dependent stochastic transitions. Our results for the spatio-temporal vesicle patterns can be used for a statistical comparison with experiments. We also derive hypotheses of how the silencing of some genes may affect the intracellular transport, and point to generalizations of the model.
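    The core ingredients named in this abstract (overdamped Langevin motion in a mode-dependent effective potential, plus stochastic transitions of an internal variable θ) can be sketched in a few lines. Everything below is illustrative: the potential, rates, and parameter values are hypothetical and are not taken from the paper.

```python
import random
import math

def grad_U(x, theta):
    # effective potential differs by mode: directed agents feel a drift
    return 0.0 if theta == "undirected" else -1.0  # force = -grad_U = +1

def step(x, theta, dt=0.01, D=0.5, switch_rate=0.1, rng=random):
    # overdamped Langevin update: dx = -U'(x) dt + sqrt(2 D dt) * xi
    x = x - grad_U(x, theta) * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
    # change of the internal mode theta as a stochastic transition
    if rng.random() < switch_rate * dt:
        theta = "directed" if theta == "undirected" else "undirected"
    return x, theta

random.seed(1)
x, theta = 0.0, "undirected"
for _ in range(1000):
    x, theta = step(x, theta)
print(round(x, 3), theta)
```

    A full vesicle model would add the remaining two motion modes and space-dependent fusion transitions on top of this update rule.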

    The Efficiency and Evolution of R&D Networks

    This work introduces a new model to investigate the efficiency and evolution of networks of firms exchanging knowledge in R&D partnerships. We first examine the efficiency of a given network structure in terms of the maximization of total profits in the industry. We show that the efficient network structure depends on the marginal cost of collaboration. When the marginal cost is low, the complete graph is efficient. However, a high marginal cost implies that the efficient network is sparser and has a core-periphery structure. Next, we examine the evolution of the network structure when the decision on collaborating partners is decentralized. We show the existence of multiple equilibrium structures which are in general inefficient. This is due to (i) the path-dependent character of the partner selection process, (ii) the presence of knowledge externalities and (iii) the presence of severance costs involved in link deletion. Finally, we study the properties of the emerging equilibrium networks and we show that they are coherent with the stylized facts of R&D networks.
    Keywords: R&D networks, technology spillovers, network efficiency, network formation
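    The qualitative efficiency result (complete graph at low marginal cost, sparse core-periphery at high cost) can be reproduced with a toy calculation. The profit function below, a concave knowledge benefit sqrt(degree) per firm minus a marginal cost c per link endpoint, is hypothetical and far simpler than the paper's model; it only serves to show the cost-driven switch between the two structures.

```python
import math

def total_profit(degrees, n_links, c):
    # concave knowledge benefit per firm minus collaboration costs
    return sum(math.sqrt(d) for d in degrees) - 2 * c * n_links

n = 4
complete = ([n - 1] * n, n * (n - 1) // 2)   # complete graph K4
star = ([n - 1] + [1] * (n - 1), n - 1)      # star: minimal core-periphery

for c in (0.1, 0.5):
    winner = "complete" if total_profit(*complete, c) > total_profit(*star, c) else "star"
    print(c, winner)
# low marginal cost -> complete graph efficient; high cost -> sparse star
```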

    Non-equilibrium dynamics of an active colloidal "chucker"

    We report Monte Carlo simulations of the dynamics of a "chucker": a colloidal particle which emits smaller solute particles from its surface, isotropically and at a constant rate k_c. We find that the diffusion constant of the chucker increases for small k_c, as recently predicted theoretically. At large k_c the chucker diffuses more slowly due to crowding effects. We compare our simulation results to those of a "point particle" Langevin dynamics scheme in which the solute concentration field is calculated analytically, and in which hydrodynamic effects can be included albeit in an approximate way. By simulating the dragging of a chucker, we obtain an estimate of its apparent mobility coefficient which violates the fluctuation-dissipation theorem. We also characterise the probability density profile for a chucker which sediments onto a surface which either repels or absorbs the solute particles, and find that the steady state distributions are very different in the two cases. Our simulations are inspired by the biological example of exopolysaccharide-producing bacteria, as well as by recent experimental, simulation and theoretical work on phoretic colloidal "swimmers".
    Comment: re-submission after referee's comment
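    The diffusion constant compared across k_c values in such simulations is typically extracted from the mean-squared displacement, MSD(t) = 4 D t in two dimensions. A minimal sketch of that observable for a plain (non-emitting) random walker, with purely illustrative parameters and no solute crowding or hydrodynamics:

```python
import random
import math

def msd_estimate(n_walkers=500, n_steps=200, step=1.0, rng=random):
    # average squared end-to-end displacement over independent 2D walkers
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            a = rng.uniform(0, 2 * math.pi)
            x += step * math.cos(a)
            y += step * math.sin(a)
        total += x * x + y * y
    return total / n_walkers

random.seed(0)
msd = msd_estimate()
D = msd / (4 * 200)  # MSD = 4*D*t in 2D, with t = n_steps unit time steps
print(round(D, 3))   # should land near step**2 / 4 = 0.25
```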

    Testing an agent-based model of bacterial cell motility: How nutrient concentration affects speed distribution

    We revisit a recently proposed agent-based model of active biological motion and compare its predictions with our own experimental findings for the speed distribution of bacterial cells, Salmonella typhimurium. Agents move according to a stochastic dynamics and use energy stored in an internal depot for metabolism and active motion. We discuss different assumptions of how the conversion of internal to kinetic energy, d(v), may depend on the actual speed, to conclude that d(v) = d_2 v^ξ with either ξ = 2 or 1 < ξ < 2 are promising hypotheses. To test these, we compare the model's predictions with the speed distributions of bacteria which were obtained in media of different nutrient concentration and at different times. We find that both hypotheses are in line with the experimental observations, with ξ between 1.67 and 2.0. Regarding the influence of a higher nutrient concentration, we conclude that the take-up of energy by bacterial cells is indeed increased. But this energy is not used to increase the speed, with 40 μm/s as the most probable value of the speed distribution, but is rather spent on metabolism and growth.
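    For the ξ = 2 case, the depot model yields an effective speed dynamics with a stable most probable speed v0 = sqrt(q0/γ - c/d_2). The sketch below simulates that effective Langevin equation with hypothetical parameters chosen so that v0 = 2 (not the bacterial 40 μm/s); it is an illustration of the mechanism, not the paper's fit.

```python
import random
import math

# illustrative depot-model parameters: take-up q0, conversion d2,
# internal dissipation c, friction gamma, noise strength D, time step dt
q0, d2, c, gamma, D, dt = 5.0, 1.0, 1.0, 1.0, 0.05, 0.01

def simulate(n_steps=20000, rng=random):
    v = 0.5
    for _ in range(n_steps):
        # pumping from the depot minus friction, acting on the speed
        drift = (q0 * d2 / (c + d2 * v * v) - gamma) * v
        v += drift * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
    return v

random.seed(2)
v0 = math.sqrt(q0 / gamma - c / d2)  # deterministic fixed point
print(round(v0, 2), round(simulate(), 2))
```

    The simulated speed fluctuates around the fixed point v0, which is what makes the most probable value of the speed distribution insensitive to extra energy take-up.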

    A complementary view on the growth of directory trees

    Trees are a special sub-class of networks with unique properties, such as the level distribution, which has often been overlooked. We analyse a general tree growth model proposed by Klemm et al. [Phys. Rev. Lett. 95, 128701 (2005)] to explain the growth of user-generated directory structures in computers. The model has a single parameter q which interpolates between preferential attachment and random growth. Our analysis results in three contributions: first, we propose a more efficient estimation method for q based on the degree distribution, which is one specific representation of the model. Next, we introduce the concept of a level distribution and analytically solve the model for this representation. This allows for an alternative and independent measure of q. We argue that, to capture real growth processes, the q estimations from the degree and the level distributions should coincide. Thus, we finally apply both representations to validate the model with synthetically generated tree structures, as well as with collected data of user directories. In the case of real directory structures, we show that the values of q measured from the level distribution are incompatible with those measured from the degree distribution. In contrast to this, we find perfect agreement in the case of simulated data. Thus, we conclude that the model is an incomplete description of the growth of real directory structures as it fails to reproduce the level distribution. This insight can be generalised to point out the importance of the level distribution for modeling tree growth.
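    A q-interpolated tree growth of this kind is easy to simulate, which is how the synthetic validation data can be produced. The attachment rule below (preferential by child count with probability q, uniform otherwise) is a generic sketch and not necessarily Klemm et al.'s exact rule; the returned node levels give the level distribution discussed above.

```python
import random
from collections import Counter

def grow_tree(n, q, rng=random):
    # returns {node: level}; node 0 is the root at level 0
    depth = {0: 0}
    children = {0: 0}
    nodes = [0]
    for new in range(1, n):
        if rng.random() < q:
            # preferential attachment: weight parents by (children + 1)
            weights = [children[v] + 1 for v in nodes]
            parent = rng.choices(nodes, weights=weights)[0]
        else:
            # random growth: pick a parent uniformly
            parent = rng.choice(nodes)
        depth[new] = depth[parent] + 1
        children[parent] += 1
        children[new] = 0
        nodes.append(new)
    return depth

random.seed(3)
levels = Counter(grow_tree(2000, q=0.5).values())
print(max(levels), levels[1])  # deepest level, number of root children
```

    Comparing the level histogram against the degree histogram for the same simulated tree is exactly the kind of two-representation consistency check the abstract describes.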

    A k-shell decomposition method for weighted networks

    We present a generalized method for calculating the k-shell structure of weighted networks. The method takes into account both the weight and the degree of a network, in such a way that in the absence of weights we recover the shell structure obtained by the classic k-shell decomposition. In the presence of weights, we show that the method is able to partition the network in a more refined way, without the need of any arbitrary threshold on the weight values. Furthermore, by simulating spreading processes using the susceptible-infectious-recovered model in four different weighted real-world networks, we show that the weighted k-shell decomposition method ranks the nodes more accurately, by placing nodes with higher spreading potential into shells closer to the core. In addition, we demonstrate our new method on a real economic network and show that the core calculated using the weighted k-shell method is more meaningful from an economic perspective when compared with the unweighted one.
    Comment: 17 pages, 6 figures
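    One way to combine weight and degree without an arbitrary weight threshold is to prune on a weighted degree k' = round(sqrt(k * s)), the geometric mean of a node's degree k and strength s (sum of its link weights); with unit weights this reduces to the classic k-shell. This is our reading of the abstract, not necessarily the paper's exact definition.

```python
import math

def weighted_kshell(adj):
    # adj: {node: {neighbour: weight}}; returns {node: shell index}
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}   # work on a copy
    shell = {}
    k = 1
    while adj:
        pruned = True
        while pruned:
            pruned = False
            for u in list(adj):
                # weighted degree: geometric mean of degree and strength
                kd = round(math.sqrt(len(adj[u]) * sum(adj[u].values())))
                if kd <= k:
                    shell[u] = k
                    for v in adj[u]:
                        adj[v].pop(u, None)
                    del adj[u]
                    pruned = True
        k += 1
    return shell

demo = {"a": {"b": 1, "c": 1, "d": 1}, "b": {"a": 1, "c": 1},
        "c": {"a": 1, "b": 1}, "d": {"a": 1}}
print(weighted_kshell(demo))  # pendant d in shell 1; triangle a, b, c in shell 2
```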

    How Damage Diversification Can Reduce Systemic Risk

    We consider the problem of risk diversification in complex networks. Nodes represent, for example, financial actors, whereas weighted links represent financial obligations (credits/debts). Each node has a risk to fail because of losses resulting from defaulting neighbors, which may lead to large failure cascades. Classical risk diversification strategies usually neglect network effects and therefore suggest that risk can be reduced if possible losses (i.e., exposures) are split among many neighbors (exposure diversification, ED). But from a complex-networks perspective, diversification implies higher connectivity of the system as a whole, which can also increase the failure risk of a node. To cope with this, we propose a different strategy (damage diversification, DD), i.e. the diversification of losses that are imposed on neighboring nodes as opposed to losses incurred by the node itself. Here, we quantify the potential of DD to reduce systemic risk in comparison to ED. For this, we develop a branching-process approximation that we generalize to weighted networks with (almost) arbitrary degree and weight distributions. This allows us to identify systemically relevant nodes in a network even if their directed weights differ strongly. On the macro level, we provide an analytical expression for the average cascade size to quantify systemic risk. Furthermore, on the meso level we calculate failure probabilities of nodes conditional on their systemic relevance.
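    At the micro level, the cascade mechanism described above amounts to: a node fails once the summed losses imposed by failed neighbours exceed its threshold, and its own failure then imposes its outgoing weights on its neighbours. The toy simulation below (hypothetical weights and thresholds) captures that loop; ED vs DD then corresponds to different ways of splitting the outgoing weights.

```python
def cascade(out_weights, thresholds, seed_nodes):
    # out_weights: {u: {v: loss imposed on v if u fails}}
    failed = set(seed_nodes)
    damage = {v: 0.0 for v in thresholds}
    frontier = list(seed_nodes)
    while frontier:
        u = frontier.pop()
        for v, w in out_weights.get(u, {}).items():
            if v in failed:
                continue
            damage[v] += w
            if damage[v] > thresholds[v]:
                failed.add(v)
                frontier.append(v)
    return failed

out_w = {"A": {"B": 0.6, "C": 0.2}, "B": {"C": 0.4}, "C": {}}
thr = {"A": 0.5, "B": 0.5, "C": 0.5}
print(sorted(cascade(out_w, thr, ["A"])))  # A's failure drags down B, then C
```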

    Systemic Risk in a Unifying Framework for Cascading Processes on Networks

    We introduce a general framework for models of cascade and contagion processes on networks, to identify their commonalities and differences. In particular, models of social and financial cascades, as well as the fiber bundle model, the voter model, and models of epidemic spreading are recovered as special cases. To unify their description, we define the net fragility of a node, which is the difference between its fragility and the threshold that determines its failure. Nodes fail if their net fragility grows above zero, and their failure increases the fragility of neighbouring nodes, thus possibly triggering a cascade. In this framework, we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour. At the microscopic level, we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade, depending on its position in the network and its degree. At the macroscopic level, systemic risk is measured as the final fraction of failed nodes, X^*, and for each of the three classes we derive a recursive equation to compute its value. The phase diagram of X^* as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three model classes. We identify which model classes lead to a first-order phase transition in systemic risk, i.e. situations where small changes in the initial conditions may lead to a global failure. Finally, we generalize our framework to encompass stochastic contagion models. This indicates the potential for further generalizations.
    Comment: 43 pages, 16 multipart figures
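    A recursive equation for X^* of the kind mentioned above can be illustrated with a standard threshold response function (our choice, not the paper's): a node with k neighbours fails once more than a fraction phi of them have failed, and a seed fraction rho fails initially. Iterating X = rho + (1 - rho) * F(X) to its fixed point shows the abrupt, first-order-like jump between a contained and a global cascade.

```python
import math

def F(x, k=10, phi=0.3):
    # probability that more than a fraction phi of k neighbours have failed
    return sum(math.comb(k, m) * x**m * (1 - x)**(k - m)
               for m in range(k + 1) if m / k > phi)

def final_fraction(phi, rho=0.01, n_iter=2000):
    # iterate the macroscopic recursion to its fixed point X*
    x = rho
    for _ in range(n_iter):
        x = rho + (1 - rho) * F(x, phi=phi)
    return x

print(round(final_fraction(0.1), 3), round(final_fraction(0.3), 3))
# low threshold: the cascade becomes global; high threshold: it stays near the seed
```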