
    Delay-induced multiple stochastic resonances on scale-free neuronal networks

    We study the effects of periodic subthreshold pacemaker activity and time-delayed coupling on stochastic resonance over scale-free neuronal networks. As the two extreme options, we introduce the pacemaker either to the neuron with the highest degree or to one of the neurons with the lowest degree within the network, but we also consider the case when all neurons are exposed to the periodic forcing. In the absence of delay, we show that an intermediate intensity of noise optimally assists the pacemaker in imposing its rhythm on the whole ensemble, irrespective of its placement, thus providing evidence for stochastic resonance on scale-free neuronal networks. Interestingly, if the forcing in the form of a periodic pulse train is introduced to all neurons forming the network, the stochastic resonance decreases compared to the case when only a single neuron is paced. Moreover, we show that finite delays in the coupling can significantly affect stochastic resonance on scale-free neuronal networks. In particular, appropriately tuned delays can induce multiple stochastic resonances independently of the placement of the pacemaker, but they can also destroy stochastic resonance altogether. Delay-induced multiple stochastic resonances manifest as well-expressed maxima of the correlation measure, appearing at every integer multiple of the pacemaker period. We argue that fine-tuned delays and locally active pacemakers are vital for assuring optimal conditions for stochastic resonance on complex neuronal networks. Comment: 7 two-column pages, 5 figures; accepted for publication in Chaos
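The correlation measure mentioned in this abstract is typically the Fourier-coefficient measure Q, the amplitude of the response at the pacemaker frequency. A minimal sketch, assuming that standard definition (the function name and the toy signals are illustrative, not taken from the paper):

```python
import numpy as np

def fourier_correlation(x, t, omega):
    """Correlation of a time series x(t) with a periodic drive of angular
    frequency omega, via the Fourier-coefficient measure
    Q = sqrt(Q_sin^2 + Q_cos^2)."""
    q_sin = 2.0 * np.mean(x * np.sin(omega * t))
    q_cos = 2.0 * np.mean(x * np.cos(omega * t))
    return np.hypot(q_sin, q_cos)

# Toy demonstration: a noisy signal phase-locked to the drive scores
# much higher than pure noise of the same amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 20001)
omega = 2.0 * np.pi / 10.0          # drive period T = 10 (arbitrary units)
locked = np.sin(omega * t) + 0.5 * rng.normal(size=t.size)
noise = 0.5 * rng.normal(size=t.size)
assert fourier_correlation(locked, t, omega) > fourier_correlation(noise, t, omega)
```

In a stochastic-resonance scan, Q is computed for a range of noise intensities (or delays), and resonance shows up as a maximum of Q at intermediate noise.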

    Cluster update and recognition

    We present a fast and robust cluster update algorithm that is especially efficient at image segmentation using the method of superparamagnetic clustering. We apply it to a Potts model with spin interactions that are defined by gray-scale differences within the image. Motivated by biological systems, we introduce the concept of neural inhibition to the Potts-model realization of the segmentation problem. Including the inhibition term in the Hamiltonian results in enhanced contrast and thereby significantly improves segmentation quality. As a second benefit, we can, after equilibration, directly identify the image segments as the clusters formed by the clustering algorithm. To construct a new spin configuration, the algorithm performs the standard steps of (1) forming clusters and (2) updating the spins in a cluster simultaneously. As opposed to standard algorithms, however, we share the interaction energy between the two steps, so the update probabilities are not independent of the interaction energies. As a consequence, we observe an acceleration of the relaxation by a factor of 10 compared to the Swendsen and Wang procedure. Comment: 4 pages, 2 figures
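The paper's own algorithm shares the interaction energy between the two steps; for contrast, the standard step (1) it modifies can be sketched as follows. This is a minimal sketch of superparamagnetic-clustering bond formation, assuming Gaussian couplings from gray-scale differences and the ordinary Swendsen-Wang freezing probability (the similarity scale `theta` is an illustrative assumption):

```python
import numpy as np

def neighbor_couplings(image, theta=10.0):
    """Couplings J = exp(-(g_i - g_j)^2 / (2 theta^2)) between
    horizontally and vertically adjacent pixels of a gray-scale image."""
    dh = image[:, 1:] - image[:, :-1]   # horizontal gray-scale differences
    dv = image[1:, :] - image[:-1, :]   # vertical gray-scale differences
    jh = np.exp(-dh**2 / (2.0 * theta**2))
    jv = np.exp(-dv**2 / (2.0 * theta**2))
    return jh, jv

def freeze_bonds(spins, jh, jv, temperature, rng):
    """Standard cluster-update step (1): freeze a bond between equal
    neighboring spins with probability 1 - exp(-J / T)."""
    ph = 1.0 - np.exp(-jh / temperature)
    pv = 1.0 - np.exp(-jv / temperature)
    same_h = spins[:, 1:] == spins[:, :-1]
    same_v = spins[1:, :] == spins[:-1, :]
    bonds_h = same_h & (rng.random(ph.shape) < ph)
    bonds_v = same_v & (rng.random(pv.shape) < pv)
    return bonds_h, bonds_v
```

Within a homogeneous region J is close to 1 and bonds freeze readily, so clusters align with image segments; across a sharp edge J is nearly 0 and bonds almost never form.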

    Cluster Algorithm for a Solid-On-Solid Model with Constraints

    We adapt the VMR (valleys-to-mountains reflections) algorithm, originally devised by us for simulations of SOS models, to the BCSOS model. This is the first time a cluster algorithm has been used for a model with constraints. The performance of the new algorithm is studied in detail in both phases of the model, including a finite-size scaling analysis of the autocorrelations. Comment: 10 pages, 3 figures appended as ps-file
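The autocorrelation analysis mentioned here rests on the integrated autocorrelation time, which quantifies how many sweeps separate effectively independent measurements. A minimal estimator sketch (the fixed summation window and truncation rule are simplifying assumptions; production analyses use self-consistent windowing):

```python
import numpy as np

def integrated_autocorrelation_time(series, window=None):
    """Estimate tau_int = 1/2 + sum_t rho(t) for a Monte Carlo time
    series, truncating the sum at a window or at the first negative
    autocorrelation estimate."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = x.size
    if window is None:
        window = n // 10
    var = np.dot(x, x) / n
    tau = 0.5
    for t in range(1, window):
        rho = np.dot(x[:-t], x[t:]) / ((n - t) * var)
        if rho <= 0.0:   # noise-dominated tail; stop summing
            break
        tau += rho
    return tau
```

For uncorrelated samples tau_int is about 1/2; for an AR(1)-like series with correlation phi per step it approaches (1 + phi) / (2 (1 - phi)), which is how slow local updates reveal themselves.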

    Dynamic behaviors in directed networks

    Motivated by the abundance of directed synaptic couplings in real biological neuronal networks, we investigate the synchronization behavior of the Hodgkin-Huxley model on a directed network. We start from the standard Watts-Strogatz undirected network and then change undirected edges to directed arcs with a given probability, still preserving the connectivity of the network. A generalized clustering coefficient for directed networks is defined and used to investigate the interplay between the synchronization behavior and the underlying structural properties of directed networks. We observe that the directedness of complex networks plays an important role in the emerging dynamical behaviors, which is also confirmed by a numerical study of the sociological game-theoretic voter model on directed networks.
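The edge-to-arc conversion described above can be sketched as follows. This is a minimal illustration, assuming each selected edge keeps exactly one randomly oriented arc (the abstract's additional connectivity-preservation step, and its specific clustering-coefficient definition, are not reproduced here):

```python
import random

def directed_from_undirected(edges, p, rng=random):
    """Replace each undirected edge {u, v} by a single directed arc
    with probability p (orientation chosen at random); otherwise keep
    it as a bidirectional pair of arcs."""
    arcs = set()
    for u, v in edges:
        if rng.random() < p:
            # one-way arc, random orientation
            arcs.add((u, v) if rng.random() < 0.5 else (v, u))
        else:
            # keep the edge bidirectional
            arcs.add((u, v))
            arcs.add((v, u))
    return arcs
```

At p = 0 the network is the original undirected one (every edge appears as two arcs); at p = 1 every edge becomes a single arc, and intermediate p interpolates the degree of directedness studied in the paper.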

    Loop algorithms for quantum simulations of fermion models on lattices

    Two cluster algorithms, based on constructing and flipping loops, are presented for worldline quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard worldline algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary worldline algorithm, which uses only local moves, suffers from very long autocorrelation times that make it difficult to estimate not only the errors but also the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with each other and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use non-local moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to sample the grand canonical ensemble. Extensions to other models and to higher dimensions are briefly discussed. Comment: 36 pages, RevTex ver.

    Breaking quantum linearity: constraints from human perception and cosmological implications

    Resolving the tension between quantum superpositions and the uniqueness of the classical world is a major open problem. One possibility, explored extensively both theoretically and experimentally, is that quantum linearity breaks down above a given scale. Theoretically, this possibility is predicted by collapse models, which provide quantitative information on where violations of the superposition principle become manifest. Here we show that the lower bound on the collapse parameter lambda, coming from the analysis of the human visual process, is ~ 7 +/- 2 orders of magnitude stronger than the original bound, in agreement with more recent analyses. This implies that the collapse becomes effective for systems containing ~ 10^4 - 10^5 nucleons, and thus falls within the range of testability with present-day technology. We also compare the spectrum of the collapsing field with those of known cosmological fields, showing that a typical cosmological random field can yield an efficient wave-function collapse. Comment: 13 pages, LaTeX, 3 figures

    Adaptive self-organization in a realistic neural network model

    Information processing in complex systems is often found to be maximally efficient close to critical states associated with phase transitions. It is therefore conceivable that neural information processing also operates close to criticality. This is further supported by the observation of power-law distributions, which are a hallmark of phase transitions. An important open question is how neural networks could remain close to a critical point while undergoing continual change in the course of development, adaptation, learning, and more. An influential contribution was made by Bornholdt and Rohlf, who introduced a generic mechanism of robust self-organized criticality in adaptive networks. Here, we address the question of whether this mechanism is relevant for real neural networks. We show in a realistic model that spike-time-dependent synaptic plasticity can robustly self-organize neural networks toward criticality. Our model reproduces several empirical observations and makes testable predictions on the distribution of synaptic strengths, relating them to the critical state of the network. These results suggest that the interplay between dynamics and topology may be essential for neural information processing. Comment: 6 pages, 4 figures
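The plasticity rule at the heart of this mechanism can be sketched with the textbook additive form of spike-time-dependent plasticity: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise. A minimal sketch; all parameter values here are illustrative assumptions, not the paper's:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Additive STDP: dt = t_post - t_pre in ms. Potentiate for dt > 0
    (causal pairing), depress for dt < 0, with exponential windows of
    width tau; the weight is clipped to [w_min, w_max]."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)
```

Applied across a recurrent network, updates of this form reshape both the weight distribution and the effective topology, which is the dynamics-topology interplay the abstract refers to.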