
    Recovery rate affects the effective epidemic threshold with synchronous updating

    Accurate identification of the effective epidemic threshold is essential for understanding epidemic dynamics on complex networks. In this paper, we systematically study how the recovery rate affects susceptible-infected-removed (SIR) spreading dynamics on complex networks, taking both synchronous and asynchronous updating processes into account. We derive the theoretical effective epidemic threshold and final outbreak size based on the edge-based compartmental theory. To validate these theoretical predictions, we run extensive numerical experiments using both asynchronous and synchronous updating methods. With asynchronous updating, the recovery rate does not affect the final state of the spreading dynamics. With synchronous updating, however, the effective epidemic threshold decreases with the recovery rate, while the final outbreak size increases with it. Good agreement between the theoretical predictions and the numerical results is observed on both synthetic and real-world networks. Our results extend existing theoretical studies and help us understand the phase transition at arbitrary recovery rates.
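
    A minimal numeric sketch of the mechanism (a standard per-edge transmissibility argument, not the paper's edge-based compartmental derivation): at a fixed effective transmission rate beta/mu, the transmissibility is constant under asynchronous (continuous-time) updating but grows with mu under synchronous (discrete-round) updating, which lowers the effective threshold and enlarges the outbreak.

```python
# Sketch: why the recovery rate mu matters under synchronous updating but not
# under asynchronous updating, at a fixed effective rate lambda = beta/mu.
# This is a standard transmissibility argument, not the paper's derivation.
lam = 0.5                          # fixed effective transmission rate beta/mu
for mu in (0.2, 0.5, 1.0):
    beta = lam * mu
    # Asynchronous (continuous-time) SIR: an infected node transmits along an
    # edge before recovering with probability beta/(beta+mu) = lam/(lam+1),
    # which depends only on the ratio beta/mu.
    T_async = beta / (beta + mu)
    # Synchronous (discrete-round) SIR: transmit with prob beta each round,
    # then recover with prob mu; the overall transmissibility p solves
    # p = beta + (1-beta)(1-mu)p.
    T_sync = beta / (beta + mu - beta * mu)
    print(f"mu={mu:.1f}  T_async={T_async:.3f}  T_sync={T_sync:.3f}")
# T_async stays constant while T_sync grows with mu, so at fixed beta/mu a
# larger recovery rate lowers the effective epidemic threshold and enlarges
# the final outbreak -- the behavior reported in the abstract.
```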

    The Computational Cost of Asynchronous Neural Communication

    Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting by taking into account edge and node delays.
    - Edge delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed by Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this is crucial for network synchronization. To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of the distributed synchronizers [Awerbuch and Peleg, FOCS'90] and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is very modular and is based on efficient circuit implementations of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and in the computation time, both of which are shown to be polynomial in the largest latency value and the largest incoming degree Δ of the original network.
    - Node delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that transforms the above synchronized solution under edge delays into a synchronized solution under node delays while incurring a small overhead w.r.t. space and time.
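
    As a toy illustration of why adversarially varying edge delays are problematic even for a NOT gate, here is a sketch (my own construction with illustrative weights and thresholds, not the paper's): a threshold-gate NOT built from an always-firing auxiliary neuron, which misfires as soon as its inhibitory edge is slower than its excitatory one.

```python
# Minimal sketch (illustrative construction, not the paper's): a NOT gate
# from threshold gates, and how an adversarial edge delay breaks it.
# Neuron y has an excitatory input from an always-firing auxiliary neuron "a"
# (weight +1) and an inhibitory input from x (weight -1), with threshold 1,
# so with uniform latencies y fires exactly when x stayed silent.

def run(x_spikes, delay_xy=1, delay_ay=1, T=8):
    """x_spikes: set of rounds in which x fires; delays are per-edge latencies."""
    y = []
    for t in range(T):
        # membrane potential at y in round t = spikes arriving now on each edge
        a_in = 1 if t - delay_ay >= 0 else 0           # a fires in every round
        x_in = 1 if (t - delay_xy) in x_spikes else 0
        y.append(1 if a_in - x_in >= 1 else 0)
    return y

print(run({2, 3}))               # uniform latency 1: y = NOT x, shifted by one round
print(run({2, 3}, delay_xy=3))   # slower inhibitory edge: y wrongly fires in
                                 # rounds 3-4 even though x spiked
```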

    Distributed Change Detection via Average Consensus over Networks

    Distributed change-point detection is a fundamental problem in real-time monitoring with sensor networks. We propose a distributed detection algorithm in which each sensor exchanges only its CUSUM statistic with its neighbors, following the average consensus scheme, and an alarm is raised when the local consensus statistic exceeds a pre-specified global threshold. We provide theoretical performance bounds showing that the fully distributed scheme can match centralized algorithms under some mild conditions. Numerical experiments demonstrate the good performance of the algorithm, especially in detecting asynchronous changes.
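
    A minimal sketch of the scheme as the abstract describes it; the consensus weights, threshold, and Gaussian mean-shift likelihood ratio below are illustrative choices, not the paper's exact setup.

```python
# Each sensor keeps a local CUSUM statistic, averages it with its neighbors
# (one consensus step per round), and alarms when its local consensus value
# crosses a global threshold. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, T, change_at, shift = 4, 200, 100, 0.8
# ring network; W is a doubly stochastic consensus weight matrix
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

stat, alarm = np.zeros(n), None
for t in range(T):
    x = rng.normal(shift if t >= change_at else 0.0, 1.0, size=n)
    llr = shift * x - shift**2 / 2            # log-LR for N(0,1) -> N(shift,1)
    stat = np.maximum(stat + llr, 0.0)        # local CUSUM update
    stat = W @ stat                           # average consensus with neighbors
    if alarm is None and stat.max() > 10.0:   # pre-specified global threshold
        alarm = t
print("change at", change_at, "alarm raised at", alarm)
```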

    Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

    We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or an inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Theta(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1-delta and should stop firing after at most 2t rounds with probability 1-delta, for some input parameter delta in (0,1). Our key result is a construction of a neural timer with O(log log 1/delta) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Omega(min{log log 1/delta, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and the computation time.
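
    For concreteness, here is the naive baseline the abstract's bounds should be read against: a chain of t relay neurons solves the neural timer problem with t auxiliary neurons, which the paper compresses to Theta(log t) deterministic gates, or O(log log 1/delta) spiking neurons. The chain construction below is a sketch of that baseline, not the paper's construction.

```python
# Naive neural timer: a chain of t relay neurons, each copying its
# predecessor's spike one round later; y fires while any chain neuron is on,
# so every spike of x keeps y firing for exactly t consecutive rounds.
def chain_timer(x_spikes, t, T):
    chain = [0] * t                        # one auxiliary neuron per remaining round
    y = []
    for r in range(T):
        fired_x = 1 if r in x_spikes else 0
        chain = [fired_x] + chain[:-1]     # synchronous relay update
        y.append(1 if any(chain) else 0)
    return y

print(chain_timer({0}, t=4, T=8))     # y fires in rounds 0..3, then stops
print(chain_timer({0, 2}, t=4, T=8))  # a second spike restarts the countdown
```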

    When do correlations increase with firing rates in recurrent networks?

    A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, i.e., the probability that two cells spike coincidentally, above and beyond their baseline firing rates. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of which circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate, a relationship previously explained in feedforward networks driven by correlated input, emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix.
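
    A small numerical sketch of the feedforward mechanism the abstract refers to (correlated Gaussian drive into two threshold units; all parameters are illustrative): for a fixed input correlation, the output correlation grows with firing rate as the mean drive increases toward threshold.

```python
# Two threshold units share part of a Gaussian input. Raising the mean drive
# raises the firing rate, and the output (spike) correlation rises with it --
# the feedforward correlation-vs-rate effect mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_trials, c_in = 200_000, 0.3          # trials and input correlation
for mean_drive in (-1.0, 0.0, 1.0):
    common = rng.normal(size=n_trials)                      # shared input
    noise = rng.normal(size=(2, n_trials))                  # private inputs
    s = mean_drive + np.sqrt(c_in) * common + np.sqrt(1 - c_in) * noise
    y1, y2 = (s[0] > 1.0).astype(float), (s[1] > 1.0).astype(float)
    rate = y1.mean()                                        # output firing rate
    rho = np.corrcoef(y1, y2)[0, 1]                         # output correlation
    print(f"drive={mean_drive:+.1f}  rate={rate:.3f}  output corr={rho:.3f}")
```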

    Asynchronous spiking neurons, the natural key to exploit temporal sparsity

    Inference of deep neural networks for streaming signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity existing in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, due to the fully asynchronous processing, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
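
    A sketch of one of the sparsities in play, temporal (delta) sparsity, in the event-driven style the abstract describes; the class and parameters below are illustrative, not the paper's algorithm.

```python
# A stateful layer caches its pre-activation and, on each new frame, updates
# it only along inputs that actually changed, so compute scales with the
# number of changed inputs rather than with the layer size.
import numpy as np

class DeltaLayer:
    def __init__(self, W):
        self.W = W
        self.x_prev = np.zeros(W.shape[1])   # last seen input frame
        self.z = np.zeros(W.shape[0])        # cached pre-activation state

    def forward(self, x):
        changed = np.flatnonzero(x != self.x_prev)
        # only the columns of W for changed inputs contribute an update
        self.z += self.W[:, changed] @ (x - self.x_prev)[changed]
        self.x_prev = x.copy()
        return np.maximum(self.z, 0.0)       # ReLU applied to the cached state

rng = np.random.default_rng(2)
layer = DeltaLayer(rng.normal(size=(8, 1000)))
x = rng.normal(size=1000)
y0 = layer.forward(x)      # first frame: full update
x[5] += 1.0                # one input changes between frames
y1 = layer.forward(x)      # touches a single column of W
```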