1,361 research outputs found

    Experiments with a Malkus-Lorenz water wheel: Chaos and Synchronization

    We describe a simple experimental implementation of the Malkus-Lorenz water wheel. We demonstrate that both chaotic and periodic behavior is found as the wheel parameters are changed, in agreement with predictions from the Lorenz model. We furthermore show that when the measured angular velocity of our water wheel is used as an input signal to a computer model implementing the Lorenz equations, high-quality chaos synchronization of the model and the water wheel is achieved. This indicates that the Lorenz equations provide a good description of the water wheel dynamics.
    Comment: 12 pages, 7 figures. The following article has been accepted by the American Journal of Physics. After it is published, it will be found at http://scitation.aip.org/ajp
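
    The drive-response scheme described in the abstract can be sketched in a few lines: a response copy of the Lorenz y and z equations receives the drive's x signal by complete replacement (Pecora-Carroll coupling), with the drive standing in for the measured wheel velocity. This is a minimal illustrative sketch, not the paper's setup; the classic parameters (sigma = 10, rho = 28, beta = 8/3) and the forward-Euler integrator are assumptions.

```python
def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def synchronize(steps=50000, dt=0.001, rho=28.0, beta=8.0 / 3.0):
    """Drive a response (y, z) subsystem with the x-variable of a 'measured'
    drive system; returns the remaining synchronization error."""
    xd, yd, zd = 1.0, 1.0, 1.0      # drive trajectory (stands in for the wheel)
    yr, zr = -3.0, 20.0             # response model, deliberately different start
    for _ in range(steps):
        # response: same equations, with the drive's current x substituted in
        yr_new = yr + dt * (xd * (rho - zr) - yr)
        zr_new = zr + dt * (xd * yr - beta * zr)
        xd, yd, zd = lorenz_step(xd, yd, zd, dt)
        yr, zr = yr_new, zr_new
    return abs(yd - yr) + abs(zd - zr)

err = synchronize()   # contracts toward zero after a transient
```

    The (y, z) subsystem driven by x has negative conditional Lyapunov exponents, so the error decays regardless of the response's initial condition; this is what makes the wheel-drives-model experiment a test of the Lorenz description.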

    Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections

    Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic models of short- and long-range synaptic connectivity. In fact, during SWA and AW, different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of local computation and communication loads. A balanced evaluation of simulation performance and robustness should therefore include tests across a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for two-dimensional grids of neural populations, reflecting the modular organization of the cortex. We explored networks of up to 192x192 modules, each composed of 1250 integrate-and-fire neurons with spike-frequency adaptation, with exponentially decaying inter-modular synaptic connectivity of varying spatial decay constant. For the largest networks the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling performance from 1 to 1024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both cortical states over the explored range of inter-modular interconnections.
    Comment: 22 pages, 9 figures, 4 tables
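
    The building block of each module is an integrate-and-fire neuron with spike-frequency adaptation. The single-neuron sketch below shows the mechanism: every spike increments a slow adaptation current that lengthens subsequent inter-spike intervals. All constants here (tau_m, tau_w, the increment b, the dimensionless threshold) are illustrative assumptions, not DPSNN's actual parameters.

```python
def lif_adapt(i_ext, t_max=200.0, dt=0.1, tau_m=20.0, tau_w=200.0,
              v_th=1.0, v_reset=0.0, b=0.1):
    """Leaky integrate-and-fire neuron with spike-frequency adaptation:
    each spike increments the adaptation current w, which decays slowly
    (tau_w >> tau_m) and stretches later inter-spike intervals."""
    v, w, spikes = 0.0, 0.0, []
    steps = int(t_max / dt)
    for n in range(steps):
        v += dt / tau_m * (-v + i_ext - w)   # membrane potential
        w += dt / tau_w * (-w)               # adaptation current decays
        if v >= v_th:
            v = v_reset
            w += b                           # spike-triggered adaptation jump
            spikes.append(n * dt)
    return spikes

spikes = lif_adapt(2.0)
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]   # intervals lengthen
```

    Adaptation of this kind is one ingredient that lets the same network model express both the synchronous SWA and the asynchronous AW regimes benchmarked above.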

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike Time Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation in memristor-based hardware.
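
    For reference, the classical pair-based STDP rule that multifactor approaches are compared against can be written down directly: potentiate when a presynaptic spike precedes a postsynaptic one, depress otherwise, with exponentially decaying timing windows. A minimal sketch; the amplitudes and the 20 ms time constant are conventional illustrative values, not taken from this chapter.

```python
import math

def stdp_update(pre_times, post_times, w0=0.5, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP over all pre/post spike pairs (times in ms):
    pre-before-post potentiates, post-before-pre depresses, with
    exp(-|dt|/tau) timing windows and hard weight bounds."""
    w = w0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                w += a_plus * math.exp(-dt / tau)    # causal pair: LTP
            elif dt < 0:
                w -= a_minus * math.exp(dt / tau)    # acausal pair: LTD
            w = min(w_max, max(w_min, w))
    return w

w_pot = stdp_update([0, 50, 100], [5, 55, 105])   # pre leads post: w grows
w_dep = stdp_update([5, 55, 105], [0, 50, 100])   # post leads pre: w shrinks
```

    Multifactor rules extend this pairwise scheme with additional state variables (e.g. neuromodulatory or error signals), which is what allows them to be matched to memristor device dynamics.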

    Predictability: a way to characterize Complexity

    Different aspects of the predictability problem in dynamical systems are reviewed. The deep relations among Lyapunov exponents, Kolmogorov-Sinai entropy, Shannon entropy, and algorithmic complexity are discussed. In particular, we emphasize how a characterization of the unpredictability of a system gives a measure of its complexity. Adopting this point of view, we review some developments in characterizing the predictability of systems showing different kinds of complexity: from low-dimensional systems to high-dimensional ones with spatio-temporal chaos, and on to fully developed turbulence. Special attention is devoted to finite-time and finite-resolution effects on predictability, which can be accounted for with suitable generalizations of the standard indicators. The problems involved in systems with intrinsic randomness are discussed, with emphasis on the important problems of distinguishing chaos from noise and of modeling the system. The characterization of irregular behavior in systems with discrete phase space is also considered.
    Comment: 142 LaTeX pages, 41 included EPS figures; submitted to Physics Reports. Related information at http://axtnt2.phys.uniroma1.i
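
    The link between the Lyapunov exponent and predictability can be made concrete with the logistic map, whose largest exponent at r = 4 is exactly ln 2: dividing ln(tolerance / initial error) by the exponent gives the logarithmic predictability horizon. A minimal sketch; the error and tolerance values are illustrative assumptions.

```python
import math

def lyapunov_logistic(r=4.0, n=50000, x0=0.2):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated by averaging log |f'(x)| = log |r (1 - 2x)| along an orbit."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))  # guard log(0)
        x = r * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()              # close to ln 2 for r = 4
# iterations until an initial error of 1e-8 grows to a tolerance of 1e-2
horizon = math.log(1e-2 / 1e-8) / lam
```

    The logarithmic dependence of the horizon on the initial error is exactly why improving measurement precision buys so little forecast time in a chaotic system.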

    Practical implementation of nonlinear time series methods: The TISEAN package

    Nonlinear time series analysis is becoming a more and more reliable tool for the study of complicated dynamics from measurements. The concept of low-dimensional chaos has proven to be fruitful in the understanding of many complex phenomena, despite the fact that very few natural systems have actually been found to be low-dimensional and deterministic in the sense of the theory. In order to evaluate the long-term usefulness of the nonlinear time series approach as inspired by chaos theory, it will be important that the corresponding methods become more widely accessible. This paper, while not a proper review of nonlinear time series analysis, tries to contribute to this process by describing the actual implementation of the algorithms and their proper usage. Most of the methods require the choice of certain parameters for each specific time series application, and we try to give guidance in this respect. The scope and selection of topics in this article, as well as the implementational choices that have been made, correspond to the contents of the software package TISEAN, which is publicly available from http://www.mpipks-dresden.mpg.de/~tisean . In fact, this paper can be seen as an extended manual for the TISEAN programs. It fills the gap between the technical documentation and the existing literature, providing the necessary entry points for a more thorough study of the theoretical background.
    Comment: 27 pages, 21 figures, downloadable software at http://www.mpipks-dresden.mpg.de/~tisea
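
    Many nonlinear time series routines of the kind TISEAN provides operate on delay-embedded data, and the reconstruction step itself is tiny. A minimal sketch of time-delay embedding; choosing the embedding dimension and delay for a given series is precisely the sort of parameter guidance the paper discusses.

```python
def delay_embed(series, dim, delay):
    """Time-delay embedding: map a scalar series x[0..] into vectors
    (x[i], x[i+delay], ..., x[i+(dim-1)*delay])."""
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + k * delay] for k in range(dim))
            for i in range(n)]

vecs = delay_embed(list(range(10)), dim=3, delay=2)   # 6 vectors; first is (0, 2, 4)
```

    By Takens' theorem, for a generic observable and sufficiently large dim, these delay vectors reconstruct the attractor up to a smooth change of coordinates, which is what justifies computing dimensions and exponents from a single measured signal.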

    Applications of dynamical systems in ecology

    This thesis consists of five original pieces of work contained in chapters 2, 4, 6, 7 and 8, covering four topics within the subject area of theoretical ecology: epidemiology, chaos in ecology, evolution and spatially extended ecological systems. Chapter 2 puts forward a new mechanism for producing chaos in ecology: we show that near-extinctions in the SEIR model stabilise a chaotic repeller. This mechanism works for a wide range of parameter values and so resolves the debate about which dynamic regime is associated with realistic values. It also highlights the problem of treating fluctuations as being either deterministically or stochastically produced. Chapter 4 describes a new technique for identifying chaos based on measuring the divergence of trajectories over a range of spatial scales. It correctly identifies noise scales and chaos in model systems and is also applied to some real ecological data sets. In chapters 6 and 7 we set evolutionary game theory in a nonlinear dynamical framework. We introduce a powerful new tool, the selective pressure, for analysing ecological models and identifying evolutionarily stable states; it allows analysis of systems where complex attractors exist. We also study the evolution of phenotypic distributions and provide a new mechanism for evolutionary discontinuities. In chapter 8 we look at an individual-based, spatially extended system. This model is spatially heterogeneous and stochastic; however, we show that the dynamics on a certain scale are deterministic and low-dimensional, and we show how to identify the most efficient spatial scale at which to monitor the system.
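
    The SEIR model at the heart of chapter 2 is a compartmental ODE system for susceptible, exposed, and infective fractions (recovered implicit). A minimal forward-Euler sketch with seasonal forcing; the rate constants below are conventional measles-like values per year, assumed for illustration rather than taken from the thesis.

```python
import math

def seir_step(s, e, i, dt, beta, mu=0.02, sigma=35.84, gamma=100.0):
    """One Euler step of the SEIR model (all rates per year): births/deaths
    at rate mu, latency rate sigma, recovery rate gamma, transmission beta."""
    ds = mu - mu * s - beta * s * i
    de = beta * s * i - (mu + sigma) * e
    di = sigma * e - (mu + gamma) * i
    return s + dt * ds, e + dt * de, i + dt * di

def run(beta0=1800.0, amp=0.28, years=5, dt=1e-4):
    """Integrate the seasonally forced model; return the infective fraction."""
    s, e, i = 0.06, 0.001, 0.001
    trace = []
    for n in range(int(years / dt)):
        beta = beta0 * (1.0 + amp * math.cos(2.0 * math.pi * n * dt))
        s, e, i = seir_step(s, e, i, dt, beta)
        trace.append(i)
    return trace

trace = run()   # troughs can dip very low: the 'near-extinction' regime
```

    The very deep troughs in the infective fraction are where the near-extinction mechanism of chapter 2 operates, stabilising an otherwise transient chaotic repeller.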

    Front propagation into unstable states

    This paper is an introductory review of the problem of front propagation into unstable states. Our presentation is centered around the concept of the asymptotic linear spreading velocity v*, the asymptotic rate with which initially localized perturbations spread into an unstable state according to the linear dynamical equations obtained by linearizing the fully nonlinear equations about the unstable state. This allows us to give a precise definition of pulled fronts, nonlinear fronts whose asymptotic propagation speed equals v*, and pushed fronts, nonlinear fronts whose asymptotic speed v^dagger is larger than v*. In addition, this approach allows us to clarify many aspects of the front selection problem: the question of whether, for a given dynamical equation, the front is pulled or pushed. It is also the basis for the universal expressions for the power-law rate of approach of the transient velocity v(t) of a pulled front as it converges toward its asymptotic value v*. Almost half of the paper is devoted to reviewing many experimental and theoretical examples of front propagation into unstable states from this unified perspective. The paper also includes short sections on the derivation of the universal power-law relaxation behavior of v(t), on the absence of a moving-boundary approximation for pulled fronts, on the relation between so-called global modes and front propagation, and on stochastic fronts.
    Comment: final version with some added references; a single pdf file of the published version is available at http://www.lorentz.leidenuniv.nl/~saarloo
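
    The linear spreading velocity has a simple variational characterization: v* is the minimum over exponential decay rates k of the envelope speed (growth rate)/(decay rate). For the Fisher-KPP linearization u_t = D u_xx + r u this gives v* = 2 sqrt(D r). A minimal numerical sketch, with unit D and r assumed purely for illustration.

```python
def spreading_speed(D=1.0, r=1.0):
    """Linear spreading speed v* for the Fisher-KPP linearization
    u_t = D u_xx + r u: a mode exp(-k(x - vt)) is marginal when
    v(k) = (r + D k^2) / k; the selected speed is min over k > 0,
    which equals 2 sqrt(D r) (attained at k = sqrt(r / D))."""
    ks = [0.01 * j for j in range(1, 1000)]      # grid of decay rates
    return min((r + D * k * k) / k for k in ks)

v_star = spreading_speed()   # analytic value: 2 sqrt(D r) = 2 here
```

    A pulled front moves at exactly this v*, computable from the linearization alone; a pushed front outruns it, which is why front selection requires nonlinear information only in the pushed case.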

    Topological Effects of Synaptic Time Dependent Plasticity

    We show that the local Spike Timing-Dependent Plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that, depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise weights that exceed a non-zero threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network comprising random weights drawn from certain distributions. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long-range synaptic loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
    Comment: 26 pages, 5 figures
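
    The abstract's notion of a functional loop — a directed cycle all of whose weights exceed a non-zero threshold — can be made concrete for a tiny network. A minimal sketch restricted to cycles of length 2 and 3; the 3-neuron weight matrix is a made-up example, whereas the actual analysis enumerates loops in large simulated networks.

```python
from itertools import permutations

def functional_loops(w, theta):
    """Count directed cycles of length 2 or 3 in weight matrix w whose
    every synaptic weight exceeds theta (brute force; tiny networks only)."""
    n = len(w)
    loops = 0
    for i in range(n):                            # 2-cycles i -> j -> i
        for j in range(i + 1, n):
            if w[i][j] > theta and w[j][i] > theta:
                loops += 1
    for i, j, k in permutations(range(n), 3):     # 3-cycles i -> j -> k -> i
        # requiring i to be the smallest index counts each rotation once
        if i < j and i < k and w[i][j] > theta \
                and w[j][k] > theta and w[k][i] > theta:
            loops += 1
    return loops

weights = [[0.0, 0.9, 0.1],
           [0.1, 0.0, 0.8],
           [0.7, 0.2, 0.0]]
n_loops = functional_loops(weights, theta=0.5)   # only 0 -> 1 -> 2 -> 0 survives
```

    Raising the threshold eliminates loops entirely, mirroring the paper's claim that depressive STDP polarity drives networks toward predominantly feed-forward structure.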