
    Chaotic image encryption using Hopfield and Hindmarsh–Rose neurons implemented on FPGA

    Chaotic systems implemented by artificial neural networks are good candidates for data encryption. To this end, this paper introduces a cryptographic application of the Hopfield and Hindmarsh–Rose neurons. The contribution focuses on finding suitable coefficient values of the neurons to generate robust random binary sequences that can be used in image encryption. This task is performed by evaluating bifurcation diagrams, from which one chooses coefficient values of the mathematical models that produce high positive Lyapunov exponent and Kaplan–Yorke dimension values, computed using TISEAN. The randomness of both the Hopfield and the Hindmarsh–Rose neurons is evaluated from chaotic time-series data by performing National Institute of Standards and Technology (NIST) tests. Both neurons are implemented on field-programmable gate arrays (FPGAs), whose architectures are used to develop an encryption system for RGB images. The success of the encryption system is confirmed by correlation, histogram, variance, entropy, and Number of Pixel Change Rate (NPCR) tests.
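
    As a rough illustration of how such a neuron can drive a stream cipher, the sketch below integrates the classic three-variable Hindmarsh–Rose equations and quantizes the fast variable into a keystream XORed with the image bytes. This is a minimal software sketch: the coefficient values, the quantizer, and all function names are illustrative assumptions, not the coefficients or fixed-point FPGA architecture selected in the paper.

        import numpy as np

        # Classic three-variable Hindmarsh-Rose neuron. These coefficients are
        # illustrative values from the chaotic-bursting regime, not the ones
        # chosen via the paper's bifurcation analysis.
        a, b, c, d = 1.0, 3.0, 1.0, 5.0
        r, s, x_rest, I = 0.006, 4.0, -1.6, 3.25

        def hr_step(state, dt=0.01):
            x, y, z = state
            dx = y - a * x**3 + b * x**2 - z + I
            dy = c - d * x**2 - y
            dz = r * (s * (x - x_rest) - z)
            return (x + dt * dx, y + dt * dy, z + dt * dz)

        def keystream(n_bytes, state=(0.1, 0.0, 0.0), burn_in=5000):
            # Discard a transient, then quantize the fast variable x to bytes.
            # The quantization rule is a hypothetical stand-in.
            for _ in range(burn_in):
                state = hr_step(state)
            out = np.empty(n_bytes, dtype=np.uint8)
            for i in range(n_bytes):
                state = hr_step(state)
                out[i] = int(abs(state[0]) * 1e6) % 256
            return out

        def encrypt(pixels):
            # XOR each byte with the chaotic keystream; decryption is the same
            # operation started from the same initial state (the "key").
            return pixels ^ keystream(pixels.size)

    For an RGB image, pixels would be the flattened uint8 array over all three channels; the NIST suite then certifies the randomness of the generated bit stream.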

    Unstable Dynamics, Nonequilibrium Phases and Criticality in Networked Excitable Media

    Here we numerically study a model of excitable media, namely a network with occasionally quiescent nodes and connection weights that vary with activity on a short time scale. Even in the absence of stimuli, this model exhibits unstable dynamics, nonequilibrium phases (including one in which the global activity wanders irregularly among attractors), and 1/f noise as the system falls into its most irregular behavior. A net result is resilience, which yields an efficient search of the model's attractor space and can explain the origin of certain phenomenology in neural, genetic, and ill-condensed-matter systems. By extensive computer simulation we also address a previously conjectured relation between observed power-law distributions and the occurrence of a "critical state" during the functioning of (e.g.) cortical networks, and describe the precise nature of such criticality in the model.
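
    A minimal discrete-time caricature of such a medium is sketched below, assuming threshold nodes that fall quiet for a refractory period after firing and synapses whose resources deplete with use and slowly recover. All constants and update rules are illustrative stand-ins for the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200
        W0 = rng.random((N, N)) / N              # static baseline couplings
        np.fill_diagonal(W0, 0.0)

        s = (rng.random(N) < 0.1).astype(float)  # active (1) / quiescent (0)
        refrac = np.zeros(N, dtype=int)          # steps left in the quiet state
        x = np.ones(N)                           # synaptic resource per neuron

        theta, tau_ref = 0.04, 2                 # firing threshold, quiet period
        U, tau_rec = 0.3, 10.0                   # depression strength, recovery time

        activity = []
        for t in range(2000):
            h = W0 @ (x * s)                      # input via activity-scaled weights
            fire = (h > theta) & (refrac == 0)
            refrac = np.where(fire, tau_ref, np.maximum(refrac - 1, 0))
            x += (1.0 - x) / tau_rec - U * x * s  # resources deplete, then recover
            s = fire.astype(float)
            activity.append(s.mean())             # global activity trace

    Long runs of the global activity trace are the kind of record from which 1/f spectra and wandering among attractors would be read off.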

    Bifurcation analysis in an associative memory model

    We previously reported chaos induced by the frustration of interaction in a non-monotonic sequential associative memory model, and showed its chaotic behaviors at absolute zero. We have now analyzed bifurcation in a stochastic system, namely a finite-temperature version of the non-monotonic sequential associative memory model. We derived order-parameter equations from the stochastic microscopic equations. Two-parameter bifurcation diagrams obtained from these equations show the coexistence of attractors, which does not appear at absolute zero, and the disappearance of chaos due to the temperature effect.
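
    The flavor of such a two-parameter scan can be conveyed with the toy sketch below, which iterates a single-overlap map through a smooth non-monotonic transfer function and flags parameter cells where different initial conditions settle onto different attractors. The map f and both parameters are generic stand-ins, not the order-parameter equations derived in the paper.

        import numpy as np

        def f(m, beta, theta):
            # Illustrative smooth non-monotonic transfer: the response to a
            # small overlap is positive and reverses once |m| exceeds theta.
            return np.sign(m) * np.tanh(beta * (theta - abs(m)))

        def coexistence_scan(betas, thetas, n_init=8, burn=500):
            # Mark (beta, theta) cells where distinct initial overlaps
            # converge onto distinct attractors.
            grid = np.zeros((len(betas), len(thetas)), dtype=bool)
            for i, b in enumerate(betas):
                for j, th in enumerate(thetas):
                    finals = set()
                    for m in np.linspace(-0.9, 0.9, n_init):
                        for _ in range(burn):
                            m = f(m, b, th)
                        finals.add(round(float(m), 3))
                    grid[i, j] = len(finals) > 1
            return grid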

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA.

    Hopf Bifurcation and Chaos in a Single Inertial Neuron Model with Time Delay

    A delay differential equation modeling a single neuron with an inertial term is considered in this paper. Hopf bifurcation is studied using the normal form theory of retarded functional differential equations. When a nonmonotonic activation function is adopted, chaotic behavior is observed. Phase plots, waveform plots, and power spectra are presented to confirm the chaoticity.
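
    A minimal numerical sketch of such a system is given below, assuming the second-order form x'' + a*x' + b*x = c*f(x(t - tau)) with the non-monotonic activation f(u) = u*exp(-u^2); both the form and the constants are illustrative choices, not the paper's model or its bifurcation values.

        import numpy as np

        # Inertial neuron with delayed non-monotonic feedback (illustrative):
        #   x'' + a*x' + b*x = c * f(x(t - tau)),   f(u) = u * exp(-u**2)
        a, b, c, tau = 0.5, 1.0, 4.0, 1.0
        dt = 0.001
        delay = int(tau / dt)
        steps = 200_000

        def f(u):
            return u * np.exp(-u * u)    # non-monotonic activation

        x = np.zeros(steps)
        x[:delay + 1] = 0.1              # constant history on [-tau, 0]
        v = 0.0                          # velocity x'(t)

        # Forward-Euler integration with a delay buffer
        for t in range(delay, steps - 1):
            acc = -a * v - b * x[t] + c * f(x[t - delay])
            v += dt * acc
            x[t + 1] = x[t] + dt * v

    Phase plots (x against x') and power spectra of the resulting trajectory are the diagnostics the paper uses to exhibit chaos.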

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable patterns of neural activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
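
    A compact way to see the two ingredients (a chaotic random rate network plus a supervised, error-driven rule) is the sketch below, which trains a linear readout with FORCE-style recursive least squares. This is a generic stand-in under stated assumptions; the paper's rule additionally tunes the recurrent weights themselves, which this sketch does not do, and the target signal here is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        N, g = 300, 1.5                      # g > 1: chaotic regime of the rate network
        W = g * rng.standard_normal((N, N)) / np.sqrt(N)
        dt, tau = 0.1, 10.0

        x = 0.5 * rng.standard_normal(N)     # network state
        w_out = np.zeros(N)                  # readout weights to be learned
        P = np.eye(N)                        # RLS inverse-correlation estimate

        T = 5000
        target = np.sin(2 * np.pi * np.arange(T) * dt / 50.0)  # illustrative target

        for t in range(T):
            r = np.tanh(x)                   # firing rates
            x += (dt / tau) * (-x + W @ r)   # leaky rate dynamics
            z = w_out @ r                    # current readout
            # FORCE-style recursive least squares on the readout error
            Pr = P @ r
            k = Pr / (1.0 + r @ Pr)
            P -= np.outer(k, Pr)
            w_out += (target[t] - z) * k

    After training, freezing w_out and rerunning the dynamics tests whether the learned trajectory is locally stable against noise, the property the abstract emphasizes.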