
    A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks

    In this paper, we address the stability of a broad class of discrete-time hypercomplex-valued Hopfield-type neural networks. To ensure that the neural networks belonging to this class always settle down at a stationary state, we introduce novel hypercomplex number systems referred to as real-part associative hypercomplex number systems. Real-part associative hypercomplex number systems generalize the well-known Cayley-Dickson algebras and real Clifford algebras and include the systems of real numbers, complex numbers, dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as particular instances. Apart from the novel hypercomplex number systems, we introduce a family of hypercomplex-valued activation functions called $\mathcal{B}$-projection functions. Broadly speaking, a $\mathcal{B}$-projection function projects the activation potential onto the set of all possible states of a hypercomplex-valued neuron. Using the theory presented in this paper, we confirm the stability analysis of several discrete-time hypercomplex-valued Hopfield-type neural networks from the literature. Moreover, we introduce and provide the stability analysis of a general class of Hopfield-type neural networks on Cayley-Dickson algebras.
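
    As a rough illustration of the projection idea described above, the following sketch implements a complex-valued Hopfield-type network whose activation maps each activation potential onto the nearest of a finite set of unit-circle states. It is only a minimal example of a projection-style activation in the complex case, not the paper's general hypercomplex construction; all sizes and parameter values are illustrative.

        import numpy as np

        K = 8                                              # number of admissible phase states per neuron
        states = np.exp(2j * np.pi * np.arange(K) / K)     # admissible neuron states on the unit circle

        def project(u):
            """Map each activation potential onto the closest admissible state."""
            idx = np.argmin(np.abs(u[:, None] - states[None, :]), axis=1)
            return states[idx]

        def hebbian_weights(patterns):
            """Correlation-type storage of complex-valued patterns (one per row)."""
            n = patterns.shape[1]
            W = patterns.conj().T @ patterns / n
            np.fill_diagonal(W, 0.0)
            return W

        def retrieve(W, x, iters=50):
            """Repeat the synchronous update until the state stops changing."""
            for _ in range(iters):
                x_new = project(W @ x)
                if np.allclose(x_new, x):
                    break
                x = x_new
            return x

        rng = np.random.default_rng(0)
        patterns = states[rng.integers(0, K, size=(3, 16))]                # three stored patterns
        W = hebbian_weights(patterns)
        noisy = patterns[0] * np.exp(1j * 0.2 * rng.standard_normal(16))   # perturbed probe
        print(np.abs(retrieve(W, project(noisy)) - patterns[0]).max())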

    Chaotic image encryption using Hopfield and Hindmarsh–Rose neurons implemented on FPGA

    Chaotic systems implemented by artificial neural networks are good candidates for data encryption. Accordingly, this paper introduces a cryptographic application of the Hopfield and Hindmarsh–Rose neurons. The contribution is focused on finding suitable coefficient values of the neurons to generate robust random binary sequences that can be used in image encryption. This task is performed by evaluating the bifurcation diagrams, from which one chooses appropriate coefficient values of the mathematical models that produce high positive Lyapunov exponent and Kaplan–Yorke dimension values, computed using TISEAN. The randomness of both the Hopfield and the Hindmarsh–Rose neurons is evaluated from chaotic time series data by performing National Institute of Standards and Technology (NIST) tests. Both neurons are implemented on field-programmable gate arrays, whose architectures are used to develop an encryption system for RGB images. The success of the encryption system is confirmed by performing correlation, histogram, variance, entropy, and Number of Pixel Change Rate (NPCR) tests.
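
    A minimal software sketch of the encryption idea (not the paper's FPGA architecture, and not its selected coefficient values): integrate the Hindmarsh–Rose model, threshold the membrane variable to obtain a binary keystream, and XOR it with the image bytes. The parameter values below are common textbook choices and are purely illustrative.

        import numpy as np

        def hindmarsh_rose(n_steps, dt=0.01, a=1.0, b=3.0, c=1.0, d=5.0,
                           r=0.006, s=4.0, x_rest=-1.6, I=3.25):
            """Forward-Euler integration of the Hindmarsh-Rose neuron; returns the x trace."""
            x, y, z = -1.0, 0.0, 0.0
            xs = np.empty(n_steps)
            for k in range(n_steps):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_rest) - z)
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                xs[k] = x
            return xs

        def keystream_bytes(n_bytes, burn_in=10_000):
            """Threshold the chaotic trace into bits and pack them into bytes."""
            x = hindmarsh_rose(burn_in + 8 * n_bytes)[burn_in:]
            bits = (x > np.median(x)).astype(np.uint8)
            return np.packbits(bits)[:n_bytes]

        image = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)   # stand-in RGB image
        key = keystream_bytes(image.size).reshape(image.shape)
        encrypted = image ^ key
        decrypted = encrypted ^ key                                          # XOR is its own inverse
        assert np.array_equal(decrypted, image)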

    Memory Capacity of a novel optical neural net architecture

    A new associative memory neural network which can be constructed using optical matched filters is described. It has three layers, the centre one being iterative with its weights set prior to training. The other two layers are feedforward nets whose weights are set during training. The best choice of central-layer weights, or in optical terms, of pairs of images associated in a hologram, is considered. The stored images or codes are selected carefully from an orthogonal set using a novel algorithm. This enables the net to have a high memory capacity, equal to half the number of neurons, with a low probability of error.
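
    The matched-filter memory behaves essentially like a correlation (heteroassociative) memory over near-orthogonal codes. The sketch below illustrates that idea in software, using a Hadamard construction as a stand-in for the paper's code-selection algorithm, which is not reproduced here; all sizes are illustrative.

        import numpy as np

        n = 64                                     # neurons per layer
        p = n // 2                                 # store half as many pairs as neurons

        H = np.array([[1]])
        while H.shape[0] < n:                      # Sylvester construction of a Hadamard matrix
            H = np.block([[H, H], [H, -H]])
        codes = H[1:p + 1]                         # mutually orthogonal +/-1 codes

        rng = np.random.default_rng(0)
        targets = np.sign(rng.standard_normal((p, n)))   # associated output images

        W = targets.T @ codes / n                  # correlation (matched-filter-like) memory

        probe = codes[3].astype(float)
        probe[:8] *= -1                            # corrupt a few entries of a stored code
        recalled = np.sign(W @ probe)
        print(np.mean(recalled == targets[3]))     # fraction of correctly recalled bits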

    Memory in reservoirs for high dimensional input

    Reservoir Computing (RC) is a recently introduced scheme for employing recurrent neural networks while circumventing the difficulties that typically arise when training the recurrent weights. The ‘reservoir’ is a fixed, randomly initialized recurrent network which receives input via a random mapping. Only an instantaneous linear mapping from the network to the output is trained, which can be done with linear regression. In this paper we study dynamical properties of reservoirs receiving a high number of inputs. More specifically, we investigate how the internal state of the network retains fading memory of its input signal. Memory properties of random recurrent networks have been thoroughly examined in past research, but only for one-dimensional input. Here we take into account statistics which typically occur in high-dimensional signals. We present empirical results which express how memory in recurrent networks is distributed over the individual principal components of the input.
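
    An illustrative sketch of the kind of measurement involved (not the paper's exact protocol): drive a random reservoir with a high-dimensional signal and quantify, via a ridge-regression readout, how well past inputs can be reconstructed from the current reservoir state. The sizes, spectral radius and regularization below are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)
        N, D, T = 200, 10, 5000                   # reservoir size, input dimension, time steps

        W = rng.standard_normal((N, N)) / np.sqrt(N)
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
        W_in = rng.standard_normal((N, D)) * 0.1

        u = rng.standard_normal((T, D))           # white high-dimensional input
        x = np.zeros((T, N))
        for t in range(1, T):
            x[t] = np.tanh(W @ x[t - 1] + W_in @ u[t])

        def memory_at_delay(k, reg=1e-6):
            """R^2 of reconstructing u[t-k] from x[t], per input dimension."""
            X, Y = x[k:], u[:T - k]
            beta = np.linalg.solve(X.T @ X + reg * np.eye(N), X.T @ Y)
            resid = Y - X @ beta
            return 1 - resid.var(axis=0) / Y.var(axis=0)

        for k in (1, 5, 10, 20):                  # fading memory: quality drops with delay
            print(k, memory_at_delay(k).mean())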

    Multitasking associative networks

    We introduce a bipartite, diluted and frustrated network, as a sparse restricted Boltzmann machine, and we show its thermodynamical equivalence to an associative working memory able to retrieve multiple patterns in parallel without falling into the spurious states typical of classical neural networks. We focus on systems processing in parallel a finite (up to logarithmic growth in the volume) number of patterns, mirroring the low storage level of standard Amit-Gutfreund-Sompolinsky theory. Results obtained through statistical mechanics, the signal-to-noise technique and Monte Carlo simulations are overall in perfect agreement and carry interesting biological insights. Indeed, these associative networks open new perspectives in the understanding of multitasking features expressed by complex systems, e.g. neural and immune networks. (Comment: to appear in Phys. Rev. Lett.)
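
    A toy Monte Carlo sketch of the parallel-retrieval phenomenon (not the statistical-mechanics analysis of the paper): with strongly diluted patterns, a Hebbian network started from a mixture can keep sizeable overlaps with several patterns at once. Sizes and the dilution level are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        N, P, d = 1000, 4, 0.7                     # neurons, patterns, dilution probability

        xi = rng.choice([-1, 1], size=(P, N))
        xi *= (rng.random((P, N)) > d)             # diluted patterns: most entries are zero

        J = xi.T @ xi / N                          # Hebbian couplings
        np.fill_diagonal(J, 0.0)

        sigma = np.sign(xi.sum(axis=0))            # start from a mixture of all patterns
        sigma[sigma == 0] = 1
        for _ in range(10):                        # zero-temperature sequential sweeps
            for i in rng.permutation(N):
                h = J[i] @ sigma
                if h != 0:
                    sigma[i] = np.sign(h)

        # overlap with each pattern, restricted to the neurons it actually uses
        overlaps = xi @ sigma / np.maximum((xi != 0).sum(axis=1), 1)
        print(np.round(overlaps, 2))               # several overlaps remain large simultaneously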