    Recent Advances and Applications of Fractional-Order Neural Networks

    This paper surveys the growth, development, and future of various forms of fractional-order neural networks. Advances in structure, learning algorithms, and methods are critically reviewed and summarized, together with recent trends in the dynamics of these networks. The forms considered are Hopfield, cellular, memristive, complex-valued, and quaternion-valued networks. Further, applications of fractional-order neural networks in computational fields such as system identification, control, optimization, and stability analysis are critically examined and discussed.
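
    As a concrete illustration of the fractional-order dynamics this survey covers, here is a minimal sketch of a fractional-order Hopfield network integrated with the Grünwald–Letnikov scheme under the short-memory principle. The network size, order alpha, step size, and memory length are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov binomial coefficients w_j of (1 - z)^alpha."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_hopfield(W, x0, alpha=0.9, h=0.01, steps=2000, memory=200):
    """Simulate D^alpha x = -x + W tanh(x) with an explicit GL scheme,
    truncating the memory convolution (short-memory principle)."""
    w = gl_coeffs(alpha, memory)
    hist = np.zeros((steps + 1, len(x0)))
    hist[0] = x0
    h_alpha = h ** alpha
    for k in range(1, steps + 1):
        drift = -hist[k - 1] + W @ np.tanh(hist[k - 1])
        m = min(k, memory)
        tail = sum(w[j] * hist[k - j] for j in range(1, m + 1))
        hist[k] = h_alpha * drift - tail
    return hist

# Toy run: Hebbian weights (with gain 2) storing one bipolar pattern.
p = np.array([1.0, -1.0, 1.0, -1.0])
W = 2.0 * np.outer(p, p) / len(p)
traj = fractional_hopfield(W, x0=0.1 * p)
print(np.sign(traj[-1]))   # converges inside the stored pattern's basin
```

    At alpha = 1 the scheme reduces to ordinary Euler integration of a Hopfield net; the fractional order introduces the power-law memory that the dynamics results surveyed here concern.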

    Chaotic image encryption using Hopfield and Hindmarsh–Rose neurons implemented on FPGA

    Chaotic systems implemented by artificial neural networks are good candidates for data encryption, and this paper introduces a cryptographic application of the Hopfield and Hindmarsh–Rose neurons. The contribution focuses on finding coefficient values of the neurons that generate random binary sequences robust enough for image encryption. This is done by evaluating bifurcation diagrams, from which one chooses coefficient values of the mathematical models that produce large positive Lyapunov exponents and Kaplan–Yorke dimensions, computed using TISEAN. The randomness of both the Hopfield and Hindmarsh–Rose neurons is evaluated on chaotic time-series data with National Institute of Standards and Technology (NIST) tests. Both neurons are implemented on field-programmable gate arrays, and the resulting architectures are used to build an encryption system for RGB images. The success of the encryption system is confirmed by correlation, histogram, variance, entropy, and Number of Pixel Change Rate (NPCR) tests.
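
    The pipeline described here (chaotic neuron → binary keystream → XOR cipher) can be sketched in software; the following is a toy stand-in for the paper's FPGA design, using textbook Hindmarsh–Rose parameters in the chaotic bursting regime. The parameter values, bit-extraction rule, and seed are assumptions for illustration, and no bifurcation-diagram search or NIST validation is attempted.

```python
import numpy as np

# Textbook Hindmarsh-Rose parameters in the chaotic bursting regime
# (illustrative; not the coefficients selected in the paper).
A, B, C, D, R, S, XR, I = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.25

def hr_step(state, dt=0.01):
    """One explicit Euler step of the Hindmarsh-Rose neuron."""
    x, y, z = state
    dx = y - A * x**3 + B * x**2 - z + I
    dy = C - D * x**2 - y
    dz = R * (S * (x - XR) - z)
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def keystream(n_bytes, seed=(0.1, 0.2, 0.3), transient=5000):
    """Threshold a fine digit of x(t) into a pseudo-random byte stream."""
    state = seed
    for _ in range(transient):                   # discard the transient
        state = hr_step(state)
    bits = np.empty(n_bytes * 8, dtype=np.uint8)
    for i in range(bits.size):
        state = hr_step(state)
        bits[i] = int(abs(state[0]) * 1e6) & 1   # parity of a low-order digit
    return np.packbits(bits)

def xor_encrypt(data, seed=(0.1, 0.2, 0.3)):
    """XOR bytes with the chaotic keystream (the cipher is its own inverse)."""
    ks = keystream(len(data), seed)
    return np.bitwise_xor(np.frombuffer(data, dtype=np.uint8), ks)

msg = b"example RGB payload"
cipher = xor_encrypt(msg)
print(xor_encrypt(cipher.tobytes()).tobytes() == msg)   # True
```

    Because the cipher is a keystream XOR, decryption is the same call with the same seed, which is what the final check verifies.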

    Memory capacity of a novel optical neural net architecture

    A new associative memory neural network that can be constructed using optical matched filters is described. It has three layers, the centre one being iterative with its weights set prior to training; the other two are feedforward nets whose weights are set during training. The best choice of central-layer weights, or in optical terms of pairs of images associated in a hologram, is considered. The stored images or codes are selected carefully from an orthogonal set using a novel algorithm. This gives the net a high memory capacity, equal to half the number of neurons, with a low probability of error. 17–18 October 1989.
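
    The benefit of orthogonal stored codes is easy to see in a small software sketch: drawing codes from a Sylvester Hadamard matrix gives mutually orthogonal bipolar vectors, and matched-filter (correlation) recall then identifies a corrupted code deterministically. The sizes and noise level below are illustrative assumptions; the optical construction itself is not modelled.

```python
import numpy as np

def sylvester_hadamard(n):
    """Rows of a Sylvester Hadamard matrix: mutually orthogonal +/-1 codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64                                         # number of neurons
codes = sylvester_hadamard(n)[1 : n // 2 + 1]  # store n/2 orthogonal codes

rng = np.random.default_rng(0)
probe = codes[3].copy()
probe[rng.choice(n, size=8, replace=False)] *= -1   # corrupt 8 of 64 bits

scores = codes @ probe            # matched-filter (correlation) outputs
print(np.argmax(scores) == 3)     # True: the stored code still wins
```

    With 32 orthogonal codes of 64 neurons, the true code's correlation after 8 flipped bits is 48, while any other code's is at most 16, which is the mechanism behind the low error probability claimed above.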

    Neural networks and MIMD-multiprocessors

    Two artificial neural network models are compared: the Hopfield neural network and the Sparse Distributed Memory model. Distributed algorithms for both are designed and implemented, their run-time characteristics are analysed theoretically and tested in practice, and the storage capacities of the two networks are compared. The implementations run on a distributed multiprocessor system.
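
    For reference, a minimal (sequential rather than distributed) sketch of the less familiar of the two models, Kanerva's Sparse Distributed Memory, is given below. The dimensions and activation radius are illustrative assumptions, and the MIMD distribution of the paper is not reproduced.

```python
import numpy as np

class SDM:
    """Minimal Kanerva Sparse Distributed Memory over binary vectors."""
    def __init__(self, n=256, locations=2000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(locations, n), dtype=np.int8)
        self.counters = np.zeros((locations, n), dtype=np.int32)
        self.radius = radius

    def _active(self, addr):
        # hard locations within Hamming distance `radius` of the address
        return np.count_nonzero(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data.astype(np.int32) - 1

    def read(self, addr):
        total = self.counters[self._active(addr)].sum(axis=0)
        return (total > 0).astype(np.int8)

# Autoassociative round trip with a noisy cue.
rng = np.random.default_rng(1)
mem = SDM()
x = rng.integers(0, 2, 256, dtype=np.int8)
mem.write(x, x)
cue = x.copy()
cue[rng.choice(256, size=20, replace=False)] ^= 1   # flip 20 of 256 bits
print((mem.read(cue) == x).mean())                  # typically 1.0
```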

    Correlating matched-filter model for analysis and optimisation of neural networks

    A new formalism is described for modelling neural networks, by means of which a clear physical understanding of network behaviour can be gained. In essence, the neural net is represented by an equivalent network of matched filters, which is then analysed by standard correlation techniques. The procedure is demonstrated on the synchronous Little–Hopfield network. It is shown that the ability of this network to discriminate between stored binary, bipolar codes is optimised when the stored codes are chosen to be orthogonal. Such a choice will not often be possible, however, so a new neural network architecture is proposed which achieves the same discrimination for arbitrary stored codes. The most efficient convergence of the synchronous Little–Hopfield net is obtained when the neurons are connected to themselves with a weight equal to the number of stored codes, and the processing gain is presented for this case. The paper goes on to show how the modelling technique can be extended to analyse both hard and soft neural threshold responses, and a novel time-dependent threshold response is described.
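
    The self-connection result is easy to check numerically: for M stored bipolar codes, the Hebbian matrix W = PᵀP has every diagonal entry exactly equal to M, so simply keeping the diagonal (rather than zeroing it, as is common) gives each neuron the recommended self-weight. A minimal sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 128, 8
P = rng.choice([-1, 1], size=(M, n))   # M stored bipolar codes
W = P.T @ P                            # Hebbian weights; each diagonal entry
                                       # is exactly M, the self-weight above

x = P[0].copy()
x[rng.choice(n, size=20, replace=False)] *= -1   # corrupt 20 of 128 bits

for _ in range(10):                    # synchronous (Little) updates
    x_new = np.where(W @ x >= 0, 1, -1)
    if np.array_equal(x_new, x):
        break
    x = x_new
print((x == P[0]).mean())              # fraction of bits recovered
```

    Keeping the diagonal also makes W positive semi-definite, which rules out the two-cycles that synchronous updates otherwise admit, consistent with the convergence claim above.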

    Computational mechanisms in genetic regulation by RNA

    The evolution of the genome has led to very sophisticated and complex regulation. Because of the abundance of non-coding RNA (ncRNA) in the cell, different species promiscuously associate with each other, suggesting collective dynamics similar to those of artificial neural networks. Here we present a simple mechanism allowing ncRNA to perform computations equivalent to neural network algorithms such as Boltzmann machines and the Hopfield model. The quantities analogous to the neural couplings are the equilibrium constants between different RNA species. The relatively rapid equilibration of RNA binding and unbinding is regulated by a slower process that degrades and creates new RNA; the model requires that the creation rate for each species be an increasing function of the ratio of total to unbound RNA. Similar mechanisms have already been found experimentally in ncRNA regulation. With the overall concentration of RNA regulated, the equilibrium constants can be chosen to store many different patterns, or many different input-output relations. The network is also quite insensitive to random mutations in the equilibrium constants, so one expects this kind of mechanism to have a much higher mutation rate than mechanisms typically regarded as being under evolutionary constraint. (18 pages, 10 figures.)
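
    As a point of reference for the analogy, the sketch below runs standard stochastic (Boltzmann-machine style) recall on a Hopfield energy; the couplings J simply stand in for the role the paper assigns to equilibrium constants between RNA species, and nothing chemical is modelled. Sizes, temperature, and noise level are illustrative.

```python
import numpy as np

def gibbs_recall(J, s, beta=2.0, sweeps=50, seed=0):
    """Gibbs sampling on E(s) = -1/2 s.J.s, as in a Boltzmann machine."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            field = J[i] @ s
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
            s[i] = 1 if rng.random() < p_up else -1
    return s

rng = np.random.default_rng(1)
n, M = 100, 5
P = rng.choice([-1, 1], size=(M, n))
J = (P.T @ P) / n                 # Hebbian couplings: stand-ins for the
np.fill_diagonal(J, 0.0)          # paper's inter-species equilibrium constants
cue = P[0] * np.where(rng.random(n) < 0.15, -1, 1)   # 15% of spins flipped
out = gibbs_recall(J, cue)
print((out == P[0]).mean())       # overlap with the stored pattern, near 1.0
```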