
    Neural Networks With Asynchronous Control.

    Neural network studies have previously focused on monolithic structures. The brain has a bicameral nature, however, and so it is natural to expect that bicameral structures will perform better. This dissertation offers an approach to the development of such bicameral structures. The companion neural structure takes advantage of the global and subset characteristics of the stored memories. Specifically, we propose the use of an asynchronous controller C that implies the following update of a probe vector x by the connection matrix T: x' = sgn(C(x, Tx)). For a VLSI-implemented neural network the controller block can easily be placed in the feedback loop. In a network running asynchronously, the updating of the probe generally offers a choice among several components. If the right components are not updated, the network may converge to an incorrect stable point. The proposed asynchronous controller together with the basic neural net forms a bicameral network that can be programmed in various ways to exploit global and local characteristics of stored memory. Several methods to do this are proposed. In one of the methods the update choices are based on bit frequencies; in another, handles are appended to the memories to improve retrieval. The new methods have been analyzed and their performance studied; it is shown that there is a marked improvement in performance, which is illustrated by means of simulations. The use of an asynchronous controller allows the implementation of conditional rules that occur frequently in AI applications. It is shown that a neural network that uses conditional rules can solve problems in natural language understanding. The introduction of the asynchronous controller may be viewed as a first step in the development of truly bicameral structures that may be seen as the next generation of neural computers.
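    A minimal sketch of the controlled asynchronous update x' = sgn(C(x, Tx)) described above, assuming Hebbian (outer-product) storage of bipolar memories and a caller-supplied controller that picks which unstable component to update; the function names and the largest-activation selection rule are illustrative assumptions, not the controllers proposed in the dissertation.

    ```python
    import numpy as np

    def hebbian_matrix(memories):
        """Outer-product (Hebbian) connection matrix T with zero diagonal."""
        T = sum(np.outer(m, m) for m in memories).astype(float)
        np.fill_diagonal(T, 0.0)
        return T

    def controlled_update(x, T, controller, max_steps=1000):
        """Asynchronous recall x' = sgn(C(x, Tx)): at each step the controller
        chooses which currently unstable component to update."""
        x = x.astype(float).copy()
        for _ in range(max_steps):
            h = T @ x                                    # activation potentials
            unstable = np.flatnonzero((h != 0) & (np.sign(h) != x))
            if unstable.size == 0:                       # stable point reached
                break
            i = controller(x, h, unstable)               # controller picks a component
            x[i] = np.sign(h[i])
        return x

    # Illustrative controller: update the unstable bit with the largest |activation|
    largest_field = lambda x, h, unstable: unstable[np.argmax(np.abs(h[unstable]))]

    mems = [np.array([1, -1, 1, -1, 1]), np.array([1, 1, -1, -1, 1])]
    T = hebbian_matrix(mems)
    probe = np.array([1, -1, 1, -1, -1])                 # noisy version of the first memory
    print(controlled_update(probe, T, largest_field))
    ```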

    A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks

    In this paper, we address the stability of a broad class of discrete-time hypercomplex-valued Hopfield-type neural networks. To ensure that the neural networks belonging to this class always settle down at a stationary state, we introduce novel hypercomplex number systems referred to as real-part associative hypercomplex number systems. Real-part associative hypercomplex number systems generalize the well-known Cayley-Dickson algebras and real Clifford algebras and include the systems of real numbers, complex numbers, dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as particular instances. Apart from the novel hypercomplex number systems, we introduce a family of hypercomplex-valued activation functions called B-projection functions. Broadly speaking, a B-projection function projects the activation potential onto the set of all possible states of a hypercomplex-valued neuron. Using the theory presented in this paper, we confirm the stability analysis of several discrete-time hypercomplex-valued Hopfield-type neural networks from the literature. Moreover, we introduce and provide the stability analysis of a general class of Hopfield-type neural networks on Cayley-Dickson algebras.
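    As a loose illustration of the projection idea, the sketch below iterates a complex-valued Hopfield-type network whose activation projects each activation potential onto the unit circle; the function names, the unit-circle state set, and the Hermitian weight construction are assumptions for this toy example and do not reproduce the paper's B-projection functions or hypercomplex number systems.

    ```python
    import numpy as np

    def unit_circle_projection(h):
        """Toy projection activation: maps each nonzero complex activation
        potential onto the nearest state on the unit circle."""
        return h / np.abs(h)

    def hopfield_step(W, x):
        """One synchronous update of a complex-valued Hopfield-type network."""
        return unit_circle_projection(W @ x)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    W = (A + A.conj().T) / 2                        # Hermitian weights (an illustrative assumption)
    x = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))   # random initial phase state
    for _ in range(10):
        x = hopfield_step(W, x)
    print(np.angle(x))                              # phases after relaxation
    ```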

    Second Order Neural Networks.

    In this dissertation, a feedback neural network model has been proposed. This network uses a second-order method of convergence based on the Newton-Raphson method and has both discrete and continuous versions. When used as an associative memory, the proposed model has been called the polynomial neural network (PNN). The memories of this network can be located anywhere in an n-dimensional space rather than being confined to the corners of a hypercube. A method for storing memories has been proposed; it is a single-step method, unlike the currently known computationally intensive iterative methods. An energy function for the polynomial neural network has been suggested, and issues relating to the error-correcting ability of this network have been addressed. Additionally, it has been found that the attractor basins of the memories of this network reveal a curious fractal topology, thereby suggesting a highly complex and often unpredictable nature. The use of the second-order neural network as a function optimizer has also been shown. While issues relating to the hardware realization of this network have only been addressed briefly, it has been indicated that such a network would require a large amount of hardware; this problem can be obviated by using a simplified model that has also been described. The performance of this simplified model is comparable to that of the basic model while requiring much less hardware for its realization.
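    The polynomial neural network itself is not specified in the abstract; as a generic illustration of second-order (Newton-Raphson) relaxation toward a stationary point of an energy function, consider the sketch below. The newton_relaxation helper and the quadratic toy energy are illustrative assumptions only.

    ```python
    import numpy as np

    def newton_relaxation(grad, hess, x0, steps=20, tol=1e-8):
        """Second-order (Newton-Raphson) relaxation toward a stationary point
        of an energy function: solve hess(x) * dx = -grad(x) at each step."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            g = grad(x)
            if np.linalg.norm(g) < tol:              # stationary point reached
                break
            x = x + np.linalg.solve(hess(x), -g)
        return x

    # Toy quadratic energy E(x) = 0.5 (x - m)^T A (x - m), minimized at m
    m = np.array([0.3, -0.7])
    A = np.array([[2.0, 0.4], [0.4, 1.0]])
    grad = lambda x: A @ (x - m)
    hess = lambda x: A
    print(newton_relaxation(grad, hess, np.zeros(2)))   # converges to m in one step
    ```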

    Enhancing associative memory recall and storage capacity using confocal cavity QED

    Funding: Y.G. and B.M. acknowledge funding from the Stanford Q-FARM Graduate Student Fellowship and the NSF Graduate Research Fellowship, respectively. J.K. acknowledges support from the Leverhulme Trust (IAF-2014-025), and S.G. acknowledges funding from the James S. McDonnell and Simons Foundations and an NSF CAREER Award.
    We introduce a near-term experimental platform for realizing an associative memory. It can simultaneously store many memories by using spinful bosons coupled to a degenerate multimode optical cavity. The associative memory is realized by a confocal cavity QED neural network, with the modes serving as the synapses, connecting a network of superradiant atomic spin ensembles, which serve as the neurons. Memories are encoded in the connectivity matrix between the spins and can be accessed through the input and output of patterns of light. Each aspect of the scheme is based on recently demonstrated technology using a confocal cavity and Bose-condensed atoms. Our scheme has two conceptually novel elements. First, it introduces a new form of random spin system that interpolates between a ferromagnetic and a spin glass regime as a physical parameter, the positions of the ensembles within the cavity, is tuned. Second, and more importantly, the spins relax via deterministic steepest-descent dynamics rather than Glauber dynamics. We show that this nonequilibrium quantum-optical scheme has significant advantages for associative memory over Glauber dynamics: these dynamics can enhance the network's ability to store and recall memories beyond that of the standard Hopfield model. Surprisingly, the cavity QED dynamics can retrieve memories even when the system is in the spin glass phase. Thus, the experimental platform provides a novel physical instantiation of associative memories and spin glasses, as well as an unusual form of relaxational dynamics that is conducive to memory recall even in regimes where it was thought to be impossible.
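    As a purely classical caricature of the contrast drawn above, the sketch below relaxes a Hopfield network either by flipping a random unstable spin (a zero-temperature Glauber-like rule) or by flipping the spin that lowers the energy most (a greedy steepest-descent rule). The function names and the greedy rule are illustrative assumptions and are not the confocal cavity QED dynamics studied in the paper.

    ```python
    import numpy as np

    def recall(J, s0, rule="steepest", max_steps=10000, rng=None):
        """Single-spin-flip relaxation of a Hopfield network.
        rule='steepest': flip the spin giving the largest energy decrease.
        rule='random'  : flip a random unstable spin (zero-temperature Glauber-like)."""
        rng = rng or np.random.default_rng()
        s = s0.copy()
        for _ in range(max_steps):
            dE = 2 * s * (J @ s)                   # energy change from flipping each spin
            unstable = np.flatnonzero(dE < 0)
            if unstable.size == 0:                 # local minimum reached
                break
            i = (unstable[np.argmin(dE[unstable])] if rule == "steepest"
                 else rng.choice(unstable))
            s[i] = -s[i]
        return s

    rng = np.random.default_rng(0)
    memories = rng.choice([-1, 1], size=(3, 50))
    J = (memories.T @ memories).astype(float) / 50
    np.fill_diagonal(J, 0.0)
    cue = memories[0] * rng.choice([1, -1], size=50, p=[0.9, 0.1])   # ~10% corrupted cue
    print((recall(J, cue) != memories[0]).sum(), "bits wrong after steepest descent")
    ```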

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, the first of which appeared over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories can be recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type within the cortex is the pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are far fewer in number and which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement the spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells, with 10% connectivity applied, a partial cue instantiated, and global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which proportionally scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Secondly, adding a persistent sodium channel to the cell body, which non-linearises the activation threshold: above a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely the high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree, which gives a greater probability that a modified synapse exists after the 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses, which simulates the loss of potential at synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the roles of inhibition and cellular dynamics are pivotal in learning and memory.
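    The spiking NEURON/MATLAB model cannot be condensed here; as a minimal artificial-network analogue of recall from a partial cue, the following sketch uses clipped Hebbian (Willshaw-style) storage of sparse binary patterns and a simple winners-take-all threshold at recall. All names and parameter values are illustrative assumptions, not the thesis model.

    ```python
    import numpy as np

    def store(patterns):
        """Clipped Hebbian (Willshaw-style) weight matrix for binary patterns."""
        W = np.zeros((patterns.shape[1], patterns.shape[1]), dtype=int)
        for p in patterns:
            W |= np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W

    def recall(W, cue, k):
        """One-step recall from a partial cue: the k units with the highest
        dendritic sums are set active (a simple winners-take-all threshold)."""
        sums = W @ cue
        out = np.zeros_like(cue)
        out[np.argsort(sums)[-k:]] = 1
        return out

    rng = np.random.default_rng(1)
    pats = (rng.random((20, 100)) < 0.1).astype(int)     # sparse binary patterns
    W = store(pats)
    cue = pats[0].copy()
    cue[np.flatnonzero(cue)[:5]] = 0                     # partial cue: drop 5 active units
    print(np.sum(recall(W, cue, k=pats[0].sum()) != pats[0]), "recall errors")
    ```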

    New Learning and Control Algorithms for Neural Networks.

    Neural networks offer distributed processing power, error-correcting capability, and structural simplicity of the basic computing element. They have been found attractive for applications such as associative memory, robotics, image processing, speech understanding, and optimization. Neural networks are self-adaptive systems that try to configure themselves to store new information. This dissertation investigates two approaches to improving performance: better learning and supervisory control. A new learning algorithm called the Correlation Continuous Unlearning (CCU) algorithm is presented. It is based on the idea of removing undesirable information that is encountered during the learning period. The control methods proposed in the dissertation improve convergence by affecting the order of updates using a controller. Most previous studies have focused on monolithic structures, but it is known that the human brain has a bicameral nature at the gross level and also contains several specialized structures. In this dissertation, we investigate the computing characteristics of neural networks that are not monolithic, being enhanced by a controller that can run algorithms exploiting the known global characteristics of the stored information. Such networks have been called bicameral neural networks. Stinson and Kak considered elementary bicameral models that used asynchronous control. Two new control methods, the method of iteration and the bicameral classifier, are now proposed. The method of iteration uses the Hamming distance between the probe and the answer to control the convergence to a correct answer, whereas the bicameral classifier takes advantage of global characteristics using a clustering algorithm. The bicameral classifier is applied to two different models of equiprobable patterns as well as to the more realistic situation where patterns can have different probabilities. The CCU algorithm has also been applied to a bidirectional associative memory with greatly improved performance. For multilayered networks, indexing of patterns to enhance system performance has been studied.
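    The CCU algorithm is not specified in this abstract; as a loosely related illustration of the unlearning idea it builds on, here is a sketch of classical Hopfield-style unlearning, in which random probes are allowed to converge and a small anti-Hebbian correction is applied at the states they reach. The function names and the value of eps are illustrative assumptions.

    ```python
    import numpy as np

    def sgn(v):
        return np.where(v >= 0, 1, -1)

    def converge(W, x, steps=50):
        """Synchronous Hopfield iteration until a fixed point (or step limit)."""
        for _ in range(steps):
            nxt = sgn(W @ x)
            if np.array_equal(nxt, x):
                break
            x = nxt
        return x

    def unlearn(W, n_probes=200, eps=0.01, rng=None):
        """Classical 'unlearning': converge random probes and subtract a small
        Hebbian term for the (possibly spurious) attractors they reach."""
        rng = rng or np.random.default_rng()
        n = W.shape[0]
        for _ in range(n_probes):
            s = converge(W, sgn(rng.standard_normal(n)))
            W = W - (eps / n) * np.outer(s, s)
            np.fill_diagonal(W, 0.0)
        return W

    rng = np.random.default_rng(0)
    mems = rng.choice([-1, 1], size=(5, 64))
    W = (mems.T @ mems).astype(float) / 64
    np.fill_diagonal(W, 0.0)
    W = unlearn(W, rng=rng)          # weakens the pull of spurious attractors
    ```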

    Biologically inspired evolutionary temporal neural circuits

    Biological neural networks have always motivated the creation of new artificial neural networks, in this case a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short- and long-term memories as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas the synaptic connection strengths as well as delayed feedback loops (IIR circuits) can constitute longer-term memories (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) through the automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards the realization of general intelligent systems that need minimal or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving-average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and by evolutionary search over architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using the Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations are given to demonstrate these capabilities of GETnet.
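    A minimal sketch of the kind of node described above, assuming a tapped delay line on the input as short-term memory and a delayed feedback of the node's own output as a longer-term memory, both feeding a tanh nonlinearity; the NarmaNode class and its weights are illustrative and are not GETnet's actual node model or training machinery.

    ```python
    import numpy as np

    class NarmaNode:
        """Toy nonlinear moving-average/autoregressive node: a tapped delay
        line on the input (STM) plus delayed feedback of its own output (LTM),
        passed through a tanh nonlinearity."""
        def __init__(self, ma_weights, ar_weights):
            self.b = np.asarray(ma_weights, float)   # weights on delayed inputs
            self.a = np.asarray(ar_weights, float)   # weights on delayed outputs
            self.x_hist = np.zeros(len(self.b))      # input delay line
            self.y_hist = np.zeros(len(self.a))      # output delay line

        def step(self, x):
            self.x_hist = np.roll(self.x_hist, 1); self.x_hist[0] = x
            y = np.tanh(self.b @ self.x_hist + self.a @ self.y_hist)
            self.y_hist = np.roll(self.y_hist, 1); self.y_hist[0] = y
            return y

    node = NarmaNode(ma_weights=[0.5, 0.3, 0.1], ar_weights=[0.4])
    outputs = [node.step(u) for u in np.sin(np.linspace(0, 6, 50))]
    print(outputs[-1])
    ```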

    Towards a continuous dynamic model of the Hopfield theory on neuronal interaction and memory storage

    The purpose of this work is to study the Hopfield model of neuronal interaction and memory storage, in particular the convergence to the stored patterns. Since the hypothesis of symmetric synapses does not hold for the brain, we study how the model can be extended to the case of asymmetric synapses using a probabilistic approach. We then focus on the description of another feature of the memory process and of the brain: oscillations. Using the Kuramoto model we are able to describe them completely, obtaining synchronization between neurons. Our aim is therefore to understand how and why neurons can be seen as oscillators and to establish a strong link between this model and the Hopfield approach.
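    A minimal sketch of the Kuramoto dynamics referred to above, assuming all-to-all coupling with strength K and simple Euler integration; the parameter values are illustrative. The order parameter r approaches 1 when the oscillators synchronize.

    ```python
    import numpy as np

    def kuramoto(theta0, omega, K, dt=0.01, steps=5000):
        """Euler integration of the Kuramoto model:
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
        theta = np.array(theta0, float)
        N = theta.size
        for _ in range(steps):
            coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            theta += dt * (omega + (K / N) * coupling)
        return theta

    rng = np.random.default_rng(0)
    N = 100
    theta = rng.uniform(0, 2 * np.pi, N)
    omega = rng.normal(0, 1, N)                      # natural frequencies
    theta = kuramoto(theta, omega, K=4.0)
    r = np.abs(np.exp(1j * theta).mean())            # order parameter: r -> 1 means synchrony
    print(f"order parameter r = {r:.2f}")
    ```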

    Towards an Information Theoretic Framework for Evolutionary Learning

    The vital essence of evolutionary learning consists of information flows between the environment and the entities differentially surviving and reproducing therein. Gain or loss of information in individuals and populations due to evolutionary steps should be considered in evolutionary algorithm theory and practice. Information theory has rarely been applied to evolutionary computation - a lacuna that this dissertation addresses, with an emphasis on objectively and explicitly evaluating the ensemble models implicit in evolutionary learning. Information theoretic functionals can provide objective, justifiable, general, computable, commensurate measures of fitness and diversity. We identify information transmission channels implicit in evolutionary learning. We define information distance metrics and indices for ensembles. We extend Price's Theorem to non-random mating, give it an effective fitness interpretation, and decompose it to show the key factors influencing heritability and evolvability. We argue that the heritability and evolvability of our information theoretic indicators are high. We illustrate the use of our indices for reproductive and survival selection. We develop algorithms to estimate information theoretic quantities on mixed continuous and discrete data via the empirical copula and information dimension. We extend statistical resampling. We present experimental and real-world application results: chaotic time series prediction; parity; complex continuous functions; industrial process control; and small-sample social science data. We formalize conjectures regarding evolutionary learning and information geometry.
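    As one concrete example of an information-theoretic diversity measure of the kind advocated above, the sketch below computes the Shannon entropy of a population's empirical genotype distribution; the function name and the binary-genome setup are illustrative assumptions, not the dissertation's indices.

    ```python
    import numpy as np

    def genotype_entropy(pop):
        """Shannon entropy (in bits) of the empirical genotype distribution:
        a simple information-theoretic diversity index for a population."""
        _, counts = np.unique(pop, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    diverse = rng.integers(0, 2, size=(64, 20))          # random binary genomes
    converged = np.tile(diverse[0], (64, 1))             # population collapsed to one genome
    print(genotype_entropy(diverse), genotype_entropy(converged))   # high vs. 0.0
    ```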