228 research outputs found

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Structural Plasticity and Associative Memory in Balanced Neural Networks With Spike-Time Dependent Inhibitory Plasticity

    Several homeostatic mechanisms enable the brain to maintain desired levels of neuronal activity. One of these, homeostatic structural plasticity, has been reported to restore activity in networks disrupted by peripheral lesions by altering their neuronal connectivity. While multiple lesion experiments have studied the changes in neurite morphology that underlie modifications of synapses in these networks, the underlying mechanisms that drive these changes and the effects of the altered connectivity on network function are yet to be explained. Experimental evidence suggests that neuronal activity modulates neurite morphology and that it may stimulate neurites to selectively sprout or retract to restore network activity levels. In this study, a new spiking network model was developed to investigate these activity dependent growth regimes of neurites. Simulations of the model accurately reproduce network rewiring after peripheral lesions as reported in experiments. To ensure that these simulations closely resembled the behaviour of networks in the brain, a biologically realistic network model that exhibits low frequency Asynchronous Irregular (AI) activity as observed in cerebral cortex was deafferented. Furthermore, to study the functional effects of peripheral lesioning and subsequent network repair by homeostatic structural plasticity, associative memories were stored in the network and their recall performances before deafferentation and after, during the repair process, were compared. The simulation results indicate that the re-establishment of activity in neurons both within and outside the deprived region, the Lesion Projection Zone (LPZ), requires opposite activity dependent growth rules for excitatory and inhibitory post-synaptic elements. Analysis of these growth regimes indicates that they also contribute to the maintenance of activity levels in individual neurons. 
In this model, the directional formation of synapses observed in experiments requires that pre-synaptic excitatory and inhibitory elements also follow opposite growth rules. Furthermore, it was observed that the proposed model of homeostatic structural plasticity and the inhibitory synaptic plasticity mechanism that balances the AI network are both necessary for successful rewiring. Moreover, even though average activity was restored to deprived neurons, these neurons did not retain their AI firing characteristics after repair. Finally, the recall performance of associative memories, which deteriorated after deafferentation, was not restored after network reorganisation.
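The opposite growth rules for excitatory and inhibitory post-synaptic elements can be sketched as follows. The linear form, the setpoint, and the rate `nu` are illustrative assumptions, not the thesis's actual growth curves:

```python
def growth_rate(activity, setpoint=1.0, nu=0.1, excitatory=True):
    """Activity-dependent growth rule for post-synaptic elements.

    Excitatory elements sprout when activity falls below the homeostatic
    setpoint and retract above it; inhibitory elements follow the opposite
    rule. Hypothetical linear form for illustration only.
    """
    delta = nu * (setpoint - activity)
    return delta if excitatory else -delta
```

A deprived neuron (activity below setpoint) thus grows excitatory elements and removes inhibitory ones, pushing its activity back up; an over-active neuron does the reverse.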

    A Model of Stimulus-Specific Neural Assemblies in the Insect Antennal Lobe

    It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABA-A-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interact through unreliable GABA-A and GABA-B inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABA-B synapses make the neural response unpredictable. Depending on the balance between GABA-A and GABA-B inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.
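The unreliable synapses at the heart of this model can be sketched as Bernoulli transmission: each pre-synaptic spike produces a conductance event only with some release probability. The function name and parameters are hypothetical stand-ins, not the paper's equations:

```python
import random

def synaptic_event(g_max, p_release, rng):
    """Unreliable inhibitory synapse: a pre-synaptic spike triggers a
    conductance event of amplitude g_max only with probability p_release;
    otherwise transmission fails, as hypothesised for the slow GABA-B
    synapses in the model."""
    return g_max if rng.random() < p_release else 0.0
```

Over many spikes the mean delivered conductance is roughly `g_max * p_release`, but any individual trial may fail, which is what makes the downstream response unpredictable.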

    Boolean Weightless Neural Network Architectures

    A collection of hardware weightless Boolean elements has been developed. These form fundamental building blocks with particular pertinence to the field of weightless neural networks, and they have also been shown to have merit in their own right for the design of robust architectures. A major element of this work is a collection of weightless Boolean sum-and-threshold techniques, including an implementation of L-max, also known as N-point, thresholding. These elements have been applied to design a Boolean weightless hardware version of Austin's ADAM neural network. ADAM is further enhanced by the addition of a new learning paradigm, non-Hebbian learning, which concentrates on the association of 'dissimilarity', on the premise that this is as important as areas of similarity. Image processing using hardware weightless neural networks is investigated through simulation of digital filters using a Type 1 Neuroram neuro-filter. Simulations have been performed using MATLAB to compare the results to a conventional median filter, and Type 1 Neuroram has been tested on an extended collection of noise types. The importance of the threshold has been examined, as has the effect of cascading both types of filter. This research has led to the development of several novel weightless hardware elements that can be applied to image processing. These patented elements include a weightless thermocoder and two weightless median filters; these novel, robust, high-speed weightless filters have been compared with conventional median filters. The robustness of these architectures has been investigated when subjected to accelerated, ground-based neutron radiation simulating the atmospheric radiation spectrum experienced at commercial avionic altitudes.
A trial investigating the resilience of weightless hardware Boolean elements in comparison to standard weighted arithmetic logic is detailed, examining the effects on operation when implemented on hardware experiencing single event effects induced by high-energy neutron bombardment. Further weightless Boolean elements are detailed which contribute to the development of a weightless implementation of the traditionally weighted self-organising map.
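The L-max (N-point) thresholding mentioned above can be sketched in software: from a vector of summed responses, keep exactly the l highest-scoring positions. Tie-breaking here is by position, which real hardware may resolve differently:

```python
def l_max_threshold(scores, l):
    """L-max (N-point) thresholding: return a binary vector with exactly l
    ones, placed at the positions holding the highest summed responses.
    Ties are broken by position order (an assumption of this sketch)."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:l]
    return [1 if i in top else 0 for i in range(len(scores))]
```

In an ADAM-style recall, this converts the analogue-like column sums back into a clean binary word regardless of the absolute response levels.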

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Human society now faces grand challenges in satisfying the growing demand for computing power while, at the same time, containing energy consumption. With the end of CMOS technology scaling, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the way towards a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics, and hence are either incompatible with RRAM synapses or require extensive peripheral circuitry to modulate a synapse, leaving them deficient in learning capability. As a result, they forfeit most of the density advantages gained by adopting nanoscale devices and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental elements for brain-inspired computing. Versatile CMOS spiking neurons are presented that combine, in compact integrated circuit modules, integrate-and-fire behaviour, the drive capability for dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations.
A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurement results validated the approach. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation, energy-efficient, brain-inspired cognitive computing systems.
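The in situ STDP rule that the neuron circuits implement follows the standard pair-based form, which can be stated in a few lines. The amplitude and time-constant values below are illustrative defaults, not taken from the dissertation's circuits:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike pair separated by
    dt = t_post - t_pre (ms): potentiation when the pre-synaptic spike
    precedes the post-synaptic one (dt > 0), depression otherwise,
    both decaying exponentially with the timing difference."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

In a memristive implementation, this signed weight change maps onto programming pulses that raise or lower the conductance of the RRAM synapse.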

    Analysing and enhancing the performance of associative memory architectures

    This thesis investigates the way in which information about the structure of a set of training data with 'natural' characteristics may be used to positively influence the design of associative memory neural network models of the Hopfield type. This is done with a view to reducing the level of connectivity in models of this type. There are three strands to this work. Firstly, an empirical evaluation of the implementation of existing theory is given. Secondly, a number of existing theories are combined to produce novel network models and training regimes. Thirdly, new strategies for constructing and training associative memories based on knowledge of the structure of the training data are proposed. The first conclusion of this work is that, under certain circumstances, performance benefits may be gained by establishing the connectivity in a non-random fashion, guided by knowledge of the structure of the training data. These performance improvements are relative to networks in which sparse connectivity is established in a purely random manner; this dilution occurs prior to the training of the network. Secondly, it is verified that, as predicted by existing theory, targeted post-training dilution of network connectivity yields better performance than networks in which connections are removed at random. Finally, an existing tool for the analysis of the attractor performance of neural networks of this type has been modified and improved, and a novel, comprehensive performance analysis tool is proposed.
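The two ingredients discussed, Hebbian training of a Hopfield-type memory and targeted post-training dilution, can be sketched as follows. Keeping the largest-magnitude weights is one plausible "targeted" criterion; the thesis's own strategies are data-driven and may differ:

```python
import numpy as np

def hopfield_weights(patterns):
    """Standard Hebbian weight matrix for a Hopfield network over +/-1
    patterns, with the self-connections (diagonal) zeroed."""
    p = np.array(patterns, dtype=float)   # one pattern per row
    w = p.T @ p / len(p)
    np.fill_diagonal(w, 0.0)
    return w

def targeted_dilution(w, keep_fraction):
    """Post-training dilution: keep only the largest-magnitude weights and
    zero the rest -- a targeted strategy, in contrast to removing the same
    number of connections purely at random."""
    magnitudes = np.abs(w[w != 0])
    if magnitudes.size == 0:
        return w.copy()
    cutoff = np.quantile(magnitudes, 1.0 - keep_fraction)
    diluted = w.copy()
    diluted[np.abs(diluted) < cutoff] = 0.0
    return diluted
```

A one-step recall test (`np.sign(w @ pattern)`) confirms that stored patterns remain fixed points of the trained network.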

    Projective simulation for artificial intelligence

    We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation.
    Comment: 22 pages, 18 figures. Close to the published version, with footnotes retained.
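A minimal two-layer version of the clip random walk can be sketched as follows: deliberation hops from a percept clip to an action clip with probability proportional to the edge weight, and reward strengthens the traversed edge. Function names, the damping scheme, and the two-layer restriction are simplifying assumptions relative to the general model:

```python
import random

def deliberate(h, percept, actions, rng):
    """One deliberation step: a random hop from the percept clip to an
    action clip, with probability proportional to the edge weight
    h[(percept, action)] (unseen edges default to weight 1)."""
    weights = [h.get((percept, a), 1.0) for a in actions]
    r = rng.random() * sum(weights)
    acc = 0.0
    for a, w in zip(actions, weights):
        acc += w
        if r <= acc:
            return a
    return actions[-1]

def learn(h, percept, action, reward, gamma=0.01):
    """Reinforce the traversed edge by the reward and damp all existing
    weights back towards their resting value of 1 (forgetting)."""
    for key in list(h):
        h[key] -= gamma * (h[key] - 1.0)
    h[(percept, action)] = h.get((percept, action), 1.0) + reward
```

After a rewarded episode, the agent's walk is biased towards the reinforced action, which is the core of the learning dynamics.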

    A Computational Model of the Lexical-Semantic System Based on a Grounded Cognition Approach

    This work presents a connectionist model of the semantic-lexical system based on grounded cognition. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores. The semantic properties of objects are represented as a collection of features, whose number may vary among objects. Features are described as activation of neural oscillators in different sensory-motor areas (one area for each feature) topographically organized to implement a similarity principle. Lexical items are represented as activation of neural groups in a different layer. Lexical and semantic aspects are then linked together on the basis of previous experience, using physiological learning mechanisms. After training, features which frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. The model also includes some inhibitory synapses: features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Moreover, different objects and the corresponding words can be simultaneously retrieved and segmented via a time division in the gamma band. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions occurring during learning. The model simulates the formation of categories, assuming that objects belong to the same category if they share some features. Simple examples are shown to illustrate how words representing a category can be distinguished from words representing individual members. Finally, the model can be used to simulate patients with focalized lesions, assuming an impairment of synaptic strength in specific feature areas.
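The feature-to-word linkage can be illustrated with a static association matrix in place of the paper's oscillator dynamics. This is a deliberate simplification: the outer-product rule, threshold, and one-hot word coding below are assumptions of this sketch, and the gamma-band segmentation is not modelled at all:

```python
import numpy as np

def train_associations(features, words, threshold=0.5):
    """Hebbian-style association between a binary feature layer and a
    lexical layer: an excitatory link forms wherever a feature and a
    word-form were co-active during training (outer-product rule,
    standing in for the paper's physiological learning mechanisms)."""
    F = np.array(features, dtype=float)   # one row per object: feature pattern
    W = np.array(words, dtype=float)      # matching row: one-hot word unit
    return (F.T @ W >= threshold).astype(float)

def recall_word(assoc, cue):
    """Present a (possibly partial) feature cue and return the index of
    the most strongly driven word unit."""
    drive = np.array(cue, dtype=float) @ assoc
    return int(np.argmax(drive))
```

Even a single distinctive feature then suffices as a cue to retrieve the associated word, mirroring the cue-driven recall described in the abstract.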