
    Capacity, Fidelity, and Noise Tolerance of Associative Spatial-Temporal Memories Based on Memristive Neuromorphic Network

    We have calculated the key characteristics of associative (content-addressable) spatial-temporal memories based on neuromorphic networks with restricted connectivity ("CrossNets"). Such networks may be naturally implemented in nanoelectronic hardware using hybrid CMOS/memristor circuits, which may feature extremely high energy efficiency, approaching that of biological cortical circuits, at much higher operation speed. Our numerical simulations, in some cases confirmed by analytical calculations, have shown that these characteristics depend substantially on the method used to record information into the memory. Of the four methods we have explored, two look especially promising: one based on quadratic programming, and the other a specific discrete version of gradient descent. The latter method provides a slightly lower memory capacity (at the same fidelity) than the former, but it allows local recording, which may be more readily implemented in nanoelectronic hardware. Most importantly, under synchronous retrieval both methods provide a capacity higher than that of the well-known Ternary Content-Addressable Memories with the same number of nonvolatile memory cells (e.g., memristors), though the input noise immunity of the CrossNet memories is somewhat lower.
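    As a concrete illustration of the recording idea discussed above, the sketch below implements a perceptron-style discrete approximation of gradient-descent recording into a bipolar associative memory, followed by synchronous retrieval. This is a minimal sketch under stated assumptions: the quantized weight step (standing in for a finite set of memristor conductance levels), the level count, and the function names are illustrative, not the paper's exact algorithm, and the restricted CrossNet connectivity is not modeled.

```python
import numpy as np

def record_discrete(patterns, n_levels=16, epochs=100):
    """Local, discrete recording: whenever a stored bit would be
    retrieved incorrectly, nudge the incoming weights of that neuron
    by one quantized level (a stand-in for one memristor conductance
    step). Illustrative assumption, not the paper's exact method."""
    n = patterns.shape[1]
    step = 1.0 / n_levels                    # one conductance level
    W = np.zeros((n, n))
    for _ in range(epochs):
        stable = True
        for xi in patterns:                  # xi is a bipolar (+/-1) vector
            wrong = np.where(W @ xi >= 0, 1, -1) != xi
            if wrong.any():
                stable = False
                W[wrong, :] += step * np.outer(xi[wrong], xi)
        np.fill_diagonal(W, 0.0)             # no self-coupling
        if stable:                           # all patterns are fixed points
            break
    return W

def retrieve_sync(W, x, steps=10):
    """Synchronous retrieval: all neurons update in parallel."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x
```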

    High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network

    Neuromorphic computing systems overcome the limitations of traditional von Neumann computing architectures. These systems can be further improved by using emerging technologies that are more efficient than CMOS for neural computation. Recent research has demonstrated that memristors and spintronic devices can boost the efficiency and speed of various neural network designs. This paper presents a biologically inspired, fully spintronic neuron used in a fully spintronic Hopfield RNN. The network is used to solve tasks, and the results are compared against those of current Hopfield neuromorphic architectures that use emerging technologies.
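    For reference, the underlying network model can be sketched in a few lines: a generic Hopfield network with Hebbian storage and asynchronous (clock-free) updates, with the spintronic neurons abstracted into sign activations. Nothing here is specific to the paper's hardware; it only illustrates the dynamics being accelerated.

```python
import numpy as np

def hopfield_store(patterns):
    """Standard Hebbian storage rule (generic, not paper-specific)."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall_async(W, x, sweeps=20, rng=None):
    """Asynchronous recall: neurons update one at a time in random
    order, mirroring clock-free operation."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x
```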

    Vector Symbolic Finite State Machines in Attractor Neural Networks

    Hopfield attractor networks are robust distributed models of human memory. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random bipolar vectors, and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of the implementable FSM, to be linear in the size of the attractor network. We show that the model is robust to imprecise and noisy weights, and is therefore a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks.
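    A minimal sketch of the general idea follows, under loudly stated assumptions: states and stimuli are random bipolar vectors, an autoassociative term makes each state an attractor, and a heteroassociative term maps the elementwise binding of (current state, stimulus) to the next state, after which pure attractor dynamics clean up the result. The binding operation, the weight normalization, and the two-phase update are illustrative choices; the paper's exact construction rules may differ.

```python
import numpy as np

def rand_bipolar(n, rng):
    return rng.choice((-1, 1), size=n)

def build_fsm(n, transitions, rng):
    """transitions: list of (state, stimulus, next_state) name triples.
    Returns codebooks plus an attractor matrix W and a transition
    matrix V. Construction is an illustrative VSA-style guess."""
    state_names = {s for s, _, _ in transitions} | {s2 for _, _, s2 in transitions}
    sym_names = {u for _, u, _ in transitions}
    S = {s: rand_bipolar(n, rng) for s in state_names}
    U = {u: rand_bipolar(n, rng) for u in sym_names}
    W = sum(np.outer(v, v) for v in S.values()) / n     # states as attractors
    V = sum(np.outer(S[s2], S[s1] * U[u])               # bind(state, stimulus) -> next
            for s1, u, s2 in transitions) / n
    return S, U, W, V

def step(W, V, x, u, settle=5):
    x = np.where(V @ (x * u) >= 0, 1, -1)    # stimulus-driven jump
    for _ in range(settle):
        x = np.where(W @ x >= 0, 1, -1)      # attractor cleanup
    return x

# Usage: a two-state machine that toggles on "go"
rng = np.random.default_rng(0)
S, U, W, V = build_fsm(1000, [("A", "go", "B"), ("B", "go", "A")], rng)
x = step(W, V, S["A"], U["go"])              # x now matches S["B"]
```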

    Modeling and Simulation of Spiking Neural Networks with Resistive Switching Synapses

    Artificial intelligence (AI) has recently achieved excellent results in implementing human brain cognitive functions such as learning, recognition, and inference by intensively running deep neural networks on high-performance computing platforms. However, the excessive computational time and power consumption required to achieve such performance make AI inefficient compared with the human brain. To replicate the brain's efficient operation in hardware, novel nanoscale memory devices such as resistive switching random access memory (RRAM) have attracted strong interest thanks to their ability to mimic biological learning in silico. In this chapter, the design, modeling, and simulation of RRAM-based electronic synapses capable of emulating biological learning rules are first presented. Then, the application of RRAM synapses in spiking neural networks to achieve neuromorphic tasks such as online learning of images and associative learning is addressed.
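    A standard biological learning rule in such networks is spike-timing-dependent plasticity (STDP). The sketch below shows a generic pair-based STDP update for a bounded synaptic conductance; the constants, the exponential timing window, and the soft-bound form are illustrative assumptions standing in for device-level set/reset behavior, not a model of any specific RRAM device from the chapter.

```python
import numpy as np

def stdp_update(G, t_pre, t_post, A_plus=0.01, A_minus=0.01,
                tau=20e-3, G_min=1e-6, G_max=1e-4):
    """Pair-based STDP on a conductance G (siemens). Pre-before-post
    potentiates (set-like), post-before-pre depresses (reset-like);
    soft bounds keep G within the device window. All constants are
    illustrative."""
    dt = t_post - t_pre                      # spike timing difference, seconds
    if dt > 0:
        dG = A_plus * (G_max - G) * np.exp(-dt / tau)
    else:
        dG = -A_minus * (G - G_min) * np.exp(dt / tau)
    return float(np.clip(G + dG, G_min, G_max))
```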

    Review of Neural Network Algorithms

    The artificial neural network is the core tool of machine learning for realizing intelligence. It has shown its advantages in fields such as sound, image, and text processing. Since the beginning of the 21st century, the progress of science and technology and the pursuit of artificial intelligence have sparked an upsurge in artificial neural network research. Firstly, this paper introduces the application background and development process of the artificial neural network in order to clarify the research context of neural networks. Five branches and their related applications are analyzed in detail: the single-layer perceptron, the linear neural network, the BP neural network, the Hopfield neural network, and the deep neural network. The analysis shows that the artificial neural network is developing in a more general, flexible, and intelligent direction. Finally, the future development of the artificial neural network in training modes, learning modes, function expansion, and technology combinations is discussed.
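    As a pointer to the simplest of the five branches surveyed, here is a minimal single-layer perceptron trained with Rosenblatt's error-driven rule; the hyperparameters are placeholders and the code is generic, not drawn from the review.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Single-layer perceptron: weights change only on misclassified
    samples (labels in {-1, +1}); converges iff data are separable."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:       # misclassified sample
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:                      # converged on training set
            break
    return w, b
```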

    Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array

    Recent advances in neuroscience, together with nanoscale electronic device technology, have generated strong interest in realizing brain-like computing hardware using emerging nanoscale memory devices as synaptic elements. Although experimental work has demonstrated the operation of nanoscale synaptic elements at the single-device level, network-level studies have been limited to simulations. In this work, we experimentally demonstrate array-level associative learning using phase change synaptic devices connected in a grid-like configuration similar to the organization of the biological brain. Implementing Hebbian learning with phase change memory cells, the synaptic grid was able to store presented patterns and recall missing patterns in an associative, brain-like fashion. We found that the system is robust to device variations and that large variations in cell resistance states can be accommodated by increasing the number of training epochs. We illustrate the tradeoff between the variation tolerance of the network and the overall energy consumption, and find that energy consumption decreases significantly for lower variation tolerance.
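    The reported tradeoff between variation tolerance and training epochs can be reproduced qualitatively with a toy model, sketched below: Hebbian outer-product updates are corrupted by multiplicative programming noise standing in for cell-to-cell resistance variation, and averaging over more epochs washes the noise out. The Gaussian noise model, the sigma value, and the function names are assumptions for illustration only.

```python
import numpy as np

def hebbian_with_variation(patterns, epochs=5, sigma=0.2, rng=None):
    """Accumulate noisy Hebbian updates over several epochs; each
    programming event gets multiplicative Gaussian noise (a crude
    stand-in for PCM resistance variation)."""
    rng = np.random.default_rng() if rng is None else rng
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for xi in patterns:                  # xi is a bipolar (+/-1) vector
            noise = rng.normal(1.0, sigma, size=(n, n))
            W += noise * np.outer(xi, xi) / n
    np.fill_diagonal(W, 0.0)
    return W / epochs

def recall(W, x, steps=10):
    """Associative recall of a corrupted or partial pattern."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x
```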