
    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms connecting molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In conventional neuromorphic paradigms, the network synaptic weights are adjusted as a result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
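    The reservoir-computing idea summarized above can be illustrated with a minimal software sketch. The example below uses a generic echo-state-style simulated reservoir with a fixed random recurrent matrix, not the physical memristor networks of the paper; the names (W_in, W_res, W_out) and the delayed-recall target task are illustrative assumptions. Only the linear readout is fitted, while the reservoir itself stays untrained.

```python
# Minimal echo-state-style reservoir sketch (illustrative only; the paper's
# reservoirs are physical memristor networks, not a simulated random matrix).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_steps = 1, 100, 1000

# Fixed random input and reservoir weights -- these are NOT trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # scale spectral radius

# Drive the nonlinear reservoir with an input signal and collect its states.
u = np.sin(np.linspace(0, 20 * np.pi, n_steps)).reshape(-1, n_in)
x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W_res @ x)   # history-dependent state update
    states[t] = x

# Only the linear readout is trained (ridge regression), here on a toy
# memory task: reproduce the input delayed by a few steps.
delay = 5
y_target = np.roll(u[:, 0], delay)
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res),
                        states.T @ y_target)
y_pred = states @ W_out
```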

    Magnetic Cellular Nonlinear Network with Spin Wave Bus for Image Processing

    We describe and analyze a cellular nonlinear network based on magnetic nanostructures for image processing. The network consists of magneto-electric cells integrated onto a common ferromagnetic film - a spin wave bus. The magneto-electric cell is an artificial two-phase multiferroic structure comprising piezoelectric and ferromagnetic materials. A bit of information is assigned to the cell's magnetic polarization, which can be controlled by the applied voltage. The information exchange among the cells occurs via the spin waves propagating in the spin wave bus. Each cell changes its state as the combined effect of two mechanisms: the magneto-electric coupling and the interaction with the spin waves. The distinctive feature of the network with a spin wave bus is the ability to control the inter-cell communication by an external global parameter - the magnetic field. The latter makes it possible to realize different image processing functions on the same template without rewiring or reconfiguration. We present the results of numerical simulations illustrating image filtering, erosion, dilation, horizontal and vertical line detection, inversion and edge detection accomplished on one template by the proper choice of the strength and direction of the external magnetic field. We also present numerical estimates of the major network parameters such as cell density, power dissipation and functional throughput, and compare them with the parameters projected for other nano-architectures such as CMOL-CrossNet, Quantum Dot Cellular Automata, and the Quantum Dot Image Processor. Potentially, the utilization of spin wave phenomena at the nanometer scale may provide a route to low-power, functional logic circuits for special-task data processing.
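    As a loose software analogy (not a model of spin waves or magneto-electric cells), the sketch below shows how a single cellular update rule over 3x3 neighbourhoods can realize several of the listed image-processing functions when one global parameter selects the operation, mirroring the idea of reprogramming the same template with an external field. The mode names and thresholds are illustrative assumptions.

```python
# Illustrative-only sketch: one cellular update rule whose behaviour is
# selected by a global parameter, loosely mirroring how the external
# magnetic field selects the function realized on one hardware template.
import numpy as np

def cellular_step(image, mode):
    """One synchronous update of a binary image using 3x3 neighbourhoods."""
    padded = np.pad(image, 1)  # zero boundary
    # Sum of the 8 neighbours of every cell.
    neigh = sum(
        np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    if mode == "dilation":   # cell turns on if any neighbour is on
        return ((image == 1) | (neigh > 0)).astype(int)
    if mode == "erosion":    # cell stays on only if all neighbours are on
        return ((image == 1) & (neigh == 8)).astype(int)
    if mode == "edge":       # on cells with at least one off neighbour
        return ((image == 1) & (neigh < 8)).astype(int)
    if mode == "inversion":
        return 1 - image
    raise ValueError(mode)

# Usage: a filled square whose edge is extracted by switching the global mode.
img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1
print(cellular_step(img, "edge"))
```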

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems which implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems
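    The contrast between the two memory organizations can be caricatured in a few lines of code. The toy sketch below is a conceptual illustration only, not any of the surveyed architectures: in the first variant all synaptic weights sit in one central array that the processor fetches from, while in the second each neuron-core object owns its weights and computes locally, which is the sense in which memory is distributed with the processing.

```python
# Toy contrast (illustrative only) between a central weight memory and
# neuron cores that keep their synaptic memory co-located with processing.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_inputs = 4, 16
x = rng.random(n_inputs)

# von Neumann style: one central weight memory, rows fetched per neuron.
central_memory = rng.random((n_neurons, n_inputs))
outputs_vn = np.array([np.tanh(central_memory[i] @ x) for i in range(n_neurons)])

# Neuromorphic style: each core owns its weights and computes in place.
class NeuronCore:
    def __init__(self, w):
        self.w = w                      # local synaptic memory
    def step(self, x):
        return np.tanh(self.w @ x)      # local computation on local memory

cores = [NeuronCore(rng.random(n_inputs)) for _ in range(n_neurons)]
outputs_nm = np.array([core.step(x) for core in cores])
```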

    A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation

    Memristive devices represent a promising technology for building neuromorphic electronic systems. In addition to their compactness and non-volatility, they are characterized by computationally relevant physical properties, such as state-dependence, non-linear conductance changes, and intrinsic variability in both their switching threshold and conductance values, that make them ideal devices for emulating the bio-physics of real synapses. In this paper we present a spiking neural network architecture that supports the use of memristive devices as synaptic elements, and propose mixed-signal analog-digital interfacing circuits which mitigate the effect of variability in their conductance values and exploit the variability in their switching threshold to implement stochastic learning. The effect of device variability is mitigated by using pairs of memristive devices configured in a complementary push-pull mechanism and interfaced to a current-mode normalizer circuit. The stochastic learning mechanism is obtained by mapping the desired change in synaptic weight into a corresponding switching probability derived from the intrinsic stochastic behavior of memristive devices. We demonstrate the features of the CMOS circuits and apply the proposed architecture to a standard hand-written digit classification benchmark based on the MNIST data-set. We evaluate the performance of the proposed approach on this benchmark using behavioral-level spiking neural network simulations, showing both the effect of the reduction in conductance variability produced by the current-mode normalizer circuit and the increase in performance as a function of the number of memristive devices used in each synapse.
    Comment: 13 pages, 12 figures; accepted for Faraday Discussions
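    A rough sketch of the stochastic-learning idea is given below, under stated assumptions: each synapse is represented by a "positive" and a "negative" bank of bistable devices (the complementary push-pull pair read as a difference), and a desired weight change is mapped onto a per-device switching probability. The conductance values, bank size and update rule are hypothetical placeholders, not the circuits or parameters of the paper.

```python
# Illustrative sketch of probability-gated switching on complementary
# memristor pairs (names and constants are hypothetical, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
G_ON, G_OFF = 1.0, 0.1           # bistable conductance states (arbitrary units)
n_devices = 8                    # devices per synapse bank (assumed)

# Each synapse: a "positive" and a "negative" bank of binary devices.
g_pos = rng.choice([G_ON, G_OFF], n_devices)
g_neg = rng.choice([G_ON, G_OFF], n_devices)

def effective_weight(g_pos, g_neg):
    # Push-pull pair read as a normalized difference of the two banks.
    return (g_pos.mean() - g_neg.mean()) / (G_ON - G_OFF)

def stochastic_update(g_pos, g_neg, delta_w):
    """Map a desired weight change onto a per-device switching probability."""
    p = min(abs(delta_w), 1.0)            # switching probability
    flips = rng.random(n_devices) < p     # which devices actually switch
    if delta_w > 0:
        g_pos[flips] = G_ON               # potentiate: positive bank ON
        g_neg[flips] = G_OFF
    else:
        g_pos[flips] = G_OFF              # depress: negative bank ON
        g_neg[flips] = G_ON
    return g_pos, g_neg

g_pos, g_neg = stochastic_update(g_pos, g_neg, delta_w=0.3)
print(effective_weight(g_pos, g_neg))
```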