
    Sparsity in Reservoir Computing Neural Networks

    Reservoir Computing (RC) is a well-known strategy for designing Recurrent Neural Networks, characterized by its striking training efficiency. The crucial aspect of RC is to properly instantiate the hidden recurrent layer that serves as a dynamical memory for the system. In this respect, the common recipe is to create a pool of randomly and sparsely connected recurrent neurons. While the role of sparsity in the design of RC systems has been debated in the literature, it is nowadays understood mainly as a way to enhance computational efficiency by exploiting sparse matrix operations. In this paper, we empirically investigate the role of sparsity in RC network design from the perspective of the richness of the developed temporal representations. We analyze sparsity both in the recurrent connections and in the connections from the input to the reservoir. Our results indicate that sparsity, in particular in the input-reservoir connections, plays a major role in developing internal temporal representations that have a longer short-term memory of past inputs and a higher dimensionality.
    Comment: This paper is currently under review.
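The reservoir construction described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the density values, spectral radius, and reservoir size are assumed for demonstration, and sparsity is imposed by masking random weight matrices in both the recurrent and the input-to-reservoir connections.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 100
recurrent_density = 0.1   # fraction of nonzero recurrent weights (assumed value)
input_density = 0.2       # fraction of nonzero input weights (assumed value)

# Sparse recurrent weight matrix, rescaled to a target spectral radius
W = rng.uniform(-1, 1, (n_reservoir, n_reservoir))
W *= rng.random((n_reservoir, n_reservoir)) < recurrent_density
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Sparse input-to-reservoir weight matrix
W_in = rng.uniform(-1, 1, (n_reservoir, n_inputs))
W_in *= rng.random((n_reservoir, n_inputs)) < input_density

def step(state, u):
    """One reservoir update: tanh of recurrent drive plus input drive."""
    return np.tanh(W @ state + W_in @ u)

# Drive the reservoir with a random input sequence
state = np.zeros(n_reservoir)
for u in rng.uniform(-1, 1, (50, n_inputs)):
    state = step(state, u)
```

In practice, large sparse reservoirs would use a sparse matrix representation (e.g. `scipy.sparse`) so that the update exploits the sparsity, which is the efficiency argument the abstract refers to.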

    Memristor-Based Analog Neuromorphic Computing Engine Design and Robust Training Scheme

    The invention of the neuromorphic computing architecture is inspired by the working mechanism of the human brain. Memristor technology has revitalized neuromorphic computing system design by efficiently executing analog matrix-vector multiplication on the memristor-based crossbar (MBC) structure. In this work, we propose a memristor crossbar-based embedded platform for neuromorphic computing systems. A variety of neural network algorithms with threshold activation functions can be easily implemented on our platform. However, programming the MBC to the target state can be very challenging because of the difficulty of monitoring the memristor state in real time during training. In this thesis, we quantitatively analyze the sensitivity of MBC programming to process variations and input signal noise. We then propose a noise-eliminating training method on top of a new crossbar structure to minimize noise accumulation during MBC training and improve the performance of the trained system, i.e., the pattern recall rate. A digital-assisted initialization step for MBC training is also introduced to reduce the training failure rate as well as the training time. We also propose a memristor-based bidirectional transmission excitation/inhibition synapse and implement a neuromorphic computing demonstration with the proposed synapse. Experimental results show that the proposed design has high tolerance to process variation and input noise. The respective benefits of the MBC system and the new synapse-based system are compared in the thesis.
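The analog matrix-vector multiplication on a crossbar, and its sensitivity to device variation and input noise, can be illustrated numerically. This is a simplified sketch, not the thesis design: the lognormal conductance-variation model, the additive input-noise model, and all magnitudes are assumptions chosen only to show how errors enter the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ideal weight matrix to be mapped onto memristor conductances
W = rng.uniform(0.1, 1.0, (4, 4))

def crossbar_mvm(weights, v_in, variation=0.05, noise=0.01):
    """Analog MVM on a crossbar: output currents I = G @ V.

    Each conductance deviates from its target by a lognormal factor
    (process variation), and the input voltages carry additive Gaussian
    noise. Both noise models are assumptions for illustration.
    """
    G = weights * rng.lognormal(0.0, variation, weights.shape)
    v = v_in + rng.normal(0.0, noise, v_in.shape)
    return G @ v

v = rng.uniform(0.0, 1.0, 4)
ideal = W @ v            # exact digital result
noisy = crossbar_mvm(W, v)  # analog result perturbed by variation and noise
```

Comparing `ideal` and `noisy` shows the deviation that a noise-eliminating training scheme, like the one proposed in the thesis, would have to compensate for.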