7 research outputs found

    Motion learning using spatio-temporal neural network

    Motion trajectory prediction is one of the key areas in behaviour and surveillance studies, and many successful applications have been reported in the literature. However, most of these studies are based on sigmoidal neural networks, in which some dynamic properties of the data are overlooked because spatio-temporal encoding functionality is absent. Although some sequential (motion) learning studies using spatio-temporal neural networks have been proposed, they, like the sigmoidal approaches, rely mainly on supervised learning, which requires a target signal that is not always available. This study proposes motion learning using a spatio-temporal neural network. The learning is based on reward-modulated spike-timing-dependent plasticity (STDP), whereby the weight adjustment provided by standard STDP is modulated by a reinforcement signal. The implementation of this reinforcement approach for motion trajectory learning is a major contribution of the study: learning proceeds on a reward basis, without the need for learning targets. The algorithm has shown good potential for learning motion trajectories, particularly in noisy and dynamic settings. Furthermore, the learning uses a generic neural network architecture, which makes it adaptable to many applications.
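    As an illustration of a reward-modulated (three-factor) STDP rule of the kind described above, the sketch below accumulates pairwise STDP updates into an eligibility trace and commits them to the synaptic weight only when a reward signal arrives. All constants, spike times, and names (e.g. `reward_modulated_stdp`, `stdp_kernel`) are hypothetical and chosen for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical constants, chosen only for illustration.
A_PLUS, A_MINUS = 0.01, 0.012     # STDP potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0       # STDP time constants (ms)
TAU_ELIG = 200.0                  # eligibility-trace decay constant (ms)
ETA = 0.5                         # learning rate applied to the reward signal

def stdp_kernel(dt):
    """Pairwise STDP: pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

def reward_modulated_stdp(pre_spikes, post_spikes, reward_events, w0=0.5):
    """Accumulate STDP updates in an eligibility trace and apply them to the
    weight only when a reward arrives (three-factor learning rule)."""
    w, elig, t_last = w0, 0.0, 0.0
    events = sorted([(t, "pre") for t in pre_spikes] +
                    [(t, "post") for t in post_spikes] +
                    [(t, "reward", r) for t, r in reward_events])
    for ev in events:
        t = ev[0]
        elig *= np.exp(-(t - t_last) / TAU_ELIG)       # decay the trace
        t_last = t
        if ev[1] == "post":                            # pair with the latest pre spike
            earlier = [tp for tp in pre_spikes if tp < t]
            if earlier:
                elig += stdp_kernel(t - max(earlier))
        elif ev[1] == "pre":                           # pair with the latest post spike
            earlier = [tp for tp in post_spikes if tp < t]
            if earlier:
                elig += stdp_kernel(max(earlier) - t)
        else:                                          # reward: commit the trace
            w = float(np.clip(w + ETA * ev[2] * elig, 0.0, 1.0))
    return w

# A pre-before-post pairing followed by a positive reward strengthens the synapse;
# the same pairing followed by a negative reward would weaken it.
print(reward_modulated_stdp(pre_spikes=[10.0], post_spikes=[15.0],
                            reward_events=[(100.0, +1.0)]))
```

    The eligibility trace is what removes the need for an explicit learning target: candidate weight changes are stored locally and only become permanent when the (possibly delayed) reward arrives.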

    A Retinotopic Spiking Neural Network System for Accurate Recognition of Moving Objects Using NeuCube and Dynamic Vision Sensors

    This paper introduces a new system for dynamic visual recognition that combines bio-inspired hardware with a brain-like spiking neural network. The system is designed to take data from a dynamic vision sensor (DVS), which simulates the functioning of the human retina by producing an address-event output (spike trains) based on the movement of objects. The system then convolves the spike trains and feeds them into a brain-like spiking neural network, called NeuCube, which is organized in a three-dimensional manner representing the organization of the primary visual cortex. Spatio-temporal patterns of the data are learned during a deep unsupervised learning stage using spike-timing-dependent plasticity. In a second stage, supervised learning is performed to train the network for classification tasks. The convolution algorithm and the mapping into the network mimic the function of retinal ganglion cells and the retinotopic organization of the visual cortex. The NeuCube architecture can be used to visualize the deep connectivity inside the network before, during, and after training, thereby allowing a better understanding of the learning processes. The method was tested on the benchmark MNIST-DVS dataset and achieved a classification accuracy of 92.90%. The paper discusses advantages and limitations of the new method and concludes that it is worth exploring further on different datasets, aiming for advances in dynamic computer vision and multimodal systems that integrate visual, aural, tactile, and other kinds of information in a biologically plausible way.
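    To make the two-stage pipeline concrete, the sketch below is a generic, simplified stand-in: a randomly connected reservoir of leaky integrate-and-fire neurons (standing in for the 3-D NeuCube reservoir) is driven by binary DVS-style spike frames, its input weights are adapted with unsupervised STDP, and a simple least-squares readout is then trained on spike-count features. All sizes, constants, and names (`run_reservoir`, `N_RESERVOIR`, the synthetic data) are assumptions made for illustration; they do not reflect NeuCube's actual API or the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants; real DVS resolutions and NeuCube settings differ.
N_INPUT, N_RESERVOIR, T_STEPS = 64, 200, 100
TAU_MEM, V_THRESH = 10.0, 1.0
A_PLUS, A_MINUS, TAU_STDP = 0.01, 0.012, 20.0

# Sparse random input and recurrent weights standing in for the 3-D wiring.
w_in = (rng.random((N_RESERVOIR, N_INPUT)) < 0.10) * rng.random((N_RESERVOIR, N_INPUT))
w_rec = (rng.random((N_RESERVOIR, N_RESERVOIR)) < 0.05) * rng.random((N_RESERVOIR, N_RESERVOIR)) * 0.1

def run_reservoir(spike_frames, learn=True):
    """Drive a leaky integrate-and-fire reservoir with binary spike frames
    (shape T_STEPS x N_INPUT); optionally adapt input weights with STDP."""
    global w_in
    v = np.zeros(N_RESERVOIR)
    out = np.zeros(N_RESERVOIR)
    pre_trace = np.zeros(N_INPUT)       # decaying trace of input spikes
    post_trace = np.zeros(N_RESERVOIR)  # decaying trace of reservoir spikes
    counts = np.zeros(N_RESERVOIR)
    for frame in spike_frames:
        v = v * np.exp(-1.0 / TAU_MEM) + w_in @ frame + w_rec @ out
        out = (v >= V_THRESH).astype(float)
        v[out > 0] = 0.0                # reset neurons that fired
        counts += out
        pre_trace = pre_trace * np.exp(-1.0 / TAU_STDP) + frame
        post_trace = post_trace * np.exp(-1.0 / TAU_STDP) + out
        if learn:
            # Potentiate inputs active before a reservoir spike, depress the reverse order.
            w_in += A_PLUS * np.outer(out, pre_trace)
            w_in -= A_MINUS * np.outer(post_trace, frame)
            np.clip(w_in, 0.0, 1.0, out=w_in)
    return counts                       # spike-count feature vector for the readout

# Stage 1: unsupervised STDP over (here, synthetic) event recordings.
recordings = [rng.integers(0, 2, size=(T_STEPS, N_INPUT)).astype(float) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
features = np.array([run_reservoir(r) for r in recordings])

# Stage 2: a simple supervised readout on the spike-count features (least squares
# here, standing in for the classifier used in the paper).
X = np.hstack([features, np.ones((len(features), 1))])
W_out, *_ = np.linalg.lstsq(X, np.eye(2)[labels], rcond=None)
pred = np.argmax(X @ W_out, axis=1)
print("training accuracy:", (pred == labels).mean())
```

    The split mirrors the description above: the reservoir and its STDP adaptation need no labels, and only the lightweight readout in the second stage uses supervision.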