
    Multi-scale 3D Convolution Network for Video Based Person Re-Identification

    This paper proposes a two-stream convolution network to extract spatial and temporal cues for video-based person Re-Identification (ReID). The temporal stream in this network is constructed by inserting several Multi-scale 3D (M3D) convolution layers into a 2D CNN. The resulting M3D convolution network adds only a fraction of parameters to the 2D CNN, but gains the ability of multi-scale temporal feature learning. With this compact architecture, the M3D convolution network is also more efficient and easier to optimize than existing 3D convolution networks. The temporal stream further involves Residual Attention Layers (RAL) to refine the temporal features. By jointly learning spatial-temporal attention masks in a residual manner, RAL identifies discriminative spatial regions and temporal cues. The other stream in our network is implemented with a 2D CNN for spatial feature extraction. The spatial and temporal features from the two streams are finally fused for video-based person ReID. Evaluations on three widely used benchmark datasets, i.e., MARS, PRID2011, and iLIDS-VID, demonstrate the substantial advantages of our method over existing 3D convolution networks and state-of-the-art methods.
    Comment: AAAI, 201
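
The multi-scale temporal aggregation described above can be sketched in miniature. This is a hedged illustration only: the kernel, dilation rates, and residual combination below are illustrative assumptions, not the paper's exact M3D design, and a single feature channel stands in for a full feature map.

```python
# Illustrative sketch of a Multi-scale 3D (M3D) temporal layer: parallel
# dilated temporal convolutions over a per-frame feature sequence, combined
# residually with the identity path. Kernel and dilations are made up.

def temporal_conv(seq, kernel, dilation):
    """1D temporal convolution with zero padding, so len(out) == len(seq)."""
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t + (k - len(kernel) // 2) * dilation
            if 0 <= idx < len(seq):
                acc += w * seq[idx]
        out.append(acc)
    return out

def m3d_layer(seq, kernels, dilations):
    """Residual multi-scale aggregation: identity plus one dilated
    temporal-convolution branch per temporal scale."""
    out = list(seq)
    for kernel, d in zip(kernels, dilations):
        branch = temporal_conv(seq, kernel, d)
        out = [o + b for o, b in zip(out, branch)]
    return out

frames = [1.0, 2.0, 3.0, 4.0, 5.0]   # per-frame feature, one channel
smooth = [0.25, 0.5, 0.25]           # shared 3-tap temporal kernel
print(m3d_layer(frames, [smooth] * 3, [1, 2, 3]))
```

Each dilation rate looks a different temporal distance away, which is the sense in which one layer captures multiple temporal scales at the cost of only a few extra 1D kernels.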

    Neural network image reconstruction for magnetic particle imaging

    We investigate neural network image reconstruction for magnetic particle imaging. The network performance depends strongly on the convolution effects of the spectrum input data. The larger convolution effect appearing at a relatively smaller nanoparticle size obstructs the network training. The trained single-layer network reveals a weighting matrix consisting of basis vectors in the form of Chebyshev polynomials of the second kind. The weighting matrix corresponds to an inverse system matrix, where the incoherence of basis vectors due to low convolution effects, as well as a nonlinear activation function, plays a crucial role in retrieving the matrix elements. Test images are well reconstructed through trained networks having an inverse kernel matrix. We also confirm that a multi-layer network with one hidden layer improves the performance. A neural network architecture that overcomes the low incoherence of the inverse kernel through its classification property will become a better tool for image reconstruction.
    Comment: 9 pages, 11 figures
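
The core claim that a trained single-layer network's weights approximate the inverse system matrix can be shown on a toy example. The 2x2 system matrix, the training pairs, and the plain gradient-descent loop below are illustrative assumptions, not the paper's MPI setup.

```python
# Toy demonstration: training pairs are (spectrum, image) with
# spectrum = A @ image; a single linear layer W trained to map
# spectrum -> image converges toward the inverse system matrix A^-1.

A = [[2.0, 1.0], [1.0, 3.0]]   # toy "system matrix" (assumption)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Training data: images x and their simulated spectra s = A x.
images = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
spectra = [matvec(A, x) for x in images]

# Single linear layer W, trained by stochastic gradient descent on s -> x.
W = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.02
for _ in range(5000):
    for s, x in zip(spectra, images):
        y = matvec(W, s)
        for i in range(2):
            err = y[i] - x[i]
            for j in range(2):
                W[i][j] -= lr * err * s[j]

# For this A, A^-1 = [[0.6, -0.2], [-0.2, 0.4]]; W should land close to it.
print(W)
```

Because an exact linear inverse exists here, the learned weights recover it; the paper's point is that with strong convolution effects the forward map is badly conditioned, and the nonlinearity and basis incoherence are what make the retrieval work.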

    Performance optimization of convolution calculation by blocking and sparsity on GPU

    Convolutional neural networks (CNNs) play a paramount role in machine learning, having made significant contributions to medical image classification, natural language processing, recommender systems, and so on. A successful convolutional neural network must achieve excellent performance with fast execution time, and the convolution operation dominates the total operation time of the network. Therefore, in this paper, we propose a novel convolution method on Graphics Processing Units (GPUs), which reduces the convolution operation time and improves execution speed by approximately 2X over the state-of-the-art convolution algorithm. Our work is based on the observation that the input feature maps of convolution operations are relatively sparse, and the zero values of a feature map are redundant for the convolution result. Therefore, we skip the zero-value calculations and improve speed by compressing the feature map. Moreover, the feature maps of deep layers are small and the number of threads is limited, so for a limited number of threads it is necessary to reduce the amount of calculation to increase calculation speed. Our algorithm is effective for convolution over the small, highly sparse feature maps of deep layers.
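
The zero-skipping idea can be sketched without GPU code: compress the sparse feature map into nonzero triples once, then scatter each nonzero through the kernel into the output, so all multiplications involving zero activations disappear. This is a hedged sketch of the general technique, not the paper's GPU kernel; shapes and the "valid" padding choice are assumptions.

```python
# Dense direct convolution as a reference, and a sparse version that
# iterates only over the compressed (row, col, value) nonzeros.

def conv_dense(fmap, kernel):
    H, W = len(fmap), len(fmap[0])
    kH, kW = len(kernel), len(kernel[0])
    out = [[0.0] * (W - kW + 1) for _ in range(H - kH + 1)]
    for i in range(len(out)):
        for j in range(len(out[0])):
            out[i][j] = sum(fmap[i + a][j + b] * kernel[a][b]
                            for a in range(kH) for b in range(kW))
    return out

def conv_sparse(fmap, kernel):
    H, W = len(fmap), len(fmap[0])
    kH, kW = len(kernel), len(kernel[0])
    # Compress the feature map: keep only nonzero entries.
    nz = [(i, j, v) for i, row in enumerate(fmap)
          for j, v in enumerate(row) if v != 0]
    out = [[0.0] * (W - kW + 1) for _ in range(H - kH + 1)]
    for i, j, v in nz:            # scatter each nonzero through the kernel
        for a in range(kH):
            for b in range(kW):
                oi, oj = i - a, j - b
                if 0 <= oi < len(out) and 0 <= oj < len(out[0]):
                    out[oi][oj] += v * kernel[a][b]
    return out

fmap = [[0, 0, 0, 2],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [3, 0, 0, 0]]
kernel = [[1, 0], [0, -1]]
assert conv_sparse(fmap, kernel) == conv_dense(fmap, kernel)
```

With three nonzeros out of sixteen inputs, the sparse path performs roughly 3/16 of the multiplications, which is the source of the speedup the paper pursues on highly sparse deep-layer feature maps.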

    Live Demonstration: Neuromorphic Row-by-Row Multi-convolution FPGA Processor-SpiNNaker architecture for Dynamic-Vision Feature Extraction

    In this demonstration, a spiking neural network architecture for vision recognition is presented, using an FPGA spiking convolution processor based on leaky integrate-and-fire (LIF) neurons together with a SpiNNaker board. The network has been trained with the Poker-DVS dataset in order to classify the four different card symbols. The spiking convolution processor extracts features from images in the form of spikes, computed by one layer of 64 convolutions. These features are sent to an OKAERtool board that converts from AER to the 2-of-7 protocol, to be classified by a spiking neural network deployed on a SpiNNaker platform.
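
The LIF dynamics underlying the spiking convolution processor can be sketched in a few lines: the membrane potential decays each step, integrates weighted input spikes, and emits an output spike (then resets) on crossing a threshold. The time step, weight, leak factor, and threshold below are made-up illustrative values, not the FPGA design's parameters.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron.

def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the list of time steps at which the neuron fires."""
    v = 0.0
    fired = []
    for t, spike in enumerate(input_spikes):
        v = leak * v + weight * spike   # leak, then integrate the input
        if v >= threshold:              # threshold crossing -> output spike
            fired.append(t)
            v = 0.0                     # reset after firing
    return fired

spikes = [1, 0, 1, 1, 0, 0, 1, 1]       # binary input spike train
print(lif_run(spikes))                  # -> [2, 6]
```

In the demonstrated system, each of the 64 convolution kernels feeds such neurons event by event, so features leave the processor as spike streams rather than dense frames.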