2,610 research outputs found
Efficient forward propagation of time-sequences in convolutional neural networks using Deep Shifting
When a Convolutional Neural Network is used for on-the-fly evaluation of
continuously updating time-sequences, many redundant convolution operations are
performed. We propose the method of Deep Shifting, which remembers previously
calculated results of convolution operations in order to minimize the number of
calculations. The reduction in complexity is at least a constant and in the
best case quadratic. We demonstrate that this method does indeed save
significant computation time in a practical implementation, especially when the
network receives a large number of time-frames.
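The caching idea the abstract describes can be sketched for a 1-D convolution over a growing time-sequence: outputs for already-seen window positions do not change when a new frame arrives, so only one new dot product is needed per frame. This is an illustrative sketch, not the paper's implementation; all names (`ShiftingConv1D`, `push`) are hypothetical.

```python
def conv1d(seq, kernel):
    """Full 1-D valid convolution (correlation) over the whole sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

class ShiftingConv1D:
    """Incremental convolution that remembers previously computed outputs."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.frames = []   # all frames seen so far
        self.outputs = []  # cached convolution results

    def push(self, frame):
        """Accept one new time-frame; compute only the newest output."""
        self.frames.append(frame)
        k = len(self.kernel)
        if len(self.frames) >= k:
            window = self.frames[-k:]  # one dot product per incoming frame
            self.outputs.append(sum(w * c for w, c in zip(window, self.kernel)))
        return self.outputs
```

Feeding the frames one at a time yields the same outputs as recomputing the full convolution after every frame, while doing only O(k) work per update instead of O(k · T).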
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky,
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic, LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96%, which compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
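The leaky integrate-and-fire (LIF) neuron that the abstract's convolutional layer is built from can be sketched minimally: the membrane potential leaks toward zero, integrates input current, and emits a spike with a reset on crossing a threshold. The constants and function name here are illustrative assumptions, not the paper's parameters.

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Run an LIF neuron over a sequence of input currents.

    Returns a binary spike train: 1 when the membrane potential crosses
    the threshold (followed by a reset), 0 otherwise.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # spike
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes
```

Sub-threshold inputs must accumulate over several steps before a spike is produced, which is what gives the sparse, spike-based coding the abstract refers to.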
ReConvNet: Video Object Segmentation with Spatio-Temporal Features Modulation
We introduce ReConvNet, a recurrent convolutional architecture for
semi-supervised video object segmentation that is able to quickly adapt its
features to focus on any specific object of interest at inference time.
Generalization to new objects never observed during training is known to be a
hard task for supervised approaches that would need to be retrained. To tackle
this problem, we propose a more efficient solution that learns spatio-temporal
features self-adapting to the object of interest via conditional affine
transformations. This approach is simple, can be trained end-to-end and does
not necessarily require extra training steps at inference time. Our method
shows competitive results on DAVIS2016 with respect to state-of-the-art
approaches that use online fine-tuning, and outperforms them on DAVIS2017.
ReConvNet also shows promising results on the DAVIS-Challenge 2018, winning the
-th position. Comment: CVPR Workshop - DAVIS Challenge 201
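The conditional affine transformation the abstract describes can be sketched as a per-channel scale-and-shift (FiLM-style) modulation of feature maps, where the scale `gamma` and shift `beta` would be predicted from the object of interest. This is a hypothetical sketch; the function name and the use of plain lists are illustrative assumptions.

```python
def affine_modulate(features, gamma, beta):
    """Per-channel conditional affine modulation of features.

    features: list of channels, each a list of feature values.
    gamma, beta: one scale and one shift per channel, assumed here to be
    predicted from the object of interest.
    Computes y[c][i] = gamma[c] * x[c][i] + beta[c].
    """
    return [[g * x + b for x in channel]
            for channel, g, b in zip(features, gamma, beta)]
```

Because only `gamma` and `beta` depend on the target object, the backbone features can stay fixed, which is what lets the network adapt to a new object without extra training steps at inference time.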