Estimating Post-Synaptic Effects for Online Training of Feed-Forward SNNs
Facilitating online learning in spiking neural networks (SNNs) is a key step
in developing event-based models that can adapt to changing environments and
learn from continuous data streams in real-time. Although forward-mode
differentiation enables online learning, its computational requirements
restrict scalability. This is typically addressed through approximations that
limit learning in deep models. In this study, we propose Online Training with
Postsynaptic Estimates (OTPE) for training feed-forward SNNs, which
approximates Real-Time Recurrent Learning (RTRL) by incorporating temporal
dynamics not captured by current approximations, such as Online Training
Through Time (OTTT) and Online Spatio-Temporal Learning (OSTL). We show
improved scaling for multi-layer networks using a novel approximation of
temporal effects on the subsequent layer's activity. This approximation incurs
minimal time and space overhead compared to similar
algorithms, and the calculation of temporal effects remains local to each
layer. We characterize the learning performance of our proposed algorithms on
multiple SNN model configurations for rate-based and time-based encoding. OTPE
exhibits the highest directional alignment to exact gradients, calculated with
backpropagation through time (BPTT), in deep networks and, on time-based
encoding, outperforms the other approximate methods. We also observe sizeable gains in average performance over similar algorithms in offline training on the Spiking Heidelberg Digits dataset with equivalent hyper-parameters (OTTT/OSTL: 70.5%; OTPE: 75.2%; BPTT: 78.1%).
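The flavor of such a layer-local online update can be sketched as follows. This is a minimal illustration of eligibility-trace-style learning in a single feed-forward spiking layer, assuming leaky integrate-and-fire dynamics, a surrogate gradient, and a toy rate target; it is not the authors' exact OTPE rule, only the general pattern of carrying temporal credit forward locally instead of unrolling through time.

    import numpy as np

    # Illustrative layer-local online update for one feed-forward spiking layer.
    # LIF dynamics, the surrogate gradient, and the toy rate target are
    # assumptions for this sketch, not the paper's exact OTPE algorithm.
    rng = np.random.default_rng(0)
    n_in, n_out, T = 20, 10, 50
    W = rng.normal(0.0, 0.3, (n_out, n_in))
    alpha, thresh, lr = 0.9, 1.0, 1e-2

    v = np.zeros(n_out)      # membrane potentials
    trace = np.zeros(n_in)   # presynaptic eligibility trace (leaky sum of inputs)

    for t in range(T):
        x = (rng.random(n_in) < 0.1).astype(float)  # random input spikes
        trace = alpha * trace + x                   # temporal credit carried online
        v = alpha * v + W @ x
        s = (v >= thresh).astype(float)             # output spikes
        v -= s * thresh                             # soft reset after spiking
        surrogate = 1.0 / (1.0 + np.abs(v - thresh)) ** 2  # smooth spike derivative
        err = s - 0.05                              # toy error toward a target rate
        # Local update: error x surrogate x presynaptic trace, no unrolling in time
        W -= lr * np.outer(err * surrogate, trace)

Because the trace and update use only quantities available at the current step, the memory cost stays constant in sequence length, which is the property that makes this family of methods online.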
Spatio-Temporal Multimedia Big Data Analytics Using Deep Neural Networks
With the proliferation of online services and mobile technologies, the world has entered a multimedia big data era, in which new opportunities and challenges arise from highly diverse multimedia data combined with vast amounts of social data. Multimedia data consisting of audio, text, image, and video has grown tremendously, and the central question is how to analyze this high volume and variety of data efficiently and effectively. A vast amount of research has targeted different aspects of multimedia big data analytics, such as capture, storage, indexing, mining, and retrieval. However, there is insufficient research that provides a comprehensive framework for multimedia big data analytics and management.
To address the major challenges in this area, a new framework is proposed based on deep neural networks for multimedia semantic concept detection, with a focus on spatio-temporal information analysis and rare event detection. The proposed framework discovers patterns and knowledge in multimedia data using both static deep data representations and temporal semantics, and is specifically designed to handle data with skewed distributions. It includes the following components: (1) a synthetic data generation component based on simulation and adversarial networks for data augmentation and deep learning training, (2) an automatic sampling model to overcome the imbalanced-data issue in multimedia data, (3) a deep representation learning model leveraging novel deep learning techniques to generate the most discriminative static features from multimedia data, (4) an automatic hyper-parameter learning component for faster training and convergence of the learning models, (5) a spatio-temporal deep learning model to analyze dynamic features from multimedia data, and (6) a multimodal deep learning fusion model to integrate different data modalities. The whole framework has been evaluated on various large-scale multimedia datasets, including a newly collected disaster-events video dataset and other public datasets.
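As a rough illustration of a fusion component like (6), the sketch below shows a generic late-fusion classifier over per-modality embeddings in PyTorch. The embedding sizes, module names, and concatenation-based fusion strategy are assumptions for illustration, not the framework's actual design.

    import torch
    import torch.nn as nn

    # Generic late-fusion classifier: one projection head per modality,
    # concatenated and classified. Sizes and structure are illustrative
    # assumptions, not the proposed framework's actual fusion model.
    class MultimodalFusion(nn.Module):
        def __init__(self, dims=(128, 512, 300), hidden=256, n_classes=10):
            super().__init__()
            self.heads = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
            self.classifier = nn.Linear(hidden * len(dims), n_classes)

        def forward(self, feats):  # feats: list of per-modality tensors
            projected = [torch.relu(h(f)) for h, f in zip(self.heads, feats)]
            return self.classifier(torch.cat(projected, dim=-1))

    model = MultimodalFusion()
    audio, video, text = torch.randn(4, 128), torch.randn(4, 512), torch.randn(4, 300)
    logits = model([audio, video, text])  # (4, 10) class scores

Late fusion of this kind keeps each modality's encoder independent, which is convenient when the modalities have very different dimensionalities or arrive from separate pipelines.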
Am I Done? Predicting Action Progress in Videos
In this paper we deal with the problem of predicting action progress in
videos. We argue that this is an extremely important task since it can be
valuable for a wide range of interaction applications. To this end we introduce
a novel approach, named ProgressNet, capable of predicting when an action takes
place in a video, where it is located within the frames, and how far it has
progressed during its execution. To provide a general definition of action
progress, we ground our work in the linguistics literature, borrowing terms and
concepts to understand which actions can be the subject of progress estimation.
As a result, we define a categorization of actions and their phases. Motivated by the recent success of combining Convolutional and Recurrent Neural Networks, our model pairs the Faster R-CNN framework, which makes frame-wise predictions, with LSTM networks, which estimate action progress through time. After introducing two evaluation protocols for the task at hand, we demonstrate the capability of our model to effectively predict action progress on the UCF-101 and J-HMDB datasets.
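A minimal sketch of the recurrent part of such a model is given below: an LSTM consumes per-frame region features (for example, pooled from a detector such as Faster R-CNN) and regresses a per-frame progress value in [0, 1]. The feature size, layer widths, and sigmoid output are illustrative assumptions, not ProgressNet's published configuration.

    import torch
    import torch.nn as nn

    # Recurrent progress head: per-frame detector features in, a progress
    # value in [0, 1] per frame out. Dimensions are illustrative assumptions.
    class ProgressHead(nn.Module):
        def __init__(self, feat_dim=1024, hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, feats):  # feats: (batch, time, feat_dim)
            h, _ = self.lstm(feats)
            return torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, time)

    head = ProgressHead()
    clip = torch.randn(2, 30, 1024)  # 2 clips, 30 frames of pooled features
    progress = head(clip)            # per-frame progress estimates in [0, 1]

Note that monotonically increasing progress would have to be learned from supervision rather than being enforced by this architecture.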
Convolutional Drift Networks for Video Classification
Analyzing spatio-temporal data like video is a challenging task that requires
processing visual and temporal information effectively. Convolutional Neural
Networks have shown promise as baseline fixed feature extractors through
transfer learning, a technique that helps minimize the training cost on visual
information. Temporal information is often handled using hand-crafted features
or Recurrent Neural Networks, but this can be overly specific or prohibitively
complex. Building a fully trainable system that can efficiently analyze
spatio-temporal data without hand-crafted features or complex training is an
open challenge. We present a new neural network architecture to address this
challenge, the Convolutional Drift Network (CDN). Our CDN architecture combines
the visual feature extraction power of deep Convolutional Neural Networks with
the intrinsically efficient temporal processing provided by Reservoir
Computing. In this introductory paper on the CDN, we provide a very simple
baseline implementation tested on two egocentric (first-person) video activity
datasets. We achieve video-level activity classification results on par with state-of-the-art methods. Notably, this performance on a complex spatio-temporal task was produced by training only a single feed-forward layer in the CDN.
Comment: Published in IEEE Rebooting Computing
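The core idea, training only a readout on top of a fixed reservoir driven by pretrained CNN features, can be sketched in a few lines. The reservoir sizes, leak rate, and ridge-regression readout below are illustrative assumptions rather than the CDN's exact design, and the per-frame CNN features are stubbed with random arrays.

    import numpy as np

    # Echo-state-style sketch of the CDN idea: a fixed random reservoir
    # summarizes per-frame CNN features; only the readout is trained.
    # All sizes and the ridge solver are illustrative assumptions.
    rng = np.random.default_rng(1)
    feat_dim, res_dim, T, n_classes = 512, 300, 60, 5

    W_in = rng.normal(0, 0.1, (res_dim, feat_dim))        # fixed input weights
    W_res = rng.normal(0, 1.0, (res_dim, res_dim))
    W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

    def reservoir_state(frames):
        h = np.zeros(res_dim)
        for f in frames:  # leaky reservoir update over the frame sequence
            h = 0.7 * h + 0.3 * np.tanh(W_in @ f + W_res @ h)
        return h          # final state summarizes the whole clip

    # Train only the readout, via ridge regression over clip-level states
    X = np.stack([reservoir_state(rng.normal(size=(T, feat_dim))) for _ in range(40)])
    Y = np.eye(n_classes)[rng.integers(0, n_classes, 40)]   # one-hot labels
    W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(res_dim), X.T @ Y)
    pred = np.argmax(X @ W_out, axis=1)                     # video-level classes

Because the reservoir weights stay fixed, the only learned parameters are in W_out, which mirrors the paper's point that a single trained feed-forward layer suffices.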