Convolutional Drift Networks for Video Classification
Analyzing spatio-temporal data like video is a challenging task that requires
processing visual and temporal information effectively. Convolutional Neural
Networks have shown promise as baseline fixed feature extractors through
transfer learning, a technique that helps minimize the training cost on visual
information. Temporal information is often handled using hand-crafted features
or Recurrent Neural Networks, but this can be overly specific or prohibitively
complex. Building a fully trainable system that can efficiently analyze
spatio-temporal data without hand-crafted features or complex training is an
open challenge. We present a new neural network architecture to address this
challenge, the Convolutional Drift Network (CDN). Our CDN architecture combines
the visual feature extraction power of deep Convolutional Neural Networks with
the intrinsically efficient temporal processing provided by Reservoir
Computing. In this introductory paper on the CDN, we provide a very simple
baseline implementation tested on two egocentric (first-person) video activity
datasets. We achieve video-level activity classification results on par with
state-of-the-art methods. Notably, performance on this complex spatio-temporal
task was produced by training only a single feed-forward layer in the CDN.
Comment: Published in IEEE Rebooting Computing
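As a rough illustration of the recipe this abstract describes (frozen CNN features, an untrained reservoir, a single trainable readout), here is a minimal PyTorch sketch. The ResNet-18 backbone, reservoir size, leak rate, and weight scales are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CDNSketch(nn.Module):
    """Minimal CDN-style model: a frozen CNN feature extractor, a fixed
    random reservoir, and a single trainable feed-forward readout."""
    def __init__(self, reservoir_size=1024, num_classes=10, leak=0.3):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()              # expose 512-d pooled features
        for p in backbone.parameters():
            p.requires_grad = False              # transfer learning: CNN is not trained
        self.backbone = backbone.eval()
        # Reservoir weights are random and never trained (reservoir computing);
        # the small scale keeps the recurrent dynamics stable.
        self.w_in = nn.Parameter(0.1 * torch.randn(512, reservoir_size),
                                 requires_grad=False)
        self.w_res = nn.Parameter(0.02 * torch.randn(reservoir_size, reservoir_size),
                                  requires_grad=False)
        self.leak = leak
        self.readout = nn.Linear(reservoir_size, num_classes)  # only trained layer

    def forward(self, frames):                   # frames: (T, 3, 224, 224)
        with torch.no_grad():
            feats = self.backbone(frames)        # per-frame features, (T, 512)
        state = feats.new_zeros(self.w_res.shape[0])
        for f in feats:                          # leaky-integrator reservoir update
            pre = f @ self.w_in + state @ self.w_res
            state = (1 - self.leak) * state + self.leak * torch.tanh(pre)
        return self.readout(state)               # video-level class scores
```

Because only `readout` carries gradients, training reduces to fitting one linear layer, consistent with the abstract's claim that a single feed-forward layer is trained.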
Spatio-Temporal Relation and Attention Learning for Facial Action Unit Detection
Spatio-temporal relations among facial action units (AUs) convey significant
information for AU detection yet have not been thoroughly exploited. The main
reasons are the limited capability of current AU detection works in
simultaneously learning spatial and temporal relations, and the lack of precise
localization information for AU feature learning. To tackle these limitations,
we propose a novel spatio-temporal relation and attention learning framework
for AU detection. Specifically, we introduce a spatio-temporal graph
convolutional network to capture both spatial and temporal relations from
dynamic AUs, in which the AU relations are formulated as a spatio-temporal
graph with adaptively learned rather than predefined edge weights. Moreover, the
learning of spatio-temporal relations among AUs requires individual AU
features. Considering the dynamism and shape irregularity of AUs, we propose an
attention regularization method to adaptively learn regional attentions that
capture highly relevant regions and suppress irrelevant regions so as to
extract a complete feature for each AU. Extensive experiments show that our
approach achieves substantial improvements over the state-of-the-art AU
detection methods on the BP4D and especially the DISFA benchmarks.
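The distinctive ingredient named here, a spatio-temporal graph over AU nodes whose edge weights are learned rather than predefined, can be sketched in a few lines of PyTorch. The number of AUs, the feature dimensions, and the softmax normalization of the adjacency are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class STGraphConvSketch(nn.Module):
    """Sketch of a spatio-temporal graph layer over AU nodes: the AU graph's
    edge weights are learned parameters instead of a predefined adjacency."""
    def __init__(self, num_aus=12, in_dim=64, out_dim=64):
        super().__init__()
        # Adaptively learned adjacency over AUs (spatial relations).
        self.adj = nn.Parameter(torch.eye(num_aus) + 0.01 * torch.randn(num_aus, num_aus))
        self.spatial = nn.Linear(in_dim, out_dim)
        # 1-D convolution over time captures temporal relations per AU.
        self.temporal = nn.Conv1d(out_dim, out_dim, kernel_size=3, padding=1)

    def forward(self, x):                        # x: (B, T, num_aus, in_dim)
        a = torch.softmax(self.adj, dim=-1)      # normalized learned edge weights
        h = torch.einsum("ij,btjd->btid", a, x)  # message passing over the AU graph
        h = torch.relu(self.spatial(h))          # (B, T, N, out_dim)
        b, t, n, d = h.shape
        h = h.permute(0, 2, 3, 1).reshape(b * n, d, t)
        h = torch.relu(self.temporal(h))         # temporal conv per AU node
        return h.reshape(b, n, d, t).permute(0, 3, 1, 2)
```

The softmax keeps each node's incoming edge weights normalized so the learned graph stays well-scaled as it adapts during training.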
A Deep Spatio-Temporal Fuzzy Neural Network for Passenger Demand Prediction
In spite of its importance, passenger demand prediction is a highly
challenging problem, because the demand is simultaneously influenced by the
complex interactions among many spatial and temporal factors and other external
factors such as weather. To address this problem, we propose a Spatio-TEmporal
Fuzzy neural Network (STEF-Net) to accurately predict passenger demands
incorporating the complex interactions of all known important factors. We
design an end-to-end learning framework with different neural networks modeling
different factors. Specifically, we propose to capture spatio-temporal feature
interactions via a convolutional long short-term memory network and model
external factors via a fuzzy neural network that handles data uncertainty
significantly better than deterministic methods. To keep the temporal relations
when fusing two networks and emphasize discriminative spatio-temporal feature
interactions, we employ a novel feature fusion method with a convolution
operation and an attention layer. As far as we know, our work is the first to
fuse a deep recurrent neural network and a fuzzy neural network to model
complex spatio-temporal feature interactions with additional uncertain input
features for predictive learning. Experiments on a large-scale real-world
dataset show that our model achieves more than 10% improvement over the
state-of-the-art approaches.
Comment: https://epubs.siam.org/doi/abs/10.1137/1.9781611975673.1
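A hedged sketch of the two distinctive pieces this abstract describes: a Gaussian fuzzy membership layer for uncertain external inputs (e.g., weather), and a fusion step that applies a convolution and an attention layer over time so temporal relations are preserved. All dimensions and the Gaussian rule form are assumptions, not STEF-Net's exact design.

```python
import torch
import torch.nn as nn

class FuzzyLayerSketch(nn.Module):
    """Gaussian fuzzy membership layer: maps crisp external inputs to rule
    firing strengths, modeling input uncertainty softly."""
    def __init__(self, in_dim, n_rules):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, in_dim))
        self.widths = nn.Parameter(torch.ones(n_rules, in_dim))

    def forward(self, x):                          # x: (B, T, in_dim)
        d = x.unsqueeze(2) - self.centers          # (B, T, n_rules, in_dim)
        m = torch.exp(-(d / self.widths).pow(2))   # Gaussian membership degrees
        return m.prod(dim=-1)                      # firing strengths, (B, T, n_rules)

class FusionSketch(nn.Module):
    """Fuses per-step spatio-temporal features with fuzzy features via a
    convolution over time followed by an attention layer."""
    def __init__(self, st_dim, fz_dim, hid=64):
        super().__init__()
        self.conv = nn.Conv1d(st_dim + fz_dim, hid, kernel_size=3, padding=1)
        self.attn = nn.Linear(hid, 1)

    def forward(self, st_feats, fz_feats):         # both (B, T, dim)
        h = torch.cat([st_feats, fz_feats], dim=-1).transpose(1, 2)
        h = torch.relu(self.conv(h)).transpose(1, 2)   # (B, T, hid)
        w = torch.softmax(self.attn(h), dim=1)         # attention over time steps
        return (w * h).sum(dim=1)                      # fused representation, (B, hid)
```

Fusing along the time axis with a convolution, rather than flattening both streams, is what keeps the temporal relations intact before attention weights the discriminative steps.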
Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos
Deep learning has been demonstrated to achieve excellent results for image
classification and object detection. However, the impact of deep learning on
video analysis (e.g., action detection and recognition) has been limited due to
the complexity of video data and the lack of annotations. Previous convolutional
neural network (CNN) based video action detection approaches usually consist of
two major steps: frame-level action proposal detection and association of
proposals across frames. These methods also employ a two-stream CNN framework to
handle spatial and temporal features separately. In this paper, we propose an
end-to-end deep network called Tube Convolutional Neural Network (T-CNN) for
action detection in videos. The proposed architecture is a unified network that
is able to recognize and localize action based on 3D convolution features. A
video is first divided into equal-length clips, and for each clip a set of tube
proposals is then generated based on 3D Convolutional Network (ConvNet)
features. Finally, the tube proposals of different clips are linked together
employing network flow and spatio-temporal action detection is performed using
these linked video proposals. Extensive experiments on several video datasets
demonstrate the superior performance of T-CNN for classifying and localizing
actions in both trimmed and untrimmed videos compared to state-of-the-art
methods.
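To make the clip-then-link pipeline concrete, below is a toy PyTorch sketch: a small 3D ConvNet over fixed-length clips, plus a greedy linker that stands in for the paper's network-flow association of tube proposals across clips. The layer sizes, the (box, score) proposal format, and the greedy matching rule are illustrative assumptions, not T-CNN's actual components.

```python
import torch
import torch.nn as nn

class Clip3DConvSketch(nn.Module):
    """Tiny 3D ConvNet over fixed-length clips, standing in for the
    C3D-style backbone that scores tube proposals."""
    def __init__(self, num_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, clip):                     # clip: (B, 3, T, H, W)
        return self.head(self.features(clip).flatten(1))

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_tubes(props_a, props_b):
    """Greedy stand-in for the network-flow linking step: match each tube
    proposal (box, score) in clip A to the clip-B proposal maximizing
    spatial overlap plus actionness score."""
    return [(i, max(range(len(props_b)),
                    key=lambda k: iou(box, props_b[k][0]) + props_b[k][1]))
            for i, (box, _) in enumerate(props_a)]
```

A true network-flow formulation would find the globally optimal paths through all clips at once; the greedy pairwise matching above only approximates that, but shows where the overlap-plus-score objective enters.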