6,804 research outputs found
Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification
Recently, substantial research effort has focused on how to apply CNNs or
RNNs to better extract temporal patterns from videos, so as to improve the
accuracy of video classification. In this paper, however, we show that temporal
information, especially longer-term patterns, may not be necessary to achieve
competitive results on common video classification datasets. We investigate the
potential of purely attention-based local feature integration. Accounting for
the characteristics of such features in video classification, we propose a
local feature integration framework based on attention clusters, and introduce
a shifting operation to capture more diverse signals. We carefully analyze and
compare the effect of different attention mechanisms, cluster sizes, and the
use of the shifting operation, and also investigate the combination of
attention clusters for multimodal integration. We demonstrate the effectiveness
of our framework on three real-world video classification datasets. Our model
achieves competitive results across all of these. In particular, on the
large-scale Kinetics dataset, our framework obtains an excellent single-model
accuracy of 79.4% top-1 and 94.0% top-5 on the validation set. The attention
clusters are the backbone of our winning solution at the ActivityNet Kinetics
Challenge 2017. Code and models will be released soon.

Comment: The backbone of the winning solution at the ActivityNet Kinetics Challenge 2017
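As an illustration of the attention-cluster idea described above, the following is a minimal PyTorch sketch of a single cluster of attention units, with the shifting operation written as a learnable per-unit scale and bias followed by L2-normalization. The class name, layer sizes, and the exact form of the shift are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCluster(nn.Module):
    """One cluster of K attention units over a set of local features.

    Each unit computes softmax attention weights over the N local
    features and takes the weighted sum. The (assumed) shifting
    operation applies an independent learnable scale/bias per unit,
    then L2-normalizes, encouraging units to capture diverse signals.
    Outputs of all units are concatenated.
    """

    def __init__(self, feat_dim: int, num_units: int, shifting: bool = True):
        super().__init__()
        self.attn = nn.Linear(feat_dim, num_units)   # one score per unit
        self.shifting = shifting
        if shifting:
            self.alpha = nn.Parameter(torch.ones(num_units, 1))
            self.beta = nn.Parameter(torch.zeros(num_units, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, feat_dim) local features, e.g. per-frame CNN outputs
        scores = self.attn(x)                        # (batch, N, K)
        weights = F.softmax(scores, dim=1)           # attend over the N features
        v = torch.einsum('bnk,bnd->bkd', weights, x) # weighted sums per unit
        if self.shifting:
            v = self.alpha * v + self.beta           # per-unit shift
            v = F.normalize(v, dim=-1)               # L2-normalize each unit
        return v.flatten(1)                          # (batch, K * feat_dim)

# Usage: 64 frame-level features of dimension 1024, a cluster of 8 units.
feats = torch.randn(2, 64, 1024)
out = AttentionCluster(feat_dim=1024, num_units=8)(feats)  # shape (2, 8192)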
DeepSignals: Predicting Intent of Drivers Through Visual Signals
Detecting the intention of drivers is an essential task in self-driving,
necessary to anticipate sudden events like lane changes and stops. Turn signals
and emergency flashers communicate such intentions, providing seconds of
potentially critical reaction time. In this paper, we propose to detect these
signals in video sequences by using a deep neural network that reasons about
both spatial and temporal information. Our experiments on more than a million
frames show high per-frame accuracy in very challenging scenarios.

Comment: To be presented at the IEEE International Conference on Robotics and Automation (ICRA), 2019
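The abstract does not give architectural details; a plausible minimal sketch of a network that reasons about "both spatial and temporal information" is a per-frame convolutional encoder feeding an LSTM that emits a turn-signal state per frame. Everything below (layer sizes, the 4-way label set, class names) is a hypothetical illustration, not the DeepSignals model.

import torch
import torch.nn as nn

class SignalStateNet(nn.Module):
    """Hypothetical spatio-temporal intent classifier: a small
    per-frame CNN encoder feeds an LSTM, which predicts a turn-signal
    state (e.g. off / left / right / flashers) for each frame.
    """

    def __init__(self, num_states: int = 4, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                # per-frame spatial features
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_states)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, T, 3, H, W) crops of the observed vehicle
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.temporal(feats)                # temporal reasoning
        return self.head(seq)                        # (batch, T, num_states)

# Usage: a batch of two 16-frame clips yields per-frame state logits.
logits = SignalStateNet()(torch.randn(2, 16, 3, 64, 64))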
Surgical Phase Recognition of Short Video Shots Based on Temporal Modeling of Deep Features
Recognizing the phases of a laparoscopic surgery (LS) operation from its
video constitutes a fundamental step for efficient content representation,
indexing and retrieval in surgical video databases. In the literature, most
techniques focus on phase segmentation of the entire LS video using
hand-crafted visual features, instrument usage signals, and recently
convolutional neural networks (CNNs). In this paper we address the problem of
phase recognition of short video shots (10 s) of the operation, without
utilizing information about the preceding/forthcoming video frames, their phase
labels, or the instruments used. We investigate four state-of-the-art CNN
architectures (AlexNet, VGG19, GoogLeNet, and ResNet101) for feature
extraction via transfer learning. Visual saliency was employed for selecting
the most informative region of the image as input to the CNN. Video shot
representation was based on two temporal pooling mechanisms. Most importantly,
we investigate the role of 'elapsed time' (from the beginning of the
operation), and we show that inclusion of this feature can increase performance
dramatically (from 69% to 75% mean accuracy). Finally, a long short-term memory
(LSTM) network was trained for video shot classification based on the fusion of
CNN features with 'elapsed time', increasing the accuracy to 86%. Our results
highlight the prominent role of visual saliency, long-range temporal recursion,
and 'elapsed time' (a feature ignored so far) for surgical phase recognition.

Comment: 6 pages, 4 figures, 6 tables
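A minimal sketch of the final fusion step described above: frame-level CNN features from one shot are concatenated with a normalized 'elapsed time' scalar and classified by an LSTM. The feature dimension, number of surgical phases, and the time normalization are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn as nn

class ShotPhaseLSTM(nn.Module):
    """Hypothetical fusion of CNN features with 'elapsed time':
    each frame feature of a 10 s shot is extended with the time
    elapsed since the start of the operation (scaled to [0, 1]),
    and an LSTM emits one phase label for the whole shot.
    """

    def __init__(self, feat_dim: int = 2048, num_phases: int = 7, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, feats: torch.Tensor, elapsed: torch.Tensor) -> torch.Tensor:
        # feats:   (batch, T, feat_dim) CNN features for T frames of one shot
        # elapsed: (batch,) normalized time since the start of the operation
        t = feats.size(1)
        time_feat = elapsed.view(-1, 1, 1).expand(-1, t, 1)
        fused = torch.cat([feats, time_feat], dim=-1)  # append time to each frame
        _, (h, _) = self.lstm(fused)
        return self.head(h[-1])                        # one phase label per shot

# Usage: 25 frames of 2048-d features, shots taken 40% into the operation.
logits = ShotPhaseLSTM()(torch.randn(2, 25, 2048), torch.tensor([0.4, 0.4]))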