Exploiting Image-trained CNN Architectures for Unconstrained Video Classification
We conduct an in-depth exploration of different strategies for doing event
detection in videos using convolutional neural networks (CNNs) trained for
image classification. We study different ways of performing spatial and
temporal pooling, feature normalization, choice of CNN layers as well as choice
of classifiers. Making judicious choices along these dimensions led to a
significant increase in performance over the more naive approaches used to
date. We evaluate our approach on the challenging TRECVID MED'14
dataset with two popular CNN architectures pretrained on ImageNet. On this
MED'14 dataset, our methods, based entirely on image-trained CNN features, can
outperform several state-of-the-art non-CNN models. Our proposed late fusion of
CNN- and motion-based features can further increase the mean average precision
(mAP) on MED'14 from 34.95% to 38.74%. The fusion approach also achieves
state-of-the-art classification performance on the challenging UCF-101 dataset.
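The score-level late fusion described here can be sketched in a few lines; a minimal sketch, assuming precomputed per-video, per-event classifier scores on comparable scales (the function name and the equal weighting are ours, not the paper's tuned setting):

```python
import numpy as np

def late_fuse(cnn_scores: np.ndarray, motion_scores: np.ndarray,
              weight: float = 0.5) -> np.ndarray:
    """Weighted late fusion of two score matrices of shape (videos, events).

    Assumes both inputs are on comparable scales (e.g. calibrated SVM
    decision values); `weight` is an illustrative fusion coefficient.
    """
    return weight * cnn_scores + (1.0 - weight) * motion_scores
```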
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
The paucity of videos in current action classification datasets (UCF-101 and
HMDB-51) has made it difficult to identify good video architectures, as most
methods obtain similar performance on existing small-scale benchmarks. This
paper re-evaluates state-of-the-art architectures in light of the new Kinetics
Human Action Video dataset. Kinetics has two orders of magnitude more data,
with 400 human action classes and over 400 clips per class, and is collected
from realistic, challenging YouTube videos. We provide an analysis on how
current architectures fare on the task of action classification on this dataset
and how much performance improves on the smaller benchmark datasets after
pre-training on Kinetics.
We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on
2D ConvNet inflation: filters and pooling kernels of very deep image
classification ConvNets are expanded into 3D, making it possible to learn
seamless spatio-temporal feature extractors from video while leveraging
successful ImageNet architecture designs and even their parameters. We show
that, after pre-training on Kinetics, I3D models considerably improve upon the
state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0%
on UCF-101.
Comment: Removed references to the mini-Kinetics dataset, which was never made
publicly available, and repeated all experiments on the full Kinetics dataset.
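The inflation trick itself is compact. A minimal sketch, assuming PyTorch (the function name is ours): a pretrained 2D kernel is repeated along a new temporal axis and rescaled, which is how the paper bootstraps 3D filters from ImageNet parameters:

```python
import torch

def inflate_conv2d_weight(w2d: torch.Tensor, time_dim: int) -> torch.Tensor:
    """Inflate a 2D conv weight (out, in, kH, kW) into 3D (out, in, kT, kH, kW).

    Repeating the kernel kT times and dividing by kT keeps activations on a
    temporally constant ("boring") video identical to those of the source
    image network, so ImageNet parameters transfer directly.
    """
    w3d = w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
    return w3d / time_dim
```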
General-purpose and special-purpose visual systems
The information that eyes supply supports a wide variety of functions, from the guidance systems that enable an animal to navigate successfully around the environment, to the detection and identification of predators, prey, and conspecifics. The eyes with which we are most familiar (the single-chambered eyes of vertebrates and cephalopod molluscs, and the compound eyes of insects and higher crustaceans) allow these animals to perform the full range of visual tasks. These eyes have evidently evolved in conjunction with brains that are capable of subjecting the raw visual information to many different kinds of analysis, depending on the nature of the task that the animal is engaged in. However, not all eyes evolved to provide such comprehensive information. For example, in bivalve molluscs we find eyes of very varied design (pinholes, concave mirrors, and apposition compound eyes) whose only function is to detect approaching predators and thereby allow the animal to protect itself by closing its shell. Thus, there are special-purpose eyes as well as eyes with multiple functions.
Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation
Joint segmentation and classification of fine-grained actions is important
for applications of human-robot interaction, video surveillance, and human
skill evaluation. However, despite substantial recent progress in large-scale
action classification, the performance of state-of-the-art fine-grained action
recognition approaches remains low. We propose a model for action segmentation
which combines low-level spatiotemporal features with a high-level segmental
classifier. Our spatiotemporal CNN comprises a spatial component that
uses convolutional filters to capture information about objects and their
relationships, and a temporal component that uses large 1D convolutional
filters to capture information about how object relationships change across
time. These features are used in tandem with a semi-Markov model that models
transitions from one action to another. We introduce an efficient constrained
segmental inference algorithm for this model that is orders of magnitude faster
than the current approach. We highlight the effectiveness of our Segmental
Spatiotemporal CNN on cooking and surgical action datasets for which we observe
substantially improved performance relative to recent baseline methods.
Comment: Updated from the ECCV 2016 version. We fixed an important
mathematical error and made the section on segmental inference clearer.
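As a rough illustration of segmental (semi-Markov) decoding, here is a toy dynamic program over segment boundaries. It scores a segment by the summed frame scores of its best class and omits the paper's transition model and constraints, so it is a simplified stand-in, not the authors' algorithm:

```python
import numpy as np

def segmental_decode(frame_scores: np.ndarray, max_len: int):
    """Toy semi-Markov decoding over frame_scores of shape (T, C).

    Returns non-overlapping (start, end, class) segments covering [0, T)
    that maximize the total score, with segments at most max_len frames.
    """
    T, C = frame_scores.shape
    cum = np.vstack([np.zeros((1, C)), np.cumsum(frame_scores, axis=0)])
    best = np.full(T + 1, -np.inf)
    best[0] = 0.0
    back = [None] * (T + 1)
    for t in range(1, T + 1):
        for d in range(1, min(max_len, t) + 1):
            seg = cum[t] - cum[t - d]   # per-class score of segment (t-d, t]
            c = int(np.argmax(seg))
            if best[t - d] + seg[c] > best[t]:
                best[t] = best[t - d] + seg[c]
                back[t] = (t - d, c)
    segments, t = [], T
    while t > 0:                         # backtrace the chosen boundaries
        s, c = back[t]
        segments.append((s, t, c))
        t = s
    return segments[::-1]
```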
Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons
Zebrafish pretectal neurons exhibit specificities for large-field optic flow
patterns associated with rotatory or translatory body motion. We investigate
the hypothesis that these specificities reflect the input statistics of natural
optic flow. Realistic motion sequences were generated using computer graphics
simulating self-motion in an underwater scene. Local retinal motion was
estimated with a motion detector and encoded in four populations of
directionally tuned retinal ganglion cells, represented as two signed input
variables. This activity was then used as input into one of two learning
networks: a sparse coding network (competitive learning) and a backpropagation
network (supervised learning). Both simulations develop specificities for optic
flow which are comparable to those found in a neurophysiological study (Kubo et
al. 2014), and relative frequencies of the various neuronal responses are best
modeled by the sparse coding approach. We conclude that the optic flow neurons
in the zebrafish pretectum do reflect the optic flow statistics. The predicted
vectorial receptive fields show typical optic flow fields but also "Gabor" and
dipole-shaped patterns that likely reflect difference fields needed for
reconstruction by linear superposition.
Comment: Published conference paper from ICANN 2018, Rhodes.
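As a loose analogue of the sparse coding step (not the paper's exact competitive-learning network), one could learn a sparse dictionary over flattened flow inputs with scikit-learn; the data below is a random placeholder, not the paper's simulated stimuli:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))   # placeholder for encoded optic-flow inputs

learner = DictionaryLearning(n_components=16, alpha=1.0, max_iter=100,
                             transform_algorithm="lasso_lars")
codes = learner.fit_transform(X)     # sparse activations, one row per sample
fields = learner.components_         # learned basis ~ "vectorial receptive fields"
```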
Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos
When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. We propose in this work to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense-trajectory-based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
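One simple fusion mechanism of the kind described is feature-level concatenation followed by a multi-class SVM. A minimal sketch with scikit-learn; the feature dimensions, sample count, and random data are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200                                   # placeholder number of video clips
X_audio = rng.standard_normal((n, 40))    # e.g. MFCC statistics
X_visual = rng.standard_normal((n, 64))   # e.g. HSV colour statistics
X_motion = rng.standard_normal((n, 128))  # e.g. dense-trajectory descriptors
y = rng.integers(0, 4, size=n)            # four Valence-Arousal quadrants

# Feature-level fusion: concatenate modalities, then one multi-class SVM.
X = np.hstack([X_audio, X_visual, X_motion])
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
```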