Fully-Coupled Two-Stream Spatiotemporal Networks for Extremely Low Resolution Action Recognition
A major emerging challenge is how to protect people's privacy as cameras and
computer vision are increasingly integrated into our daily lives, including in
smart devices inside homes. A potential solution is to capture and record just
the minimum amount of information needed to perform a task of interest. In this
paper, we propose a fully-coupled two-stream spatiotemporal architecture for
reliable human action recognition on extremely low resolution (e.g., 12x16
pixel) videos. We provide an efficient method to extract spatial and temporal
features and to aggregate them into a robust feature representation for an
entire action video sequence. We also consider how to incorporate high
resolution videos during training in order to build better low resolution
action recognition models. We evaluate on two publicly-available datasets,
showing significant improvements over the state of the art.
Comment: 9 pages, 5 figures, published in WACV 2018
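The abstract gives only the high-level design. As an illustration of the general two-stream pattern on 12x16-pixel clips, a minimal PyTorch sketch, assuming per-frame RGB and optical-flow inputs with simple temporal averaging, might look like this (all layer sizes and names are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TwoStreamLowRes(nn.Module):
    """Illustrative two-stream model for 12x16-pixel clips.

    All layer sizes and the fusion scheme are assumptions for
    illustration, not the architecture from the paper.
    """
    def __init__(self, num_classes=10, flow_channels=2):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.spatial = stream(3)               # RGB frames
        self.temporal = stream(flow_channels)  # optical-flow frames
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, flow):
        # rgb: (B, T, 3, 12, 16); flow: (B, T, 2, 12, 16)
        B, T = rgb.shape[:2]
        s = self.spatial(rgb.flatten(0, 1)).view(B, T, -1)
        t = self.temporal(flow.flatten(0, 1)).view(B, T, -1)
        # Aggregate per-frame features over the whole sequence by averaging.
        feats = torch.cat([s, t], dim=-1).mean(dim=1)
        return self.classifier(feats)
```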
Learning to See through a Few Pixels: Multi Streams Network for Extreme Low-Resolution Action Recognition
Human action recognition is one of the most pressing problems in societal emergencies of any kind. Technology helps to solve such problems, but often at the cost of human privacy. Several approaches have considered the relevance of privacy in the pervasive process of observing people, and new algorithms have been proposed that operate on low-resolution images hiding people's identity. However, many of these methods do not consider that public security demands real-time solutions: active cameras require flexible distributed systems in sensitive areas such as airports, hospitals, stations, squares and roads. To reconcile human privacy with real-time surveillance, we propose a novel deep architecture, the Multi Streams Network. This model runs in real-time and performs action recognition on extremely low-resolution videos, exploiting three sources of information: RGB images, optical flow and slack mask data. Experiments on two datasets show that our architecture improves recognition accuracy compared to the two-stream approach and ensures real-time execution on an Edge TPU (Tensor Processing Unit).
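The abstract names RGB, optical flow and mask data as the three streams but does not say how the flow is computed. As one plausible way to prepare the flow stream, a sketch using OpenCV's Farneback dense optical flow (parameter values are guesses suited to very small frames, not the paper's settings):

```python
import cv2
import numpy as np

def flow_stream(frames):
    """Build a flow stream from consecutive grayscale frames.

    frames: list of HxW uint8 images (e.g. 12x16). Returns an array of
    shape (T-1, 2, H, W). Parameter values are guesses for tiny frames.
    """
    flows = []
    for prev, nxt in zip(frames, frames[1:]):
        f = cv2.calcOpticalFlowFarneback(
            prev, nxt, None,
            pyr_scale=0.5, levels=1, winsize=5,
            iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
        flows.append(f.transpose(2, 0, 1))  # (H, W, 2) -> (2, H, W)
    return np.stack(flows)
```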
Collaborative Spatio-temporal Feature Learning for Video Action Recognition
Spatio-temporal feature learning is of central importance for action
recognition in videos. Existing deep neural network models either learn spatial
and temporal features independently (C2D) or jointly with unconstrained
parameters (C3D). In this paper, we propose a novel neural operation which
encodes spatio-temporal features collaboratively by imposing a weight-sharing
constraint on the learnable parameters. In particular, we perform 2D
convolution along three orthogonal views of volumetric video data, which learns
spatial appearance and temporal motion cues respectively. By sharing the
convolution kernels of different views, spatial and temporal features are
collaboratively learned and thus benefit from each other. The complementary
features are subsequently fused by a weighted summation whose coefficients are
learned end-to-end. Our approach achieves state-of-the-art performance on
large-scale benchmarks and won 1st place in the Moments in Time Challenge
2018. Moreover, based on the learned coefficients of different views, we are
able to quantify the contributions of spatial and temporal features. This
analysis sheds light on the interpretability of the model and may also guide
the future design of algorithms for video recognition.
Comment: CVPR 2019
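The operation described here, one shared 2D kernel applied to the three orthogonal views of a clip and fused by learned coefficients, can be sketched as follows in PyTorch. This is a bare illustration of the view-sharing idea; initialization, normalization and residual placement are not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaborativeConv(nn.Module):
    """Sketch of a collaborative spatiotemporal convolution.

    One shared 2D kernel is applied to the H-W, T-H and T-W views of a
    clip, and the three responses are fused by learned softmax weights.
    """
    def __init__(self, channels, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        self.coeff = nn.Parameter(torch.zeros(3))  # per-view fusion weights
        self.pad = k // 2

    def _conv_view(self, x5d):
        # x5d: (N, V, C, H1, W1); apply the shared kernel to the last two dims
        n, v = x5d.shape[:2]
        out = F.conv2d(x5d.flatten(0, 1), self.weight, padding=self.pad)
        return out.view(n, v, *out.shape[1:])

    def forward(self, x):
        # x: (B, C, T, H, W)
        hw = self._conv_view(x.permute(0, 2, 1, 3, 4))  # batch over T, conv on H-W
        th = self._conv_view(x.permute(0, 4, 1, 2, 3))  # batch over W, conv on T-H
        tw = self._conv_view(x.permute(0, 3, 1, 2, 4))  # batch over H, conv on T-W
        a = torch.softmax(self.coeff, dim=0)
        return (a[0] * hw.permute(0, 2, 1, 3, 4)        # back to (B, C, T, H, W)
                + a[1] * th.permute(0, 2, 3, 4, 1)
                + a[2] * tw.permute(0, 2, 3, 1, 4))
```

Because the fusion weights are learned end-to-end, inspecting the softmax of `coeff` after training is what lets the authors quantify the spatial versus temporal contributions.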
Towards Automatic Speech Identification from Vocal Tract Shape Dynamics in Real-time MRI
Vocal tract configurations play a vital role in generating distinguishable
speech sounds, by modulating the airflow and creating different resonant
cavities in speech production. They contain abundant information that can be
utilized to better understand the underlying speech production mechanism. As a
step towards automatic mapping of vocal tract shape geometry to acoustics, this
paper employs effective video action recognition techniques, like Long-term
Recurrent Convolutional Networks (LRCN) models, to identify different
vowel-consonant-vowel (VCV) sequences from dynamic shaping of the vocal tract.
Such a model typically combines a CNN-based deep hierarchical visual feature
extractor with recurrent networks, which ideally makes the network
spatio-temporally deep enough to learn the sequential dynamics of a short video
clip for video classification tasks. We use a database consisting of 2D
real-time MRI of vocal tract shaping during VCV utterances by 17 speakers. The
comparative performances of this class of algorithms under various parameter
settings and for various classification tasks are discussed. Interestingly, the
results show a marked difference in the model performance in the context of
speech classification with respect to generic sequence or video classification
tasks.
Comment: To appear in the INTERSPEECH 2018 Proceedings
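As context for the LRCN family of models used here, a minimal CNN-plus-LSTM classifier might be sketched as below (the tiny backbone and all sizes are stand-ins, not the paper's configuration):

```python
import torch
import torch.nn as nn

class LRCNClassifier(nn.Module):
    """Minimal LRCN-style model: per-frame CNN features fed to an LSTM."""
    def __init__(self, num_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (B, T, 1, H, W) grayscale MRI frames
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])  # classify from the final hidden state
```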
Dual-stream spatiotemporal networks with feature sharing for monitoring animals in the home cage
This paper presents a spatiotemporal deep learning approach for mouse
behavioural classification in the home-cage. Using a series of dual-stream
architectures with assorted modifications to increase performance, we introduce
a novel feature sharing approach that jointly processes the streams at regular
intervals throughout the network. To investigate the efficacy of this approach,
models were evaluated by dissociating the streams and training/testing in the
same rigorous manner as the main classifiers. Using an annotated, publicly
available dataset of singly-housed mice, we achieve a prediction accuracy of
86.47% using an ensemble of an Inception-based network and an attention-based
network, both of which utilize this feature sharing. We also demonstrate
through ablation studies that for all models, the feature-sharing architectures
consistently perform better than conventional ones having separate streams. The
best performing models were further evaluated on other activity datasets, both
mouse and human. Future work will investigate the effectiveness of feature
sharing for behavioural classification in the unsupervised anomaly detection
domain.
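The feature-sharing idea, joint processing of the two streams at regular depths, can be illustrated with a toy sketch; the mixing rule below (a 1x1 convolution over concatenated stream features, added back to both streams) is an assumption standing in for the paper's mechanism:

```python
import torch
import torch.nn as nn

class SharedDualStream(nn.Module):
    """Toy dual-stream network that mixes its streams at regular depths."""
    def __init__(self, num_classes, ch=32, stages=3):
        super().__init__()
        def block(c_in):
            return nn.Sequential(nn.Conv2d(c_in, ch, 3, padding=1), nn.ReLU())
        self.a_blocks = nn.ModuleList([block(3 if i == 0 else ch) for i in range(stages)])
        self.b_blocks = nn.ModuleList([block(2 if i == 0 else ch) for i in range(stages)])
        self.mixers = nn.ModuleList([nn.Conv2d(2 * ch, ch, 1) for _ in range(stages)])
        self.head = nn.Linear(2 * ch, num_classes)

    def forward(self, rgb, flow):
        a, b = rgb, flow
        for blk_a, blk_b, mix in zip(self.a_blocks, self.b_blocks, self.mixers):
            a, b = blk_a(a), blk_b(b)
            shared = mix(torch.cat([a, b], dim=1))  # joint processing step
            a, b = a + shared, b + shared
        pooled = torch.cat([a.mean(dim=(2, 3)), b.mean(dim=(2, 3))], dim=1)
        return self.head(pooled)
```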
Action recognition using single-pixel time-of-flight detection
Action recognition is a challenging task that plays an important role in many robotic systems, which depend heavily on visual input feeds. However, due to privacy concerns, it is important to find methods that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Each data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves on average 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network.
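For the recurrent analysis of single-pixel traces, a minimal sketch of a GRU classifier over per-pulse voltage arrays might look like this (the input layout and all sizes are assumptions; the paper's network details differ):

```python
import torch
import torch.nn as nn

class TraceGRU(nn.Module):
    """GRU classifier over sequences of 1-D single-pixel traces.

    Assumed input layout (B, T, L): T pulses per recording, L voltage
    samples per pulse.
    """
    def __init__(self, trace_len, num_classes=5, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(trace_len, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, traces):
        _, h = self.rnn(traces)        # h: (1, B, hidden)
        return self.head(h[-1])        # classify from the final hidden state
```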