Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks
Skeleton-based action recognition has recently gained popularity, driven by
cost-effective depth sensors coupled with real-time skeleton estimation
algorithms. Traditional approaches based on handcrafted features are limited
in their ability to represent the complexity of motion patterns. Recent
methods that use Recurrent Neural Networks (RNNs) to handle raw skeletons
focus only on contextual dependencies in the temporal domain and neglect the
spatial configurations of
articulated skeletons. In this paper, we propose a novel two-stream RNN
architecture to model both temporal dynamics and spatial configurations for
skeleton-based action recognition. We explore two different structures for the
temporal stream: a stacked RNN and a hierarchical RNN, the latter designed
according to human body kinematics. We also propose two effective methods to
model the spatial structure by converting the spatial graph into a sequence of
joints. To improve the generalization of our model, we further exploit
3D-transformation-based data augmentation techniques, including rotation and
scaling, to transform the 3D coordinates of skeletons during training.
Experiments on 3D action recognition benchmark datasets show that our method
brings considerable improvements for a variety of actions, i.e., generic
actions, interaction activities and gestures.
Comment: Accepted to the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
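The rotation- and scaling-based augmentation described in this abstract lends itself to a short sketch. The following minimal illustration in NumPy assumes a (frames, joints, 3) coordinate layout, a yaw-only rotation, and parameter ranges chosen purely for demonstration; none of these reflect the authors' exact settings.

```python
# Minimal sketch of 3D rotation- and scaling-based skeleton augmentation.
# The array layout, yaw-only rotation and parameter ranges are assumptions.
import numpy as np

def augment_skeleton(joints, max_angle=np.pi / 6, scale_range=(0.9, 1.1)):
    """Randomly rotate (about the vertical axis) and scale a skeleton sequence.

    joints: array of shape (frames, num_joints, 3) of 3D coordinates.
    """
    theta = np.random.uniform(-max_angle, max_angle)  # random yaw angle
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],       # rotation about the vertical (y) axis
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    scale = np.random.uniform(*scale_range)           # isotropic scaling factor
    return scale * joints @ rot.T

# Toy usage: 30 frames of a 20-joint skeleton.
sequence = np.random.randn(30, 20, 3)
print(augment_skeleton(sequence).shape)  # (30, 20, 3)
```

Applying a fresh random transformation to each training sample in this way enlarges the effective dataset without collecting new recordings.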
An original framework for understanding human actions and body language by using deep neural networks
The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour.
By studying hand movements, it is possible to recognize gestures, which people often use to communicate information non-verbally.
These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively.
The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields. The former is essential to understanding how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements;
both are essential tasks in many computer vision applications, including event recognition and video surveillance.
In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures is proposed; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs).
The performance of LSTM-RNNs is explored in depth, owing to their ability to model the long-term contextual information of temporal sequences, which makes them well suited to analysing body movements.
All the modules were tested on challenging datasets that are well known in the state of the art, showing remarkable results compared with current methods in the literature.
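As a rough illustration of the second module's two-branch design, here is a generic two-branch stacked LSTM classifier in PyTorch. The two input streams (e.g. 2D joint positions and their frame-to-frame motion), the layer sizes, and the concatenation-based fusion are assumptions made for this sketch, not the thesis's exact architecture.

```python
# Illustrative two-branch stacked LSTM classifier; stream semantics,
# layer sizes and fusion strategy are assumptions, not the thesis design.
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    def __init__(self, in_dim=50, hidden=128, num_classes=10):
        super().__init__()
        # One stacked (two-layer) LSTM per input stream.
        self.branch_a = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.branch_b = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, positions, motions):
        # Take the last hidden state of each branch and fuse by concatenation.
        _, (ha, _) = self.branch_a(positions)
        _, (hb, _) = self.branch_b(motions)
        fused = torch.cat([ha[-1], hb[-1]], dim=-1)
        return self.classifier(fused)

# Toy usage: batch of 4 clips, 30 frames, 25 joints x 2D coordinates = 50 features.
model = TwoBranchLSTM()
logits = model(torch.randn(4, 30, 50), torch.randn(4, 30, 50))  # shape (4, 10)
```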
Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning
Using touch devices to navigate in virtual 3D environments such as
computer-aided design (CAD) models or geographical information systems (GIS) is
inherently difficult for humans, as the 3D operations have to be performed by
the user on a 2D touch surface. This ill-posed problem is classically solved
with a fixed and handcrafted interaction protocol, which must be learned by the
user. We propose to automatically learn a new interaction protocol that maps
2D user input to 3D actions in virtual environments using reinforcement
learning (RL). A fundamental problem of RL methods is the vast number of
interactions they often require, which are difficult to come by when humans are
involved. To overcome this limitation, we make use of two collaborative agents.
The first agent models the human by learning to perform the 2D finger
trajectories. The second agent acts as the interaction protocol, interpreting
the 2D finger trajectories from the first agent and translating them into 3D
operations. We restrict the learned 2D trajectories to be similar to a training set
of collected human gestures by first performing state representation learning,
prior to reinforcement learning. This state representation learning is
addressed by projecting the gestures into a latent space learned by a
variational autoencoder (VAE).
Comment: 17 pages, 8 figures. Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECML PKDD 2019)
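The state-representation step can be sketched as a small VAE over flattened 2D finger trajectories; all dimensions and the fully connected architecture below are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch of a VAE projecting flattened 2D finger trajectories into a
# latent space; dimensions and architecture are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GestureVAE(nn.Module):
    def __init__(self, traj_dim=2 * 64, latent_dim=8):  # 64 (x, y) samples
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(traj_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, traj_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

Once trained on the collected gestures, such a latent space gives a compact state representation, and keeping the first agent's outputs decodable from it keeps its 2D trajectories close to the human gesture distribution.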
Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform
In this paper, a real-time signal processing framework based on a 60 GHz
frequency-modulated continuous wave (FMCW) radar system to recognize gestures
is proposed. In order to improve the robustness of the radar-based gesture
recognition system, the proposed framework extracts a comprehensive hand
profile, including range, Doppler, azimuth and elevation, over multiple
measurement cycles and encodes them into a feature cube. Rather than feeding
the range-Doppler spectrum sequence into a deep convolutional neural network
(CNN) connected to recurrent neural networks, the proposed framework takes
the aforementioned feature cube as the input of a shallow CNN for gesture
recognition, reducing the computational complexity. In addition, we develop a
hand activity detection (HAD) algorithm to automate the detection of gestures
in real time. The proposed HAD captures the timestamp at which a gesture
finishes and feeds the hand profile of all the relevant measurement cycles
before this timestamp into the CNN with low latency. Since the proposed
framework is able to detect and classify gestures at limited computational
cost, it can be deployed for real-time applications on an edge-computing
platform whose computational performance is notably inferior to that of a
state-of-the-art personal computer. The experimental results show that the
proposed framework can classify 12 gestures in real time with a high F1-score.
Comment: Accepted for publication in IEEE Sensors Journal. A video is available at https://youtu.be/IR5NnZvZBL
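One plausible way to feed such a feature cube to a shallow CNN is to treat each profile dimension (range, Doppler, azimuth, elevation) as an input channel over the measurement cycles; the cube layout and all layer sizes below are assumptions, not the paper's configuration.

```python
# Illustrative shallow CNN over a radar hand-profile feature cube; the cube
# layout and layer sizes are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ShallowGestureCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head tiny
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, cube):
        # cube: (batch, 4, cycles, bins) -- one channel per profile dimension.
        return self.classifier(self.features(cube).flatten(1))

# Toy cube: 4 profiles over 32 measurement cycles with 64 bins each.
model = ShallowGestureCNN()
print(model(torch.randn(1, 4, 32, 64)).shape)  # torch.Size([1, 12])
```

Keeping the network this shallow is what makes inference feasible on an edge device with limited compute.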