An original framework for understanding human actions and body language by using deep neural networks
The evolution of both fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour.
By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way.
These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively.
The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements;
both are essential tasks in many computer vision applications, including event recognition and video surveillance.
In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first one, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided.
The performance of LSTM-RNNs is explored in depth, owing to their ability to model the long-term contextual information of temporal sequences, which makes them well suited to analysing body movements.
All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
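The LSTM-RNN backbone shared by the three modules can be illustrated with a minimal sketch. This is an assumption for illustration, not the thesis code: a single LSTM cell is unrolled over a sequence of flattened 2D skeleton frames, and the final hidden state is mapped to action classes. Hidden size, joint count, and the linear classifier are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; z stacks the 4 gate pre-activations (i, f, o, g)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell update
    c = f * c + i * g              # long-term memory carries context over time
    h = o * np.tanh(c)             # short-term output
    return h, c

T, J, H, K = 16, 15, 32, 5           # frames, joints, hidden units, classes
D = J * 2                            # each frame: J joints with (x, y) coords
seq = rng.standard_normal((T, D))    # stand-in for a 2D skeleton sequence

W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for x in seq:                        # unroll over the temporal sequence
    h, c = lstm_step(x, h, c, W, U, b)

W_out = rng.standard_normal((K, H)) * 0.1
logits = W_out @ h                   # last hidden state -> action classes
pred = int(np.argmax(logits))
```

The gated cell state is what lets the model retain context across many frames, which is why the abstract singles out LSTM-RNNs for modelling long-term temporal dependencies in body movements.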
Hybrid Bayesian Eigenobjects: Combining Linear Subspace and Deep Network Methods for 3D Robot Vision
We introduce Hybrid Bayesian Eigenobjects (HBEOs), a novel representation for 3D objects designed to allow a robot to jointly estimate the pose, class, and full 3D geometry of a novel object observed from a single viewpoint in a single practical framework. By combining both linear subspace methods and deep convolutional prediction, HBEOs efficiently learn nonlinear object representations without directly regressing into high-dimensional space. HBEOs also remove the onerous and generally impractical necessity of input data voxelization prior to inference. We experimentally evaluate the suitability of HBEOs to the challenging task of joint pose, class, and shape inference on novel objects and show that, compared to preceding work, HBEOs offer dramatically improved performance in all three tasks along with several orders of magnitude faster runtime.
Comment: To appear in the International Conference on Intelligent Robots and Systems (IROS) - Madrid, 201
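The "linear subspace" half of the HBEO idea can be sketched as follows. This is an assumption for illustration, not the paper's implementation: a PCA basis is learned from flattened training shapes, so that full 3D geometry can be decoded from a low-dimensional embedding (which, in HBEOs, a deep convolutional network would predict from the observation) instead of regressing directly into the high-dimensional shape space. The dimensions and random data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

N, D, K = 50, 300, 8                 # training shapes, shape dim, basis size
shapes = rng.standard_normal((N, D)) # stand-in for flattened 3D geometry

mean = shapes.mean(axis=0)
# PCA via SVD of the centered data: rows of Vt are principal directions.
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
basis = Vt[:K]                       # (K, D) orthonormal linear subspace

def embed(shape):
    """Project a full shape into the K-dimensional subspace."""
    return basis @ (shape - mean)

def decode(z):
    """Recover full geometry from a K-dimensional embedding."""
    return mean + basis.T @ z

z = embed(shapes[0])                 # in HBEOs, a CNN predicts z directly
recon = decode(z)
err = np.linalg.norm(recon - shapes[0]) / np.linalg.norm(shapes[0])
```

Predicting the K-dimensional `z` rather than the D-dimensional shape is what keeps the deep network's output small, which is the efficiency argument the abstract makes for combining subspace methods with convolutional prediction.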