6 research outputs found

    SAR-NAS: Skeleton-based Action Recognition via Neural Architecture Searching

    Full text link
    This paper presents a study of the automatic design of neural network architectures for skeleton-based action recognition. Specifically, we encode a skeleton-based action instance into a tensor and carefully define a set of operations to build two types of network cells: normal cells and reduction cells. The recently developed DARTS (Differentiable Architecture Search) is adopted to search for an effective network architecture built upon the two types of cells. All operations are 2D-based in order to reduce the overall computation and search space. Experiments on the challenging NTU RGB+D and Kinetics datasets have verified that most of the networks developed to date for skeleton-based action recognition are likely not compact and efficient. The proposed method provides an approach to search for such a compact network that is able to achieve comparable or even better performance than the state-of-the-art methods.
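
    As a rough illustration of the continuous relaxation DARTS applies to a search space of 2D operations, the sketch below forms a softmax-weighted mixture of candidate ops over a skeleton clip encoded as a (batch, coordinates, frames, joints) tensor. The candidate set, channel sizes, and the `MixedOp` name are illustrative assumptions, not the paper's exact cells or search space.

```python
# Minimal sketch of a DARTS-style mixed operation with 2D candidate ops,
# assuming a skeleton clip encoded as a (batch, 3, T, J) tensor
# (3 coordinates, T frames, J joints). Candidate set and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATES = {
    "conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "conv_5x5": lambda c: nn.Conv2d(c, c, 5, padding=2, bias=False),
    "max_pool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    "identity": lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """Softmax-weighted sum of all candidate operations (continuous relaxation)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([build(channels) for build in CANDIDATES.values()])
        # One architecture parameter per candidate op, learned jointly with the weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    clip = torch.randn(8, 3, 64, 25)              # 8 clips, 3D coordinates, 64 frames, 25 joints
    edge = nn.Sequential(nn.Conv2d(3, 16, 1), MixedOp(16))
    print(edge(clip).shape)                       # torch.Size([8, 16, 64, 25])
```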

    Deep Multi-Model Fusion for Human Activity Recognition Using Evolutionary Algorithms

    Get PDF
    Machine recognition of human activities is an active research area in computer vision. Previous studies have used either one or two modalities to handle this task; however, combining more sources of information improves the recognition accuracy of human activities. Therefore, this paper proposes an automatic human activity recognition system based on deep fusion of multiple streams together with decision-level score optimization using evolutionary algorithms on RGB, depth maps and 3D skeleton joint information. The proposed approach works in three phases: 1) spatio-temporal activity learning from RGB, depth and skeleton joint positions using two 3D Convolutional Neural Networks (3DCNNs) and a Long Short-Term Memory (LSTM) network; 2) training of an SVM for each model on the activities learned in the previous phase, and score generation using the trained SVMs; 3) score fusion and optimization using two evolutionary algorithms, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The proposed approach is validated on two challenging 3D datasets, MSRDailyActivity3D and UTKinectAction3D, achieving 85.94% and 96.5% accuracy, respectively. The experimental results show the usefulness of the proposed representation. Furthermore, the fusion of different modalities improves recognition accuracy compared with using only one or two types of information and obtains state-of-the-art results.
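
    The decision-level fusion step can be pictured as a search over per-stream weights. The sketch below uses a tiny genetic algorithm over synthetic per-stream score matrices; the population size, crossover/mutation scheme, and the random scores and labels are illustrative stand-ins, not the paper's GA/PSO configuration or its SVM outputs.

```python
# Minimal sketch of decision-level score fusion, assuming each stream
# (e.g. RGB 3DCNN, depth 3DCNN, skeleton LSTM) has already produced
# per-class scores on a validation set. A tiny genetic algorithm searches
# fusion weights that maximize validation accuracy.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes, n_streams = 200, 10, 3
labels = rng.integers(0, n_classes, n_samples)
# Stand-in score matrices, one (n_samples, n_classes) block per stream.
scores = rng.random((n_streams, n_samples, n_classes))

def accuracy(weights):
    fused = np.tensordot(weights, scores, axes=1)   # weighted sum of stream scores
    return (fused.argmax(axis=1) == labels).mean()

def genetic_search(pop_size=30, generations=50, mutation=0.1):
    pop = rng.random((pop_size, n_streams))
    for _ in range(generations):
        fitness = np.array([accuracy(w) for w in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]            # keep the fitter half
        pairs = parents[rng.integers(0, len(parents), (pop_size - len(parents), 2))]
        children = pairs.mean(axis=1)                                  # arithmetic crossover (average of two parents)
        children += rng.normal(0, mutation, children.shape)            # Gaussian mutation
        pop = np.vstack([parents, np.clip(children, 0, None)])
    best = max(pop, key=accuracy)
    return best, accuracy(best)

weights, acc = genetic_search()
print("fusion weights:", np.round(weights / weights.sum(), 3), "val accuracy:", acc)
```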

    Detección de acciones humanas a partir de información de profundidad mediante redes neuronales convolucionales

    Get PDF
    The main objective of this work is the implementation of a human action detection system for security and video-surveillance applications based on the depth information provided by RGB-D sensors. The system relies on 3D convolutional neural networks (3D-CNNs), which perform automatic feature extraction and action classification from the spatial and temporal information of the depth sequences. The proposal has been exhaustively evaluated, with experimental results showing an accuracy of 94% in action detection.
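
    The sketch below shows what a small 3D-CNN over a depth sequence can look like: 3D convolutions and pooling extract spatio-temporal features that feed a linear classifier. The clip size, layer widths, and number of action classes are assumptions for illustration, not the network evaluated in this work.

```python
# Minimal sketch of a 3D-CNN classifier over single-channel depth clips,
# assuming clips of 16 frames at 112x112 and an illustrative 10 action classes.
import torch
import torch.nn as nn

class Depth3DCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # pool space only at first
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                             # pool time and space
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                                # x: (batch, 1, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    clip = torch.randn(4, 1, 16, 112, 112)               # 4 depth clips
    print(Depth3DCNN()(clip).shape)                       # torch.Size([4, 10])
```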