
    Ske2Grid: Skeleton-to-Grid Representation Learning for Action Recognition

    This paper presents Ske2Grid, a new representation learning framework for improved skeleton-based action recognition. In Ske2Grid, we define a regular convolution operation upon a novel grid representation of human skeleton, which is a compact image-like grid patch constructed and learned through three novel designs. Specifically, we propose a graph-node index transform (GIT) to construct a regular grid patch through assigning the nodes in the skeleton graph one by one to the desired grid cells. To ensure that GIT is a bijection and enrich the expressiveness of the grid representation, an up-sampling transform (UPT) is learned to interpolate the skeleton graph nodes for filling the grid patch to the full. To resolve the problem when the one-step UPT is aggressive and further exploit the representation capability of the grid patch with increasing spatial size, a progressive learning strategy (PLS) is proposed which decouples the UPT into multiple steps and aligns them to multiple paired GITs through a compact cascaded design learned progressively. We construct networks upon prevailing graph convolution networks and conduct experiments on six mainstream skeleton-based action recognition datasets. Experiments show that our Ske2Grid significantly outperforms existing GCN-based solutions under different benchmark settings, without bells and whistles. Code and models are available at https://github.com/OSVAI/Ske2Grid. Comment: The paper of Ske2Grid is published at ICML 2023.
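The core of the GIT idea above is a bijective assignment of skeleton graph nodes to cells of an image-like patch, after which regular convolutions apply. The following is a minimal sketch of that placement step only, not the paper's learned transform: the `assignment` permutation, grid size, and feature shapes are illustrative assumptions.

```python
import numpy as np

def skeleton_to_grid(node_feats, assignment, grid_hw):
    """Place per-joint features into an image-like grid patch.

    node_feats : (N, C) array of per-joint features.
    assignment : length-N array mapping each node to a unique cell
                 index (a bijection, as GIT requires).
    grid_hw    : (H, W) with H * W == N.
    """
    H, W = grid_hw
    N, C = node_feats.shape
    assert H * W == N
    assert len(set(assignment.tolist())) == N  # bijection check
    grid = np.zeros((H, W, C), dtype=node_feats.dtype)
    rows, cols = np.divmod(np.asarray(assignment), W)
    grid[rows, cols] = node_feats  # scatter joints into cells
    return grid

# Hypothetical example: 25 joints with 3-D coordinates on a 5x5 patch.
feats = np.random.rand(25, 3)
perm = np.random.permutation(25)
patch = skeleton_to_grid(feats, perm, (5, 5))
print(patch.shape)  # (5, 5, 3)
```

In the paper the assignment is learned jointly with the network (and UPT fills the patch when the graph has fewer nodes than cells); here a random permutation merely illustrates the data layout that makes 2-D convolution applicable.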

    Semantic guided multi-future human motion prediction

    This thesis investigates the potential of a pre-existing neural network model, originally designed for multi-future prediction of human agent motion in a static camera scene, adapted to forecast rotational trajectories of human joints. Given a trajectory carrying spatial information (in the form of relative joint angles) for a simplified human skeleton structure, the objective is to improve the model's prediction accuracy by incorporating semantic information, that is, the high-level meaning of the action the human agent is performing. The study made use of the AMASS and BABEL datasets for this purpose.

    WATCHING PEOPLE: ALGORITHMS TO STUDY HUMAN MOTION AND ACTIVITIES

    Nowadays human motion analysis is one of the most active research topics in Computer Vision and it is receiving increasing attention from both the industrial and scientific communities. The growing interest in human motion analysis is motivated by the increasing number of promising applications, ranging from surveillance, human–computer interaction and virtual reality to healthcare, sports, computer games and video conferencing, just to name a few. The aim of this thesis is to give an overview of the various tasks involved in visual motion analysis of the human body and to present the issues and possible solutions related to them. In this thesis, visual motion analysis is categorized into three major areas related to the interpretation of human motion: tracking of human motion using a virtual pan-tilt-zoom (vPTZ) camera, recognition of human motions, and segmentation of human behaviors.

    In the field of human motion tracking, a virtual environment for PTZ cameras (vPTZ) is presented to overcome the mechanical limitations of PTZ cameras. The vPTZ is built on equirectangular images acquired by 360° cameras and it allows not only the development of pedestrian tracking algorithms but also the comparison of their performances. On the basis of this virtual environment, three novel pedestrian tracking algorithms for 360° cameras were developed, two of which adopt a tracking-by-detection approach while the last adopts a Bayesian approach.

    The action recognition problem is addressed by an algorithm that represents actions in terms of multinomial distributions of frequent sequential patterns of different lengths. Frequent sequential patterns are series of data descriptors that occur many times in the data. The proposed method learns a codebook of frequent sequential patterns by means of an apriori-like algorithm. An action is then represented with a Bag-of-Frequent-Sequential-Patterns approach.

    In the last part of this thesis a methodology to semi-automatically annotate behavioral data given a small set of manually annotated data is presented. The resulting methodology is not only effective in the semi-automated annotation task but can also be used in the presence of abnormal behaviors, as demonstrated empirically by testing the system on data collected from children affected by neuro-developmental disorders.
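The Bag-of-Frequent-Sequential-Patterns representation described above can be sketched very simply once a codebook is given: count how often each codebook pattern occurs as a contiguous subsequence of the action's descriptor sequence, then normalize the counts into a multinomial histogram. The codebook, symbols, and function name below are illustrative assumptions, and the thesis's apriori-like codebook learning step is not reproduced.

```python
from collections import Counter

def bag_of_patterns(sequence, codebook):
    """Represent an action as a normalized histogram of codebook
    pattern occurrences (contiguous subsequence matches)."""
    counts = Counter()
    for pat in codebook:
        k = len(pat)
        counts[pat] = sum(
            1 for i in range(len(sequence) - k + 1)
            if tuple(sequence[i:i + k]) == pat
        )
    total = sum(counts.values()) or 1  # avoid division by zero
    return {pat: c / total for pat, c in counts.items()}

# Hypothetical descriptor sequence and two learned patterns.
codebook = [("a", "b"), ("b", "c", "a")]
hist = bag_of_patterns(list("abcabca"), codebook)
print(hist)  # each pattern occurs twice, so both get weight 0.5
```

The resulting histogram is the multinomial distribution over patterns that a downstream classifier would consume.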

    From Dense 2D to Sparse 3D Trajectories for Human Action Detection and Recognition


    Learning action recognition model from depth and skeleton videos

    Depth sensors open up possibilities of dealing with the human action recognition problem by providing 3D human skeleton data and depth images of the scene. Analysis of human actions based on 3D skeleton data has become popular recently, due to its robustness and view-invariant representation. However, the skeleton alone is insufficient to distinguish actions which involve human-object interactions. In this paper, we propose a deep model which efficiently models human-object interactions and intra-class variations under viewpoint changes. First, a human body-part model is introduced to transfer the depth appearances of body-parts to a shared view-invariant space. Second, an end-to-end learning framework is proposed which is able to effectively combine the view-invariant body-part representation from skeletal and depth images, and learn the relations between the human body-parts and the environmental objects, the interactions between different human body-parts, and the temporal structure of human actions. We have evaluated the performance of our proposed model against 15 existing techniques on two large benchmark human action recognition datasets including NTU RGB+D and UWA3DII. The experimental results show that our technique provides a significant improvement over state-of-the-art methods.