5 research outputs found

    Human Activity Recognition in Real-Time Environments using Skeleton Joints

    In this research work, we propose a novel approach for human activity recognition in real-time environments. We recognize several distinct dynamic human activities using a Kinect sensor. 3D skeleton data is extracted from real-time video as a sequence of frames, and skeleton joint features (energy joints, orientations, and rotations of joint angles) are computed from a selected set of frames. Because we use joint angle, orientation, and rotation information directly from the Kinect, little additional computation is required. After extracting the set of frames, we applied several classification techniques, namely Principal Component Analysis (PCA) with several distance-based classifiers and an Artificial Neural Network (ANN) with some variants, to classify all of our gesture models. We find that only a small fraction of frames (10-15%) from the entire set of gesture frames is needed to train the system effectively. Our classification methods achieve overall accuracies of 94%, 96%, and 98% respectively. Compared with existing systems, the proposed system performs better, making it well suited to real-time applications such as player action/gesture recognition in video games
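    The PCA-plus-distance-classifier pipeline this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data shapes, class count, neighbor count, and the use of synthetic joint-angle features are all assumptions made for the example.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Hypothetical data: each sample is a flattened sequence of skeleton
    # joint angles/orientations for one gesture (shapes are assumptions).
    rng = np.random.default_rng(0)
    n_gestures, frames_per_gesture, n_joint_angles = 4, 12, 20
    X = rng.normal(size=(200, frames_per_gesture * n_joint_angles))
    y = rng.integers(0, n_gestures, size=200)

    # Reduce the joint-angle features with PCA, then classify with a
    # distance-based classifier (k-nearest neighbors here), mirroring the
    # "PCA with distance-based classifiers" pipeline from the abstract.
    clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
    clf.fit(X, y)
    pred = clf.predict(X[:5])
    ```

    The same pipeline shape applies to the ANN variant: swap the final estimator for a neural-network classifier while keeping the PCA step.
    
    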

    Human action recognition based on motion capture information using fuzzy convolution neural networks

    No full text
    In this paper, we propose a novel approach for human action recognition based on motion capture (MOCAP) information using a Fuzzy convolutional neural network. The MOCAP tracking information of human joints is used to compute the temporal variation of displacement between joints during the execution of an action. Fuzzy membership functions designed to emphasize the discriminative pose associated with each action are considered for feature extraction. The temporal variation of membership values associated with these fuzzy membership functions is considered as the feature representation for action recognition. A convolutional neural network (CNN) capable of recognizing local patterns in input data is trained to recognize human actions from the local patterns in the feature representation. Experimental evaluation on Berkeley MHAD dataset demonstrates the effectiveness of the proposed approach
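    The feature-extraction stage this abstract describes (temporal variation of inter-joint displacement passed through fuzzy membership functions) can be sketched as below. The Gaussian membership form, the clip dimensions, and the membership centers are illustrative assumptions; the resulting feature map is what would be fed to the CNN.

    ```python
    import numpy as np

    def gaussian_membership(x, center, sigma):
        """Gaussian fuzzy membership: degree to which x matches a prototype value."""
        return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

    # Hypothetical MOCAP clip: T frames x J joints x 3D positions (shapes assumed).
    rng = np.random.default_rng(1)
    T, J = 50, 15
    positions = rng.normal(size=(T, J, 3))

    # Pairwise displacement between joints per frame, then its temporal variation.
    i, j = np.triu_indices(J, k=1)
    dist = np.linalg.norm(positions[:, i] - positions[:, j], axis=-1)  # (T, pairs)
    temporal_var = np.diff(dist, axis=0)                               # (T-1, pairs)

    # Membership values over time form the feature representation for the CNN.
    centers = np.linspace(temporal_var.min(), temporal_var.max(), 5)
    features = np.stack(
        [gaussian_membership(temporal_var, c, 0.5) for c in centers]
    )  # (n_memberships, T-1, pairs)
    ```

    A 1D/2D CNN would then learn local patterns over the time axis of `features`, as the abstract describes.
    
    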
