    Fast and reliable recognition of human motion from motion trajectories using wavelet analysis

    Recognition of human motion provides hints for understanding human activities and opens opportunities for the development of new human-computer interfaces. Recent studies, however, are limited to extracting motion history images and recognizing gestures or locomotion of human body parts. Although the approach employed, i.e. transforming the 3D space-time (x-y-t) analysis into a 2D image analysis, is faster than analyzing 3D motion features, it is inherently less accurate and less robust. In this paper, a fast trajectory-classification algorithm for interpreting the movement of human body parts using wavelet analysis is proposed to increase the accuracy and robustness of human motion recognition. By tracking the human body in real time, the motion trajectory (x-y-t) can be extracted. The motion trajectory is then decomposed into wavelets that form a set of wavelet features. Classification based on the wavelet features can then be performed to interpret the human motion. An online hand-drawn digit recognition system was built using the proposed algorithm. Experiments show that the proposed algorithm is able to recognize digits from human movement accurately in real time.
    Postprint. The 2004 IFIP International Conference on Artificial Intelligence Applications and Innovation, Toulouse, France, 22-27 August 2004. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovation, 2004, p. 1-1.
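
    The abstract does not specify the wavelet family or feature set, so the following is a minimal sketch of the general technique, assuming PyWavelets: decompose each coordinate of the tracked trajectory with a discrete wavelet transform and summarize each sub-band by its energy. The wavelet ('db4') and decomposition level are illustrative choices, not the authors'.

```python
# Hypothetical sketch: wavelet features from a 2-D motion trajectory.
# `trajectory` is an (N, 2) array of (x, y) positions over time.
import numpy as np
import pywt

def wavelet_features(trajectory, wavelet="db4", level=3):
    feats = []
    for axis in range(trajectory.shape[1]):      # x(t), then y(t)
        coeffs = pywt.wavedec(trajectory[:, axis], wavelet, level=level)
        # Summarize each sub-band by its energy so trajectories of
        # different lengths map to fixed-size feature vectors.
        feats.extend(float(np.sum(c ** 2)) for c in coeffs)
    return np.asarray(feats)

# A classifier (e.g. nearest neighbor over labeled digit trajectories)
# can then be trained on these feature vectors.
```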

    Vision-based toddler tracking at home

    This paper presents a vision-based toddler tracking system for detecting risk factors of a toddler's fall within the home environment. The risk factors have environmental and behavioral aspects, and the research in this paper focuses on the behavioral aspects. Apart from common image processing tasks such as background subtraction, vision-based toddler tracking involves human classification, acquisition of motion and position information, and handling of regional merges and splits. The human classification is based on dynamic motion vectors of the human body. The center of mass of each contour is detected and connected with the closest center of mass in the next frame to obtain position, speed, and directional information. The tracking system is further enhanced by dealing with regional merges and splits due to multiple object occlusions. To identify the merges and splits, detection of closest region centers is conducted in both directions between every two successive frames. Merges and splits of a single object due to errors in the background subtraction are also handled. The tracking algorithms have been developed, implemented, and tested.
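
    As a rough illustration of the linking step described above, the sketch below pairs each contour centroid with the closest centroid in the next frame using standard OpenCV calls; the minimum-area threshold is an invented value, and merge/split handling is omitted.

```python
# Sketch of centroid extraction and nearest-center linking between frames.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()

def centroids(frame, min_area=500):
    mask = subtractor.apply(frame)               # background subtraction
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:                  # ignore small blobs
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(pts)

def link(prev_pts, curr_pts):
    """Pair each previous center of mass with its nearest current one;
    the displacement of each pair yields speed and direction."""
    return [(p, curr_pts[np.argmin(np.linalg.norm(curr_pts - p, axis=1))])
            for p in prev_pts]
```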

    A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks

    Exploiting interaction with the environment is a promising and powerful way to enhance the stability and robustness of humanoid robots while executing locomotion and manipulation tasks. Recently, some works have begun to show advances in this direction, considering humanoid locomotion with multiple contacts, but to fully develop such abilities in a more autonomous way, we first need to understand and classify the variety of possible poses a humanoid robot can achieve to balance. To this end, we propose adapting a successful idea widely used in the field of robot grasping to the field of humanoid balance with multiple contacts: a whole-body pose taxonomy classifying the set of whole-body robot configurations that use the environment to enhance stability. We have revised the classification criteria used to develop grasping taxonomies, focusing on structuring and simplifying the large number of possible poses the human body can adopt. We propose a taxonomy with 46 poses in three main categories, considering the number and type of supports as well as possible transitions between poses. The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions. We present preliminary results that apply known segmentation techniques to motion data from the KIT whole-body motion database. Using motion capture data with multiple contacts, we can identify support poses, providing a segmentation that can distinguish between the locomotion and manipulation parts of an action.
    Comment: 8 pages, 7 figures, 1 table with a full-page landscape figure; 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems.
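
    The abstract's segmentation idea, cutting motion wherever the support pose changes, can be illustrated with a short sketch. The per-frame contact detection from motion capture data is assumed to already exist; the data format (a set of support names per frame) is hypothetical.

```python
# Illustrative sketch: segment a motion by changes in the support set.
# `contacts` is a list with one frozenset of active supports per frame,
# e.g. frozenset({"left_foot", "right_foot", "right_hand"}).
def segment_by_support(contacts):
    segments, start = [], 0
    for i in range(1, len(contacts)):
        if contacts[i] != contacts[i - 1]:       # support pose changed
            segments.append((start, i, contacts[start]))
            start = i
    segments.append((start, len(contacts), contacts[start]))
    return segments                              # (start, end, pose) triples
```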

    Silhouette-based gait recognition using Procrustes shape analysis and elliptic Fourier descriptors

    This paper presents a gait recognition method which combines spatio-temporal motion characteristics, statistical and physical parameters (referred to as STM-SPP) of a human subject for classification by analysing the shape of the subject's silhouette contours using Procrustes shape analysis (PSA) and elliptic Fourier descriptors (EFDs). STM-SPP uses spatio-temporal gait characteristics and physical parameters of the human body to resolve similar dissimilarity scores between probe and gallery sequences obtained by PSA. A part-based shape analysis using EFDs is also introduced to achieve robustness against carrying conditions. The classification results by PSA and EFDs are combined, resolving ties in ranking using contour matching based on Hu moments. Experimental results show that STM-SPP outperforms several silhouette-based gait recognition methods.
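
    A hedged sketch of the Procrustes comparison step: SciPy's procrustes removes translation, scale, and rotation before computing a residual disparity between two contours. Resampling both silhouette boundaries to the same number of corresponding points is assumed to happen beforehand.

```python
# Dissimilarity between two silhouette contours via Procrustes analysis.
from scipy.spatial import procrustes

def contour_dissimilarity(contour_a, contour_b):
    """contour_a, contour_b: (N, 2) arrays of matched boundary points."""
    _, _, disparity = procrustes(contour_a, contour_b)
    return disparity                             # lower = more similar shapes
```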

    Modeling Spatial Relations of Human Body Parts for Indexing and Retrieving Close Character Interactions

    Retrieving pre-captured human motion for analyzing and synthesizing virtual character movement has been widely used in Virtual Reality (VR) and interactive computer graphics applications. In this paper, we propose a new human pose representation, called Spatial Relations of Human Body Parts (SRBP), to represent spatial relations between body parts of the subject(s), which intuitively describes how much the body parts interact with each other. Since SRBP is computed from the local structure (i.e. multiple body parts in proximity) of the pose instead of information from individual or pairwise joints as in previous approaches, the new representation is robust to minor variations of individual joint locations. Experimental results show that SRBP outperforms existing skeleton-based motion retrieval and classification approaches on benchmark databases.
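
    The abstract does not give SRBP's exact formulation, so the following is only a loose illustration of the underlying idea: describe a pose by how strongly groups of body parts interact, here via the minimum inter-group joint distance, which is less sensitive to jitter in any single joint than pairwise joint features.

```python
# Hypothetical group-level relation feature (not the paper's formulation).
import numpy as np

def group_relation(joints, group_a, group_b):
    """joints: dict name -> (x, y, z); group_a/group_b: lists of names."""
    a = np.array([joints[n] for n in group_a])
    b = np.array([joints[n] for n in group_b])
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()                               # small = close interaction
```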

    A robust and efficient video representation for action recognition

    This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of the homography estimation, a human detector is employed to remove outlier matches from the human body, as human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion and are thus removed. We also use the homography to cancel out camera motion from the optical flow, which results in significant improvements to the motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding to the standard bag-of-words histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to bag-of-words encodings for video recognition tasks. In all three tasks, we show substantial improvements over state-of-the-art results.
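
    The camera-motion compensation step can be sketched as follows. The paper matches SURF keypoints combined with dense optical flow and masks out human regions; this illustration substitutes ORB (which ships with stock OpenCV builds) and omits the human-detector masking, so it is an approximation of the pipeline, not a reproduction.

```python
# Sketch: estimate camera motion as a homography, then compute optical
# flow on the stabilized frame so residual flow reflects object motion.
import cv2
import numpy as np

def camera_compensated_flow(prev_gray, curr_gray):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)  # camera motion model
    stabilized = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    return cv2.calcOpticalFlowFarneback(stabilized, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```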

    Investigation of the optimal sensor location and classifier for human motion classification

    Human motion monitoring by means of wearable technologies is now common, reflecting growing awareness of the importance of a healthy lifestyle. Human body motion involves the movement of multiple muscles and joints, yet the optimal location for sensor placement on the body to record motion during daily activities is not well understood. This study aims to find the best sensor location among three candidates on the body: the back, shank, or wrist. In addition, this study seeks to find the best classification algorithm for human daily activities. The data recorded at these three locations were analysed using several classification algorithms in both the Orange software and MATLAB. The results show that the sensor on the wrist provided the best classification result, suggesting that the wrist is the best place on the body for a motion-monitoring sensor. With regard to the classification algorithm, we found that the neural network provided the most accurate classification compared with the other algorithms. Future development of wearables should look into integrating classification algorithms into the system, so that human motion monitoring can provide richer information rather than being limited to step counts and calories burned.
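
    The study used Orange and MATLAB; as an illustration of the same comparison in scikit-learn, the sketch below cross-validates several classifiers on one feature matrix per sensor location. The feature extraction (e.g. windowed accelerometer statistics) is assumed to be done elsewhere, and the classifier settings are defaults, not the study's.

```python
# Compare classifiers across sensor locations with 5-fold cross-validation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "neural_net": MLPClassifier(max_iter=1000),
    "random_forest": RandomForestClassifier(),
    "svm": SVC(),
}

def compare(features_by_location, labels):
    """features_by_location: dict like {'wrist': X_wrist, ...}."""
    for location, X in features_by_location.items():
        for name, clf in CLASSIFIERS.items():
            score = cross_val_score(clf, X, labels, cv=5).mean()
            print(f"{location:8s} {name:14s} accuracy={score:.3f}")
```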

    Combining Appearance and Motion for Human Action Classification in Videos

    We study the question of activity classification in videos and present a novel approach for recognizing human action categories by combining information from the appearance and motion of human body parts. Our approach uses a tracking step which involves particle filtering and a local non-parametric clustering step. The motion information is provided by the trajectory of the cluster modes of a local set of particles. The statistical information about the particles of that cluster over a number of frames provides the appearance information. We then use a “Bag of Words” model to build one histogram per video sequence from the set of these robust appearance and motion descriptors. These histograms provide characteristic information which helps us discriminate among various human actions and thus classify them correctly. We tested our approach on the standard KTH and Weizmann human action datasets and the results were comparable to the state of the art. Additionally, our approach is able to distinguish between activities that involve motion of the complete body and those in which only certain body parts move. In other words, our method discriminates well between activities with “gross motion” like running and jogging, and “local motion” like waving and boxing.
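
    The “Bag of Words” encoding step admits a compact sketch: learn a k-means vocabulary over all training descriptors, then quantize each video's descriptors into one normalized histogram. The vocabulary size is an illustrative choice, not taken from the paper.

```python
# Bag-of-words encoding of per-video appearance/motion descriptors.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=200):
    """all_descriptors: list of (n_i, d) arrays from training videos."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_descriptors))

def video_histogram(vocab, descriptors):
    words = vocab.predict(descriptors)           # assign each descriptor
    hist = np.bincount(words, minlength=vocab.n_clusters)
    return hist / hist.sum()                     # one histogram per video
```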

    Biomechanical analysis and model development applied to table tennis forehand strokes

    Table tennis involves complex spatial movement of the racket and the human body, and it takes considerable effort for novice players to mimic expert players. The evaluation of motion patterns during table tennis training, usually performed by coaches, is important for novice trainees to improve faster. However, traditional coaching relies heavily on coaches' qualitative observation and subjective evaluation. While past literature shows considerable potential in applying biomechanical analysis and classification to motion pattern assessment for improving novice table tennis players, little published work was found on table tennis biomechanics. To overcome these problems and fill these gaps, this research aims to quantify the movement of table tennis strokes, to identify motion pattern differences between experts and novices, and to develop a model for automatic evaluation of the motion quality of an individual. Firstly, a novel method for comprehensive quantification and measurement of the kinematic motion of the racket and human body is proposed. In addition, a novel method based on the racket centre velocity profile is proposed to segment and normalize the motion data. Secondly, a controlled experiment was conducted to collect motion data from expert and novice players during forehand strokes. Statistical analysis was performed to determine the motion differences between the expert and novice groups. The experts exhibited significantly different motion patterns, with faster racket centre velocity, smaller racket plane angle, and different standing posture and joint angular velocities. Lastly, a support vector machine (SVM) classification technique was employed to build a model for motion pattern evaluation. The model development was based on the experimental data, with different feature selection methods and SVM kernels compared to achieve the best performance (F1 score) through cross-validation and the Nelder-Mead method. Results showed that the SVM classification model performed well, with an average model performance above 90% in distinguishing stroke motion between expert and novice players. This research helps to better understand the biomechanical mechanisms of table tennis strokes, which will ultimately aid the improvement of novice players. The phase segmentation and normalization methods for table tennis strokes are novel, unambiguous, and straightforward to apply. The quantitative comparison identified comprehensive differences in racket and body motion between expert and novice players over continuous phase time, which is a novel contribution. The proposed classification model demonstrates the applicability of SVMs to table tennis biomechanics and can be exploited for automatic coaching.
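
    As a hedged sketch of the model-selection step, the code below scores SVM kernels by cross-validated F1 on expert-vs-novice stroke features; the feature matrix X and labels y are assumed to come from the segmentation and quantification pipeline described above, and the kernel list and fold count are illustrative.

```python
# Select an SVM kernel by cross-validated F1 score.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def best_kernel(X, y):
    """X: (n_strokes, n_features); y: 1 = expert, 0 = novice."""
    scores = {kernel: cross_val_score(SVC(kernel=kernel), X, y,
                                      cv=5, scoring="f1").mean()
              for kernel in ("linear", "rbf", "poly")}
    return max(scores, key=scores.get), scores
```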