
    Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

    The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system and compare the results with an optical motion capture system. We collected motion data for 12 exercises, 10 different subjects, and three different viewpoints. We report on the accuracy of joint localization and bone length estimation of the Kinect skeletons in comparison to the motion capture system. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that, overall, Kinect 2 has more robust and more accurate tracking of human pose than Kinect 1. Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015).
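    The outlier analysis described above fits a mixture of a Gaussian (inlier) component and a uniform (outlier) component to the joint localization offsets. The paper does not include its fitting code, so the sketch below is only a minimal one-dimensional EM implementation under assumed inputs; the function name, the fixed uniform support, and the initialization are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def fit_gaussian_uniform_mixture(offsets, lo, hi, n_iter=100):
    """EM for a 1-D mixture of a Gaussian (inliers) and Uniform(lo, hi) (outliers).

    `offsets` are joint localization errors; returns the Gaussian parameters,
    the outlier mixing weight, and each sample's posterior outlier probability.
    """
    x = np.asarray(offsets, dtype=float)
    mu, sigma, pi_out = x.mean(), x.std() + 1e-6, 0.1   # rough initial guesses
    u_density = 1.0 / (hi - lo)                          # uniform pdf value on [lo, hi]

    for _ in range(n_iter):
        # E-step: responsibility of the uniform (outlier) component
        g = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r_out = pi_out * u_density / (pi_out * u_density + (1 - pi_out) * g)
        # M-step: re-estimate the Gaussian from inlier-weighted samples
        w_in = 1.0 - r_out
        mu = np.sum(w_in * x) / np.sum(w_in)
        sigma = np.sqrt(np.sum(w_in * (x - mu) ** 2) / np.sum(w_in)) + 1e-6
        pi_out = r_out.mean()

    return mu, sigma, pi_out, r_out
```

    Samples whose posterior outlier probability exceeds 0.5 can then be flagged as tracking outliers.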

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating the head. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the camera's extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves an overall sub-millimetre accuracy of less than 0.3 millimetres while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
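    The abstract summarizes a closed-form hand-to-eye calibration derived from the head's forward kinematics; the authors' own derivation is not reproduced here. As a rough illustration of the underlying AX = XB formulation only, the sketch below uses OpenCV's generic `calibrateHandEye` solver; the wrapper function and variable names are assumptions, not the paper's method.

```python
import cv2
import numpy as np

def estimate_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve the classic AX = XB hand-eye problem for the camera-to-gripper transform.

    Inputs are per-pose lists of rotations/translations: robot poses from the
    head's forward kinematics and calibration-target poses seen by the camera.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,  # one of OpenCV's closed-form solvers
    )
    # Pack the result into a 4x4 homogeneous transform.
    T_cam2gripper = np.eye(4)
    T_cam2gripper[:3, :3] = R_cam2gripper
    T_cam2gripper[:3, 3] = t_cam2gripper.ravel()
    return T_cam2gripper
```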

    Automatic ankle angle detection by integrated RGB and depth camera system

    Depth cameras are developing rapidly. One of their main virtues is that, based on their data and by applying machine learning algorithms and techniques, it is possible to perform body tracking and obtain an accurate three-dimensional representation of body movement. Specifically, this paper uses the Kinect v2 device, which incorporates a random forest algorithm for the detection of 25 joints in the human body. However, although Kinect v2 is a powerful tool, there are circumstances in which the device's design does not allow the extraction of such data or the accuracy of the data is low, as is usually the case with foot position. We propose a method of acquiring these data in circumstances where the Kinect v2 device does not recognize the body because only the lower limbs are visible, improving the precision of the ankle angle by employing projection lines. Using a region-based convolutional neural network (Mask R-CNN) for body recognition, raw data extraction for automatic ankle angle measurement has been achieved. All angles have been evaluated against inertial measurement units (IMUs) as the gold standard. For the six tests carried out at fixed distances between 0.5 and 4 m from the Kinect, we obtained (mean ± SD) a Pearson's coefficient r = 0.89 ± 0.04, a Spearman's coefficient ρ = 0.83 ± 0.09, a root mean square error RMSE = 10.7 ± 2.6 deg, and a mean absolute error MAE = 7.5 ± 1.8 deg. For the walking (variable distance) test, we obtained a Pearson's coefficient r = 0.74, a Spearman's coefficient ρ = 0.72, an RMSE = 6.4 deg, and an MAE = 4.7 deg. This work has been supported by the Spanish Ministry of Science, Innovation and Universities and the European Regional Development Fund (ERDF) through projects RTC-2017-6321-1 AEI/FEDER, UE, PID2019-107270RB-C21 AEI/FEDER, UE, and FEDER funds.
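    The agreement statistics quoted above (Pearson's r, Spearman's ρ, RMSE, and MAE against the IMU gold standard) are standard measures. A minimal sketch of how they could be computed with NumPy/SciPy is given below; the function and argument names are hypothetical, and the paper's exact pre-processing is not shown.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def agreement_metrics(estimated_angles_deg, imu_angles_deg):
    """Pearson's r, Spearman's rho, RMSE and MAE (in degrees) between the
    camera-based ankle angle series and the IMU reference series."""
    est = np.asarray(estimated_angles_deg, dtype=float)
    ref = np.asarray(imu_angles_deg, dtype=float)
    r, _ = pearsonr(est, ref)
    rho, _ = spearmanr(est, ref)
    err = est - ref
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return r, rho, rmse, mae
```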

    Full-body motion-based game interaction for older adults

    Older adults in nursing homes often lead sedentary lifestyles, which reduces their life expectancy. Full-body motion-control games provide an opportunity for these adults to remain active and engaged; however, these games are not designed with age-related impairments in mind, which prevents them from being leveraged to increase the activity levels of older adults. In this paper, we present two studies aimed at developing game design guidelines for full-body motion controls for older adults experiencing age-related changes and impairments. Our studies also demonstrate how full-body motion-control games can accommodate a variety of user abilities and have a positive effect on mood and, by extension, on the emotional well-being of older adults. Based on our studies, we present seven guidelines for the design of full-body interaction in games. The guidelines are intended to foster safe physical activity among older adults, thereby increasing their quality of life.

    Human Pose Detection for Robotic-Assisted and Rehabilitation Environments

    Assistance and rehabilitation robotic platforms must have precise sensory systems for human–robot interaction. Human pose estimation is therefore a current topic of research, especially for the safety of human–robot collaboration and the evaluation of human biomarkers. Within this field, the low-cost, marker-less human pose estimators OpenPose and Detectron 2 have received much attention for their diversity of applications, such as surveillance, sports, video games, and assessment in human motor rehabilitation. This work aimed to evaluate and compare the elbow and shoulder joint angles estimated by OpenPose and Detectron 2 during four typical upper-limb rehabilitation exercises: elbow side flexion, elbow flexion, shoulder extension, and shoulder abduction. A setup of two Kinect 2 RGBD cameras was used to obtain the ground truth of the joint and skeleton estimations during the different exercises. Finally, we provide a numerical comparison (RMSE and MAE) among the angle measurements obtained with OpenPose, Detectron 2, and the ground truth. The results show that OpenPose outperforms Detectron 2 in these types of applications. Óscar G. Hernández holds a grant from the Spanish Fundación Carolina, the University of Alicante, and the National Autonomous University of Honduras.
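    Neither estimator's joint output format is detailed in the abstract, so the sketch below only illustrates the generic steps shared by such comparisons: computing a joint angle from three keypoints (e.g. shoulder–elbow–wrist for the elbow) and computing the RMSE/MAE against a reference angle series. The function names and the choice of keypoints are assumptions, not the paper's code.

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at keypoint b (e.g. the elbow) formed by keypoints a-b-c
    (e.g. shoulder-elbow-wrist), in degrees. Works for 2-D or 3-D points."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

def rmse_mae(estimated_deg, reference_deg):
    """RMSE and MAE between estimated and ground-truth angle series (degrees)."""
    err = np.asarray(estimated_deg, dtype=float) - np.asarray(reference_deg, dtype=float)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```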