
    GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB

    We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN, we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. More specifically, we use a neural network that translates synthetic images to "real" images, so that the generated images follow the same statistical distribution as real-world hand images. For training this translation network, we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.
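
    As a rough illustration of how the three losses described above might be combined, the following PyTorch-style sketch shows one plausible form; the networks (G_s2r, G_r2s, D_real, geo_net) and the loss weights are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: combined adversarial + cycle-consistency + geometric-consistency loss
# for a synthetic-to-real hand image translation network (names and weights are assumptions).
import torch
import torch.nn.functional as F

def translation_loss(G_s2r, G_r2s, D_real, geo_net, synth_img, synth_geo,
                     w_adv=1.0, w_cyc=10.0, w_geo=10.0):
    fake_real = G_s2r(synth_img)                      # translate synthetic -> "real"
    logits = D_real(fake_real)
    # Adversarial term: translated images should fool the real-domain discriminator.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Cycle-consistency term: translating back should recover the synthetic input.
    cyc = F.l1_loss(G_r2s(fake_real), synth_img)
    # Geometric-consistency term: hand geometry (e.g. pose) must survive the translation.
    geo = F.l1_loss(geo_net(fake_real), synth_geo)
    return w_adv * adv + w_cyc * cyc + w_geo * geo
```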

    Cognitive Robotics in Industrial Environments


    Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

    The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system and compare the results with an optical motion capture system. We collected motion data in 12 exercises for 10 different subjects and from three different viewpoints. We report on the accuracy of the joint localization and bone length estimation of Kinect skeletons in comparison to the motion capture. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that, overall, Kinect 2 has more robust and more accurate tracking of human pose as compared to Kinect 1. Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015).
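
    The mixture of a Gaussian (inliers) and a uniform distribution (outliers) mentioned above can be fitted with a short EM loop; the sketch below is illustrative only, with assumed uniform bounds lo/hi and a 0.5 responsibility threshold, and is not the paper's code.

```python
# Illustrative sketch: fit a Gaussian + uniform mixture to joint localization offsets
# via EM and flag low-responsibility samples as outliers (parameters are assumptions).
import numpy as np
from scipy.stats import norm

def fit_gauss_uniform(offsets, lo, hi, n_iter=100):
    mu, sigma, pi = offsets.mean(), offsets.std(), 0.9   # initial guesses
    u_density = 1.0 / (hi - lo)                          # flat outlier component on [lo, hi]
    for _ in range(n_iter):
        g = pi * norm.pdf(offsets, mu, sigma)            # E-step: inlier responsibilities
        r = g / (g + (1.0 - pi) * u_density)
        mu = np.sum(r * offsets) / np.sum(r)             # M-step: weighted Gaussian update
        sigma = np.sqrt(np.sum(r * (offsets - mu) ** 2) / np.sum(r))
        pi = r.mean()
    return mu, sigma, pi, r < 0.5                        # last array flags likely outliers
```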

    Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera

    Hand movements play an essential role in a person’s ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric to quantify changes due to degenerative pathologies, such as rheumatoid arthritis (RA). RA is a chronic condition where the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are the gold-standard tools to quantify changes but are challenging to adopt outside laboratory settings. Deep learning executed on standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation. The three main research aims in this thesis were 1) to assess the extent to which current deep learning architectures, which have been validated for quantifying motion of other body segments, can be applied to hand kinematics using monocular RGB cameras, 2) to localise where in videos the hand motions of interest are to be found, and 3) to assess the validity of 1) and 2) in determining disease status in RA. First, hand kinematics for twelve healthy participants, captured with OpenPose, were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity agreed with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed only six times out of ten. These results demonstrate that this approach is more effective than disease-activity estimation performed by human experts during video consultations. This work sets the foundation for a tool that RA participants can use to observe their disease activity from their home.
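
    As a small sketch of the kind of kinematic measure involved, a finger joint angle can be derived from OpenPose 2D hand keypoints; the keypoint indices (5 = index MCP, 6 = PIP, 7 = DIP) and the flexion convention below are assumptions for illustration, not the thesis pipeline.

```python
# Illustrative sketch: planar joint angle from three 2D hand keypoints (e.g. from OpenPose).
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (degrees) at p_joint between the proximal and distal segments."""
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical usage with assumed OpenPose hand keypoint indices (5: MCP, 6: PIP, 7: DIP):
# pip_flexion = 180.0 - joint_angle(keypoints[5], keypoints[6], keypoints[7])
```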

    Technological advancements in the analysis of human motion and posture management through digital devices

    Technological development of motion and posture analyses is rapidly progressing, especially in rehabilitation settings and sport biomechanics. Consequently, clear discrimination among different measurement systems is required to diversify their use as needed. This review aims to summarize the currently used motion and posture analysis systems and to clarify and suggest appropriate approaches for specific cases or contexts. The current gold-standard systems of motion analysis, widely used in clinical settings, present several limitations related to marker placement or long procedure times. Fully automated and markerless systems are overcoming these drawbacks for conducting biomechanical studies, especially outside laboratories. Similarly, new posture analysis techniques are emerging, often driven by the need for fast and non-invasive methods to obtain high-precision results. These new technologies have also become effective for children or adolescents with non-specific back pain and postural insufficiencies. The evolution of these methods aims to standardize measurements and provide manageable tools in clinical practice for the early diagnosis of musculoskeletal pathologies and to monitor daily improvements of each patient. Herein, these devices and their uses are described, providing researchers, clinicians, orthopedists, physical therapists, and sports coaches an effective guide to using new technologies in their practice as instruments of diagnosis, therapy, and prevention.

    Multiview 3D markerless human pose estimation from OpenPose skeletons

    Although marker-based systems for human motion estimation provide very accurate tracking of the human body joints (at mm precision), these systems are often intrusive or even impossible to use depending on the circumstances, e.g. markers cannot be put on an athlete during competition. Instrumenting an athlete with the appropriate number of markers requires a lot of time, and these markers may fall off during the analysis, which leads to incomplete data, requires new data capture sessions, and hence wastes time and effort. Therefore, we present a novel multiview video-based markerless system that uses 2D joint detections per view (from OpenPose) to estimate their corresponding 3D positions while tackling the people association problem in the process to allow the tracking of multiple persons at the same time. Our proposed system can perform the tracking in real time at 20-25 fps. Our results show a standard deviation between 9.6 and 23.7 mm for the lower body joints based on the raw measurements only. After filtering the data, the standard deviation drops to a range between 6.6 and 21.3 mm. Our proposed solution can be applied to a large number of applications, ranging from sports analysis to virtual classrooms, where submillimeter precision is not necessarily required but where the use of markers is impractical.
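
    A minimal sketch of the per-joint 3D lifting step is given below, assuming calibrated 3x4 projection matrices and one 2D OpenPose detection of the same joint per view; this linear (DLT) triangulation is illustrative and not necessarily the authors' exact formulation.

```python
# Illustrative sketch: linear (DLT) triangulation of one joint from multiple views.
import numpy as np

def triangulate_joint(Ps, pts2d):
    """Ps: list of 3x4 camera projection matrices; pts2d: list of (x, y) detections."""
    A = []
    for P, (x, y) in zip(Ps, pts2d):
        A.append(x * P[2] - P[0])        # each view contributes two linear constraints
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                           # homogeneous solution = smallest singular vector
    return X[:3] / X[3]                  # de-homogenize to a 3D point
```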

    Predictors of Peak Elbow Valgus Torque in Collegiate Baseball Pitchers

    Context: The incidence of UCL tears in baseball is at an all-time high. ATs are in the position to identify those at risk and potentially prevent injury to the UCL. In baseball, research has identified elbow valgus torque as a potential predictor of injury risk. However, markerless analysis has not assessed possible predictors of injury to the UCL in baseball players. Objective: Identify the kinematic factors that influence peak elbow valgus torque through the sequence of a fastball pitch. Design: Cross-sectional study. Setting: Field study performed in the university’s pitching development center. Participants: Division 1 collegiate baseball pitchers (N=21; 17 RHP, 4 LHP; 20 ± 1 y, 190 ± 4 cm, 98 ± 7 kg). Main Outcome Measure(s): Using KinaTrax® markerless motion capture, Division 1 collegiate baseball pitchers’ kinematics and kinetics of a single fastball pitch were analyzed. Outcome measures were peak elbow valgus torque, maximum glenohumeral external rotation, maximum glenohumeral internal rotation, glenohumeral angular velocity at ball release, elbow flexion at stride foot contact, and maximum hip-shoulder separation. Results: Average velocity of pitches analyzed was 89.4 ± 3.6 mph. Peak elbow valgus torque was 137.2 ± 26.2 Nm. Maximum glenohumeral external rotation explained 42.4% of the variance in peak elbow valgus torque (r = -.651; P = .001). No other variables were significantly correlated with peak elbow valgus torque (P > .949). Conclusions: Maximum glenohumeral external rotation during the pitching motion influences peak elbow valgus torque. The negative correlation, as described, is a result of the unique coordinate conventions used by KinaTrax®. Previous literature has identified that increases in maximum glenohumeral external rotation concomitantly increase fastball velocity. Because this variable can influence both velocity and elbow stress, further research is necessary to identify norms that allow for safe and effective play.
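
    For context, the "variance explained" reported above is simply the square of the Pearson correlation (r = -.651 gives r² ≈ 0.424, i.e. about 42.4%); a small sketch, assuming per-pitcher arrays of the two quantities are available:

```python
# Illustrative sketch: Pearson correlation and variance explained between two series.
import numpy as np
from scipy.stats import pearsonr

def variance_explained(max_external_rotation, peak_valgus_torque):
    """Return Pearson r, r**2 (variance explained), and p-value."""
    r, p = pearsonr(np.asarray(max_external_rotation), np.asarray(peak_valgus_torque))
    return r, r ** 2, p
```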

    Fast and Robust Hand Tracking Using Detection-Guided Optimization

    Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparatively few computational resources, which makes it extremely fast (50 fps without GPU support). The approach also supports varying camera-to-scene arrangements, whether static or moving. We show the benefits of our method by evaluating on public datasets and comparing against previous work.
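
    To make the two-term objective concrete, the following conceptual sketch combines a part-label agreement term with a Gaussian-mixture-style depth term; the data layout, weights, and function are assumptions of mine, not the paper's actual energy.

```python
# Conceptual sketch (assumptions throughout): detection-guided pose energy combining
# (1) agreement between model part positions and detected part centroids, and
# (2) a Gaussian-mixture-style reward for model parts lying near depth-image mass.
import numpy as np

def pose_energy(model_parts, detected_centroids, depth_gaussians,
                w_label=1.0, w_depth=1.0, sigma=0.02):
    """model_parts / detected_centroids: dicts mapping part name -> 3D position (np.array);
    depth_gaussians: list of (mean, weight) pairs summarizing the depth image."""
    label_term = sum(np.sum((model_parts[p] - c) ** 2)
                     for p, c in detected_centroids.items() if p in model_parts)
    depth_term = 0.0
    for mu, w in depth_gaussians:        # reward poses whose parts lie near depth mass
        d2 = np.array([np.sum((x - mu) ** 2) for x in model_parts.values()])
        depth_term -= w * np.exp(-d2.min() / (2 * sigma ** 2))
    return w_label * label_term + w_depth * depth_term
```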