
    EXAMINATION OF AN APPLICABLE RANGE FOR A MARKERLESS MOTION CAPTURE SYSTEM IN GAIT ANALYSIS

    This study aimed to verify the measurement accuracy of a markerless motion capture system using OpenPose against a conventional motion capture system using infrared cameras, and to discuss the applicable range of the markerless system. We verified the accuracy of the system by calculating errors in the fundamental parameters of gait analysis. Those errors were 4.42 ± 2.02° in hip angle, 4.26 ± 2.54° in knee angle, and 5.93 ± 4.31° in ankle angle. The verification test revealed that this markerless motion capture system is readily applicable, at least for collecting 2D kinematics in the sagittal plane during gait.
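    As a sketch of the kind of computation involved, the snippet below derives a sagittal-plane joint angle from 2D keypoints, as OpenPose provides, and compares it against a marker-based reference value for the same frame. The keypoint coordinates, reference angle and angle convention are hypothetical, not taken from the study.

```python
# Minimal sketch: sagittal-plane joint angle from 2D keypoints. All
# coordinates and the reference value below are hypothetical; the
# study's exact angle definitions may differ.
import numpy as np

def joint_angle_2d(proximal, joint, distal):
    """Angle (degrees) at `joint` between the two adjoining segments."""
    u = proximal - joint          # e.g. hip -> knee gives the thigh segment
    v = distal - joint            # e.g. knee -> ankle gives the shank segment
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical per-frame keypoints (x, y) in image coordinates.
hip, knee, ankle = np.array([320, 240]), np.array([330, 400]), np.array([325, 560])
knee_angle_markerless = joint_angle_2d(hip, knee, ankle)

# Error against a marker-based reference for the same frame; accumulating
# this over frames gives the mean ± SD figures reported in the study.
knee_angle_reference = 172.0   # hypothetical value from the infrared system
error = abs(knee_angle_markerless - knee_angle_reference)
print(f"knee angle error: {error:.2f} deg")
```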

    DEVELOPMENT AND EVALUATION OF A DEEP LEARNING BASED MARKERLESS MOTION CAPTURE SYSTEM

    This study presented a deep learning-based markerless motion capture workflow and evaluated its performance against marker-based motion capture during overground running. Multi-view high-speed (200 Hz) image data were collected concurrently with marker-based motion capture (ground-truth data), permitting a direct comparison between methods. Lower limb kinematic data for six participants demonstrated high levels of agreement for lower limb joint angles, with average RMSE ranging between 2.5° and 4.4° for hip sagittal and frontal plane motion, and between 4.2° and 5.2° for knee and ankle motion. These differences generally fall within the known uncertainties of marker-based motion capture, suggesting that our markerless approach could be used for appropriate biomechanics applications. While high-quality open-access datasets are still needed to facilitate further performance improvements, markerless motion capture technology continues to improve, presenting exciting opportunities for biomechanics researchers and practitioners to capture large amounts of high-quality, ecologically valid data both in and out of the laboratory setting.
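    The agreement metric reported above can be illustrated with a minimal RMSE computation between a markerless and a marker-based joint-angle curve. The curves below are simulated stand-ins, not the study's data.

```python
# Minimal sketch of per-joint RMSE between a markerless and a
# marker-based joint-angle time series. The curves are hypothetical
# stand-ins for time-normalised gait-cycle data.
import numpy as np

def rmse(markerless, marker_based):
    markerless, marker_based = np.asarray(markerless), np.asarray(marker_based)
    return float(np.sqrt(np.mean((markerless - marker_based) ** 2)))

t = np.linspace(0, 1, 101)                     # one normalised gait cycle
marker_based = 40 * np.sin(2 * np.pi * t)      # hypothetical knee flexion curve
markerless = marker_based + np.random.normal(0, 3, t.size)  # simulated tracking noise
print(f"knee RMSE: {rmse(markerless, marker_based):.1f} deg")
```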

    2D-to-3D facial expression transfer

    Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in the 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, first computes the subject's 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape, obtained from standard factorization approaches over the input video, using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that differ largely from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
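    The core idea of mapping expression displacements between shapes can be sketched as a simple delta transfer. The toy meshes and the one-to-one vertex correspondence below are illustrative simplifications; the paper maps clustered macro-segments onto semantically equivalent regions rather than individual vertices.

```python
# Minimal sketch of expression transfer as vertex-displacement mapping:
# the displacement of a source blend shape from the source rest shape is
# applied to corresponding vertices of the target rest shape. The
# per-vertex correspondence here is a simplification of the paper's
# region-level mapping.
import numpy as np

def transfer_expression(target_rest, source_rest, source_expr, correspondence):
    """target_rest: (N, 3) vertices; source_*: (M, 3) vertices;
    correspondence: index array mapping each target vertex to a source vertex."""
    delta = source_expr - source_rest           # per-vertex expression displacement
    return target_rest + delta[correspondence]  # re-pose the target mesh

# Hypothetical toy meshes: 4 target vertices mapped onto 4 source vertices.
target_rest = np.random.rand(4, 3)
source_rest = np.random.rand(4, 3)
source_expr = source_rest + 0.05 * np.random.rand(4, 3)   # a small expression delta
corr = np.array([0, 1, 2, 3])
target_expr = transfer_expression(target_rest, source_rest, source_expr, corr)
```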

    Visual tracking for sports applications

    Visual tracking of the human body has attracted increasing attention due to its potential to enable high-volume, low-cost analysis of motion in a wide range of applications, including sports training, rehabilitation and security. In this paper we present the development of a visual tracking module for a system intended to serve as an autonomous instructional aid for amateur golfers. Postural information is captured visually and fused with information from a golf swing analyser mat, and both visual and audio feedback are given based on the golfer's mistakes. Results from the visual tracking module are presented.
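    As an illustration of how such fused feedback might be produced, the sketch below combines a hypothetical visual-tracking measurement with a hypothetical mat reading through simple rules. The field names, thresholds and cue texts are invented for illustration; the paper does not specify its fusion logic at this level.

```python
# Minimal sketch of rule-based fusion of visual tracking and mat data
# into coaching feedback. All names, thresholds and messages are
# hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class SwingObservation:
    shoulder_tilt_deg: float      # from the visual tracking module
    weight_on_front_foot: float   # fraction of weight, from the swing analyser mat

def feedback(obs: SwingObservation) -> list[str]:
    """Return the coaching cues triggered by this swing."""
    cues = []
    if abs(obs.shoulder_tilt_deg) > 15.0:        # hypothetical threshold
        cues.append("Keep your shoulders more level at address.")
    if obs.weight_on_front_foot < 0.6:           # hypothetical threshold
        cues.append("Shift more weight onto your front foot at impact.")
    return cues

print(feedback(SwingObservation(shoulder_tilt_deg=20.0, weight_on_front_foot=0.5)))
```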

    Kinematic State Estimation using Multiple DGPS/MEMS-IMU Sensors

    Animals have evolved over millions of years, and understanding these complex and intertwined systems has the potential to advance technology in the fields of sports science, robotics and more. As such, gait analysis using Motion Capture (MOCAP) technology is the subject of a number of research and development projects aimed at obtaining quantitative measurements. Existing MOCAP technology has limited the majority of studies to the analysis of steady-state locomotion in a controlled (indoor) laboratory environment. Optical, non-optical acoustic and non-optical magnetic MOCAP systems require a predefined capture volume and controlled environmental conditions, whilst non-optical mechanical MOCAP systems impede the motion of the subject. Although non-optical inertial MOCAP systems allow capture in an outdoor environment, they suffer from measurement noise and drift and lack global trajectory information. The accuracy of all these MOCAP systems is also known to decrease when tracking transient locomotion. Quantifying the manoeuvrability of animals in their natural habitat, to answer the question "Why are animals so manoeuvrable?", remains a challenge. This research aims to develop an outdoor MOCAP system that can track both the steady-state and the transient locomotion of an animal in its natural habitat, outside controlled laboratory conditions. A number of researchers have developed novel MOCAP systems with the same aim of capturing motion outside a controlled laboratory (indoor) environment with unlimited capture volume; however, these systems have either not been validated against commercial MOCAP systems or do not match their sub-millimetre accuracy. The developed DGPS/MEMS-IMU multi-receiver fusion MOCAP system was assessed to have a global trajectory accuracy of ±0.0394 m and a relative limb position accuracy of ±0.006497 m. To conclude the research, several recommendations are made to improve the developed MOCAP system and to prepare for field testing with a wild terrestrial megafauna species.
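    The fusion principle, IMU measurements driving a high-rate prediction that slower DGPS fixes correct, can be sketched with a one-dimensional constant-velocity Kalman filter. All noise parameters and sample values below are hypothetical; the actual system fuses multiple receivers and full 3D limb kinematics.

```python
# Minimal sketch of DGPS/IMU fusion with a 1D Kalman filter: the IMU
# acceleration drives the prediction, the slower DGPS position fix
# corrects the accumulated drift. All parameters are hypothetical.
import numpy as np

dt = 0.01                                  # 100 Hz IMU
F = np.array([[1, dt], [0, 1]])            # state transition for [position, velocity]
B = np.array([0.5 * dt**2, dt])            # acceleration input
H = np.array([[1.0, 0.0]])                 # DGPS observes position only
Q = 1e-4 * np.eye(2)                       # process noise (IMU noise/drift)
R = np.array([[0.04**2]])                  # DGPS measurement noise (~4 cm std)

x, P = np.zeros(2), np.eye(2)
for k in range(1000):
    accel = 0.0                            # hypothetical IMU sample
    x = F @ x + B * accel                  # predict with the IMU
    P = F @ P @ F.T + Q
    if k % 10 == 0:                        # 10 Hz DGPS fix
        z = np.array([0.0])                # hypothetical position fix
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y                      # correct the drifting estimate
        P = (np.eye(2) - K @ H) @ P
```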

    SmartMocap: Joint Estimation of Human and Camera Motion using Uncalibrated RGB Cameras

    Markerless human motion capture (mocap) from multiple RGB cameras is a widely studied problem. Existing methods either need calibrated cameras or calibrate them relative to a static camera, which acts as the reference frame for the mocap system. The calibration step has to be done a priori for every capture session, which is a tedious process, and re-calibration is required whenever cameras are intentionally or accidentally moved. In this paper, we propose a mocap method which uses multiple static and moving extrinsically uncalibrated RGB cameras. The key components of our method are as follows. First, since the cameras and the subject can move freely, we select the ground plane as a common reference to represent both the body and the camera motions, unlike existing methods which represent bodies in camera coordinates. Second, we learn a probability distribution of short human motion sequences (~1 s) relative to the ground plane and leverage it to disambiguate between camera and human motion. Third, we use this distribution as a motion prior in a novel multi-stage optimization approach to fit the SMPL human body model and the camera poses to the human body keypoints on the images. Finally, we show that our method works on a variety of datasets ranging from aerial cameras to smartphones. It also gives more accurate results than the state of the art on the task of monocular human mocap with a static camera. Our code is available for research purposes at https://github.com/robot-perception-group/SmartMocap
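    The first key component, expressing both the body and every camera relative to a shared ground-plane frame, can be sketched as a homogeneous-coordinate transform. The camera pose and joint positions below are hypothetical placeholders for quantities the method actually estimates by optimization.

```python
# Minimal sketch of the coordinate choice: body joints observed in a
# camera's frame are lifted into a shared ground-plane (world) frame via
# that camera's pose, so no camera serves as a fixed reference. The 4x4
# pose and the joints here are hypothetical placeholders.
import numpy as np

def to_world(T_world_cam, points_cam):
    """Lift (N, 3) camera-frame points into the shared world frame."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_world_cam @ homo.T).T[:, :3]

T_world_cam = np.eye(4)                 # hypothetical camera pose (to be optimised)
T_world_cam[:3, 3] = [0.0, 1.5, -3.0]   # camera 3 m back, 1.5 m above the ground
joints_cam = np.random.rand(24, 3)      # hypothetical body joints in camera frame
joints_world = to_world(T_world_cam, joints_cam)
```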

    Validation of deep learning-based markerless 3D pose estimation

    Deep learning-based approaches to markerless 3D pose estimation are being adopted by researchers in psychology and neuroscience at an unprecedented rate, yet many of these tools remain unvalidated. Here, we report on the validation of one increasingly popular tool (DeepLabCut) against simultaneous measurements obtained from a reference measurement system (Fastrak) with well-known performance characteristics. Our results confirm close (mm-range) agreement between the two, indicating that under specific circumstances deep learning-based approaches can match more traditional motion tracking methods. Although more work needs to be done to determine their specific performance characteristics and limitations, this study should help build confidence within the research community using these new tools.
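    One simple form of such an agreement check, per-sample Euclidean distance between simultaneously recorded trajectories in a common frame, is sketched below with simulated data; the actual validation also involves calibration and temporal alignment between the two systems.

```python
# Minimal sketch of a trajectory agreement check between two systems
# (e.g. DeepLabCut vs. Fastrak): per-sample Euclidean distance once both
# are in the same units and frame. The trajectories are simulated.
import numpy as np

reference = np.cumsum(np.random.normal(0, 1, (500, 3)), axis=0)    # mm, simulated reference
markerless = reference + np.random.normal(0, 0.5, reference.shape) # simulated estimate

dist = np.linalg.norm(markerless - reference, axis=1)
print(f"mean agreement: {dist.mean():.2f} mm (max {dist.max():.2f} mm)")
```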