Upper body pose estimation utilizing kinematic constraints from physical human-robot interaction

Abstract

In physical Human-Robot Interaction (pHRI), knowing the pose of the operator is beneficial and may allow the robot to better accommodate the human operator. Due to the large redundancy of the human body, determining the pose of the human operator is difficult in unstructured environments, especially in human-robot collaborative operations where the robot often occludes the human from vision-based sensors. This work presents an upper body pose estimation method that exploits the known positions of the human operator's hands while performing a task with the robot. The upper body pose is estimated using upper limb kinematic models together with sensor information and model approximations to produce solutions that are biomechanically feasible. The pose estimation method was compared to upper body poses obtained from a motion capture system and was shown to perform robustly with varying amounts of available information. This approach is well suited to applications where robots are controlled through well-defined interfaces, such as handlebars, while operating in unstructured environments.
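To illustrate the kind of kinematic constraint the abstract describes (not the authors' actual algorithm), the minimal sketch below assumes a simple shoulder-elbow-wrist limb model with hypothetical link lengths: a known hand position (e.g., on a robot handlebar) fixes the elbow flexion angle via the law of cosines and confines the elbow to a circle about the shoulder-hand axis, leaving one redundant "swivel" degree of freedom to be resolved with additional sensing or biomechanical criteria.

import numpy as np

def elbow_flexion_from_hand(shoulder, hand, l_upper, l_fore):
    """Elbow flexion angle implied by a known hand position (law of cosines)."""
    d = np.linalg.norm(np.asarray(hand, float) - np.asarray(shoulder, float))
    d = np.clip(d, abs(l_upper - l_fore), l_upper + l_fore)  # keep within reach
    cos_inner = (l_upper**2 + l_fore**2 - d**2) / (2.0 * l_upper * l_fore)
    inner = np.arccos(np.clip(cos_inner, -1.0, 1.0))
    return np.pi - inner  # 0 rad = fully extended arm

def elbow_position(shoulder, hand, l_upper, l_fore, swivel):
    """Elbow location for a chosen swivel angle about the shoulder-hand axis."""
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    axis = hand - shoulder
    d = np.linalg.norm(axis)
    axis /= d
    # Intersection of the two limb-length spheres: circle centre and radius
    a = (l_upper**2 - l_fore**2 + d**2) / (2.0 * d)
    r = np.sqrt(max(l_upper**2 - a**2, 0.0))
    centre = shoulder + a * axis
    # Orthonormal frame around the axis; sweep the redundant swivel angle
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, axis)) > 0.99:
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    return centre + r * (np.cos(swivel) * u + np.sin(swivel) * v)

# Example: hand held on a handlebar 45 cm in front of and 10 cm below the shoulder
shoulder = [0.0, 0.0, 0.0]
hand = [0.45, 0.0, -0.10]
print(np.degrees(elbow_flexion_from_hand(shoulder, hand, l_upper=0.30, l_fore=0.27)))
print(elbow_position(shoulder, hand, 0.30, 0.27, swivel=np.radians(-40)))

The link lengths, frame conventions, and function names here are illustrative assumptions; the paper's method additionally uses sensor information and model approximations to select biomechanically feasible solutions from this redundant set.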
