Planning humanlike actions in blending spaces

Abstract

We introduce an approach for enabling sampling-based planners to compute motions with humanlike appearance. The proposed method is based on a space of blendable example motions collected by motion capture. This space is explored by a sampling-based planner that is able to produce motions around obstacles while keeping solutions similar to the original examples. The results therefore largely maintain the humanlike characteristics observed in the example motions. The method is applied to generic upper-body actions and is complemented by a locomotion planner that searches for suitable body placements from which the upper-body actions can be successfully executed. As a result, our overall multi-modal planning method is able to automatically coordinate whole-body motions for action execution among obstacles, and the produced motions remain similar to the example motions given as input to the system.
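As a concrete illustration (not the paper's actual implementation), the sketch below shows one way a sampling-based planner such as an RRT could explore a blend space: each configuration is a weight vector on the simplex over K example motions, new nodes are steered toward randomly sampled weight vectors, and `collision_free` stands in for blending the examples and testing the resulting motion against obstacles. All names and parameters here (`K`, `STEP`, `N_ITERS`, `tol`) are assumptions chosen for illustration only.

```python
import random

K = 4          # number of motion-capture examples (assumed)
STEP = 0.05    # fraction of the way to steer toward a sampled weight vector
N_ITERS = 2000 # planner iteration budget (assumed)

def sample_weights(k):
    """Sample a random point on the (k-1)-simplex of blend weights."""
    w = [random.random() for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

def steer(w_near, w_rand, step):
    """Move from w_near toward w_rand by a fixed fraction, renormalizing."""
    w = [a + step * (b - a) for a, b in zip(w_near, w_rand)]
    s = sum(w)
    return [x / s for x in w]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def collision_free(w):
    """Placeholder: blend the example motions with weights w and test the
    resulting posture sequence against the obstacles."""
    return True  # stand-in for the actual collision check

def rrt_blend_space(w_start, w_goal, tol=0.05):
    """Grow a tree in blend-weight space until it reaches w_goal."""
    tree = {tuple(w_start): None}  # node -> parent
    for _ in range(N_ITERS):
        w_rand = sample_weights(K)
        w_near = min(tree, key=lambda n: dist(n, w_rand))
        w_new = steer(w_near, w_rand, STEP)
        if collision_free(w_new):
            tree[tuple(w_new)] = w_near
            if dist(w_new, w_goal) < tol:
                # reconstruct the path of blend weights back to the start
                path, node = [], tuple(w_new)
                while node is not None:
                    path.append(list(node))
                    node = tree[node]
                return path[::-1]
    return None

# Example use: plan from one example motion to another, e.g.
# rrt_blend_space([1, 0, 0, 0], [0, 0, 0, 1])
```

Because every node in the tree is a convex combination of the captured examples, any path the planner returns stays inside the blend space, which is what keeps the resulting motions close to the humanlike examples.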
