The control of compliantly actuated anthropomimetic robots with complex and multiarticular joints, such as those developed within the ECCEROBOT project, is extremely challenging. We approach the problem by using a physics engine to run a highly detailed simulation of such a robot's structure and dynamic behaviour, and then searching for sequences of motor activations that will achieve particular goals. This requires the simulated robot to be situated accurately in a physics-based representation of its environment, which includes the object with which it is to interact. In this paper we present our environmental sensing and modelling scheme, which uses data from a single head-mounted Kinect sensor to provide and locate the environmental model, and to identify and locate the target object accurately in the presence of significant motion blur.