
    Real-time visual workspace localisation and mapping for a wearable robot

    1 Introduction: This demo showcases breakthrough results in the general field of real-time simultaneous localisation and mapping (SLAM) using vision, and in particular its vital role in enabling a wearable robot to assist its user. It accompanies the full paper by the same authors at ISMAR2003 [1]. In our approach, a wearable active vision system ("wearable robot") is mounted at the shoulder. As the wearer moves around his environment, typically browsing a workspace in which a task must be completed, the robot acquires images continuously and generates a map of natural visual features on-the-fly while estimating its ego-motion. Naturally such real-time camera localisation permits the annotation of the scene with rigidly-registered graphics, but further it permits automatic control of the robot's active camera: for instance, fixation on a particular object can be maintained during extended periods of arbitrary user motion, then shifted at will to another object which may have been out of the field of view. This kind of functionality is the key to the understanding or "management" of a workspace which the robot needs to have in order to assist its wearer usefully in tasks. We believe that the techniques and technology developed are of prime importance towards the goal of a fully autonomous wearable assistant, and of particular immediate value in scenarios of remote collaboration, where a remote expert is able to annotate, through the robot, the environment in which the wearer is working.
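
    To make the fixation behaviour described above concrete, the sketch below shows how an active camera's pan/tilt angles could be computed so that a mapped 3D feature stays centred in the image while the SLAM ego-motion estimate changes. This is a minimal illustration only, not the authors' implementation: the function name, the pinhole/axis conventions, and the assumption that the SLAM filter supplies a camera-to-world rotation and translation are all assumptions made for the example.

    ```python
    import numpy as np

    def pan_tilt_to_fixate(p_world, R_wc, t_wc):
        """Pan/tilt angles that re-centre a mapped feature in the camera view.

        p_world : (3,) feature position in the world frame (from the SLAM map).
        R_wc, t_wc : camera-to-world rotation and translation (the ego-motion
                     estimate), so the feature in the camera frame is
                     R_wc.T @ (p_world - t_wc).
        Returns (pan, tilt) in radians; pan about the camera's vertical axis,
        tilt about its horizontal axis (y-down, z-forward convention assumed).
        """
        # Express the feature in the current camera frame.
        x, y, z = R_wc.T @ (np.asarray(p_world) - np.asarray(t_wc))
        pan = np.arctan2(x, z)
        tilt = np.arctan2(-y, np.hypot(x, z))
        return pan, tilt

    # Example: keep fixating the same feature as the wearer moves.
    feature = np.array([0.4, -0.1, 2.0])   # mapped feature, metres, world frame
    R = np.eye(3)                          # current camera orientation estimate
    t = np.array([0.05, 0.0, 0.0])         # current camera position estimate
    pan, tilt = pan_tilt_to_fixate(feature, R, t)
    print(f"pan {np.degrees(pan):.1f} deg, tilt {np.degrees(tilt):.1f} deg")
    ```

    Re-evaluating these angles at frame rate from the latest pose estimate is one way a shoulder-mounted active camera could hold fixation on an object through arbitrary user motion, or saccade to a different mapped feature on command.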