    A Probabilistic Visual Sensor Model for Mobile Robot Localisation in Structured Environments

    We present a technique for self-localisation of a mobile robot in structured office environments using only a monocular on-board camera. Most state-of-the-art approaches to map building and localisation in mobile robotics are probabilistic, and most of them depend on accurate proximity sensors such as laser range finders or sonar sensors. As an alternative, we have developed a probabilistic sensor model for robot vision. By matching straight-line segments extracted from the camera image against a geometrical model of the environment, it computes the probability that a given image was obtained at a certain place in the robot's operating environment. The use of straight-line segments as features provides both computational efficiency and robustness to noise and inaccuracies in the map. We have compared the performance of the sensor model with a traditional model for a laser range finder in the common framework of Monte-Carlo localisation. Given the results on robustness and accuracy of position estimation, our localisation technique is applicable to mobile robots in structured indoor environments that do not have laser sensors. Moreover, the model is appropriate for sensor fusion and object recognition.
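    The Monte-Carlo localisation framework mentioned above can be sketched as a particle filter whose measurement update weights each pose hypothesis by the sensor model's likelihood and then resamples. The sketch below is a minimal, hypothetical illustration: it uses a stand-in 1-D Gaussian range likelihood in place of the paper's line-segment matching model, and all names (`sensor_likelihood`, `mcl_update`) are assumptions, not the authors' code.

    ```python
    import math
    import random

    def sensor_likelihood(particle_x, measured_range, wall_x=10.0, sigma=0.5):
        """Stand-in sensor model: likelihood of a range reading given a pose.

        In the paper, this role is played by matching straight-line segments
        from the camera image against a geometric map; here it is a simple
        Gaussian on the expected distance to a wall at wall_x (hypothetical).
        """
        expected = wall_x - particle_x          # expected distance to the wall
        err = measured_range - expected
        return math.exp(-0.5 * (err / sigma) ** 2)

    def mcl_update(particles, measured_range):
        """One Monte-Carlo localisation measurement update: weight, resample."""
        weights = [sensor_likelihood(x, measured_range) for x in particles]
        total = sum(weights)
        if total == 0.0:                        # degenerate case: keep the set
            return particles
        weights = [w / total for w in weights]
        # Weighted resampling: draw a new particle set proportional to weight.
        return random.choices(particles, weights=weights, k=len(particles))

    if __name__ == "__main__":
        random.seed(0)
        # Robot is actually 6 m along a 10 m corridor, so it measures ~4 m.
        particles = [random.uniform(0.0, 10.0) for _ in range(500)]
        for _ in range(10):
            particles = mcl_update(particles, measured_range=4.0)
        estimate = sum(particles) / len(particles)
        print(round(estimate, 1))               # particles cluster near x = 6
    ```

    Swapping the Gaussian for the paper's image-based sensor model changes only `sensor_likelihood`; the update and resampling loop stay the same, which is what makes the framework a common ground for comparing the vision and laser sensor models.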