
    A Visual-Sensor Model for Mobile Robot Localisation

    We present a probabilistic sensor model for camera-pose estimation in hallways and cluttered office environments. The model is based on the comparison of features obtained from a given 3D geometrical model of the environment with features present in the camera image. The techniques involved are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods. Moreover, it is very well suited for sensor fusion. The sensor model has been used with Monte Carlo localisation to track the position of a mobile robot in a hallway navigation task. Empirical results are presented for this application.
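    The particle-filter use of the sensor model described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a planar robot pose (x, y, theta), a simplified odometry model applied in the world frame, and a hypothetical visual_likelihood(image, pose) callable standing in for the paper's camera sensor model.

        import numpy as np

        def motion_update(particles, odometry, noise=(0.02, 0.02, 0.01)):
            """Propagate each particle (x, y, theta) by a simplified odometry
            increment plus Gaussian noise (world-frame increments for brevity)."""
            dx, dy, dtheta = odometry
            n = len(particles)
            particles[:, 0] += dx + np.random.normal(0.0, noise[0], n)
            particles[:, 1] += dy + np.random.normal(0.0, noise[1], n)
            particles[:, 2] += dtheta + np.random.normal(0.0, noise[2], n)
            return particles

        def mcl_step(particles, odometry, image, visual_likelihood):
            """One Monte Carlo localisation iteration: predict, weight, resample.
            visual_likelihood(image, pose) approximates p(image | pose)."""
            particles = motion_update(particles, odometry)

            # Weight each pose hypothesis with the visual-sensor model.
            weights = np.array([visual_likelihood(image, pose) for pose in particles])
            weights += 1e-300                  # guard against an all-zero weight vector
            weights /= weights.sum()

            # Importance resampling: keep particles in proportion to their weights.
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            return particles[idx]

    Because the weighting step only needs a per-pose likelihood, the sensor-fusion suitability mentioned in the abstract could be exploited here by multiplying likelihoods from additional sensors (e.g. sonar or laser range data) into the same weights.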

    A Visual-Sensor Model for Mobile Robot Localisation

    Introduction
    Due to recent advances in robot hardware, there is a great demand for vision-based robot localisation techniques [DeSouza and Kak, 2002]. We present a probabilistic sensor model for camera-pose estimation in hallways and other known structured environments. Given a 3D geometrical map of the environment, we want to find an approximate measure of the probability that a given camera image has been obtained at a certain place in the robot's operating environment. Our sensor model is based on feature-matching techniques that are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods, such as Monte Carlo localisation (MCL) [Dellaert et al., 1999]. We have combined photogrammetric techniques for feature projection with the flexibility and robustness of MCL. Moreover, our approach is sufficiently fast to allow for sensor fusion. That is, by using distance measurements from sonars and laser in addition …
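    A rough sketch of such a feature-projection likelihood is given below; it is an illustrative approximation under stated assumptions, not the paper's actual model. It assumes the 3D map is a set of point features, a pinhole camera with placeholder intrinsics (focal, center) rigidly mounted on the robot and looking along its heading, and detected image features given as 2D pixel coordinates.

        import functools
        import numpy as np

        def project_points(map_points, pose, focal=500.0, center=(320.0, 240.0)):
            """Project 3D map features (x, y, z) into the image plane of a pinhole
            camera at the hypothesised robot pose (x, y, theta). Intrinsics are
            illustrative placeholders."""
            x, y, theta = pose
            c, s = np.cos(theta), np.sin(theta)
            rel = map_points[:, :2] - np.array([x, y])
            depth = c * rel[:, 0] + s * rel[:, 1]      # distance along the optical axis
            lateral = -s * rel[:, 0] + c * rel[:, 1]   # sideways offset
            height = map_points[:, 2]                  # feature height relative to the camera
            visible = depth > 0.1                      # keep only features in front of the camera
            u = center[0] + focal * lateral[visible] / depth[visible]
            v = center[1] + focal * height[visible] / depth[visible]
            return np.stack([u, v], axis=1)

        def visual_likelihood(image, pose, *, map_points, sigma=10.0):
            """Approximate p(image | pose): compare projected map features with
            features detected in the camera image. Here `image` is assumed to be
            an (F, 2) array of detected feature pixel coordinates."""
            projected = project_points(map_points, pose)
            if len(projected) == 0 or len(image) == 0:
                return 1e-6                            # no usable evidence either way
            # Distance from each projected feature to its nearest detected feature.
            d = np.linalg.norm(projected[:, None, :] - image[None, :, :], axis=2)
            nearest = d.min(axis=1)
            # Gaussian scoring of the residuals, averaged over projected features.
            return float(np.exp(-0.5 * (nearest / sigma) ** 2).mean())

    In practice the map would be bound beforehand, e.g. functools.partial(visual_likelihood, map_points=hallway_points), so the result matches the two-argument callable expected by the MCL sketch above.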