Towards Constructing and Using Self-Organizing Visual Environment Representations for Mobile Robots

Abstract

Due to upcoming applications in the field of service robotics, mobile robots are currently receiving increasing attention in industry and the scientific community. Applications in service robotics demand a high degree of system autonomy, which robots without learning capabilities cannot achieve. Learning is required both for action models and for appropriate perception procedures. Both are extremely difficult to acquire, especially with high-bandwidth sensors (e.g., video cameras), which are needed in the envisioned unstructured environments. Self-localization is a basic requirement for mobile robots. This paper therefore proposes a new methodology for image-based self-localization using a self-organized visual representation of the environment. It allows for the seamless integration of active and passive localization.
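To make the core idea concrete, the following is a minimal sketch of one way a self-organized visual representation could support localization: a small self-organizing map (SOM) clusters image feature vectors into a topological grid, and localization amounts to finding the best-matching unit for a new observation. All names, parameters, and the synthetic data here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a 2-D SOM whose units
# come to represent distinct "places" observed as image feature vectors.
rng = np.random.default_rng(0)

def train_som(features, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Train a 2-D SOM on rows of `features` (n_samples x dim)."""
    h, w = grid
    dim = features.shape[1]
    weights = rng.random((h, w, dim))
    # grid coordinates of every unit, used for the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in features:
            t = step / n_steps
            lr = lr0 * (1 - t)              # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-3  # shrinking neighborhood
            # best-matching unit (BMU) for this observation
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls units near the BMU toward x
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def localize(weights, x):
    """Return grid coordinates of the unit best matching observation x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Synthetic "image descriptors" from three well-separated places.
places = [rng.normal(loc=m, scale=0.05, size=(30, 8))
          for m in (0.1, 0.5, 0.9)]
som = train_som(np.vstack(places))
cells = [localize(som, p.mean(axis=0)) for p in places]
print(cells)  # each place's mean descriptor maps to a SOM grid cell
```

In this sketch, passive localization corresponds to the `localize` lookup, while an active strategy could choose the next viewpoint to disambiguate between units with similar responses; how the paper integrates the two is described in the body of the work.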
