    Visual planes-based Simultaneous Localization And Model Refinement for augmented reality

    This paper presents a method for camera pose tracking that uses partial knowledge of the scene. The method is based on monocular-vision Simultaneous Localization And Mapping (SLAM). Unlike classical SLAM implementations, this approach uses previously known information about the environment (a rough map of the walls) and exploits the various available databases and blueprints to constrain the problem. The method assumes that the tracked image patches belong to known planes (with some uncertainty in their localization) and that the SLAM map can be represented by associations of cameras and planes. In this paper, we propose an adapted SLAM implementation and detail the models considered. We show that this method gives good results on a real sequence with complex motion for an augmented reality (AR) application.
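The core geometric idea of the abstract, constraining a tracked patch to a known wall plane, can be illustrated with a simple ray-plane intersection: instead of estimating an unknown depth per landmark, the patch's 3D position is pinned to the plane it is assumed to lie on. The function below is an illustrative sketch, not the paper's implementation; the plane is written as n·x + d = 0 with unit normal n.

```python
import numpy as np

def ray_plane_intersection(origin, direction, n, d):
    """Intersect a camera ray with the plane n.x + d = 0.

    Back-projecting an image patch onto a known (approximate) plane
    removes the per-landmark depth unknown of classical monocular SLAM.
    Returns the 3D point, or None if the ray misses the plane.
    """
    direction = direction / np.linalg.norm(direction)
    denom = n @ direction
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = -(d + n @ origin) / denom
    if t < 0:
        return None  # intersection lies behind the camera
    return origin + t * direction

# Camera at the origin looking along +z; a wall plane z = 2,
# i.e. n = (0, 0, 1), d = -2.
p = ray_plane_intersection(np.zeros(3),
                           np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 1.0]), -2.0)
# p is the 3D patch position (0, 0, 2) implied by the plane constraint
```

In a full system the plane parameters themselves would carry uncertainty and be refined jointly with the camera poses, which is the "Model Refinement" part of the title.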

    Meta Information in Graph-based Simultaneous Localisation And Mapping

    Establishing the spatial and temporal relationships between a robot and its environment serves as a basis for scene understanding. The established approach in the literature to simultaneously build a representation of the environment and spatially and temporally localise the robot within it is Simultaneous Localisation And Mapping (SLAM). SLAM algorithms in general, and visual SLAM in particular--where the primary sensors are cameras--have gained a great amount of attention in the robotics and computer vision communities over the last few decades due to their wide range of applications. Advances in sensing technologies and image-based learning techniques provide an opportunity to introduce additional understanding of the environment and improve the performance of SLAM algorithms. In this thesis, I utilise meta information in a SLAM framework to achieve a robust and consistent representation of the environment and challenge some of the most limiting assumptions in the literature. I exploit structural information associated with geometric primitives, making use of the significant amount of structure present in the real-world scenes where SLAM algorithms are normally deployed. In particular, I exploit the planarity of groups of points and introduce higher-level information associated with orthogonality and parallelism of planes to achieve structural consistency of the returned map. Separately, I also challenge the static-world assumption, which severely limits the deployment of autonomous mobile robotic systems in a wide range of important real-world applications involving highly dynamic and unstructured environments, by utilising the semantic and dynamic information in the scene.
Most existing techniques try to simplify the problem by ignoring dynamics, relying on a pre-collected database of objects' 3D models, imposing motion constraints, or failing to estimate the full SE(3) motions of objects in the scene, which makes them infeasible to deploy in real-life scenarios with unknown and highly dynamic environments. Exploiting the semantic and dynamic information in the environment allows the introduction of a model-free, object-aware SLAM system that achieves robust moving-object tracking, accurate estimation of dynamic objects' full SE(3) motions, and extraction of the velocities of moving objects in the scene, resulting in accurate robot localisation and spatio-temporal map estimation.
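The "full SE(3) motion" of a dynamic object means estimating both its rotation and translation between frames, not just its position. Given two object poses as 4x4 homogeneous transforms, the relative motion and an approximate linear velocity can be sketched as follows; this is an illustrative computation, not the thesis's estimator, and the example poses and frame interval are invented for the demonstration.

```python
import numpy as np

def relative_se3(T_prev, T_curr):
    """Relative SE(3) motion H satisfying T_curr = H @ T_prev.

    H captures both the rotation and the translation of the object
    between the two frames, i.e. its full SE(3) motion.
    """
    return T_curr @ np.linalg.inv(T_prev)

def linear_velocity(T_prev, T_curr, dt):
    """Finite-difference linear velocity of the object frame origin."""
    return (T_curr[:3, 3] - T_prev[:3, 3]) / dt

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example: an object translates 1 m along x between two
# frames captured 0.1 s apart, with no rotation.
T0 = make_pose(np.eye(3), np.array([0.0, 0.0, 0.0]))
T1 = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
H = relative_se3(T0, T1)          # relative SE(3) motion between frames
v = linear_velocity(T0, T1, 0.1)  # -> [10, 0, 0] m/s
```

A model-free system estimates H directly from tracked points on the object, with no prior 3D model of it, which is what distinguishes the approach described above from database-driven object SLAM.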