4 research outputs found

    Towards dense moving object segmentation based robust dense RGB-D SLAM in dynamic scenarios

    Full text link
    © 2014 IEEE. Building on recent achievements in computer vision and RGB-D SLAM, this paper puts forward a practical method for dense moving object segmentation and, based on it, a new framework for robust dense RGB-D SLAM in challenging dynamic scenarios. As the state-of-the-art approach in RGB-D SLAM, dense SLAM is very robust to motion blur and featureless regions, which most sparse feature-based methods cannot handle; however, it is very susceptible to dynamic elements in the scene. To enhance its robustness in dynamic scenarios, we propose to combine dense moving object segmentation with dense SLAM. Since the object segmentation results from the latest available computer-vision algorithm are not satisfactory, we propose several effective measures to improve upon them so that better results can be achieved. After dense segmentation of the dynamic objects, dense SLAM can be employed to estimate the camera poses. Quantitative results on a challenging public benchmark dataset demonstrate the effectiveness of our method.
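    The core idea described in this abstract, masking out pixels labelled as dynamic before dense camera-pose estimation, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the toy 4x4 frames are invented for illustration, and a real dense SLAM system would minimise these residuals over a warped image under an SE(3) pose, not a raw frame difference.

```python
import numpy as np

def masked_photometric_residual(img_ref, img_cur, static_mask):
    """Photometric residuals for dense alignment, with pixels flagged as
    dynamic (static_mask == 0) excluded from the error term."""
    diff = img_cur.astype(np.float64) - img_ref.astype(np.float64)
    return diff[static_mask.astype(bool)]

# Toy frames: a 4x4 scene where one pixel changes because of a moving object.
ref = np.zeros((4, 4))
cur = np.zeros((4, 4))
cur[0, 0] = 50.0                     # large change caused by a moving object
mask = np.ones((4, 4), dtype=np.uint8)
mask[0, 0] = 0                       # segmentation marks that pixel dynamic

r_all = (cur - ref).ravel()
r_masked = masked_photometric_residual(ref, cur, mask)
print(np.abs(r_all).sum())      # dynamic pixel dominates the unmasked error
print(np.abs(r_masked).sum())   # masked residual is unaffected by the object
```

    With the dynamic pixel masked out, the residual that drives pose estimation is no longer corrupted by the moving object, which is exactly why segmentation quality matters so much in this framework.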

    Motion segmentation based robust RGB-D SLAM

    Full text link
    University of Technology, Sydney. Faculty of Engineering and Information Technology. While research on simultaneous localisation and mapping (SLAM) in static environments can be regarded as a significant success thanks to intensive work over the last several decades, SLAM, and especially vision-based SLAM, in dynamic scenarios is still at an early stage. Although it may seem like just one step further, dynamic elements bring many unanticipated challenges, including motion detection, segmentation, tracking, and 3D reconstruction of both the static environment and the moving objects, in addition to the handling of motion blur. Based solely on RGB-D data, with no prior knowledge available, this work centres on proposing new practical solution frameworks for conducting SLAM in dynamic environments, with efficient and robust motion segmentation methods serving as the basis. After a detailed review of related achievements in SLAM for both static and dynamic environments, and an analysis of the unaddressed challenges, four motion segmentation methods are first proposed: two 2-view sparse feature-based motion segmentation algorithms, a 2-view semi-dense motion segmentation algorithm, and an extended n-view dense moving object segmentation algorithm; their advantages, disadvantages, and feasibility for different practical SLAM application scenarios are evaluated. Building on the proposed motion segmentation methods, two solution frameworks for performing SLAM in dynamic scenarios are then put forward: the first integrates our sparse feature-based motion segmentation techniques with an existing pose-graph SLAM framework; the second is built upon dense moving object segmentation and tailored for dense SLAM. Simulation and experimental results demonstrate the effectiveness of our approaches.
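    The 2-view sparse feature-based motion segmentation mentioned here can be sketched in a toy form: feature matches are clustered by motion consensus, with matches agreeing with the dominant (camera-induced) motion labelled static and the rest assigned to moving objects. This sketch fits only a 2D translation via RANSAC for simplicity; the thesis's actual algorithms are not specified at this level, and a real system would fit an epipolar model instead. All names and data below are invented for illustration.

```python
import numpy as np

def segment_motions(pts1, pts2, thresh=1.0, iters=100, seed=0):
    """Toy 2-view sparse motion segmentation: RANSAC fits the dominant 2D
    translation between matched feature points; matches that disagree with
    it are labelled as belonging to independently moving objects."""
    rng = np.random.default_rng(seed)
    flow = pts2 - pts1
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        t = flow[rng.integers(len(flow))]           # hypothesis from one match
        inliers = np.linalg.norm(flow - t, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                             # True = static background

# 8 points on the static background, 3 points on an independent object.
static = np.array([[float(i), 0.0] for i in range(8)])
obj = np.array([[0.0, 5.0], [1.0, 5.0], [2.0, 5.0]])
pts1 = np.vstack([static, obj])
pts2 = pts1 + np.array([2.0, 0.0])                  # camera-induced shift
pts2[8:] += np.array([0.0, 4.0])                    # extra object motion
labels = segment_motions(pts1, pts2)
print(labels)
```

    Here the dominant-motion inlier set recovers the static background, and the outliers form the moving-object cluster that a pose-graph SLAM back end can then ignore.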

    DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments

    Full text link
    Simultaneous Localization and Mapping (SLAM) is considered a fundamental capability for intelligent mobile robots. Over the past decades, many impressive SLAM systems have been developed and have achieved good performance under certain circumstances. However, some problems remain unsolved, for example, how to handle moving objects in dynamic environments and how to make robots truly understand their surroundings and accomplish advanced tasks. In this paper, a robust semantic visual SLAM system for dynamic environments, named DS-SLAM, is proposed. Five threads run in parallel in DS-SLAM: tracking, semantic segmentation, local mapping, loop closing, and dense semantic map creation. DS-SLAM combines a semantic segmentation network with a moving consistency check to reduce the impact of dynamic objects, and thus localization accuracy is greatly improved in dynamic environments. Meanwhile, a dense semantic octree map is produced, which can be employed for high-level tasks. We conduct experiments both on the TUM RGB-D dataset and in a real-world environment. The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2, making it one of the state-of-the-art SLAM systems in highly dynamic environments. The code is available at https://github.com/ivipsourcecode/DS-SLAM
    Comment: 7 pages, accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018).
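    The two-stage filtering DS-SLAM describes, a semantic prior followed by a moving consistency check, can be sketched minimally. This is not the DS-SLAM code; the keypoints, labels, and flow values below are invented, and DS-SLAM's actual consistency check uses epipolar geometry rather than the median-flow test used here for brevity.

```python
import numpy as np

# Hypothetical keypoints: pixel coords, the semantic label of the pixel each
# lands on (from a segmentation network), and their measured optical flow.
kps = np.array([[10, 10], [20, 12], [30, 40], [50, 45]])
labels = np.array(["wall", "floor", "person", "wall"])
flow = np.array([[1.0, 0.0], [1.1, 0.0], [6.0, 3.0], [0.9, 0.1]])

# Stage 1: drop keypoints on a-priori dynamic semantic classes.
DYNAMIC = {"person"}
keep = np.array([lab not in DYNAMIC for lab in labels])

# Stage 2: moving consistency check on the survivors -- discard keypoints
# whose motion deviates strongly from the dominant (camera-induced) motion.
med = np.median(flow[keep], axis=0)
keep &= np.linalg.norm(flow - med, axis=1) < 1.0

print(kps[keep])   # keypoints considered safe for pose tracking
```

    Only the keypoints that pass both filters feed the tracking thread, which is what insulates pose estimation from the moving person.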

    Visual SLAM based on dynamic object removal

    Get PDF
    Visual simultaneous localization and mapping (SLAM) is the core of intelligent robot navigation systems. Many traditional SLAM algorithms assume that the scene is static; when a dynamic object appears in the environment, the accuracy of visual SLAM can degrade due to interference from the dynamic features of moving objects. This strong assumption limits SLAM applications for service robots and driverless cars in real dynamic environments. In this paper, a dynamic object removal algorithm that combines object recognition and optical flow techniques is proposed within a visual SLAM framework for dynamic scenes. The experimental results show that the new method can detect moving objects effectively and improves SLAM performance compared to state-of-the-art methods.
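    The combination this abstract describes, object recognition plus optical flow, can be sketched as follows: a feature is removed only if it lies inside a detected-object bounding box AND its flow disagrees with the dominant scene motion, so detected-but-stationary objects still contribute features. This is a toy illustration under invented names and data, not the paper's algorithm; the deviation test stands in for whatever motion model the authors actually use.

```python
import numpy as np

def filter_dynamic_features(pts, flow, boxes, thresh=1.5):
    """Remove features that are both inside a detected-object box and
    inconsistent with the dominant (median) optical flow."""
    med = np.median(flow, axis=0)
    moving = np.linalg.norm(flow - med, axis=1) > thresh
    dyn = np.zeros(len(pts), dtype=bool)
    for (x0, y0, x1, y1) in boxes:
        inside = ((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y1))
        dyn |= inside & moving
    return pts[~dyn]

# Three background features plus two on a walking pedestrian.
pts = np.array([[5, 5], [50, 10], [80, 60], [42, 30], [45, 32]])
flow = np.array([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0],
                 [2.0, 5.0], [2.0, 5.0]])          # pedestrian moves extra
boxes = [(40, 25, 55, 40)]                          # detector box: "person"
static_pts = filter_dynamic_features(pts, flow, boxes)
print(len(static_pts))   # only the background features survive
```

    Requiring both cues avoids the failure mode of discarding every recognized object: a parked car, for example, would pass the flow test and keep its features.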