
    Object detection and tracking aided SLAM in image sequences for dynamic environment.

    Object detection in a dynamic environment is important for accurate tracking and mapping in Simultaneous Localization and Mapping (SLAM). Dynamic feature points from people or vehicles are the main cause of unreliable SLAM performance. Previous researchers have used varied techniques to solve this problem, such as semantic segmentation, optical flow, and moving consistency checks. In this proposal, Object Detection and Tracking SLAM (ODTS), we define a weighted grid-based attention model for a feature tracking module that tracks landmarks and objects. The ODTS system tracks landmarks, such as buildings, in the background and objects, such as vehicles, in the foreground. To optimize performance, a robust self-attention module is integrated. For evaluation, the trajectory of the robot is tracked and the root mean square error (RMSE) is recorded; additionally, the numbers of background and foreground feature points are observed for landmarks and objects. ODTS significantly reduces the tracking-loss problem and produces more accurate maps and feature-point tracks.
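    The trajectory RMSE mentioned above can be sketched as the root mean square of per-pose position errors between an estimated and a ground-truth trajectory; the function name and toy trajectories below are illustrative, not taken from ODTS:

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """Root mean square error between estimated and ground-truth
    trajectory positions (each an (N, 3) array of x, y, z)."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    # Euclidean position error at each pose
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Example: the estimate drifts 0.1 m in x at every pose
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.1, 0.0, 0.0])
print(trajectory_rmse(est, gt))  # ≈ 0.1
```

    Published evaluations typically align the two trajectories first (e.g. with a rigid-body fit) before computing this error; the sketch skips that step.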

    RGB-D Inertial Odometry for a Resource-Restricted Robot in Dynamic Environments

    Current simultaneous localization and mapping (SLAM) algorithms perform well in static environments but easily fail in dynamic environments. Recent works introduce deep learning-based semantic information into SLAM systems to reduce the influence of dynamic objects. However, robust localization in dynamic environments remains challenging for resource-restricted robots. This paper proposes Dynamic-VINS, a real-time RGB-D inertial odometry system for resource-restricted robots in dynamic environments. Three main threads run in parallel: object detection, feature tracking, and state optimization. Dynamic-VINS combines object detection and depth information for dynamic feature recognition and achieves performance comparable to semantic segmentation. It adopts grid-based feature detection and proposes a fast and efficient method to extract high-quality FAST feature points. An IMU is applied to predict motion for feature tracking and the moving consistency check. The proposed method is evaluated on both public datasets and real-world applications and shows competitive localization accuracy and robustness in dynamic environments. To the best of our knowledge, it is the best-performing real-time RGB-D inertial odometry for resource-restricted platforms in dynamic environments to date. The proposed system is open source at: https://github.com/HITSZ-NRSL/Dynamic-VINS.git
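    The grid-based feature detection mentioned above is commonly implemented by keeping only the strongest feature per grid cell so that features stay evenly distributed across the image. A minimal sketch, assuming a fixed cell size (the helper name and values are illustrative, not the Dynamic-VINS implementation):

```python
def grid_select(points, scores, cell=64):
    """Keep only the strongest feature point in each grid cell.
    `points` are (x, y) pixel coordinates, `scores` their responses."""
    best = {}  # (cell_col, cell_row) -> index of strongest point seen
    for i, (x, y) in enumerate(points):
        key = (int(x) // cell, int(y) // cell)
        if key not in best or scores[i] > scores[best[key]]:
            best[key] = i
    return sorted(best.values())

pts = [(10, 10), (20, 15), (200, 40), (70, 70)]
resp = [0.9, 0.5, 0.7, 0.3]
print(grid_select(pts, resp, cell=64))  # → [0, 2, 3]
```

    In the example, the first two points fall in the same cell, so only the higher-response one (index 0) survives.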

    A featureless approach for object detection and tracking in dynamic environments

    One of the challenging problems in mobile robotics is mapping a dynamic environment for navigating robots. To disambiguate multiple moving obstacles, state-of-the-art techniques often solve some form of dynamic SLAM (Simultaneous Localization and Mapping) problem. Unfortunately, their high computational complexity presses the need for simpler and more efficient approaches suitable for real-time embedded systems. In this paper, we present an efficient ROS-based algorithm for constructing dynamic maps, which exploits spatial-temporal locality to detect and track moving objects without relying on prior knowledge of their geometrical features. The two-pronged contribution of this work is as follows: first, an efficient scheme for decoding sensory data into an estimated time-varying object boundary that ultimately decides the object's orientation and trajectory based on the iteratively updated robot field of view (FoV); second, lower time complexity for updating the dynamic environment by exploiting the spatial-temporal locality available in the object motion profile. Unlike existing approaches, the size of each environment snapshot remains constant in the number of moving objects. We validate the efficacy of our algorithm on both V-REP simulations and real-life experiments with a wide array of dynamic environments. We show that the algorithm accurately detects and tracks objects with high probability as long as sensor noise is low and the speed of moving objects remains within acceptable limits.

    Fusion Framework for Moving-Object Classification

    Perceiving the environment is a fundamental task for Advanced Driver Assistance Systems. While simultaneous localization and mapping represents the static part of the environment, detection and tracking of moving objects aims at identifying the dynamic part. Knowing the class of the moving objects surrounding the vehicle is very useful information for correctly reasoning, deciding, and acting according to each class of object, e.g. car, truck, pedestrian, bike, etc. Active and passive sensors provide useful information to classify certain kinds of objects but perform poorly for others. In this paper we present a generic fusion framework based on Dempster-Shafer theory to represent and combine evidence from several sources. We apply the proposed method to the problem of moving-object classification. The method combines information from several lists of moving objects provided by different sensor-based object detectors. The fusion approach includes uncertainty from the reliability of the sensors and their precision in classifying specific types of objects. The proposed approach takes into account the instantaneous information at the current time and combines it with fused information from previous times. Several experiments were conducted in highway and urban scenarios using a vehicle demonstrator from the interactIVe European project. The obtained results show improvements in the combined classification compared with the individual class hypotheses from the individual detector modules.
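    The core of a Dempster-Shafer fusion step is Dempster's rule of combination, which multiplies mass assignments from two sources, drops conflicting (empty-intersection) mass, and renormalizes. A minimal sketch with hypothetical two-class detector outputs (the sensor names and mass values are illustrative, not from the paper):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: masses are dicts mapping
    frozensets of class labels to belief mass; conflicting mass
    (empty intersections) is redistributed by normalization."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CAR, PED = frozenset({"car"}), frozenset({"pedestrian"})
ANY = CAR | PED  # full frame of discernment ("unknown")
m_lidar = {CAR: 0.7, ANY: 0.3}             # hypothetical detector outputs
m_camera = {CAR: 0.5, PED: 0.2, ANY: 0.3}
fused = dempster_combine(m_lidar, m_camera)
print(round(fused[CAR], 3))  # → 0.826
```

    Note how the combined belief in "car" (about 0.83) exceeds either individual source, while the conflicting lidar-car versus camera-pedestrian mass is normalized away.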

    DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments

    Simultaneous Localization and Mapping (SLAM) is considered to be a fundamental capability for intelligent mobile robots. Over the past decades, many impressive SLAM systems have been developed and have achieved good performance under certain circumstances. However, some problems are still not well solved, for example, how to tackle moving objects in dynamic environments and how to make robots truly understand their surroundings and accomplish advanced tasks. In this paper, a robust semantic visual SLAM system towards dynamic environments named DS-SLAM is proposed. Five threads run in parallel in DS-SLAM: tracking, semantic segmentation, local mapping, loop closing, and dense semantic map creation. DS-SLAM combines a semantic segmentation network with a moving consistency check method to reduce the impact of dynamic objects, and thus the localization accuracy is highly improved in dynamic environments. Meanwhile, a dense semantic octo-tree map is produced, which can be employed for high-level tasks. We conduct experiments both on the TUM RGB-D dataset and in a real-world environment. The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2. It is one of the state-of-the-art SLAM systems in high-dynamic environments. Comment: 7 pages, accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018). The code is available at: https://github.com/ivipsourcecode/DS-SLAM
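    A moving consistency check of the kind described above can be based on the epipolar constraint: a static point matched across two frames should lie close to the epipolar line induced by the fundamental matrix, while a moving point will not. A minimal sketch, assuming the fundamental matrix is already estimated (the threshold and the toy matrix below are illustrative, not DS-SLAM's exact values):

```python
import numpy as np

def epipolar_distance(p1, p2, F):
    """Distance of pixel p2 from the epipolar line F @ p1.
    Static points lie near the line; large distances suggest motion."""
    p1 = np.append(np.asarray(p1, float), 1.0)  # homogeneous coords
    p2 = np.append(np.asarray(p2, float), 1.0)
    line = F @ p1                               # epipolar line (a, b, c)
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def moving_mask(matches, F, thresh=2.0):
    """Flag matches whose epipolar distance exceeds the threshold."""
    return [epipolar_distance(p1, p2, F) > thresh for p1, p2 in matches]

# Toy F for a pure horizontal translation: epipolar lines are y2 = y1
F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
matches = [((10, 20), (30, 20)),   # stays on its epipolar line: static
           ((10, 20), (30, 25))]   # 5 px off the line: likely moving
print(moving_mask(matches, F))     # → [False, True]
```

    In a full system the per-point decision is combined with semantic segmentation, so that whole dynamic objects (e.g. people) can be masked out rather than individual outlier points.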

    A Unified Framework for Mutual Improvement of SLAM and Semantic Segmentation

    This paper presents a novel framework for simultaneously implementing localization and segmentation, two of the most important vision-based tasks for robotics. While their goals and techniques were previously considered to be different, we show that by making use of the intermediate results of the two modules, their performance can be enhanced at the same time. Our framework is able to handle both the instantaneous motion and long-term changes of instances in localization with the help of the segmentation result, which in turn benefits from the refined 3D pose information. We conduct experiments on various datasets and show that our framework effectively improves the precision and robustness of the two tasks and outperforms existing localization and segmentation algorithms. Comment: 7 pages, 5 figures. This work has been accepted by ICRA 2019. The demo video can be found at https://youtu.be/Bkt53dAehj

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?