
    A survey on real-time 3D scene reconstruction with SLAM methods in embedded systems

    3D reconstruction with simultaneous localization and mapping (SLAM) is an important topic for systems such as drones, service robots and mobile AR/VR devices. Compared to a point cloud representation, 3D reconstruction based on meshes and voxels is particularly useful for high-level functions such as obstacle avoidance or interaction with the physical environment. This article reviews the implementation of visual 3D scene reconstruction pipelines on resource-constrained hardware platforms. Real-time performance, memory management and low power consumption are critical for embedded systems. A conventional SLAM pipeline from sensors to 3D reconstruction is described, including the potential use of deep learning. The implementation of advanced functions with limited resources is detailed. Recent systems propose embedded implementations of 3D reconstruction methods at different granularities. The trade-off between required accuracy and resource consumption for real-time localization and reconstruction is one of the open research questions identified and discussed in this paper.
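    To make the memory trade-off discussed in the survey concrete, the back-of-the-envelope sketch below estimates the footprint of a dense voxel map on an embedded platform; the volume size, voxel resolution and per-voxel payload are illustrative assumptions rather than figures from the paper.

```python
# Back-of-the-envelope memory footprint for a dense voxel map, to make the embedded
# trade-off concrete (all numbers are illustrative assumptions, not from the survey).
side_m = 10.0        # mapped volume: 10 m x 10 m x 10 m
voxel_m = 0.05       # 5 cm voxels
bytes_per_voxel = 8  # e.g. 4-byte TSDF value + 4-byte fusion weight

voxels_per_axis = int(side_m / voxel_m)            # 200
total_voxels = voxels_per_axis ** 3                # 8,000,000
print(total_voxels * bytes_per_voxel / 2**20)      # ~61 MiB for one dense block
```

    Even this modest scene already costs tens of megabytes per dense block, which is why embedded systems resort to coarser voxels, sparse block hashing or mesh simplification.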

    HeteroFusion: Dense scene reconstruction integrating multi-sensors

    We present a real-time approach that integrates multiple sensors for dense reconstruction of 3D indoor scenes. Existing algorithms are mainly based on a single RGBD camera and require continuous scanning of areas with sufficient geometric detail; failing to do so can lead to tracking loss due to the lack of frame registration hints. Inspired by the fact that utilizing multiple sensors can combine their strengths to form a more robust and accurate implementation, we incorporate multiple types of sensors commonly equipped on modern robots, including a 2D range sensor, an IMU, and wheel encoders, to reinforce the tracking process and obtain better mesh construction. Specifically, we develop a feasible 2D TSDF volume representation for integrating and ray-casting laser frames, leading to a unified cost function in the pose estimation stage. In addition, to validate the estimated poses in the loop-closure optimization process, we train a classifier on features extracted from the heterogeneous sensors and from the registration progress. To evaluate our method on challenging robotic scanning scenarios, we assembled a scanning platform for acquiring real-world scans. We further simulated synthetic scans based on high-fidelity synthetic scenes for quantitative evaluation. Extensive experimental results demonstrate that our system is capable of robustly acquiring dense reconstructions and outperforms state-of-the-art systems.
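    The 2D TSDF representation mentioned above can be pictured with a minimal sketch: each laser return carves a truncated signed-distance band into a planar grid. The toy code below assumes a fixed grid centred on the origin and made-up cell sizes; it is not the HeteroFusion implementation.

```python
import numpy as np

# Illustrative 2D TSDF update for a planar range scan (names and parameters are assumed).
CELL = 0.05          # metres per cell
TRUNC = 0.2          # truncation distance in metres
SIZE = 400           # cells per axis; grid is centred on the world origin

tsdf2d = np.ones((SIZE, SIZE), dtype=np.float32)
wgt2d = np.zeros((SIZE, SIZE), dtype=np.float32)

def world_to_cell(p):
    return np.floor(p / CELL).astype(int) + SIZE // 2

def integrate_scan(sensor_xy, angles, ranges):
    """Fuse one 2D laser scan taken from sensor_xy (already in world coordinates)."""
    for theta, r in zip(angles, ranges):
        if not np.isfinite(r):
            continue
        ray = np.array([np.cos(theta), np.sin(theta)])
        # March through the truncation band around the measured hit point.
        for d in np.arange(max(r - TRUNC, 0.0), r + TRUNC, CELL):
            i, j = world_to_cell(sensor_xy + d * ray)
            if not (0 <= i < SIZE and 0 <= j < SIZE):
                continue
            sdf = np.clip((r - d) / TRUNC, -1.0, 1.0)   # positive in front of the hit
            w = wgt2d[i, j]
            tsdf2d[i, j] = (tsdf2d[i, j] * w + sdf) / (w + 1.0)
            wgt2d[i, j] = w + 1.0

integrate_scan(np.zeros(2), angles=np.linspace(-np.pi / 2, np.pi / 2, 181),
               ranges=np.full(181, 3.0))
```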

    A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles

    Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployment of such systems in many fields. However, navigation in mine scenes remains challenging due to satellite signal dropouts, degraded perception, and observation degeneracy. To address this problem, we propose a LiDAR-inertial odometry method utilizing both Kalman filtering and graph optimization. The front-end consists of multiple parallel LiDAR-inertial odometries, where laser points, IMU, and wheel odometer information are tightly fused in an error-state Kalman filter. Instead of the commonly used feature points, we employ surface elements (surfels) for registration. The back-end constructs a pose graph and jointly optimizes the pose estimation results from the inertial, LiDAR odometry, and global navigation satellite system (GNSS) measurements. Since the vehicle operates inside the tunnel for long periods, the accumulated drift may not be fully corrected by the GNSS measurements alone; we therefore leverage a loop-closure-based re-initialization process to achieve full alignment. In addition, the system's robustness is improved through handling of data loss, stream consistency, and estimation error. The experimental results show that our system tolerates long-period degeneracy well thanks to the cooperation of different LiDARs and surfel registration, achieving meter-level accuracy even over tens of minutes of operation during GNSS dropouts.
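    As a rough illustration of dropout-tolerant fusion, the toy filter below combines odometry increments with intermittent position fixes and simply keeps dead-reckoning when no fix arrives; it is a plain linear Kalman filter with assumed noise values, not the paper's tightly coupled error-state formulation.

```python
import numpy as np

# Minimal constant-position Kalman filter fusing odometry increments with intermittent
# GNSS fixes; a toy stand-in for the error-state filter described above (all matrices
# and noise values are assumptions for illustration).
x = np.zeros(2)              # planar position estimate
P = np.eye(2) * 1e-3         # estimate covariance
Q = np.eye(2) * 0.02**2      # odometry (process) noise per step
R = np.eye(2) * 0.5**2       # GNSS measurement noise

def predict(odom_delta):
    global x, P
    x = x + odom_delta       # dead-reckoning with the wheel/LiDAR odometry increment
    P = P + Q                # uncertainty grows while no absolute fix is available

def update(gnss_fix):
    global x, P
    if gnss_fix is None:     # dropout: keep integrating odometry only
        return
    S = P + R
    K = P @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (gnss_fix - x)
    P = (np.eye(2) - K) @ P

for step in range(100):
    predict(np.array([0.1, 0.0]))
    update(np.array([0.1 * (step + 1), 0.0]) if step % 10 == 0 else None)
```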

    C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach

    In many applications, maintaining a consistent dense map of the environment is key to enabling robotic platforms to perform higher-level decision making. Several works have addressed the challenge of creating precise dense 3D maps from visual sensors providing depth information. However, during operation over longer missions, reconstructions can easily become inconsistent due to accumulated camera tracking error and delayed loop closure. Without explicitly addressing the problem of map consistency, recovery from such distortions tends to be difficult. We present a novel system for dense 3D mapping which addresses the challenge of building consistent maps while dealing with scalability. Central to our approach is the representation of the environment as a collection of overlapping TSDF subvolumes. These subvolumes are localized through feature-based camera tracking and bundle adjustment. Our main contribution is a pipeline for identifying stable regions in the map and fusing the contributing subvolumes. This approach allows us to reduce map growth while still maintaining consistency. We demonstrate the proposed system on a publicly available dataset and simulation engine, and show the efficacy of the proposed approach for building consistent and scalable maps. Finally, we demonstrate our approach running in real time on board a lightweight MAV. Comment: 8 pages, 5 figures, conference.
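    The subvolume idea can be sketched as a small data structure: each block stores its own TSDF and a pose in the world, and queries average the overlapping blocks. Field names, block sizes and the query rule below are assumptions for illustration, not the C-blox code.

```python
import numpy as np
from dataclasses import dataclass, field

# Sketch of a map kept as a collection of independently posed TSDF subvolumes, in the
# spirit of the approach above; field names, sizes and the query rule are assumptions.
@dataclass
class Subvolume:
    pose: np.ndarray                              # 4x4 world-from-subvolume transform
    tsdf: np.ndarray = field(default_factory=lambda: np.ones((32, 32, 32), np.float32))
    weight: np.ndarray = field(default_factory=lambda: np.zeros((32, 32, 32), np.float32))
    voxel_size: float = 0.1

    def lookup(self, p_world):
        """Return (tsdf, weight) at a world point, or None if outside this block."""
        p_local = np.linalg.inv(self.pose) @ np.append(p_world, 1.0)
        idx = np.floor(p_local[:3] / self.voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= self.tsdf.shape[0]):
            return None
        return self.tsdf[tuple(idx)], self.weight[tuple(idx)]

def fused_tsdf(subvolumes, p_world):
    """Weight-average the distance stored by every overlapping subvolume."""
    num, den = 0.0, 0.0
    for sv in subvolumes:
        hit = sv.lookup(p_world)
        if hit is not None and hit[1] > 0:
            num += hit[0] * hit[1]
            den += hit[1]
    return num / den if den > 0 else None

submaps = [Subvolume(pose=np.eye(4))]
print(fused_tsdf(submaps, np.array([0.5, 0.5, 0.5])))   # None until weights are fused in
```

    Because each block carries its own pose, a loop-closure or bundle-adjustment correction only needs to update the 4x4 transforms, leaving the voxel data untouched, which is what keeps such a representation consistent at scale.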

    The simultaneous localization and mapping (SLAM): An overview

    Positioning is a need for many applications related to mapping and navigation, in both civilian and military domains. Significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, etc. have made it possible to solve the positioning problem efficiently and instantaneously, and this progress has in turn empowered applications and advances in autonomous navigation. One of the most interesting positioning techniques developed is what robotics calls Simultaneous Localization and Mapping (SLAM). Solutions to the SLAM problem have improved rapidly over recent decades, whether using active sensors such as RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR), or passive sensors such as cameras. Positioning and mapping is one of the main tasks of Geomatics engineers, and it is therefore important for them to understand SLAM, which is not easy given the huge body of documentation and algorithms available and the variety of SLAM solutions in terms of mathematical models, complexity, sensors used, and types of application. In this paper, a clear and simplified explanation of SLAM is introduced from a Geomatics viewpoint, avoiding the complicated algorithmic details behind the presented techniques. In this way, a general overview of SLAM is presented, showing the relationship between its different components and stages, such as the front-end and back-end cores and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization for both visual and LiDAR SLAM, and summarize the contribution of deep learning to the SLAM problem. Finally, we present examples of existing practical applications of SLAM.
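    For reference, the pose-graph optimization mentioned above minimizes a sum of weighted relative-pose residuals; the notation below follows common usage rather than any formulation specific to this overview.

```latex
\hat{\mathbf{x}}_{1:T} \;=\; \operatorname*{arg\,min}_{\mathbf{x}_{1:T}}
  \sum_{(i,j)\in\mathcal{E}}
  \mathbf{e}_{ij}(\mathbf{x}_i,\mathbf{x}_j)^{\top}\,
  \boldsymbol{\Omega}_{ij}\,
  \mathbf{e}_{ij}(\mathbf{x}_i,\mathbf{x}_j)
```

    Here x_i are the robot poses, e_ij is the difference between the predicted and measured relative pose for edge (i, j), and Omega_ij is the information matrix of that measurement. Filtering approaches (EKF or particle filters) instead marginalize past poses recursively and keep only the current state, which is cheaper but makes early linearization errors harder to undo.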

    Visual Perception For Robotic Spatial Understanding

    Humans understand the world through vision without much effort. We perceive the structure, objects, and people in the environment and pay little direct attention to most of it, until it becomes useful. Intelligent systems, especially mobile robots, have no such biologically engineered vision mechanism to take for granted. In contrast, we must devise algorithmic methods of taking raw sensor data and converting it to something useful very quickly. Vision is such a necessary part of building a robot or any intelligent system that is meant to interact with the world that it is somewhat surprising we don't have off-the-shelf libraries for this capability. Why is this? The simple answer is that the problem is extremely difficult. There has been progress, but the current state of the art is impressive and depressing at the same time. We now have neural networks that can recognize many objects in 2D images, in some cases performing better than a human. Some algorithms can also provide bounding boxes or pixel-level masks to localize the object. We have visual odometry and mapping algorithms that can build reasonably detailed maps over long distances with the right hardware and conditions. On the other hand, we have robots with many sensors and no efficient way to compute their relative extrinsic poses for integrating the data in a single frame. The same networks that produce good object segmentations and labels in a controlled benchmark still miss obvious objects in the real world and have no mechanism for learning on the fly while the robot is exploring. Finally, while we can detect pose for very specific objects, we don't yet have a mechanism that detects pose that generalizes well over categories or that can describe new objects efficiently. We contribute algorithms in four of the areas mentioned above. First, we describe a practical and effective system for calibrating many sensors on a robot with up to 3 different modalities. Second, we present our approach to visual odometry and mapping that exploits the unique capabilities of RGB-D sensors to efficiently build detailed representations of an environment. Third, we describe a 3-D over-segmentation technique that utilizes the models and ego-motion output in the previous step to generate temporally consistent segmentations with camera motion. Finally, we develop a synthesized dataset of chair objects with part labels and investigate the influence of parts on RGB-D based object pose recognition using a novel network architecture we call PartNet.
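    The extrinsic-calibration problem referred to above ultimately boils down to chaining rigid-body transforms so that every sensor's data can be expressed in one common frame. The snippet below shows that basic operation with made-up mounting poses; it is not the calibration method contributed by the thesis.

```python
import numpy as np

# Composing extrinsic transforms to express one sensor's data in a common base frame,
# the basic operation behind multi-sensor calibration; frame names and values are
# illustrative assumptions.
def transform(rotation_rpy, translation):
    """Build a 4x4 homogeneous transform from roll/pitch/yaw (radians) and translation."""
    r, p, y = rotation_rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation
    return T

T_base_lidar = transform((0.0, 0.0, 0.0), (0.30, 0.00, 0.80))     # lidar mounted above base
T_lidar_camera = transform((0.0, 0.1, 0.0), (0.05, 0.02, -0.10))  # camera bolted to the lidar

# A point seen by the camera, expressed in the robot base frame by chaining extrinsics.
p_camera = np.array([1.0, 0.2, 3.0, 1.0])
p_base = T_base_lidar @ T_lidar_camera @ p_camera
print(p_base[:3])
```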

    Robust dense visual SLAM using sensor fusion and motion segmentation

    Visual simultaneous localisation and mapping (SLAM) is an important technique for enabling mobile robots to navigate autonomously within their environments. Using cameras, robots reconstruct a representation of their environment and simultaneously localise themselves within it. A dense visual SLAM system produces a high-resolution and detailed reconstruction of the environment which can be used for obstacle avoidance or semantic reasoning. State-of-the-art dense visual SLAM systems demonstrate robust performance and impressive accuracy in ideal conditions. However, these techniques rely on assumptions which limit the extent to which they can be deployed in real applications: fundamentally, they require constant scene illumination, smooth camera motion and the absence of moving objects in the scene. Overcoming these requirements is not trivial, and significant effort is needed to make dense visual SLAM approaches more robust to real-world conditions. The objective of this thesis is to develop dense visual SLAM systems which are more robust to real-world, visually challenging conditions. For this, we leverage sensor fusion and motion segmentation for situations where camera data alone is unsuitable. The first contribution is a visual SLAM system for the NASA Valkyrie humanoid robot which is robust to the robot's operation. It is based on a sensor fusion approach which combines visual SLAM and leg odometry to demonstrate increased robustness to illumination changes and fast camera motion. Second, we research methods for robust visual odometry in the presence of moving objects. We propose a formulation for joint visual odometry and motion segmentation that demonstrates increased robustness in scenes with moving objects compared to state-of-the-art approaches. We then extend this method using inertial information from a gyroscope to compare the contributions of motion segmentation and motion-prior integration for robustness to scene dynamics. As part of this study we provide a dataset recorded in scenes with different numbers of moving objects. In conclusion, we find that both motion segmentation and motion-prior integration are necessary for achieving significantly better results in real-world conditions. While motion priors increase robustness, motion segmentation increases the accuracy of the reconstruction results through filtering of moving objects. Edinburgh Centre for Robotics; Engineering and Physical Sciences Research Council (EPSRC).
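    One way to picture how motion segmentation helps dense tracking is to gate the per-pixel residuals with the segmentation mask so that dynamic pixels do not pull on the camera-pose estimate. The toy example below uses synthetic residuals and a hand-placed mask; it is only a sketch of the idea, not the thesis formulation.

```python
import numpy as np

# Toy illustration of how a motion-segmentation mask can gate the residuals used for
# camera tracking: pixels labelled dynamic contribute nothing to the alignment cost.
# The residuals, mask and robust weight are synthetic assumptions, not the thesis code.
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.05, size=(480, 640))     # photometric errors per pixel
residuals[100:200, 300:400] += 0.8                      # a moving object violates the model
dynamic_mask = np.zeros((480, 640), dtype=bool)
dynamic_mask[100:200, 300:400] = True                   # output of motion segmentation

def huber_weight(r, delta=0.1):
    """Robust per-pixel weight: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

weights = huber_weight(residuals) * (~dynamic_mask)     # hard-zero the dynamic region
cost = np.sum(weights * residuals**2)
print(cost, np.sum(residuals**2))   # gated cost is far below the naive cost
```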

    Dense Visual Simultaneous Localisation and Mapping in Collaborative and Outdoor Scenarios

    Dense visual simultaneous localisation and mapping (SLAM) systems can produce 3D reconstructions that are digital facsimiles of the physical space they describe. Systems that can produce dense maps with this level of fidelity in real time provide foundational spatial reasoning capabilities for many downstream tasks in autonomous robotics. Over the past 15 years, mapping small-scale indoor environments, such as desks and buildings, with a single slow-moving, hand-held sensor has been one of the central focuses of dense visual SLAM research. However, most dense visual SLAM systems exhibit a number of limitations which mean they cannot be directly applied in collaborative or outdoor settings. The contribution of this thesis is to address these limitations through the development of new systems and algorithms for collaborative dense mapping, efficient dense alternation, and outdoor operation with fast camera motion and wide field of view (FOV) cameras. We use ElasticFusion, a state-of-the-art dense SLAM system, as our starting point, with each of these contributions implemented as a novel extension to the system. We first present a collaborative dense SLAM system that allows a number of cameras starting with unknown initial relative positions to maintain local maps with the original ElasticFusion algorithm. Visual place recognition across local maps results in constraints that allow maps to be aligned into a common global reference frame, facilitating collaborative mapping and tracking of multiple cameras within a shared map. Within dense-alternation-based SLAM systems, the standard approach is to fuse every frame into the dense model without considering whether the information contained within the frame is already captured by the dense map and is therefore redundant. As the number of cameras or the scale of the map increases, this approach becomes inefficient. In our second contribution, we address this inefficiency by introducing a novel information-theoretic approach to keyframe selection that allows the system to avoid processing redundant information. We implement the procedure within ElasticFusion, demonstrating a marked reduction in the number of frames required by the system to estimate an accurate, denoised surface reconstruction. Before dense SLAM techniques can be applied in outdoor scenarios we must first address their reliance on active depth cameras and their lack of suitability to fast camera motion. In our third contribution we present an outdoor dense SLAM system. The system overcomes the need for an active sensor by employing neural network-based depth inference to predict the geometry of the scene as it appears in each image. To address the issue of camera tracking during fast motion we employ a hybrid architecture, combining elements of both dense and sparse SLAM systems to perform camera tracking and to achieve globally consistent dense mapping. Automotive applications present a particularly important setting for dense visual SLAM systems. Such applications are characterised by their use of wide-FOV cameras and are therefore not accurately modelled by the standard pinhole camera model. The fourth contribution of this thesis is to extend the above hybrid sparse-dense monocular SLAM system to cater for large-FOV fisheye imagery. This is achieved by reformulating the mapping pipeline in terms of the Kannala-Brandt fisheye camera model.
To estimate depth, we introduce a new version of the PackNet depth estimation neural network (Guizilini et al., 2020) adapted for fisheye inputs. To demonstrate the effectiveness of our contributions, we present experimental results computed by processing the synthetic ICL-NUIM dataset of Handa et al. (2014) as well as the real-world TUM RGB-D dataset of Sturm et al. (2012). For outdoor SLAM we show the results of our system processing the autonomous driving KITTI and KITTI-360 datasets of Geiger et al. (2012a) and Liao et al. (2021), respectively.
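    For readers unfamiliar with the Kannala-Brandt model mentioned above, the sketch below projects a 3D point through its odd-order polynomial distortion of the incidence angle; the intrinsic and distortion values are made-up examples.

```python
import numpy as np

# Kannala-Brandt fisheye projection of a 3D point, as used by wide-FOV automotive
# cameras; the intrinsics below are made-up values for illustration only.
def project_kannala_brandt(p_cam, fx, fy, cx, cy, k1, k2, k3, k4):
    x, y, z = p_cam
    r = np.hypot(x, y)
    theta = np.arctan2(r, z)                      # angle from the optical axis
    # Odd-order polynomial distortion of the incidence angle.
    d = theta + k1 * theta**3 + k2 * theta**5 + k3 * theta**7 + k4 * theta**9
    if r < 1e-9:                                  # point on the optical axis
        return np.array([cx, cy])
    u = fx * d * (x / r) + cx
    v = fy * d * (y / r) + cy
    return np.array([u, v])

print(project_kannala_brandt(np.array([0.5, -0.2, 1.0]),
                             fx=380.0, fy=380.0, cx=640.0, cy=480.0,
                             k1=0.05, k2=-0.01, k3=0.002, k4=-0.0003))
```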