
    Indoor assistance for visually impaired people using a RGB-D camera

    In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection, and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps, or a residual class that covers isolated objects and bumpy parts of the floor. Third, in order to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.
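
    The abstract gives no implementation details; as a rough illustration of the kind of obstacle classification it describes, the sketch below back-projects a depth image into 3D points and labels them as floor, wall, or generic obstacle by height above a known ground plane. All function names, intrinsics, and thresholds here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def classify(points, n, d, step_h=0.05, wall_h=1.0):
    """Label points by signed height n.p + d above the ground plane.

    Below step_h counts as floor, above wall_h as a wall candidate,
    everything in between as a generic obstacle (steps, clutter).
    Thresholds are hypothetical, not the paper's values.
    """
    height = points @ n + d
    labels = np.full(len(points), "obstacle", dtype=object)
    labels[height < step_h] = "floor"
    labels[height > wall_h] = "wall"
    return labels
```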

    Towards 6D MCL for LiDARs in 3D TSDF Maps on Embedded Systems with GPUs

    Monte Carlo Localization is a widely used approach in the field of mobile robotics. While this problem has been well studied in the 2D case, global localization in 3D maps with six degrees of freedom has so far been too computationally demanding; hence, no mobile robot system has yet been presented in the literature that is able to solve it in real time. The computationally most intensive step is the evaluation of the sensor model, but it also offers high parallelization potential. This work investigates the massive parallelization of the evaluation of particles in truncated signed distance fields for three-dimensional laser scanners on embedded GPUs. The GPU implementation is 30 times as fast and more than 50 times more energy efficient than a CPU implementation.
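
    As a rough CPU-side sketch of the sensor-model evaluation the paper offloads to the GPU: each particle's 6-DoF pose transforms the scan into the map frame, the TSDF is sampled at every endpoint, and the particle is weighted by how close those signed distances are to zero (the surface). The Gaussian weighting, nearest-voxel lookup, and all names are assumptions for illustration.

```python
import numpy as np

def evaluate_particles(particles, scan, tsdf, voxel_size, sigma=0.1):
    """Weight 6-DoF particle poses against a 3D TSDF volume.

    particles: (N, 4, 4) homogeneous pose matrices (map <- sensor).
    scan:      (M, 3) laser endpoints in the sensor frame.
    tsdf:      3D array of truncated signed distances per voxel.
    A correct pose places every endpoint on a surface, i.e. where
    the TSDF is close to zero.
    """
    hom = np.hstack([scan, np.ones((len(scan), 1))])   # (M, 4)
    weights = np.empty(len(particles))
    for i, pose in enumerate(particles):               # evaluated in parallel on the GPU
        pts = (pose @ hom.T).T[:, :3]
        idx = np.clip((pts / voxel_size).astype(int), 0,
                      np.array(tsdf.shape) - 1)        # nearest-voxel lookup
        d = tsdf[idx[:, 0], idx[:, 1], idx[:, 2]]
        weights[i] = np.exp(-0.5 * np.mean(d ** 2) / sigma ** 2)
    return weights / weights.sum()
```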

    Advances in the Bayesian Occupancy Filter framework using robust motion detection technique for dynamic environment monitoring

    The Bayesian Occupancy Filter provides a framework for grid-based monitoring of dynamic environments. It allows the estimation of dynamic grids containing both occupancy and velocity information; clustering such grids then provides detection of the objects in the observed scene. In this paper we present recent improvements to this framework. First, multiple layers from a laser scanner are fused using an opinion pool to deal with conflicting information. Then a fast motion detection technique based on laser data and odometer/IMU information is used to separate the dynamic environment from the static one. Instead of performing a complete SLAM (Simultaneous Localization and Mapping) solution, this technique transfers occupancy information between consecutive data grids; the objective is to avoid the false positives (static objects) produced by other DATMO approaches. Finally, we show the integration with the Bayesian Occupancy Filter (BOF) and with the subsequent tracking module, the Fast Clustering-Tracking Algorithm (FCTA), and in particular the improvements in tracking results achieved after this integration for an intelligent vehicle application.
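
    A minimal sketch of the grid-to-grid occupancy-transfer idea, greatly simplified under assumed conditions (pure-translation ego-motion, hypothetical thresholds): the previous grid is shifted by the odometry-derived cell offset, and a cell that is occupied now but was observed free before is flagged as dynamic.

```python
import numpy as np

def detect_motion(prev_grid, curr_grid, shift_cells, occ=0.7, free=0.3):
    """Separate dynamic from static occupancy between consecutive grids.

    prev_grid, curr_grid: 2D occupancy-probability grids.
    shift_cells: (dy, dx) integer cell offset compensating ego-motion,
                 derived from odometry/IMU (rotation omitted for brevity).
    A cell observed free previously but occupied now is flagged dynamic.
    """
    warped = np.roll(prev_grid, shift_cells, axis=(0, 1))
    dynamic = (curr_grid > occ) & (warped < free)
    static = (curr_grid > occ) & ~dynamic
    return dynamic, static
```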

    Parallel Computing in Mobile Robotics for RISE

    C-LOG: A Chamfer Distance based method for localisation in occupancy grid-maps

    In this paper, the problem of localising a robot within a known two-dimensional environment is formulated as one of minimising the Chamfer Distance between the corresponding occupancy grid map and information gathered from a sensor such as a laser range finder. It is shown that this nonlinear optimisation problem can be solved efficiently and that the resulting localisation algorithm has a number of attractive characteristics when compared with the conventional particle filter based solution for robot localisation in occupancy grids. The proposed algorithm performs well even when robot odometry is unavailable, is insensitive to noise models, and does not critically depend on any tuning parameters. Experimental results based on a number of public domain datasets, as well as data collected by the authors, are used to demonstrate the effectiveness of the proposed algorithm. © 2013 IEEE
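
    The abstract does not give the authors' specific optimiser, so the sketch below illustrates the general Chamfer-distance formulation using SciPy's distance transform and a generic derivative-free minimiser over (x, y, theta). The grid resolution, solver choice, and indexing convention are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import minimize

def chamfer_cost(pose, scan_xy, dist_field, res):
    """Sum of distance-transform values at the map cells hit by the
    scan endpoints after applying a candidate (x, y, theta) pose."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    ij = np.clip((pts / res).astype(int), 0,
                 np.array(dist_field.shape)[::-1] - 1)
    return dist_field[ij[:, 1], ij[:, 0]].sum()

# occ is a 2D grid with 1 = occupied; the distance transform gives each
# cell its distance to the nearest occupied cell:
#   dist_field = distance_transform_edt(occ == 0)
#   est = minimize(chamfer_cost, x0=pose_guess,
#                  args=(scan_xy, dist_field, 0.05), method="Nelder-Mead")
```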

    Towards Live 3D Reconstruction from Wearable Video: An Evaluation of V-SLAM, NeRF, and Videogrammetry Techniques

    Mixed reality (MR) is a key technology which promises to change the future of warfare. An MR hybrid of physical outdoor environments and virtual military training will enable engagements with long-distance enemies, both real and simulated. To enable this technology, a large-scale 3D model of a physical environment must be maintained based on live sensor observations. 3D reconstruction algorithms should utilize the low cost and pervasiveness of video camera sensors, from both overhead and soldier-level perspectives. Mapping speed and 3D quality can be balanced to enable live MR training in dynamic environments. Given these requirements, we survey several 3D reconstruction algorithms for large-scale mapping for military applications given only live video. We measure the 3D reconstruction performance of common structure-from-motion, visual-SLAM, and photogrammetry techniques, including the open-source algorithms COLMAP, ORB-SLAM3, and NeRF using Instant-NGP. We utilize the autonomous driving academic benchmark KITTI, which includes both dashboard camera video and lidar-produced 3D ground truth. With the KITTI data, our primary contribution is a quantitative evaluation of 3D reconstruction computational speed when considering live video. Comment: Accepted to the 2022 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), 13 pages
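
    The abstract does not specify its quality metric; one common way to score a reconstruction against lidar ground truth is accuracy and completeness from nearest-neighbour distances between the two point clouds, sketched below with an illustrative threshold tau (not a value from the paper).

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_accuracy_completeness(recon, truth, tau=0.2):
    """Compare a reconstructed point cloud against lidar ground truth.

    accuracy:     fraction of reconstructed points within tau metres
                  of some ground-truth point.
    completeness: fraction of ground-truth points within tau metres
                  of some reconstructed point.
    """
    d_r = cKDTree(truth).query(recon)[0]
    d_t = cKDTree(recon).query(truth)[0]
    return (d_r < tau).mean(), (d_t < tau).mean()
```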

    6D SLAM with GPGPU computation

    The main goal was to improve a state-of-the-art 6D SLAM algorithm with a new GPGPU-based implementation of the data registration module. Data registration is based on the ICP (Iterative Closest Point) algorithm, fully implemented on the GPU with the NVIDIA Fermi architecture. In our research we focus on mobile robot inspection and intervention systems applicable in hazardous environments; the goal is to deliver a complete system capable of being used in real life. In this paper we demonstrate our achievements in the field of online robot localization and mapping, including an experiment in a real, large environment. We compared two strategies of data alignment: simple ICP and ICP using a so-called meta-scan.
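
    For reference, a minimal CPU sketch of the point-to-point ICP iteration that the paper implements on the GPU; in the meta-scan variant, dst would be the union of all previously aligned scans. The SVD-based Kabsch solve and fixed iteration count are standard textbook choices assumed here, not the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve for the rigid transform with
    the SVD-based Kabsch method."""
    matches = cKDTree(dst).query(src)[1]
    d = dst[matches]
    mu_s, mu_d = src.mean(0), d.mean(0)
    H = (src - mu_s).T @ (d - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Repeatedly match and re-align; the nearest-neighbour search and
    transform estimation are the steps offloaded to the GPU."""
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
    return src
```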