
    Lidar Sensors for Autonomous Landing and Hazard Avoidance

    Lidar technology will play an important role in enabling highly ambitious missions being envisioned for exploration of solar system bodies. Currently, NASA is developing a set of advanced lidar sensors, under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project, aimed at safe landing of robotic and manned vehicles at designated sites with a high degree of precision. These lidar sensors are an Imaging Flash Lidar capable of generating high-resolution three-dimensional elevation maps of the terrain, a Doppler Lidar for providing precision vehicle velocity and altitude, and a Laser Altimeter for measuring distance to the ground and ground contours from high altitudes. The capabilities of these lidar sensors have been demonstrated through four helicopter and one fixed-wing aircraft flight test campaigns conducted from 2008 through 2012 during different phases of their development. Recently, prototype versions of these landing lidars have been completed for integration into a rocket-powered terrestrial free-flyer vehicle (Morpheus) being built by NASA Johnson Space Center. Operating in closed loop with other ALHAT avionics, the lidars will demonstrate their viability for future landing missions. This paper describes the ALHAT lidar sensors and assesses their capabilities and impacts on future landing missions.
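    The Doppler Lidar mentioned in the abstract measures vehicle velocity from the frequency shift of the returned laser light. The standard coherent-lidar relation is f_d = 2v/λ, so radial velocity follows directly from the measured shift. A minimal illustration with hypothetical numbers (the wavelength and shift below are illustrative, not values from the paper):

    ```python
    # Standard Doppler-lidar relation: frequency shift f_d = 2 * v / lam,
    # so the line-of-sight velocity is v = f_d * lam / 2.
    # All numeric values are illustrative, not from the ALHAT sensors.
    lam = 1.55e-6          # laser wavelength in m (a common fiber-laser band)
    f_d = 2.58e6           # measured Doppler shift in Hz (hypothetical)
    v = f_d * lam / 2.0    # radial velocity in m/s
    print(round(v, 4))     # ~2.0 m/s
    ```

    Combining three or more such line-of-sight measurements along different beam directions yields the full vehicle velocity vector, which is how multi-beam Doppler lidars are typically used for landing navigation.
    
    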

    Gait Recognition with Compact Lidar Sensors

    In this paper, we present a comparative study on gait and activity analysis using LiDAR scanners with different resolutions. Previous studies showed that gait recognition methods based on the point clouds of a Velodyne HDL-64E rotating multi-beam LiDAR can be used for people re-identification in outdoor surveillance scenarios. However, the high cost and weight of that sensor are a bottleneck for its wide application in surveillance systems. The contribution of this paper is to show that the proposed Lidar-based Gait Energy Image descriptor can be efficiently adapted to the measurements of the compact and significantly cheaper Velodyne VLP-16 LiDAR scanner, which produces point clouds with a nearly four times lower vertical resolution than the HDL-64E. On the other hand, due to the sparsity of the data, the VLP-16 sensor proves to be less efficient for the purpose of activity recognition if the events are mainly characterized by fine hand movements. The evaluation is performed on five test scenarios with multiple walking pedestrians, which have been recorded by both sensors in parallel.
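    The general idea behind a lidar-based Gait Energy Image is to project each per-frame point cloud of a tracked person onto a 2D silhouette grid and average the binarized silhouettes over a gait cycle. The sketch below shows that idea only, not the paper's exact descriptor; the grid size, projection window, and synthetic input are all illustrative assumptions:

    ```python
    import numpy as np

    def lidar_gei(frames, grid=(64, 64), x_range=(-1.0, 1.0), z_range=(0.0, 2.0)):
        """Average binarized side-view projections of per-frame point clouds.

        frames: list of (N, 3) arrays holding one person's points over a gait
        cycle, assumed centered so x is the walking direction and z is height.
        """
        h, w = grid
        acc = np.zeros(grid, dtype=float)
        for pts in frames:
            img = np.zeros(grid, dtype=float)
            # Map (x, z) to pixel indices; drop points outside the window.
            xi = ((pts[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * (w - 1)).astype(int)
            zi = ((pts[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * (h - 1)).astype(int)
            ok = (xi >= 0) & (xi < w) & (zi >= 0) & (zi < h)
            img[h - 1 - zi[ok], xi[ok]] = 1.0   # binary silhouette, z pointing up
            acc += img
        return acc / max(len(frames), 1)

    # Toy usage with synthetic sparse "scans" standing in for segmented lidar data
    rng = np.random.default_rng(0)
    frames = [rng.uniform([-0.5, -0.2, 0.0], [0.5, 0.2, 1.8], size=(200, 3))
              for _ in range(8)]
    gei = lidar_gei(frames)
    print(gei.shape)  # (64, 64)
    ```

    The resulting grayscale map encodes where the body occupies space consistently versus only during parts of the stride; the paper's point about the VLP-16 is that this averaging remains usable even when each individual scan is roughly four times sparser vertically than an HDL-64E scan.
    
    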

    SLAM-based 3D outdoor reconstructions from lidar data

    The use of depth (RGBD) cameras to reconstruct large outdoor environments is not feasible due to lighting conditions and low depth range. LIDAR sensors can be used instead. Most state-of-the-art SLAM methods are devoted to indoor environments and depth (RGBD) cameras. We have adapted two SLAM systems to work with LIDAR data. We have compared the systems for LIDAR and RGBD data by performing quantitative evaluations. Results show that the best method for LIDAR data is RTAB-Map, by a clear margin. Additionally, RTAB-Map has been used to create 3D reconstructions with and without photometry from a visible color camera. This proves the potential of LIDAR sensors for the reconstruction of outdoor environments for immersion or audiovisual production applications.

    Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots

    Person detection is a crucial task for mobile robots navigating in human-populated environments, and LiDAR sensors are promising for this task, given their accurate depth measurements and large field of view. This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios (e.g. service robots or social robots), where persons are observed more frequently and at much closer ranges compared to driving scenarios. We conduct a series of experiments, using the recently released JackRabbot dataset and state-of-the-art detectors based on 3D or 2D LiDAR sensors (CenterPoint and DR-SPAAM, respectively). These experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors. For the domain gap, we aim to understand whether detectors pretrained on driving datasets can achieve good performance in mobile robot scenarios, for which there are currently no trained models readily available. For the modality gap, we compare detectors that use 3D or 2D LiDAR from various aspects, including performance, runtime, localization accuracy, and robustness to range and crowdedness. The results of our experiments provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.