2,697 research outputs found

    Simultaneous fusion, classification, and tracking of moving obstacles by LIDAR and camera using Bayesian algorithm

    In the near future, preventing collisions with fixed or moving, animate and inanimate obstacles will become a severe challenge due to the increased use of Unmanned Ground Vehicles (UGVs). Light Detection and Ranging (LIDAR) sensors and cameras are commonly used in UGVs to detect obstacles. Accurate tracking and classification of moving obstacles is a key component of advanced driver assistance systems, and it is believed that the perceived model of the situation can be improved by incorporating obstacle classification. The present study describes a multi-hypothesis tracking and classification approach that resolves ambiguities arising with previous methods of associating and classifying targets and tracks in a highly dynamic vehicular environment. The method was tested on real data from various driving scenarios, focusing on two obstacle classes of interest: vehicles and pedestrians.
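    As an illustration of the Bayesian fusion idea described in this abstract (a minimal sketch, not the paper's actual algorithm; the class labels, likelihood values, and function name are assumptions for the example), a discrete Bayes update can combine LIDAR- and camera-based class likelihoods for a single tracked obstacle:

```python
# Hypothetical sketch: fusing per-sensor class likelihoods for one
# tracked obstacle with a discrete Bayes update over {vehicle, pedestrian},
# assuming the two sensors' observations are conditionally independent.

def bayes_fuse(prior, lidar_lik, camera_lik):
    """Return the normalized posterior over classes after both observations."""
    unnorm = {c: prior[c] * lidar_lik[c] * camera_lik[c] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

prior = {"vehicle": 0.5, "pedestrian": 0.5}
lidar_lik = {"vehicle": 0.7, "pedestrian": 0.3}   # e.g. from cluster geometry
camera_lik = {"vehicle": 0.8, "pedestrian": 0.2}  # e.g. from an image classifier
post = bayes_fuse(prior, lidar_lik, camera_lik)   # vehicle dominates, ~0.90
```

    A multi-hypothesis tracker would maintain several such posteriors in parallel, one per data-association hypothesis, and prune the unlikely ones as new scans arrive.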

    GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests

    Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, due to the canopies of the trees, information from Global Navigation Satellite Systems (GNSS) can be degraded or even unavailable. Also, because of the large number of obstacles, a previous detailed map of the environment is not practical. In this paper, we solve the complete navigation problem of an aerial robot in a sparse forest, where there is enough space for the flight and the GNSS signals can be sporadically detected. For localization, we propose a state estimator that merges information from GNSS, Attitude and Heading Reference Systems (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, the trunks of the trees are used in a feature-based scan matching algorithm to estimate the relative movement of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, used to impose the main direction of the movement for the robot, with an optimal probabilistic planner, which is responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment are used to illustrate the effectiveness of our approach.
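    The fusion step described here can be illustrated in miniature (a hedged sketch, not the authors' unscented Kalman filter; the scalar state, variances, and function name are assumptions for the example): when a GNSS fix is sporadically available, it is blended with the LiDAR-odometry estimate by inverse-variance weighting; under the canopy, the filter falls back to dead reckoning alone.

```python
# Hypothetical sketch of intermittent GNSS/odometry fusion for a
# scalar position state; w is the Kalman-style gain for this 1-D case.

def fuse_position(odom_pos, odom_var, gnss_pos=None, gnss_var=None):
    """Return (fused_position, fused_variance) for one update step."""
    if gnss_pos is None:
        # Canopy blocks the satellites: propagate odometry unchanged.
        return odom_pos, odom_var
    w = odom_var / (odom_var + gnss_var)      # trust the less-uncertain source more
    fused = odom_pos + w * (gnss_pos - odom_pos)
    fused_var = (1.0 - w) * odom_var          # fusion always reduces uncertainty
    return fused, fused_var

# Equal confidence in both sources -> the fused estimate is the midpoint.
pos, var = fuse_position(10.0, 4.0, gnss_pos=12.0, gnss_var=4.0)
```

    The full estimator extends this idea to the complete pose with adaptive weighting, but the core trade-off between the two information sources is the same.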

    Near-real time forest change detection using PlanetScope imagery

    © 2020 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. To combat global deforestation, monitoring forest disturbances at sub-annual scales is a key challenge. For this purpose, the new PlanetScope nano-satellite constellation is a game changer, with a revisit time of 1 day and a pixel size of 3 m. We present a near-real time forest disturbance alert system based on PlanetScope imagery: the Thresholding Rewards and Penances algorithm (TRP). It produces a new forest change map as soon as a new PlanetScope image is acquired. To calibrate and validate TRP, a reference set was constructed as a complete census of five randomly selected study areas in Tuscany, Italy. We processed 572 PlanetScope images acquired between 1 May 2018 and 5 July 2019. TRP was used to construct forest change maps during the study period, for which the final user’s accuracy was 86% and the final producer’s accuracy was 92%. In addition, we estimated the forest change area using an unbiased stratified estimator that can be used with a small sample of reference data. The 95% confidence interval for the sample-based estimate of 56.89 ha included the census-based area estimate of 56.19 ha.
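    The "rewards and penances" idea can be sketched generically (an illustrative sketch only; the published TRP algorithm, its thresholds, and its index choice differ in detail, and all parameter values below are assumptions): each new image penalizes a pixel's evidence score when a vegetation index drops below a threshold and rewards (decays) it otherwise, so that an alert fires only after disturbance evidence accumulates across several acquisitions.

```python
# Hypothetical per-pixel evidence accumulator for near-real-time change alerts.
# A single noisy low observation is not enough; repeated low values are.

def update_score(score, ndvi, threshold=0.4, penance=1.0, reward=0.5, confirm=3.0):
    """Return (new_score, alert) after observing one new index value."""
    if ndvi < threshold:
        score += penance                    # evidence of disturbance grows
    else:
        score = max(0.0, score - reward)    # healthy observation decays it
    return score, score >= confirm

score, alert = 0.0, False
for ndvi in [0.8, 0.3, 0.2, 0.25]:          # hypothetical pixel time series
    score, alert = update_score(score, ndvi)
```

    With a 1-day revisit time, this style of accumulator confirms a disturbance within a few cloud-free acquisitions while suppressing one-off noise.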