    Environment perception based on LIDAR sensors for real road applications

    The recent developments in applications designed to increase road safety require reliable and trustworthy sensors. With this in mind, the most up-to-date research in the field of automotive technologies has shown that LIDARs are a very reliable sensor family. In this paper, a new approach to road obstacle classification is proposed and tested. Two different LIDAR sensors are compared, focusing on their main characteristics with respect to road applications. The viability of these sensors in real applications has been tested, and the results of this analysis are presented. The work reported in this paper has been partly funded by the Spanish Ministry of Science and Innovation (TRA2007-67786-C02-01, TRA2007-67786-C02-02, and TRA2009-07505) and the CAM project SEGVAUTO-II.
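
    A minimal sketch of point-cloud obstacle segmentation and size-based labelling, assuming a ground-removed NumPy point cloud; DBSCAN and the size thresholds below are illustrative stand-ins, not the classifier actually proposed in the paper.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def classify_obstacles(points, eps=0.6, min_samples=8):
            """Cluster LIDAR returns and label each cluster by its footprint size."""
            # Cluster on the ground plane (x, y); label -1 marks noise points.
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points[:, :2]).labels_
            obstacles = []
            for lbl in set(labels) - {-1}:
                cluster = points[labels == lbl]
                extent = cluster.max(axis=0) - cluster.min(axis=0)
                # Crude size heuristic (an assumption, not the paper's rule set).
                kind = "vehicle" if max(extent[0], extent[1]) > 2.0 else "pedestrian"
                obstacles.append((kind, cluster.mean(axis=0)))
            return obstacles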

    LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles

    The fusion of light detection and ranging (LiDAR) and camera data in real time is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important because it enables the estimation of object depth as well as the detection of objects at short and long distances. As both sensors capture different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. Further, the fusion information is utilized to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated quantitatively and qualitatively in real-road and simulation scenarios. Thus, the proposed low-level sensor fusion, based on computational geometric transformation and projection for object distance estimation, proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles. © 2020 by the authors.
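
    A minimal sketch of the projection step, assuming known LiDAR-to-camera extrinsics (R, t) and camera intrinsics K; the calibration values below are placeholders (in the paper they come from the 3D-marker calibration), and the median-depth rule is an illustrative choice rather than the paper's exact procedure.

        import numpy as np

        # Placeholder calibration values; real ones come from extrinsic calibration.
        K = np.array([[700.0, 0.0, 640.0],
                      [0.0, 700.0, 360.0],
                      [0.0, 0.0, 1.0]])    # camera intrinsics
        R = np.eye(3)                       # LiDAR-to-camera rotation
        t = np.array([0.0, -0.1, 0.0])      # LiDAR-to-camera translation (metres)

        def project_lidar_to_image(points_lidar):
            """Transform LiDAR points into the camera frame and project to pixels."""
            pts_cam = points_lidar @ R.T + t        # (N, 3) in camera coordinates
            pts_cam = pts_cam[pts_cam[:, 2] > 0.5]  # keep points in front of the camera
            uv = pts_cam @ K.T                      # homogeneous pixel coordinates
            return uv[:, :2] / uv[:, 2:3], pts_cam[:, 2]

        def object_distance(points_lidar, bbox):
            """Median depth of the LiDAR points that project inside a detector box."""
            u0, v0, u1, v1 = bbox                   # e.g. a RefineDet bounding box
            uv, depth = project_lidar_to_image(points_lidar)
            inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                      (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
            return float(np.median(depth[inside])) if inside.any() else None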

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, the joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR Range-Azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a Particle Filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal is below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
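
    A minimal sketch of the joint-likelihood step, assuming a RADAR Range-Azimuth power map on a known grid; the pinhole back-projection via average person height follows the abstract, but the Gaussian spread parameters are assumptions and the Particle Filter itself is omitted.

        import numpy as np

        PERSON_HEIGHT = 1.7  # assumed average person height (metres)

        def range_from_bbox(focal_px, bbox_height_px):
            """Back-project a camera detection: range ~ f * H / h_px (pinhole model)."""
            return focal_px * PERSON_HEIGHT / bbox_height_px

        def camera_likelihood(range_bins, az_bins, det_range, det_az,
                              sigma_r=0.5, sigma_a=np.deg2rad(3.0)):
            """Gaussian likelihood of one camera detection on the Range-Azimuth grid."""
            r, a = np.meshgrid(range_bins, az_bins, indexing="ij")
            return np.exp(-0.5 * (((r - det_range) / sigma_r) ** 2 +
                                  ((a - det_az) / sigma_a) ** 2))

        def joint_likelihood(radar_ra_map, cam_map):
            """Element-wise product fuses raw RADAR power with the camera term;
            peaks in this map are the candidate targets fed to the tracker."""
            joint = radar_ra_map * cam_map
            return joint / joint.max()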