4 research outputs found

    Automated Automotive Radar Calibration With Intelligent Vehicles

    While automotive radar sensors are widely adopted and have been used for automatic cruise control and collision avoidance tasks, their application outside of vehicles is still limited. As they have the ability to resolve multiple targets in 3D space, radars can also be used for improving environment perception. This application, however, requires a precise calibration, which is usually a time-consuming and labor-intensive task. We therefore present an approach for automated and geo-referenced extrinsic calibration of automotive radar sensors that is based on a novel hypothesis filtering scheme. Our method does not require external modifications of a vehicle and instead uses the location data obtained from automated vehicles. This location data is then combined with filtered sensor data to create calibration hypotheses. Subsequent filtering and optimization recovers the correct calibration. Our evaluation on data from a real testing site shows that our method can correctly calibrate infrastructure sensors in an automated manner, thus enabling cooperative driving scenarios.
    Comment: 5 pages, 4 figures, accepted for presentation at the 31st European Signal Processing Conference (EUSIPCO), September 4 - September 8, 2023, Helsinki, Finland.
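    The abstract does not spell out the hypothesis filtering scheme itself, but the core idea of generating candidate extrinsics from geo-referenced vehicle locations and radar detections, then filtering out inconsistent hypotheses, can be sketched as a RANSAC-style loop. In the sketch below, the planar rigid-transform model, the two-point sampling, and the 0.5 m inlier threshold are illustrative assumptions, not the authors' method.

        import numpy as np

        def fit_rigid_2d(src, dst):
            # Kabsch: least-squares rotation and translation mapping src onto dst.
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, mu_d - R @ mu_s

        def calibrate(radar_xy, veh_xy, n_hyp=500, tol=0.5, seed=0):
            # radar_xy: (N, 2) detections in the radar frame; veh_xy: (N, 2)
            # geo-referenced vehicle positions at the same timestamps.
            rng = np.random.default_rng(seed)
            best = None
            for _ in range(n_hyp):        # sample minimal calibration hypotheses
                idx = rng.choice(len(radar_xy), size=2, replace=False)
                R, t = fit_rigid_2d(radar_xy[idx], veh_xy[idx])
                resid = np.linalg.norm(radar_xy @ R.T + t - veh_xy, axis=1)
                inl = resid < tol         # keep detections consistent with the track
                if best is None or inl.sum() > best.sum():
                    best = inl
            return fit_rigid_2d(radar_xy[best], veh_xy[best])   # refit on inliers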

    Association of Camera and Radar Detections Using Neural Networks

    Automotive radar and camera fusion relies on linear point transformations from one sensor's coordinate system to the other. However, these transformations cannot handle non-linear dynamics and are susceptible to sensor noise. Furthermore, they operate on a point-to-point basis, so it is impossible to capture all the characteristics of an object. This paper introduces a method that performs detection-to-detection association by projecting heterogeneous object features from the two sensors into a common high-dimensional space. We associate 2D bounding boxes and radar detections based on the Euclidean distance between their projections. Our method utilizes deep neural networks to transform feature vectors instead of single points. Therefore, we can leverage real-world data to learn non-linear dynamics and utilize several features to provide a better description of each object. We evaluate our association method against a traditional rule-based method, showing that it improves association accuracy and is more robust in complex scenarios with multiple objects.
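    A minimal sketch of the central idea, assuming PyTorch: two small networks project camera and radar features into a shared embedding space, and detections are associated by Euclidean distance there. The feature layouts, layer sizes, and greedy nearest-neighbour assignment are illustrative assumptions; the paper's actual architecture and matching step may differ.

        import torch
        import torch.nn as nn

        class Projector(nn.Module):
            # Maps a sensor-specific feature vector into a shared embedding space.
            def __init__(self, in_dim, emb_dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, 128), nn.ReLU(),
                    nn.Linear(128, emb_dim),
                )

            def forward(self, x):
                return self.net(x)

        cam_proj = Projector(in_dim=4)    # e.g. 2D bounding box (x, y, w, h)
        rad_proj = Projector(in_dim=4)    # e.g. radar (x, y, range rate, RCS)

        def associate(cam_feats, rad_feats):
            # Pairwise Euclidean distances between embedded detections,
            # then greedy nearest-neighbour matching per camera box.
            dist = torch.cdist(cam_proj(cam_feats), rad_proj(rad_feats))
            return dist.argmin(dim=1)

    Training would pull matched camera-radar pairs together in the embedding space (e.g. with a contrastive or triplet loss) so that small distances indicate likely associations.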

    3D Radar and Camera Co-Calibration: A Flexible and Accurate Method for Target-based Extrinsic Calibration

    Advances in autonomous driving are inseparable from sensor fusion. Heterogeneous sensors are widely used for sensor fusion because of their complementary properties, with radar and camera being the most commonly equipped sensors. Intrinsic and extrinsic calibration are essential steps in sensor fusion. Extrinsic calibration, which is independent of the sensors' internal parameters and performed after the sensors are installed, largely determines the accuracy of sensor fusion. Many target-based methods require cumbersome operating procedures and well-designed experimental conditions, making them challenging to apply in practice. To this end, we propose a flexible, easy-to-reproduce, and accurate method for extrinsic calibration of 3D radar and camera. The proposed method does not require a specially designed calibration environment; instead, a single corner reflector (CR) placed on the ground is used to iteratively collect radar and camera data simultaneously via the Robot Operating System (ROS). Radar-camera point correspondences are obtained from their timestamps and then used as input to the perspective-n-point (PnP) problem, yielding the extrinsic calibration matrix. RANSAC is used for robustness, and the Levenberg-Marquardt (LM) nonlinear optimization algorithm is used for accuracy. Multiple controlled-environment experiments as well as real-world experiments demonstrate the efficiency and accuracy (AED error of 15.31 pixels and accuracy up to 89%) of the proposed method.
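    Because the pipeline is stated explicitly (timestamp-matched correspondences, PnP, RANSAC, LM refinement), it maps almost directly onto OpenCV. The sketch below assumes the corner-reflector correspondences have already been collected via ROS; the reprojection threshold is an illustrative value.

        import cv2
        import numpy as np

        def calibrate_extrinsics(radar_pts, img_pts, K, dist):
            # radar_pts: (N, 3) corner-reflector positions in the radar frame;
            # img_pts: (N, 2) matching pixel coordinates; K, dist: camera intrinsics.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                radar_pts.astype(np.float32), img_pts.astype(np.float32),
                K, dist, reprojectionError=8.0)       # RANSAC rejects outlier pairs
            assert ok, "PnP failed; collect more correspondences"
            idx = inliers[:, 0]
            rvec, tvec = cv2.solvePnPRefineLM(        # Levenberg-Marquardt refinement
                radar_pts[idx].astype(np.float32), img_pts[idx].astype(np.float32),
                K, dist, rvec, tvec)
            R, _ = cv2.Rodrigues(rvec)                # rotation radar -> camera
            return np.hstack([R, tvec])               # 3 x 4 extrinsic matrix [R | t]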

    Augmentation of Visual Odometry using Radar

    As UAVs become viable for more applications, pose estimation remains critical: a UAV needs to know where it is at all times in order to avoid disaster. However, when UAVs are deployed in areas with poor visual conditions, such as many disaster scenarios, many localization algorithms struggle. This thesis presents VIL-DSO, a visual odometry pose estimation solution that combines several algorithms to improve pose estimation and provide metric scale. This thesis also presents a method for automatically determining an accurate physical transform between radar and camera data, thereby allowing radar information to be projected into the image plane. Finally, this thesis presents EVIL-DSO, a localization method that fuses visual-inertial odometry with radar information. The proposed EVIL-DSO algorithm uses radar information projected into the image plane to create a depth map, so that the odometry can directly observe the depth of features and avoid costly depth estimation. Trajectory analysis of the proposed algorithm on outdoor data, compared against differential GPS, shows that it is more accurate in terms of root-mean-square error and has a lower percentage of scale error. Runtime analysis shows that the proposed algorithm updates more frequently than other similar algorithms.
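    A minimal sketch of the radar-to-image step that EVIL-DSO builds on, assuming known radar-to-camera extrinsics (R, t) and camera intrinsics K; nearest-pixel rounding and last-write-wins handling of overlapping points are simplifications.

        import numpy as np

        def radar_depth_map(radar_pts, R, t, K, img_shape):
            # radar_pts: (N, 3) points in the radar frame; img_shape: (rows, cols).
            cam = radar_pts @ R.T + t         # transform into the camera frame
            cam = cam[cam[:, 2] > 0]          # keep points in front of the camera
            uv = cam @ K.T
            uv = uv[:, :2] / uv[:, 2:3]       # perspective division to pixel coords
            depth = np.zeros(img_shape, dtype=np.float32)   # 0 means "no radar return"
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            ok = (u >= 0) & (u < img_shape[1]) & (v >= 0) & (v < img_shape[0])
            depth[v[ok], u[ok]] = cam[ok, 2]  # sparse metric depth for the odometry
            return depth

    The resulting sparse depth map lets the odometry read off metric feature depths directly instead of estimating them, matching the thesis's motivation of avoiding costly depth estimation.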