
    3D Radar and Camera Co-Calibration: A Flexible and Accurate Method for Target-based Extrinsic Calibration

    Full text link
    Advances in autonomous driving are inseparable from sensor fusion. Heterogeneous sensors are widely used for fusion because of their complementary properties, with radar and camera being the most commonly equipped. Intrinsic and extrinsic calibration are essential steps in sensor fusion. Extrinsic calibration, which is independent of each sensor's own parameters and is performed after the sensors are installed, largely determines the accuracy of the fusion. Many target-based methods require cumbersome operating procedures and carefully designed experimental conditions, making them challenging to apply in practice. To this end, we propose a flexible, easy-to-reproduce and accurate method for extrinsic calibration of 3D radar and camera. The proposed method does not require a specially designed calibration environment. Instead, a single corner reflector (CR) placed on the ground is observed iteratively while radar and camera data are collected simultaneously using the Robot Operating System (ROS). Radar-camera point correspondences are obtained from the sensor timestamps and used as input to a perspective-n-point (PnP) problem, whose solution yields the extrinsic calibration matrix. RANSAC is applied for robustness and the Levenberg-Marquardt (LM) nonlinear optimization algorithm for accuracy. Multiple controlled-environment experiments as well as real-world experiments demonstrate the efficiency and accuracy of the proposed method (AED error of 15.31 pixels and accuracy of up to 89%).
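    A minimal sketch of this kind of pipeline, assuming OpenCV and synthetic, pre-associated radar-camera correspondences (all names and values here are hypothetical, not the authors' code): a RANSAC-wrapped PnP solve gives a robust initial estimate, and Levenberg-Marquardt refinement over the inliers yields the radar-to-camera extrinsic matrix.

```python
import numpy as np
import cv2

# Hypothetical synthetic correspondences: corner-reflector positions in the
# radar frame projected into the image with a known ground-truth pose.
rng = np.random.default_rng(0)
radar_pts = rng.uniform(-2.0, 2.0, (20, 3)) + np.array([0.0, 0.0, 8.0])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # camera intrinsics (assumed known)
dist = np.zeros(5)                       # distortion coefficients
rvec_gt = np.array([0.02, -0.01, 0.005])
tvec_gt = np.array([0.10, -0.05, 0.20])
image_pts, _ = cv2.projectPoints(radar_pts, rvec_gt, tvec_gt, K, dist)
image_pts = image_pts.reshape(-1, 2) + rng.normal(0.0, 0.5, (20, 2))  # pixel noise

# Robust initial estimate: RANSAC rejects mismatched radar-camera pairs.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(radar_pts, image_pts, K, dist,
                                             reprojectionError=8.0)
if ok:
    # Levenberg-Marquardt refinement over the inlier correspondences only.
    idx = inliers.ravel()
    rvec, tvec = cv2.solvePnPRefineLM(radar_pts[idx], image_pts[idx], K, dist,
                                      rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()    # 4x4 radar-to-camera extrinsic
    print(T)
```

    In practice the correspondences would come from timestamp-matched detections of the corner reflector in the radar point cloud and the camera image, as described in the abstract.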

    Calibration of RGB camera with velodyne LiDAR

    Get PDF
    Calibration of a LiDAR sensor with an RGB camera finds use in many application fields, from enhancing image classification to environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of these sensors using a coarse-to-fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to calibrating sensors with only a small mutual displacement. Our approach introduces a novel 3D marker for coarse calibration that can be robustly detected in both the camera image and the LiDAR scan, and it requires only a single pair of camera-LiDAR frames to estimate a large displacement between the sensors. A subsequent refinement step then searches for a more accurate calibration in a small subspace of the calibration parameters. The paper also presents a novel way to evaluate the calibration precision using the projection error.
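    As a rough illustration of projection-error-based evaluation (hypothetical helper and placeholder data, not the paper's implementation), an estimated extrinsic calibration can be scored by projecting known 3D points from the LiDAR frame into the image and measuring the pixel offset against their detected positions.

```python
import numpy as np

def reprojection_error(pts_lidar, pts_px, K, R, t):
    """RMS pixel error of LiDAR points projected with candidate extrinsics (R, t)."""
    cam = (R @ pts_lidar.T + t.reshape(3, 1)).T      # LiDAR frame -> camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division to pixels
    return float(np.sqrt(np.mean(np.sum((uv - pts_px) ** 2, axis=1))))

# Hypothetical usage with placeholder calibration and marker points.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.05, -0.10, 0.0])       # candidate extrinsic calibration
pts_lidar = np.array([[1.0, 0.2, 5.0], [-0.5, 0.1, 4.0], [0.3, -0.4, 6.0]])
pts_px = (K @ (R @ pts_lidar.T + t.reshape(3, 1))).T
pts_px = pts_px[:, :2] / pts_px[:, 2:3]              # ideal detections (zero error here)
print(reprojection_error(pts_lidar, pts_px, K, R, t))
```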

    A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics

    Full text link
    We survey the current state of millimeter-wave (mmWave) radar applications in robotics with a focus on unique capabilities, and discuss future opportunities based on the state of the art. Frequency Modulated Continuous Wave (FMCW) mmWave radars operating in the 76-81 GHz range are an appealing alternative to lidars, cameras and other sensors operating in the near-visual spectrum. Radar has become more widely available in new packaging classes that are more convenient for robotics, and its longer wavelengths are able to penetrate visual clutter such as fog, dust, and smoke. We begin by covering radar principles as they relate to robotics. We then review the relevant new research across a broad spectrum of robotics applications, beginning with motion estimation, localization, and mapping. We then cover object detection and classification, and close with an analysis of current datasets and calibration techniques that provide entry points into radar research. Comment: 19 pages, 11 figures, 2 tables, TRO submission pending.

    Extrinsic Calibration of 2D Millimetre-Wavelength Radar Pairs Using Ego-Velocity Estimates

    Full text link
    Correct radar data fusion depends on knowledge of the spatial transform between sensor pairs. Current methods for determining this transform operate by aligning identifiable features in different radar scans, or by relying on measurements from another, more accurate sensor. Feature-based alignment requires the sensors to have overlapping fields of view or necessitates the construction of an environment map. Several existing techniques require bespoke retroreflective radar targets. These requirements limit both where and how calibration can be performed. In this paper, we take a different approach: instead of attempting to track targets or features, we rely on ego-velocity estimates from each radar to perform calibration. Our method enables calibration of a subset of the transform parameters, including the yaw and the axis of translation between the radar pair, without the need for a shared field of view or for specialized targets. In general, the yaw and the axis of translation are the most important parameters for data fusion, the most likely to vary over time, and the most difficult to calibrate manually. We formulate calibration as a batch optimization problem, show that the radar-radar system is identifiable, and specify the platform excitation requirements. Through simulation studies and real-world experiments, we establish that our method is more reliable and accurate than state-of-the-art methods. Finally, we demonstrate that the full rigid body transform can be recovered if relatively coarse information about the platform rotation rate is available. Comment: Accepted to the 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2023), Seattle, Washington, USA, June 27-July 1, 2023.
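    The underlying geometry can be illustrated with a deliberately simplified sketch (hypothetical names and data, not the paper's batch estimator): if the platform's rotation rate is negligible and both radars are rigidly mounted on a planar vehicle, their 2D ego-velocity estimates differ only by the fixed yaw between the sensor frames, which then admits a closed-form least-squares fit.

```python
import numpy as np

def estimate_yaw(v1, v2):
    """v1, v2: (N, 2) ego-velocity estimates from radar 1 and radar 2.

    Solves min_psi sum_i || v2_i - R(psi) v1_i ||^2 in closed form.
    """
    dot = np.sum(v1[:, 0] * v2[:, 0] + v1[:, 1] * v2[:, 1])
    cross = np.sum(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
    return np.arctan2(cross, dot)   # yaw of radar 2 relative to radar 1 (rad)

# Hypothetical usage: synthetic velocities with a 10 deg true yaw plus noise.
rng = np.random.default_rng(1)
psi_true = np.deg2rad(10.0)
R = np.array([[np.cos(psi_true), -np.sin(psi_true)],
              [np.sin(psi_true),  np.cos(psi_true)]])
v1 = rng.uniform(-5.0, 5.0, (200, 2))
v2 = v1 @ R.T + rng.normal(0.0, 0.05, (200, 2))
print(np.rad2deg(estimate_yaw(v1, v2)))   # approximately 10.0
```

    The paper's full formulation goes further: it poses a batch optimization that also recovers the axis of translation and, given coarse rotation-rate information, the complete rigid-body transform.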

    LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles

    Get PDF
    The fusion of light detection and ranging (LiDAR) and camera data in real time is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In the case of autonomous vehicles especially, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as for detecting objects at short and long distances. As both sensors capture different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. The fusion information is then used to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated quantitatively and qualitatively in real-road and simulated scenarios. The proposed low-level sensor fusion, based on computational geometric transformation and projection for object distance estimation, thus proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles.
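    The projection step behind such low-level fusion can be sketched as follows (hypothetical function, placeholder calibration and detections, not the paper's code): LiDAR points are transformed into the camera frame, projected with the intrinsics, and each detected bounding box is assigned the median range of the points that fall inside it.

```python
import numpy as np

def object_distances(lidar_xyz, boxes, K, R, t):
    """lidar_xyz: (N, 3) points; boxes: list of (u_min, v_min, u_max, v_max)."""
    cam = (R @ lidar_xyz.T + t.reshape(3, 1)).T          # LiDAR -> camera frame
    cam = cam[cam[:, 2] > 0]                             # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # pixel coordinates
    dists = []
    for (u0, v0, u1, v1) in boxes:
        inside = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
        depth = np.linalg.norm(cam[inside], axis=1)      # Euclidean range per point
        dists.append(float(np.median(depth)) if inside.any() else None)
    return dists

# Hypothetical usage with placeholder calibration and one detection box.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, -0.08, -0.27])
pts = np.array([[0.5, 0.0, 12.0], [0.6, 0.1, 12.2], [-3.0, 0.0, 30.0]])
print(object_distances(pts, [(600, 300, 800, 420)], K, R, t))
```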

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    Get PDF
    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel is required in order to guide the robotic arm of the vessel towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines, and points, and it merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (used to attach the satellite to the launch vehicle) and that the cameras are mounted on the robot end effector, which carries the capture tool used to grapple the satellite. The three algorithms rely on a feature extraction and detection scheme that identifies the geometric features in the camera images that belong to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse in the image plane. Sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function and reduce the re-projection error of the detected features, which reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and gives the final estimated pose. The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector where the cameras are mounted. Virtual and real simulations using a full-scale realistic satellite mock-up and a 7-DOF robotic manipulator were performed to evaluate the system performance. Two different lighting conditions and three scenarios, each with a different set of features, were used. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm and the total rotation error is between 2 deg and 3 deg when the target is at 0.7 m from the end effector.
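    A minimal sketch of the pose-solver step described above (hypothetical model points and detections, not the thesis code): the pose of known satellite features is refined by minimizing the reprojection error with a conjugate-gradient optimizer.

```python
import numpy as np
import cv2
from scipy.optimize import minimize

K = np.array([[900.0, 0.0, 512.0],
              [0.0, 900.0, 384.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
# Known model features: four points on the interface ring plus two off-plane points (m).
model_pts = np.array([[0.4, 0.0, 0.0], [0.0, 0.4, 0.0], [-0.4, 0.0, 0.0],
                      [0.0, -0.4, 0.0], [0.2, 0.2, 0.1], [-0.2, -0.2, 0.1]])

# Hypothetical detections generated from a "true" pose plus pixel noise.
rvec_true = np.array([0.10, -0.05, 0.02])
tvec_true = np.array([0.05, 0.02, 0.70])
detected, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)
detected = detected.reshape(-1, 2) + np.random.default_rng(2).normal(0.0, 0.3, (6, 2))

def cost(x):
    """Sum of squared pixel residuals for the pose x = [rotation vector, translation]."""
    proj, _ = cv2.projectPoints(model_pts, x[:3], x[3:], K, dist)
    return float(np.sum((proj.reshape(-1, 2) - detected) ** 2))

x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.8])   # coarse initial pose guess
res = minimize(cost, x0, method="CG")           # conjugate-gradient refinement
print(res.x)                                    # refined pose of the target w.r.t. the camera
```

    In the system described above this kind of estimate would then be fused with the ellipse-detection output by an extended Kalman filter to produce the final pose.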

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Get PDF
    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, they fail to fully satisfy all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and on systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0