
    Calibration of RGB camera with velodyne LiDAR

    Calibration of a LiDAR sensor with an RGB camera finds use in many application fields, from enhancing image classification to environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of the two sensors using a coarse-to-fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to calibrating sensors with only a small mutual displacement. Our approach presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan. It requires only a single pair of camera-LiDAR frames for estimating large sensor displacements. A subsequent refinement step searches for a more accurate calibration in a small subspace of the calibration parameters. The paper also presents a novel way to evaluate calibration precision using projection error.
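    The projection-error metric mentioned above can be sketched roughly as follows: project the LiDAR points through the estimated extrinsics and intrinsics, and measure the mean pixel distance to the corresponding image features. The function name, point sets, and camera matrix below are illustrative assumptions, not the paper's actual data or implementation:

    ```python
    import numpy as np

    def projection_error(lidar_pts, image_pts, K, R, t):
        """Mean pixel distance between LiDAR points projected through the
        estimated extrinsics (R, t) and their corresponding image features."""
        cam = R @ lidar_pts.T + t[:, None]      # LiDAR frame -> camera frame
        uv = K @ cam                            # pinhole projection
        uv = (uv[:2] / uv[2]).T                 # homogeneous -> pixel coords
        return float(np.mean(np.linalg.norm(uv - image_pts, axis=1)))

    # Toy check: with perfect extrinsics the error is zero.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pts3d = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 4.0]])
    pts2d = np.array([[320.0, 240.0], [445.0, 365.0]])
    err = projection_error(pts3d, pts2d, K, np.eye(3), np.zeros(3))
    ```

    A lower error indicates that the estimated sensor displacement maps 3D structure onto the image features more consistently.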

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0

    Extrinsic Auto-calibration of a Camera and Laser Range Finder

    This paper describes theoretical and experimental results for the auto-calibration of a sensor platform consisting of a camera and a laser range finder. Real-world use of autonomous sensor platforms often requires the recalibration of sensors without an explicit calibration object. The constraints are based upon data captured simultaneously from the camera and the laser range finder while the sensor platform undergoes an arbitrary motion. The rigid motions of both sensors are related, so these data constrain the relative position and orientation of the camera and laser range finder. We introduce the mathematical constraints for auto-calibration techniques based upon both discrete and differential motions, and present simulated experimental results, as well as results from an implementation on a B21r™ Mobile Robot from iRobot Corporation. This framework could also encompass extrinsic calibration with GPS, inertial, infrared, and ultrasonic sensors.
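    The discrete-motion constraint described above can be illustrated with a small sketch: when both sensors undergo the same rigid motion, their rotations are related by conjugation, Ra = X Rb Xᵀ, so their rotation axes are related by the unknown mutual rotation X. Aligning the axis sets with a Kabsch/SVD fit recovers X. This is a generic hand-eye-style sketch under synthetic data, not the paper's actual formulation; all names are hypothetical:

    ```python
    import numpy as np

    def rodrigues(axis, angle):
        """Rotation matrix about a unit axis (Rodrigues' formula)."""
        a = np.asarray(axis, float) / np.linalg.norm(axis)
        Kx = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx

    def rotation_axis(R):
        """Signed rotation axis of R, valid for rotation angles in (0, pi)."""
        a = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return a / np.linalg.norm(a)

    def relative_rotation(motion_pairs):
        """Recover the mutual rotation X from simultaneous sensor motions
        (Ra, Rb): Ra = X Rb X^T implies axis(Ra) = X axis(Rb), so X is the
        rotation best aligning the two axis sets (Kabsch/SVD)."""
        A = np.array([rotation_axis(Ra) for Ra, _ in motion_pairs])
        B = np.array([rotation_axis(Rb) for _, Rb in motion_pairs])
        U, _, Vt = np.linalg.svd(B.T @ A)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # enforce a proper rotation
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Synthetic check: recover a known mutual rotation from three motions.
    X_true = rodrigues([0, 0, 1], 0.5)
    pairs = []
    for ax, ang in [([1, 0, 0], 0.7), ([0, 1, 0], 0.9), ([1, 1, 1], 0.6)]:
        Rb = rodrigues(ax, ang)
        pairs.append((X_true @ Rb @ X_true.T, Rb))
    X_est = relative_rotation(pairs)
    ```

    At least two motions with non-parallel rotation axes are needed; with a single axis the mutual rotation is only determined up to a rotation about that axis.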

    Camera calibration in sport event scenarios

    The main goal of this paper is the design of a novel and robust methodology for calibrating cameras from a single image in sport scenarios, such as a soccer field, or a basketball or tennis court. In these sport scenarios, the only references we use to calibrate the camera are the lines and circles delimiting the different regions. The first problem we address is the extraction of image primitives, including the challenging problems of shaded regions and lens distortion. From these primitives, we automatically recognise the location of the sport court in the scene by estimating the homography which matches the actual court with its projection onto the image. This is achieved even when only a few primitives are available. Finally, from this homography, we recover the camera calibration parameters. In particular, we estimate the focal length as well as the position and orientation in 3D space. We present some experiments on models and real courts which illustrate the accuracy of the proposed methodology.
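    Recovering the focal length from a single court-to-image homography can be sketched as follows. For a planar scene, H = K [r1 r2 t]; if we assume square pixels and image coordinates already centered on the principal point (so K = diag(f, f, 1)), the orthonormality of r1 and r2 yields two constraints on f². This is a standard simplification for illustration, not the paper's exact method, and the function name is hypothetical:

    ```python
    import numpy as np

    def focal_from_homography(H):
        """Estimate focal length f from a plane-to-image homography
        H = K [r1 r2 t] with K = diag(f, f, 1): orthonormality of r1, r2
        gives two scale-invariant constraints on f^2."""
        h1, h2 = H[:, 0], H[:, 1]
        f2 = []
        if abs(h1[2] * h2[2]) > 1e-12:          # constraint r1 . r2 = 0
            f2.append(-(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2]))
        den = h2[2] ** 2 - h1[2] ** 2           # constraint |r1| = |r2|
        if abs(den) > 1e-12:
            f2.append((h1[0]**2 + h1[1]**2 - h2[0]**2 - h2[1]**2) / den)
        return float(np.sqrt(np.mean([v for v in f2 if v > 0])))

    # Synthetic check with a known camera of focal length 800.
    f = 800.0
    ca, sa, cb, sb = np.cos(0.3), np.sin(0.3), np.cos(0.4), np.sin(0.4)
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    R = Ry @ Rz
    K = np.diag([f, f, 1.0])
    H = K @ np.column_stack([R[:, 0], R[:, 1], [0.2, 0.1, 3.0]])
    f_est = focal_from_homography(H)
    ```

    Once f is known, the camera position and orientation follow from decomposing K⁻¹H back into [r1 r2 t].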

    3D object reconstruction using computer vision : reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, usually an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker's location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments.
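    The adaptive-ROI idea can be reduced to a minimal sketch: locate the marker's centroid inside a window and re-center the window on the result, so the search region follows the structure's motion between frames. The threshold, window size, and bright-marker assumption below are hypothetical simplifications; the paper's actual algorithm is considerably more robust:

    ```python
    import numpy as np

    def track_marker(frame, roi_center, half=20, thresh=0.5):
        """Find a bright marker's centroid inside a region of interest and
        return the updated center; the ROI is re-centered on the marker so
        it follows the motion from frame to frame (hypothetical sketch)."""
        r, c = roi_center
        r0, c0 = max(r - half, 0), max(c - half, 0)
        roi = frame[r0:r + half, c0:c + half]
        ys, xs = np.nonzero(roi > thresh)      # pixels above the threshold
        if ys.size == 0:
            return roi_center                  # marker lost: keep last ROI
        return (r0 + int(round(ys.mean())), c0 + int(round(xs.mean())))

    # Toy frame: a 3x3 bright marker centered at row 60, column 55.
    frame = np.zeros((100, 100))
    frame[59:62, 54:57] = 1.0
    center = track_marker(frame, (58, 53))
    ```

    Converting the tracked pixel motion to physical displacement then only requires a scale factor obtained from the known marker size.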

    Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU

    Ever since the Kinect brought low-cost depth cameras into the consumer market, great interest has been invigorated in Red-Green-Blue-Depth (RGBD) sensors. Without calibration, an RGBD camera's horizontal and vertical field of view (FoV) can be used to generate a 3D reconstruction in camera space naturally on a graphics processing unit (GPU); this reconstruction, however, is badly deformed by lens distortions and imperfect depth resolution (depth distortion). Camera calibration based on a pinhole-camera model and a high-order distortion-removal model requires many calculations in the fragment shader. In order to remove both the lens distortion and the depth distortion while still performing only simple calculations in the GPU fragment shader, a novel per-pixel calibration method with look-up-table-based 3D reconstruction in real time is proposed, using a rail calibration system. This rail calibration system makes it possible to collect a practically unlimited number of densely distributed calibration points covering all pixels in a sensor, such that not only lens distortions but also depth distortion can be handled by a per-pixel D-to-ZW mapping. Instead of the traditional pinhole camera model, two polynomial mapping models are employed. One is a two-dimensional high-order polynomial mapping from the pixel coordinates (R, C) to XW and YW respectively, which handles lens distortions; the other is a per-pixel linear mapping from D to ZW, which handles depth distortion. With only six parameters and three linear equations in the fragment shader, the undistorted 3D world coordinates (XW, YW, ZW) for every single pixel can be generated in real time. The per-pixel calibration method can be applied universally to any RGBD camera. With the alignment of RGB values using a pinhole camera matrix, it can even work on a combination of an arbitrary depth sensor and an arbitrary RGB sensor.
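    The per-pixel linear D-to-ZW mapping can be sketched as a least-squares fit: at each rail position the flat target's true distance is known, so every pixel accumulates (raw depth, true depth) samples from which a slope and intercept are fitted independently. The synthetic per-pixel coefficients below are assumptions for illustration only:

    ```python
    import numpy as np

    def fit_depth_to_z(raw_depth, rail_z):
        """Per-pixel linear mapping ZW = a*D + b fitted by least squares.
        raw_depth: (n_positions, H, W) raw sensor depths captured on a rail;
        rail_z:    (n_positions,) ground-truth distances of the flat target."""
        Z = rail_z[:, None, None]
        Dm = raw_depth.mean(axis=0)            # per-pixel mean raw depth
        Zm = rail_z.mean()
        a = (((raw_depth - Dm) * (Z - Zm)).sum(0)
             / ((raw_depth - Dm) ** 2).sum(0)) # per-pixel slope
        b = Zm - a * Dm                        # per-pixel intercept
        return a, b

    # Synthetic 2x2 sensor whose depth distortion varies per pixel.
    a_true = np.array([[1.00, 1.10], [0.90, 1.05]])
    b_true = np.array([[0.02, -0.01], [0.03, 0.00]])
    rail_z = np.array([1.0, 2.0, 3.0])
    raw = (rail_z[:, None, None] - b_true) / a_true   # invert ZW = a*D + b
    a_est, b_est = fit_depth_to_z(raw, rail_z)
    ```

    The fitted (a, b) pairs form the per-pixel look-up table: at run time the shader evaluates one linear equation per coordinate instead of a high-order undistortion model.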

    Analysis of camera pose estimation using 2D scene features for augmented reality applications

    Augmented reality (AR) has recently made a huge impact on field engineers and workers in the construction industry, as well as the way they interact with architectural plans. AR brings a superimposition of the 3D model of a building onto the 2D image not only as the big picture, but also as an intricate representation of what is going to be built. In order to insert a 3D model, the camera has to be localized with respect to its surroundings. Camera localization consists of finding the exterior parameters of the camera (i.e. its position and orientation) with respect to the viewed scene and its characteristics. In this thesis, camera pose estimation methods using circle-ellipse and straight-line correspondences are investigated. Circles and lines are two of the geometric features most commonly present in structures and buildings. Based on the relationship between the 3D features and their corresponding 2D data detected in the image, the position and orientation of the camera are estimated.
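    A line correspondence can be turned into a pose cost in a simple way: project the endpoints of the known 3D line and measure their distance to the detected 2D image line; a correct pose drives this residual to zero. This is a generic illustrative sketch, not the thesis's actual formulation, and all names below are hypothetical:

    ```python
    import numpy as np

    def line_residual(p3d_a, p3d_b, image_line, K, R, t):
        """Distance from the projections of a 3D line's endpoints to the
        detected 2D image line a*u + b*v + c = 0; a pose (R, t) consistent
        with the correspondence drives this residual to zero."""
        a, b, c = image_line / np.hypot(image_line[0], image_line[1])
        res = 0.0
        for P in (p3d_a, p3d_b):
            q = K @ (R @ P + t)                  # project the endpoint
            u, v = q[:2] / q[2]
            res += abs(a * u + b * v + c)        # point-to-line distance
        return res

    # Toy check: a horizontal 3D edge seen by a camera at the correct pose.
    K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
    edge_a, edge_b = np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0])
    line = np.array([0.0, 1.0, -240.0])          # detected image line v = 240
    res = line_residual(edge_a, edge_b, line, K, np.eye(3), np.zeros(3))
    ```

    Summing such residuals over several line (and circle-ellipse) correspondences yields a cost that a nonlinear optimizer can minimize over the six pose parameters.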