
    Automatic multi-camera hand-eye calibration for robotic workcells

    Human-robot collaboration (HRC) is an increasingly successful research field, widely investigated for several industrial tasks. Collaborative robots can physically interact with humans in a shared environment while guaranteeing a high level of human safety throughout the working process. This can be achieved through a vision system with a single camera or a multi-camera setup, which provides the manipulator with essential information about the surrounding workspace and human behavior, enabling collision avoidance with objects and human operators. However, a reliable hand-eye calibration is needed to guarantee human safety and a working system in which the robot arm is aware of its surroundings and can monitor operator motions. A multi-camera hand-eye calibration provides a further improvement for a truly safe human-robot collaboration scenario: it improves human safety and gives the robot a greater ability to avoid collisions, since additional sensors ensure a constant and more reliable view of the robot arm and its whole workspace. This thesis focuses on the development of an automatic multi-camera calibration method for robotic workcells that guarantees high human safety and an accurate working system. The proposed method has two main properties. First, it is automatic: it exploits the robot arm, with a planar target attached to its end-effector, to accomplish the image acquisition phase necessary for the calibration, which is generally carried out with manual procedures. This removes inaccurate human intervention as much as possible and speeds up the whole calibration process. Second, our approach enables the calibration of a multi-camera system suitable for robotic workcells larger than those commonly considered in the literature. The method was tested through several experiments with the Franka Emika Panda robot arm and with different sensors (Microsoft Kinect V2, Intel RealSense depth camera D455, and Intel RealSense LiDAR camera L515) in order to prove its flexibility and to determine which hardware devices achieve the highest calibration accuracy. Accurate results are obtained even in large robotic workcells where cameras are placed at a distance of 3 m from the robot arm, with a reprojection error below 1 pixel, whereas state-of-the-art methods cannot even guarantee a proper calibration at such distances. Moreover, our method is compared against other single- and multi-camera calibration techniques, which mainly address the calibration between a single camera and the robot arm, and achieves the highest accuracy among the methods found in the literature.
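    The eye-to-hand geometry described above (fixed cameras observing a planar target on the robot's end-effector) can be sketched with OpenCV's stock hand-eye solver. The snippet below is a minimal illustration under assumed inputs, not the thesis's implementation: the pose lists, function name, and the choice of Tsai's solver are all assumptions.

```python
# Minimal eye-to-hand sketch using OpenCV's generic hand-eye solver.
# Assumed inputs (not from the thesis): T_base_gripper[i] are 4x4 robot
# poses from forward kinematics; T_cam_target[i] are 4x4 poses of the
# planar target detected in the i-th image of one fixed camera.
import numpy as np
import cv2

def calibrate_fixed_camera(T_base_gripper, T_cam_target):
    """Estimate the pose of one fixed camera in the robot base frame."""
    # For the eye-to-hand case, OpenCV's calibrateHandEye is fed the
    # inverted robot poses (base->gripper becomes gripper->base); the
    # result is then the camera pose in the base frame.
    R_base2gripper, t_base2gripper = [], []
    for T in T_base_gripper:
        T_inv = np.linalg.inv(T)
        R_base2gripper.append(T_inv[:3, :3])
        t_base2gripper.append(T_inv[:3, 3])
    R_target2cam = [T[:3, :3] for T in T_cam_target]
    t_target2cam = [T[:3, 3] for T in T_cam_target]
    R, t = cv2.calibrateHandEye(R_base2gripper, t_base2gripper,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T_base_cam = np.eye(4)
    T_base_cam[:3, :3], T_base_cam[:3, 3] = R, t.ravel()
    return T_base_cam
```

    Running this once per fixed camera references every sensor to the common robot base frame, which is what makes the multi-camera fusion described in the abstract possible.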

    Accurate Calibration Scheme for a Multi-Camera Mobile Mapping System

    Mobile mapping systems (MMS) are increasingly used for many photogrammetric and computer vision applications, encouraged especially by their fast and accurate geospatial data generation. The accuracy of point positioning in an MMS depends mainly on the quality of calibration, the accuracy of sensor synchronization, the accuracy of georeferencing, and the stability of the geometric configuration of space intersections. In this study, we focus on multi-camera calibration (interior and relative orientation parameter estimation) and MMS calibration (mounting parameter estimation). The objective was to develop a practical scheme for rigorous and accurate system calibration of a photogrammetric mapping station equipped with a multi-projective camera (MPC), a global navigation satellite system (GNSS), and an inertial measurement unit (IMU) for direct georeferencing. The proposed technique comprises two steps. First, the interior orientation parameters of each individual camera in the MPC and the relative orientation parameters of each camera with respect to the first camera are estimated. In the second step, the offset and misalignment between the MPC and the GNSS/IMU are estimated. The global accuracy of the proposed method was assessed using independent check points, and a correspondence map for a panorama that provides metric information is introduced. Our results highlight that the proposed calibration scheme reaches centimeter-level global accuracy for 3D point positioning, which demonstrates the feasibility of the proposed technique and its potential for accurate mapping applications.
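    For context, the mounting parameters estimated in the second step are exactly the lever-arm offset and boresight misalignment that appear in the standard direct-georeferencing relation, written here in common textbook notation (the symbols are assumed, not taken from the paper):

$$
\mathbf{r}^{m}_{P} \;=\; \mathbf{r}^{m}_{\mathrm{GNSS/IMU}}(t) \;+\; \mathbf{R}^{m}_{b}(t)\,\bigl(s\,\mathbf{R}^{b}_{c}\,\mathbf{r}^{c}_{p} \;+\; \mathbf{a}^{b}\bigr)
$$

    Here $m$, $b$, and $c$ denote the mapping, IMU body, and camera frames; $\mathbf{R}^{b}_{c}$ (boresight misalignment) and $\mathbf{a}^{b}$ (lever-arm offset) are the mounting parameters, and $s$ scales the image ray $\mathbf{r}^{c}_{p}$ to the object point $P$.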

    3D RECONSTRUCTION USING MULTI-VIEW IMAGING SYSTEM

    This thesis presents a new system that reconstructs a 3D representation of dental casts. To maintain the integrity of the 3D representation, a standard model is built to cover the blind spots that the camera cannot reach. The standard model is obtained by scanning a real human mouth model with a laser scanner and is then simplified by an algorithm based on iterative contraction of vertex pairs. Curvature information is obtained from the simplified standard model using a local parametrization method. The system uses a digital camera with a square-tube mirror in front of it to capture multi-view images; the mirror is made of stainless steel to avoid double reflections. The reflected areas of the image are treated as images taken by virtual cameras, so only one camera calibration is needed, since the virtual cameras share the intrinsic parameters of the real camera. Depth is computed by a simple and accurate geometry-based method once the corresponding points are identified. Correspondences are selected using a feature-point-based stereo matching process that combines fast normalized cross-correlation and simulated annealing.
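    As a sketch of the fast normalized cross-correlation step of such a matcher, the snippet below scores candidate correspondences with OpenCV's template matching; the window size, search range, and function names are illustrative assumptions, and the simulated-annealing selection stage is omitted.

```python
# Score candidate stereo matches by normalized cross-correlation.
# Assumes grayscale uint8 images and an interior feature location.
import cv2
import numpy as np

def ncc_match(ref_img, tgt_img, pt, win=11, search=64):
    """Find the best horizontal NCC match in tgt_img for the feature
    at pt=(x, y) in ref_img; returns (matched x, NCC score)."""
    x, y = pt
    h = win // 2
    template = ref_img[y - h:y + h + 1, x - h:x + h + 1]
    x0 = max(h, x - search)
    x1 = min(tgt_img.shape[1] - h, x + search)
    band = tgt_img[y - h:y + h + 1, x0 - h:x1 + h]
    # TM_CCOEFF_NORMED is OpenCV's fast normalized cross-correlation.
    scores = cv2.matchTemplate(band, template, cv2.TM_CCOEFF_NORMED)
    best = int(np.argmax(scores))
    return x0 + best, float(scores[0, best])
```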

    Extrinsic Calibration and Ego-Motion Estimation for Mobile Multi-Sensor Systems

    Autonomous robots and vehicles are often equipped with multiple sensors to perform vital tasks such as localization or mapping. The joint system of various sensors with different sensing modalities can often provide better localization or mapping results than any individual sensor alone, in terms of accuracy or completeness. However, to enable improved performance, two important challenges have to be addressed when dealing with multi-sensor systems. First, how can the spatial relationship between the individual sensors on the robot be determined accurately? This is a vital task known as extrinsic calibration; without this calibration information, measurements from different sensors cannot be fused. Second, how can data from multiple sensors be combined to correct for the deficiencies of each sensor and thus provide better estimates? This is another important task, known as data fusion. The core of this thesis is to provide answers to these two questions. The first part of the thesis covers aspects related to improving extrinsic calibration accuracy, and the second part presents novel data fusion algorithms designed to address the ego-motion estimation problem using data from a laser scanner and a monocular camera. In the extrinsic calibration part, we reveal and quantify the relative calibration accuracies of three common types of calibration methods, so as to offer insight into choosing the best calibration method when multiple options are available. Following that, we propose an optimization approach for solving common motion-based calibration problems. By exploiting the Gauss-Helmert model, our approach is more accurate and robust than the classical least-squares model. In the data fusion part, we focus on camera-laser data fusion and contribute two new ego-motion estimation algorithms that combine complementary information from a laser scanner and a monocular camera. The first algorithm utilizes camera image information to guide the laser scan-matching; it provides accurate motion estimates and yet works in general conditions, requiring neither a field-of-view overlap between the camera and laser scanner nor an initial guess of the motion parameters. The second algorithm combines the camera and laser scanner information in a direct way, assuming the field-of-view overlap between the sensors is substantial. By maximizing the information usage of both the sparse laser point cloud and the dense image, the second algorithm achieves state-of-the-art estimation accuracy. Experimental results confirm that both algorithms offer excellent alternatives to state-of-the-art camera-laser ego-motion estimation algorithms.
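    The motion-based calibration problems mentioned here follow the standard hand-eye formulation (generic notation, not the thesis's own): if $\mathbf{A}_i$ and $\mathbf{B}_i$ are the $i$-th relative motions measured by the two sensors, the unknown extrinsic transform $\mathbf{X}$ satisfies

$$
\mathbf{A}_i\,\mathbf{X} \;=\; \mathbf{X}\,\mathbf{B}_i
\quad\Longrightarrow\quad
\mathbf{R}_{A_i}\,\mathbf{R} \;=\; \mathbf{R}\,\mathbf{R}_{B_i},
\qquad
(\mathbf{R}_{A_i}-\mathbf{I})\,\mathbf{t} \;=\; \mathbf{R}\,\mathbf{t}_{B_i}-\mathbf{t}_{A_i}
$$

    where $\mathbf{X}=(\mathbf{R},\mathbf{t})$. The Gauss-Helmert model treats the measured motions $\mathbf{A}_i$ and $\mathbf{B}_i$ themselves as noisy observations subject to this implicit constraint, rather than fixing them as error-free inputs as in the classical least-squares formulation.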

    Generalising the ideal pinhole model to multi-pupil imaging for depth recovery

    This thesis investigates the applicability of computer vision camera models in recovering depth information from images, and presents a novel camera model incorporating a modified pupil plane capable of performing this task accurately from a single image. Standard models, such as the ideal pinhole, suffer a loss of depth information when projecting from the world to an image plane. Recovery of this data enables reconstruction of the original scene as well as object and 3D motion reconstruction. The major contributions of this thesis are the complete characterisation of the ideal pinhole model calibration and the development of a new multi-pupil imaging model which enables depth recovery. A comprehensive analysis of the calibration sensitivity of the ideal pinhole model is presented along with a novel method of capturing calibration images which avoids singularities in image space. Experimentation reveals a higher degree of accuracy using the new calibration images. A novel camera model employing multiple pupils is proposed which, in contrast to the ideal pinhole model, recovers scene depth. The accuracy of the multi-pupil model is demonstrated and validated through rigorous experimentation. An integral property of any camera model is the location of its pupil. Accordingly, the new model is expanded by generalising the location of the multi-pupil plane, thus enabling superior flexibility over traditional camera models, which are confined to positioning the pupil plane to negate particular aberrations in the lens. A key step in the development of the multi-pupil model is the treatment of optical aberrations in the imaging system. The unconstrained location and configuration of the pupil plane enables the determination of optical distortions in the multi-pupil imaging model. A calibration algorithm is proposed which corrects for the optical aberrations. This allows the multi-pupil model to be applied to a multitude of imaging systems regardless of the optical quality of the lens. Experimentation validates the multi-pupil model's accuracy in accounting for the aberrations and estimating accurate depth information from a single image. Results for object reconstruction are presented, establishing the capabilities of the proposed multi-pupil imaging model.
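    As a simplified intuition for why multiple pupils recover depth (an illustrative reduction, not the thesis's general model), two pupils separated by a baseline $b$ in the pupil plane behave like a stereo pair sharing one sensor:

$$
d \;=\; \frac{f\,b}{Z} \quad\Longrightarrow\quad Z \;=\; \frac{f\,b}{d}
$$

    where $f$ is the distance from the pupil plane to the image plane, $d$ is the disparity between the two sub-images of a scene point, and $Z$ is its depth. Generalising the location and configuration of the pupil plane, as the thesis does, changes the coefficients of this relation but preserves the invertible depth-disparity mapping.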

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction.
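    As an indication of what the intrinsic-calibration stage of a fisheye pipeline involves, the sketch below calibrates a single fisheye camera with OpenCV's equidistant fisheye model. It is a generic illustration with an assumed chessboard target, board size, and flags, not the V-Charge project's own code.

```python
# Generic single-camera fisheye intrinsic calibration sketch.
# Assumes a list of grayscale views of a planar chessboard.
import cv2
import numpy as np

def calibrate_fisheye(gray_images, board=(9, 6), square=0.04):
    """Return RMS reprojection error, camera matrix K, distortion D."""
    # 3D chessboard corners in the board frame, one copy per used view.
    objp = np.zeros((1, board[0] * board[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))  # equidistant model: four distortion coefficients
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        obj_pts, img_pts, gray_images[0].shape[::-1], K, D, flags=flags)
    return rms, K, D
```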

    A mask-based approach for the geometric calibration of thermal-infrared cameras

    Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
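    The region-detection step can be illustrated with OpenCV's MSER implementation; the snippet below is a minimal sketch that returns region centroids as candidate calibration points, while the paper's clustering, filtering, and point-ordering stages are omitted.

```python
# Minimal sketch of detecting high-contrast calibration blobs with
# OpenCV's maximally stable extremal region (MSER) detector.
import cv2
import numpy as np

def mser_centroids(gray):
    """Return one (x, y) centroid per detected extremal region as
    candidate calibration points in a thermal (or visible) image."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)  # lists of pixel coordinates
    return np.array([r.mean(axis=0) for r in regions])
```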