
    Efficient generic calibration method for general cameras with single centre of projection

    Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality, and it is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. The improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm are acquired using active grids, the performance of which is characterised. A new linear estimation stage for the generic algorithm, incorporating classical pinhole calibration techniques, is proposed and shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data from a hyperboloidal catadioptric sensor for both the standard and proposed methods. The results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
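    To make the classical pinhole machinery that such a linear estimation stage draws on concrete, the Direct Linear Transform (DLT) is the textbook example; the following is a minimal sketch under that assumption, not the paper's algorithm, with an illustrative function name and interface.

        # Minimal DLT sketch (illustrative, not the paper's method): estimate a 3x4
        # pinhole projection matrix P from n >= 6 world-to-image correspondences.
        import numpy as np

        def dlt_projection_matrix(world_pts, image_pts):
            """world_pts: (n, 3) array of 3D points; image_pts: (n, 2) array of pixels."""
            A = []
            for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
                A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
                A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
            # The projection matrix (up to scale) is the right singular vector
            # associated with the smallest singular value of A.
            _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
            return Vt[-1].reshape(3, 4)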

    Autocalibration with the Minimum Number of Cameras with Known Pixel Shape

    In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount, since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without using a calibration device, by enforcing simple constraints on the camera parameters. In the absence of information about internal camera parameters such as the focal length and the principal point, knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of autocalibrating a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. For this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.
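    As a point of reference for the pixel-shape constraint (a standard fact restated here, not the SLCV machinery itself): a known pixel shape fixes the skew and aspect ratio of each camera's intrinsic matrix, so after rescaling the image axes every view can be modelled with square pixels, leaving only a focal length and a principal point free per view,

        K_i \;=\; \begin{pmatrix} f_i & 0 & u_{0,i} \\ 0 & f_i & v_{0,i} \\ 0 & 0 & 1 \end{pmatrix},
        \qquad i = 1, \dots, n,

    with f_i and (u_{0,i}, v_{0,i}) allowed to vary arbitrarily between views.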

    A framework for forensic face recognition based on recognition performance calibrated for the quality of image pairs

    It has recently been shown that the performance of a face recognition system depends on the quality of both face images participating in the recognition process: the reference image and the test image. In the context of forensic face recognition, this observation has two implications: a) the quality of the trace (extracted from CCTV footage) constrains the performance achievable with a particular face recognition system; b) the quality of the suspect reference set (against which the trace is matched) can be judiciously chosen to approach optimal recognition performance under such a constraint. Motivated by these findings, we propose a framework for forensic face recognition that is based on calibrating the recognition performance for the quality of pairs of images. The application of this framework to several mock-up forensic cases, created entirely from the MultiPIE dataset, shows that optimal recognition performance, under such a constraint, can be achieved by matching the quality (pose, illumination, and imaging device) of the reference set to that of the trace. This improvement in recognition performance helps reduce the rate of misleading interpretation of the evidence.
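    One common way to operationalise quality-conditioned calibration (a sketch of the general idea, not necessarily the authors' exact procedure; the names and interfaces below are illustrative) is to fit a separate score-to-log-likelihood-ratio mapping per quality condition, for example with logistic regression:

        # Illustrative sketch: per-quality-condition score calibration with logistic
        # regression (assumes scikit-learn; the quality descriptors are made up).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_calibrators(scores, labels, quality_pairs):
            """scores: raw comparison scores; labels: 1 = same person, 0 = different;
            quality_pairs: hashable (trace quality, reference quality) descriptors."""
            calibrators = {}
            for q in set(quality_pairs):
                idx = [i for i, qp in enumerate(quality_pairs) if qp == q]
                clf = LogisticRegression()
                clf.fit(np.asarray(scores)[idx].reshape(-1, 1), np.asarray(labels)[idx])
                calibrators[q] = clf
            return calibrators

        def calibrated_log_odds(calibrators, score, quality_pair):
            # Log-posterior-odds from the fitted sigmoid; a log-likelihood ratio
            # follows after subtracting the prior log-odds of the training set.
            p = calibrators[quality_pair].predict_proba([[score]])[0, 1]
            return float(np.log(p / (1.0 - p)))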

    Towards dynamic camera calibration for constrained flexible mirror imaging

    Flexible mirror imaging systems, consisting of a perspective camera viewing a scene reflected in a flexible mirror, can provide direct control over image field-of-view and resolution. However, calibration of such systems is difficult due to the vast range of possible mirror shapes and the flexible nature of the system. This paper proposes the fundamentals of a dynamic calibration approach for flexible mirror imaging systems by examining the constrained case of single-dimensional flexing. The calibration process consists of an initial primary calibration stage followed by in-service dynamic calibration. Dynamic calibration uses a linear approximation to initialise a non-linear minimisation step, the result of which is an estimate of the mirror surface shape. The method is easier to implement than existing calibration methods for flexible mirror imagers, requiring only two images of a calibration grid for each dynamic calibration update. Experimental results with both simulated and real data demonstrate the capabilities of the proposed approach.
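    The initialise-then-refine pattern described above can be sketched as follows (an illustrative outline only, assuming a polynomial 1-D mirror profile; the paper's actual mirror model, parameterisation, and residuals differ):

        # Illustrative sketch: linear least-squares initialisation of a 1-D mirror
        # profile z(x) as a polynomial, followed by non-linear refinement against a
        # user-supplied reprojection residual (hypothetical closure over the two
        # calibration-grid images).
        import numpy as np
        from scipy.optimize import least_squares

        def linear_init(x_samples, z_samples, degree=4):
            # Ordinary least squares on the polynomial basis: the "linear approximation".
            A = np.vander(np.asarray(x_samples, dtype=float), degree + 1)
            coeffs, *_ = np.linalg.lstsq(A, np.asarray(z_samples, dtype=float), rcond=None)
            return coeffs

        def refine(coeffs0, reprojection_residuals):
            # reprojection_residuals(coeffs) -> flat array of image-space errors.
            result = least_squares(reprojection_residuals, coeffs0)
            return result.x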

    New instruments and technologies for Cultural Heritage survey: full integration between point clouds and digital photogrammetry

    In recent years the Geomatic Research Group of the Politecnico di Torino has addressed new research topics concerning instruments for point cloud generation (e.g. Time-of-Flight cameras) and the tight integration of multi-image matching techniques with 3D point cloud information, in order to resolve the ambiguities of existing matching algorithms. ToF cameras can be a good low-cost alternative to LiDAR instruments for the generation of precise and accurate point clouds: their application range is still limited, but in the near future they will be able to satisfy most Cultural Heritage metric survey requirements. On the other hand, multi-image matching techniques, properly and deeply integrated with point cloud information, can provide a solution for an "intelligent" survey of the geometric break-lines of an object, which are the proper starting point for a complete survey. These two research topics are closely connected to a modern approach to Cultural Heritage 3D survey. In this paper, after a short analysis of the results achieved, a possible alternative scenario for the development of the metric survey approach within the wider topic of Cultural Heritage documentation is reported.

    A LabVIEW® based generic CT scanner control software platform

    UGCT, the Centre for X-ray tomography at Ghent University (Belgium), does research on X-ray tomography and its applications. This includes the development and construction of state-of-the-art CT scanners for scientific research. Because these scanners are built for very different purposes, they differ considerably in their physical implementations; however, they all share the same core functionality. In this context a generic software platform was developed using LabVIEW® in order to provide the same interface and functionality on all scanners. This article describes the concept and features of this software, and its potential for tomography in a research setting. The core concept is to rigorously separate the abstract operation of a CT scanner from its actual physical configuration. This separation is achieved by implementing a sender-listener architecture. The advantages are that the resulting software platform is generic, scalable, highly efficient, easy to develop and extend, and deployable on future scanners with minimal effort.
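    The sender-listener separation can be pictured with a small publish-subscribe sketch (written in Python rather than LabVIEW® and purely conceptual; the topic name and callback are invented for illustration):

        # Conceptual sketch of the sender-listener idea: generic scan logic publishes
        # commands on a bus, and whatever hardware modules exist on a given scanner
        # subscribe only to the commands they can handle.
        from collections import defaultdict

        class MessageBus:
            def __init__(self):
                self._listeners = defaultdict(list)

            def subscribe(self, topic, callback):
                self._listeners[topic].append(callback)

            def send(self, topic, **payload):
                for callback in self._listeners[topic]:
                    callback(**payload)

        bus = MessageBus()
        # A scanner-specific hardware module registers for a generic command...
        bus.subscribe("stage.rotate", lambda angle_deg: print(f"rotate stage to {angle_deg} deg"))
        # ...while the abstract scan routine only ever talks to the bus.
        bus.send("stage.rotate", angle_deg=1.5)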

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation is ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
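    Schematically (the notation below is assumed for illustration, not quoted from the paper), a cost of the kind described sums weighted visual and inertial error terms over the keyframe window:

        J(\mathbf{x}) \;=\;
        \underbrace{\sum_{i}\sum_{k}\sum_{j \in \mathcal{J}(i,k)}
            {\mathbf{e}_{r}^{i,j,k}}^{\top}\, \mathbf{W}_{r}^{i,j,k}\, \mathbf{e}_{r}^{i,j,k}}_{\text{landmark reprojection errors}}
        \;+\;
        \underbrace{\sum_{k}
            {\mathbf{e}_{s}^{k}}^{\top}\, \mathbf{W}_{s}^{k}\, \mathbf{e}_{s}^{k}}_{\text{inertial terms}}

    with i indexing cameras, k keyframes, and j the landmarks visible in camera i at keyframe k, and with the weight matrices W taken as the inverses of the corresponding error covariances; marginalizing out old keyframes keeps the window, and hence the problem size, bounded.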

    Control Software for the SST-1M Small-Size Telescope prototype for the Cherenkov Telescope Array

    The SST-1M is a 4-m Davies-Cotton atmospheric Cherenkov telescope optimized to provide gamma-ray sensitivity above a few TeV. The SST-1M is proposed as part of the Small-Size Telescope array for the Cherenkov Telescope Array (CTA); the first prototype has already been deployed. The SST-1M control software for all subsystems (active mirror control, drive system, safety system, photo-detection plane, DigiCam, CCD cameras) and for the telescope as a whole (master controller) uses the standard software design proposed for all CTA telescopes, based on the ALMA Common Software (ACS) developed to control the Atacama Large Millimeter Array (ALMA). Each subsystem is represented by a separate ACS component, which handles the communication with and the operation of the subsystem. Interfacing with the actual hardware is performed via the OPC UA communication protocol, supported either natively by dedicated industrial-standard servers (PLCs) or by separate service applications developed to wrap lower-level protocols (e.g. CAN bus, camera slow control) into OPC UA. Early operations of the telescope without the camera have already been carried out. The camera is fully assembled and is capable of performing data acquisition using an artificial light source.
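    As an illustration of the OPC UA interfacing pattern (a minimal sketch assuming the FreeOpcUa python-opcua package; the endpoint URL and node identifier are invented and do not correspond to the actual SST-1M servers):

        # Illustrative OPC UA client sketch: read one subsystem variable from a PLC.
        from opcua import Client

        client = Client("opc.tcp://drive-system.local:4840")  # hypothetical endpoint
        client.connect()
        try:
            # Hypothetical node id for a telescope azimuth variable exposed by the PLC.
            azimuth_node = client.get_node("ns=2;s=Drive.Azimuth")
            print("azimuth [deg]:", azimuth_node.get_value())
        finally:
            client.disconnect()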