
    UnRectDepthNet: Self-Supervised Monocular Depth Estimation using a Generic Framework for Handling Common Camera Distortion Models

    In classical computer vision, rectification is an integral part of multi-view depth estimation. It typically includes epipolar rectification and lens distortion correction. This process simplifies depth estimation significantly, and it has therefore been adopted in CNN approaches as well. However, rectification has several side effects, including a reduced field of view (FOV), resampling distortion, and sensitivity to calibration errors. These effects are particularly pronounced in the case of significant distortion (e.g., wide-angle fisheye cameras). In this paper, we propose a generic scale-aware self-supervised pipeline for estimating depth, Euclidean distance, and visual odometry from unrectified monocular videos. On the unrectified KITTI dataset with barrel distortion, we demonstrate a level of precision comparable to that obtained on the rectified KITTI dataset. The intuition is that the rectification step can be implicitly absorbed within the CNN model, which learns the distortion model without increased complexity. Our approach does not suffer from a reduced field of view and avoids the computational cost of rectification at inference time. To further illustrate the general applicability of the proposed framework, we apply it to wide-angle fisheye cameras with a 190° horizontal field of view. The training framework, UnRectDepthNet, takes the camera distortion model as an argument and adapts its projection and unprojection functions accordingly. The proposed algorithm is evaluated further on the rectified KITTI dataset, where we achieve state-of-the-art results that improve upon our previous work FisheyeDistanceNet. Qualitative results on a distorted test scene video sequence indicate excellent performance: https://youtu.be/K6pbx3bU4Ss.
    Comment: Minor fixes added after the IROS 2020 camera-ready submission. IROS 2020 presentation video: https://www.youtube.com/watch?v=3Br2KSWZRr
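    The abstract notes that UnRectDepthNet takes the camera distortion model as an argument and adapts its projection and unprojection functions accordingly. A minimal sketch of that idea, with hypothetical function and parameter names (the paper's actual interface is not shown in the abstract):

```python
import math

def project(point_3d, fx, cx, cy, model="pinhole"):
    """Project a 3D camera-frame point to pixel coordinates.

    The distortion model is passed as an argument, mirroring the idea of a
    generic pipeline; only two illustrative models are shown here.
    """
    X, Y, Z = point_3d
    if model == "pinhole":
        # Standard perspective projection.
        u = fx * X / Z + cx
        v = fx * Y / Z + cy
    elif model == "equidistant":
        # Equidistant fisheye: radial image distance is proportional to
        # the incidence angle theta rather than tan(theta).
        r = math.hypot(X, Y)
        theta = math.atan2(r, Z)
        scale = fx * theta / r if r > 0 else 0.0
        u = scale * X + cx
        v = scale * Y + cy
    else:
        raise ValueError(f"unknown camera model: {model}")
    return u, v
```

    The same dispatch would apply to the unprojection (pixel-to-ray) direction, so that the rest of the pipeline is unchanged when the camera model is swapped.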

    A Full Scale Camera Calibration Technique with Automatic Model Selection – Extension and Validation

    This thesis presents work on the testing and development of a complete camera calibration approach that can be applied to a wide range of cameras equipped with normal, wide-angle, fisheye, or telephoto lenses. The full-scale calibration approach estimates all of the intrinsic and extrinsic parameters. The calibration procedure is simple, requires no prior knowledge of any parameters, and uses a simple planar calibration pattern. Closed-form estimates of the intrinsic and extrinsic parameters are computed, followed by nonlinear optimization. Polynomial functions are used to describe the lens projection instead of the commonly used radial model, and statistical information criteria are used to automatically determine the complexity of the lens distortion model.

    In the first stage, experiments were performed on a wide range of lenses to verify and compare the performance of the calibration method. Synthetic data was used to simulate real data and validate the performance, as well as to validate the distortion model selection, which uses the Akaike Information Criterion (AIC) to automatically select the complexity of the distortion model.

    In the second stage, an improved calibration procedure was developed to address shortcomings of the earlier method. Experiments on the previous method revealed that the estimation of the principal point during calibration was erroneous for lenses with a large focal length. To address this issue, the calibration method was modified to include additional steps that accurately estimate the principal point in the initial stages of the procedure. The modified procedure can now be used to calibrate a wide spectrum of imaging systems, including telephoto and varifocal lenses.

    A survey of current work revealed a large body of research concentrating on calibrating only the distortion of the camera; these methods estimate the distortion parameters alone and rely on other popular methods to find the remaining camera parameters. Following this methodology, we apply distortion calibration to our method to separate the estimation of the distortion parameters, and we compare the results with the original method on a wide range of imaging systems.
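    The automatic model selection described above can be illustrated with a small sketch: fit radial polynomials of increasing complexity by least squares and keep the model with the lowest AIC. The function name and the odd-power parameterization are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def select_distortion_degree(theta, r_obs, max_terms=4):
    """Choose the number of terms in a radial lens model
    r(theta) = k1*theta + k2*theta^3 + ... by the Akaike Information
    Criterion (AIC), illustrating automatic model-complexity selection.
    """
    n = len(theta)
    best = None
    for k in range(1, max_terms + 1):
        # Design matrix with odd powers of theta (common for lens models).
        A = np.column_stack([theta ** (2 * j + 1) for j in range(k)])
        coeffs, *_ = np.linalg.lstsq(A, r_obs, rcond=None)
        rss = float(np.sum((A @ coeffs - r_obs) ** 2))
        # AIC = n*ln(RSS/n) + 2k; the clamp avoids log(0) on exact fits.
        aic = n * np.log(max(rss, 1e-12) / n) + 2 * k
        if best is None or aic < best[0]:
            best = (aic, k, coeffs)
    return best[1], best[2]
```

    The 2k penalty term rejects extra polynomial terms that do not reduce the residual enough to justify the added complexity.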

    The Geometry and Usage of the Supplementary Fisheye Lenses in Smartphones

    Nowadays, mobile phones are more than devices that merely satisfy the need for communication between people. Supplementary fisheye lenses for mobile phones are lightweight and easy to use, which makes them attractive. Beyond this practical advantage, this study examines whether a fisheye lens and mobile phone combination can be used photogrammetrically and, if so, with what results. Fisheye lens equipment used with mobile phones was tested in this study. For this purpose, the standard calibrations of the ‘Olloclip 3 in one’ fisheye lens used with an iPhone 4S and of the ‘Nikon FC‐E9’ fisheye lens used with a Nikon Coolpix8700 were compared based on the equidistant model. This experimental study shows that the ‘Olloclip 3 in one’ fisheye lens developed for mobile phones has at least similar characteristics to classic fisheye lenses. The dimensions of fisheye lenses used with smartphones are getting smaller and their prices are falling. Moreover, as verified in this study, the accuracy of fisheye lenses used with smartphones is better than that of conventional fisheye lenses. Smartphones with fisheye lenses will open up practical applications to ordinary users in the near future.
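    The equidistant model used for the comparison maps the incidence angle linearly to image radius, r = f·θ. A small illustrative sketch (function names are hypothetical):

```python
import math

def equidistant_radius(theta_deg, f_mm):
    """Image radius (mm) of a ray at incidence angle theta for an ideal
    equidistant fisheye lens: r = f * theta, with theta in radians."""
    return f_mm * math.radians(theta_deg)

def equidistant_fov_deg(r_max_mm, f_mm):
    """Full angular field of view covered by an image circle of radius
    r_max under the equidistant model (invert r = f * theta)."""
    return math.degrees(2.0 * r_max_mm / f_mm)
```

    Unlike the perspective model, where the radius grows as f·tan(θ) and diverges at 90°, the equidistant mapping stays finite, which is what lets fisheye lenses image a hemisphere or more.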

    Neural Lens Modeling

    Recent methods for 3D reconstruction and rendering increasingly benefit from end-to-end optimization of the entire image formation process. However, this approach is currently limited: effects of the optical hardware stack and in particular lenses are hard to model in a unified way. This limits the quality that can be achieved for camera calibration and the fidelity of the results of 3D reconstruction. In this paper, we propose NeuroLens, a neural lens model for distortion and vignetting that can be used for point projection and ray casting and can be optimized through both operations. This means that it can (optionally) be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction, e.g., while optimizing a radiance field. To evaluate the performance of our proposed model, we create a comprehensive dataset assembled from the Lensfun database with a multitude of lenses. Using this and other real-world datasets, we show that the quality of our proposed lens model outperforms standard packages as well as recent approaches while being much easier to use and extend. The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
    Comment: To be presented at CVPR 2023. Project webpage: https://neural-lens.github.i
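    NeuroLens itself is a neural model, but the core idea of optimizing a lens model *through* the projection operation can be sketched with a toy one-parameter radial distortion fitted by gradient descent (the model form, names, and values here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def fit_radial_k1(r_undist, r_dist, lr=0.5, steps=1000):
    """Recover a single radial distortion coefficient k1 in the model
    r_d = r * (1 + k1 * r^2) by gradient descent on the squared
    reprojection error -- a toy stand-in for optimizing a lens model
    end to end through its projection operation.
    """
    k1 = 0.0
    for _ in range(steps):
        pred = r_undist * (1.0 + k1 * r_undist ** 2)
        # Gradient of the mean squared error w.r.t. k1:
        # d(pred)/d(k1) = r^3, so grad = 2 * mean(residual * r^3).
        grad = 2.0 * np.mean((pred - r_dist) * r_undist ** 3)
        k1 -= lr * grad
    return k1
```

    A differentiable lens model extends this pattern to many parameters and, in the neural case, lets the same optimizer refine the lens jointly with the scene representation.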

    How to turn your camera into a perfect pinhole model

    Camera calibration is a first and fundamental step in various computer vision applications. Despite being an active field of research, Zhang's method remains widely used for camera calibration due to its implementation in popular toolboxes. However, this method initially assumes a pinhole model with oversimplified distortion models. In this work, we propose a novel approach that involves a pre-processing step to remove distortions from images by means of Gaussian processes. Our method does not need to assume any distortion model and can be applied to severely warped images, even in the case of multiple distortion sources, e.g., a fisheye image of a curved mirror reflection. The Gaussian processes capture all distortions and camera imperfections, resulting in virtual images as though taken by an ideal pinhole camera with square pixels. Furthermore, this ideal GP-camera only needs one image of a square grid calibration pattern. This model allows for a serious upgrade of many algorithms and applications that are designed in a pure projective geometry setting but with a performance that is very sensitive to nonlinear lens distortions. We demonstrate the effectiveness of our method by simplifying Zhang's calibration method, reducing the number of parameters and getting rid of the distortion parameters and iterative optimization. We validate by means of synthetic data and real world images. The contributions of this work include the construction of a virtual ideal pinhole camera using Gaussian processes, a simplified calibration method and lens distortion removal.
    Comment: 15 pages, 3 figures, conference CIAR
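    As a rough illustration of the idea, not the paper's method: a bare-bones Gaussian-process regressor (RBF kernel) can learn one coordinate of the distorted-to-ideal image warp from grid-point correspondences, without committing to any parametric distortion model:

```python
import numpy as np

def gp_fit_predict(X_train, y_train, X_test, length=0.5, noise=1e-8):
    """Minimal Gaussian-process regression with an RBF kernel, used here
    to learn one coordinate of a distorted->ideal image warp from grid
    points. A bare-bones sketch of the concept only.
    """
    def rbf(A, B):
        # Squared distances between all pairs of 2D points.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    # Fit: solve (K + noise*I) alpha = y for the training targets.
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    # Predict: weighted sum of kernel values against the training set.
    return rbf(X_test, X_train) @ alpha
```

    Fitting one regressor per output coordinate yields a smooth nonparametric warp that can resample a distorted image into the virtual pinhole view.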

    A Generalized Non-Linear Method for Distortion Correction and Top-Down View Conversion of Fish Eye Images


    Application for photogrammetry of organisms

    Single-camera photogrammetry is a well-established procedure for retrieving quantitative information from objects using photography. In the biological sciences, photogrammetry is often applied to aid morphometry studies, which focus on the comparative study of shapes and organisms. Two types of photogrammetry are used in morphometric studies: 2D photogrammetry, where distance and angle measurements are used to quantitatively describe attributes of an object, and 3D photogrammetry, where data on landmark coordinates are used to reconstruct an object's true shape. Although excellent software tools for 3D photogrammetry are available, software specifically designed to aid the somewhat simpler 2D photogrammetry is lacking. Most studies applying 2D photogrammetry therefore still rely on manual acquisition of measurements from pictures, which must then be scaled to an appropriate measuring system. This is often a laborious multistep process, in most cases requiring diverse software to complete different tasks. In addition to being time-consuming, it is also error-prone, since measurements are often recorded manually. The present work aimed to tackle these issues by implementing a new cross-platform software application able to integrate and streamline the workflow usually applied in 2D photogrammetry studies. Results from a preliminary study show a 45% decrease in processing time when using the software developed in the scope of this work compared with a competing methodology. Existing limitations and future work towards improved versions of the software are discussed.
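    The scaling step the abstract describes, converting pixel measurements to real-world units via a reference object of known size, can be sketched as follows (function names are hypothetical):

```python
import math

def scale_from_reference(ref_px_a, ref_px_b, ref_length_mm):
    """mm-per-pixel scale factor from a reference object of known length
    photographed in the same plane as the specimen."""
    px = math.dist(ref_px_a, ref_px_b)
    return ref_length_mm / px

def measure_mm(pt_a, pt_b, scale):
    """Real-world distance between two annotated pixel points."""
    return math.dist(pt_a, pt_b) * scale
```

    Note that a single scale factor is only valid when the specimen and reference lie in the same plane, roughly parallel to the sensor; otherwise perspective effects bias the measurement.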

    Automated Traffic Analysis in Aerial Images
