
    In-flight calibration of the Apollo 14 500 mm Hasselblad camera

    In-flight calibration of the 500-mm Hasselblad camera flown on Apollo 14.

    Photogrammetric calibration of the NASA-Wallops Island image intensifier system

    An image intensifier was designed for use as one of the primary tracking systems for the barium cloud experiment at Wallops Island. Two computer programs, a definitive stellar camera calibration program and a geodetic stellar camera orientation program, were originally developed at Wallops on a GE 625 computer. A mathematical procedure for determining the image intensifier distortions is outlined, and the implementation of the model in the Wallops computer programs is described. The analytical calibration of metric cameras is also discussed.
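    The abstract does not give the Wallops distortion model itself, so the following is only an illustrative sketch of the general idea behind a stellar camera calibration: compare star positions predicted from a catalogue and the camera orientation with the positions actually measured on the plate, and fit distortion coefficients to the residuals. The radial model, the coefficients k1 and k2, and the synthetic data are assumptions for illustration.

```python
# Illustrative sketch of a stellar-calibration style fit (not the Wallops programs):
# estimate radial distortion coefficients from star position residuals.
import numpy as np

def apply_radial(xy, k1, k2):
    """Distort ideal (pinhole) image coordinates with a radial polynomial."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

# xy_ideal: star positions predicted from a catalogue + known camera orientation
# xy_obs:   positions measured on the image (here synthetic, for demonstration)
rng = np.random.default_rng(0)
xy_ideal = rng.uniform(-1.0, 1.0, size=(200, 2))       # normalized image coords
xy_obs = apply_radial(xy_ideal, k1=-0.12, k2=0.03)
xy_obs += rng.normal(scale=1e-4, size=xy_obs.shape)    # measurement noise

# Linear least squares in (k1, k2): obs - ideal = ideal * (k1*r^2 + k2*r^4)
r2 = np.sum(xy_ideal**2, axis=1, keepdims=True)
A = np.hstack([(xy_ideal * r2).reshape(-1, 1), (xy_ideal * r2**2).reshape(-1, 1)])
b = (xy_obs - xy_ideal).reshape(-1)
(k1_est, k2_est), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated k1={k1_est:.4f}, k2={k2_est:.4f}")
```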

    Towards high-precision internal camera calibration (Vers l’étalonnage interne de caméra à haute précision)

    This dissertation focuses on internal camera calibration and, in particular, on its high-precision aspects. Two main threads are followed and examined: correction of lens chromatic aberration and estimation of the camera's intrinsic parameters. For the chromatic aberration problem, we follow a digital post-processing path in order to remove the color artefacts caused by dispersion in the camera lens system, which produces a noticeable misalignment of the color channels. The main idea is to search for a correction model for realigning the color channels that is more general than what is commonly used, namely variations of the radial polynomial, which may not be general enough to ensure a stable correction for all types of cameras. Combined with accurate detection of pattern keypoints, the most precise chromatic aberration correction is achieved with a polynomial model able to capture the physical nature of the color channel misalignment. Our keypoint detection reaches an accuracy of up to 0.05 pixels, and our experiments show its high resistance to noise and blur. Our aberration correction method, in contrast to existing software, achieves a final geometric residual below 0.1 pixels, which is at the limit of perception by human vision.
    Regarding the calculation of the camera intrinsics, the question is how to avoid the residual error compensation inherent to global calibration methods, whose main principle is to estimate all camera parameters simultaneously through bundle adjustment. Decoupling the lens distortion from the camera intrinsics becomes possible when the former is compensated separately, in advance. This can be done with the recently developed calibration harp, which captures the distortion field using a straightness measure of tightened strings in different orientations. A further difficulty, given a distortion-compensated calibration image, is how to eliminate the perspective bias. This bias occurs when the centers of circular targets are used as keypoints, and it grows with the viewing angle. Instead of modelling each circle by a conic function, we incorporate a conic affine transformation into the minimisation procedure for homography estimation. Our experiments show that eliminating distortion and perspective bias separately is effective and more stable for estimating the camera intrinsics than global calibration.
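    The dissertation's exact correction model is not reproduced in the abstract; the sketch below only illustrates the general approach of realigning one color channel to another with a bivariate polynomial mapping fitted to matched keypoints. The keypoint arrays, the polynomial degree, and the channel choice (red realigned to green) are assumptions.

```python
# Illustrative sketch, not the dissertation's model: realign the red channel
# to the green channel with a general bivariate polynomial mapping fitted to
# matched keypoints (e.g. detected on a calibration pattern).
import numpy as np
from scipy.ndimage import map_coordinates

def poly_terms(x, y, degree=3):
    """All monomials x^i * y^j with i + j <= degree."""
    return np.stack([x**i * y**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=-1)

def fit_channel_mapping(kp_green, kp_red, degree=3):
    """Least-squares fit of (x, y)_green -> (x, y)_red, one model per coordinate."""
    A = poly_terms(kp_green[:, 0], kp_green[:, 1], degree)
    cx, *_ = np.linalg.lstsq(A, kp_red[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, kp_red[:, 1], rcond=None)
    return cx, cy

def realign_red(red, cx, cy, degree=3):
    """Resample the red channel so it lines up with the green channel grid."""
    h, w = red.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    A = poly_terms(xx.ravel(), yy.ravel(), degree)
    src_x = (A @ cx).reshape(h, w)
    src_y = (A @ cy).reshape(h, w)
    return map_coordinates(red, [src_y, src_x], order=1, mode='nearest')
```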

    Deep Convolutional Neural Networks for Estimating Lens Distortion Parameters

    In this paper we present a convolutional neural network (CNN) that predicts multiple lens distortion parameters from a single input image. Unlike other methods, our network is suited to producing high-resolution output, as it estimates the parameters directly from the image; these can then be used to rectify even very high-resolution input images. As our method is fully automatic, it is suitable for both casual creatives and professional artists. Our results show that our network accurately predicts the lens distortion parameters of high-resolution images and corrects the distortions satisfactorily.
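    The abstract does not specify the paper's distortion parameterization, so the sketch below simply shows how estimated coefficients could be applied to rectify an image, assuming a standard two-coefficient radial model and OpenCV's undistortion routine. The function `predict_distortion` stands in for the trained CNN and is hypothetical, as is the default camera matrix.

```python
# Hedged sketch: assumes a standard radial model (k1, k2) and rectifies with OpenCV.
import cv2
import numpy as np

def rectify(image, k1, k2):
    h, w = image.shape[:2]
    # Assumed pinhole camera matrix: focal length ~ image width, principal
    # point at the image centre (a common default when intrinsics are unknown).
    K = np.array([[w, 0, w / 2.0],
                  [0, w, h / 2.0],
                  [0, 0, 1.0]])
    dist = np.array([k1, k2, 0.0, 0.0])   # (k1, k2, p1, p2)
    return cv2.undistort(image, K, dist)

# Usage (hypothetical): the CNN predicts (k1, k2) from the image itself.
# k1, k2 = predict_distortion(image)
# rectified = rectify(image, k1, k2)
```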

    Optimised multi-camera systems for dimensional control in factory environments

    As part of the United Kingdom’s Light Controlled Factory project, University College London aims to develop a large-scale multi-camera system for dimensional control tasks in manufacturing, such as part assembly and tracking. Accuracy requirements in manufacturing are demanding, and improvements in the modelling and analysis of both camera imaging and the measurement environment are essential. A major aspect of improved camera modelling is the use of monochromatic imaging of retro-reflective target points, together with a camera model designed for a particular illumination wavelength. A small-scale system for laboratory testing has been constructed using eight low-cost monochrome cameras with C-mount lenses on a rigid metal framework. Red, green and blue monochromatic light-emitting diode ring illumination has been tested, with broadband white illumination for comparison. Accuracy may potentially be further enhanced by reducing the refraction errors caused by a non-homogeneous factory environment, typically manifest as varying temperatures in the workspace. A refraction modelling tool under development in the parallel European Union LUMINAR project is being used to simulate refraction, in order to test methods that may be able to reduce or eliminate this effect in practice.
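    The LUMINAR refraction tool itself is not described in the abstract; purely as an order-of-magnitude illustration of why workspace temperature variation matters, the sketch below uses a simplified Edlén-type approximation for the refractive index of air as a function of temperature and pressure. The formula and the example temperatures are assumptions, not taken from the project.

```python
# Illustrative only: simplified Edlen-type approximation for the refractive
# index of air in the visible range, n - 1 ~= 7.86e-7 * P / (273.15 + t),
# with P in pascals and t in degrees Celsius.
def air_index(t_celsius, pressure_pa=101325.0):
    return 1.0 + 7.86e-7 * pressure_pa / (273.15 + t_celsius)

# A 5 degC difference across the workspace changes n by roughly 5 parts in 10^6;
# a gradient of that order can displace a line of sight by micrometres to tens
# of micrometres over a path of several metres, depending on the geometry.
print(air_index(20.0) - air_index(25.0))
```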

    Specific instrumentation and diagnostics for high-intensity hadron beams

    An overview of various typical instruments used for high-intensity hadron beams is given. In addition, a few important diagnostic methods are discussed which are quite specific to these kinds of beams. (58 pages; contribution to the CAS - CERN Accelerator School: Course on High Power Hadron Machines, 24 May - 2 Jun 2011, Bilbao, Spain.)

    36M-pixel synchrotron radiation micro-CT for whole secondary pulmonary lobule visualization from a large human lung specimen

    A micro-CT system was developed using a 36M-pixel digital single-lens reflex camera as a cost-effective mode for imaging large human lung specimens. Scientific-grade cameras used for biomedical x-ray imaging are much more expensive than consumer-grade cameras. During the past decade, advances in image sensor technology for consumer appliances have spurred the development of biomedical x-ray imaging systems using commercial digital single-lens reflex cameras fitted with high-megapixel CMOS image sensors. This micro-CT system is highly specialized for visualizing whole secondary pulmonary lobules in a large human lung specimen. The secondary pulmonary lobule, a fundamental unit of the lung structure, reproduces the lung in miniature. The lung specimen is set in an acrylic cylindrical case of 36 mm diameter and 40 mm height. The field of view (FOV) of the micro-CT is 40.6 mm wide × 15.1 mm high with a 3.07 μm pixel size, using offset CT scanning to enlarge the FOV. We constructed a 13,220 × 13,220 × 4912 voxel image with 3.07 μm isotropic voxel size for three-dimensional visualization of the whole secondary pulmonary lobule. Furthermore, synchrotron radiation has proved to be a powerful high-resolution imaging tool. This micro-CT system using a single-lens reflex camera and synchrotron radiation provides the practical benefits of high-resolution and wide-field performance, but at low cost.
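    The quoted dimensions are internally consistent: multiplying the voxel counts by the 3.07 μm voxel pitch reproduces the stated field of view, as the quick check below shows.

```python
# Consistency check of the numbers quoted in the abstract: reconstructed
# volume size = voxel count x 3.07 um voxel pitch.
voxel_um = 3.07
nx, ny, nz = 13220, 13220, 4912
print(nx * voxel_um / 1000)   # ~40.6 mm, matching the stated FOV width
print(nz * voxel_um / 1000)   # ~15.1 mm, matching the stated FOV height
```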

    Per-Pixel Calibration for RGB-Depth Natural 3D Reconstruction on GPU

    Ever since the Kinect brought low-cost depth cameras to the consumer market, great interest has been generated in Red-Green-Blue-Depth (RGBD) sensors. Without calibration, an RGBD camera’s horizontal and vertical field of view (FoV) can be used to generate a 3D reconstruction in camera space naturally on the graphics processing unit (GPU); the result, however, is badly deformed by lens distortions and imperfect depth resolution (depth distortion). Calibrating the camera with a pinhole camera model and a high-order distortion removal model requires a large amount of computation in the fragment shader. In order to remove both the lens distortion and the depth distortion while keeping the fragment shader calculations simple, a novel per-pixel calibration method with look-up-table-based 3D reconstruction in real time is proposed, using a rail calibration system. This rail calibration system makes it possible to collect calibration points in distributions dense enough to cover every pixel of the sensor, so that not only lens distortions but also depth distortion can be handled by a per-pixel D-to-ZW mapping. Instead of the traditional pinhole camera model, two polynomial mapping models are employed: a two-dimensional high-order polynomial mapping from R/C to XW/YW respectively, which handles the lens distortions, and a per-pixel linear mapping from D to ZW, which handles the depth distortion. With only six parameters and three linear equations in the fragment shader, the undistorted 3D world coordinates (XW, YW, ZW) for every single pixel can be generated in real time. The per-pixel calibration method can be applied universally to any RGBD camera. With the RGB values aligned using a pinhole camera matrix, it can even work on a combination of an arbitrary depth sensor and an arbitrary RGB sensor.
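    The paper's exact six-parameter per-pixel form is not given in the abstract; the sketch below only illustrates the look-up-table idea in NumPy rather than a GPU fragment shader. It assumes a per-pixel linear map from depth D to ZW (gain/offset tables fitted on the rail) and per-pixel tables storing XW/ZW and YW/ZW, i.e. each pixel's viewing ray, so that the lateral coordinates scale linearly with ZW.

```python
# Sketch of per-pixel look-up-table 3D reconstruction (NumPy stand-in for a
# fragment shader); the tables below are assumed to come from a rail calibration.
import numpy as np

def reconstruct(depth, zw_gain, zw_offset, xw_ratio, yw_ratio):
    """All arguments are HxW arrays.

    zw_gain/zw_offset: per-pixel fit of Z_W = gain * D + offset.
    xw_ratio/yw_ratio: per-pixel values of X_W/Z_W and Y_W/Z_W, baked into
    tables from the calibrated polynomial mapping.
    """
    zw = zw_gain * depth + zw_offset      # per-pixel linear depth mapping
    xw = xw_ratio * zw                    # lateral coordinates scale with Z_W
    yw = yw_ratio * zw
    return np.dstack([xw, yw, zw])        # HxWx3 world coordinates
```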

    Correction of Geometric Distortion in Photographs

    When a photograph is taken, the resulting image is usually geometrically distorted. The most common types of such distortion are pincushion and barrel distortion, which can be visually disturbing in a photograph. Several applications currently exist that allow such distortion to be corrected, but they are [proprietary, unergonomic, ...]. This thesis describes the current state of knowledge in the field of geometric distortion correction and proposes new software: an application with a graphical user interface that, with a focus on ergonomics and simplicity, presents two approaches to correcting geometric distortion in photographs. The functionality and usability of the resulting application have been tested and validated on a sample of real users.

    The Development of Camera Calibration Methods and Models
