
    Calibration Methods of Characterization Lens for Head Mounted Displays

    This thesis concerns the calibration, characterization, and utilization of the HMD Eye, OptoFidelity's eye-mimicking optical camera system designed for the HMD IQ, a complete test station for the near-eye displays used in virtual and augmented reality systems. Its optical architecture provides a 120-degree field of view with high imaging performance and linear radial distortion, ideal for analysis of all possible object fields. The HMD Eye has an external, mechanical entrance pupil of the same size as the human entrance pupil. Spatial frequency response (the modulation transfer function) has been used to develop sensor focus calibration methods and automation system plans. Geometrical distortion and its relation to the angular mapping function and imaging quality of the system are also considered. The nature of the user interface for human eyes, called the eyebox, and the optical properties of head-mounted displays are reviewed. Head-mounted displays usually consist of two near-eye displays, among other components such as position-tracking units. The HMD Eye enables looking inside the device from the eyebox and collecting optical signals (i.e. the virtual image) from the complete field of view of the device under test with a single image. The HMD Eye unit inspected in this thesis is one of the 'zero' batch, i.e. a test unit. The outcome of the calibration is that this unit is focused to 1.6 m with an approximate error margin of ±10 cm. Contrast drops to 50% at an angular frequency of approximately 11 cycles/degree, which is about 40% of the simulated value and prompts improvements in the mechanical design. The geometrical distortion results show that radial distortion is very linear (maximum error of 1%) and that tangential distortion has a negligible effect (at most 0.04 degrees of azimuth deviation) within the measurement region.
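    The 50% contrast drop quoted above is a standard MTF50 figure. As a rough illustration only (a generic Gaussian-blur MTF model, not the thesis's actual optics), the sketch below shows how a blur width maps to an MTF50 frequency; the sigma value is chosen purely so the numbers land near the reported 11 cycles/degree:

```python
import math

def gaussian_mtf(f, sigma):
    """MTF of a Gaussian blur of std `sigma` (degrees) at frequency f (cycles/deg)."""
    return math.exp(-2.0 * math.pi**2 * sigma**2 * f**2)

def mtf50(sigma):
    """Frequency (cycles/deg) at which contrast falls to 50% for that blur."""
    return math.sqrt(math.log(2.0) / 2.0) / (math.pi * sigma)

sigma = 0.017          # illustrative blur width in degrees (assumed, not measured)
f50 = mtf50(sigma)     # close to 11 cycles/degree for this sigma
```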

    Generic camera calibration for omnifocus imaging, depth estimation and a train monitoring system

    Calibrating an imaging system for its geometric properties is an important step toward understanding the process of image formation and devising techniques to invert this process to decipher interesting properties of the imaged scene. In this dissertation, we propose new optically and physically motivated models for achieving state-of-the-art geometric and photometric camera calibration. The calibration parameters are then applied as input to new algorithms in omnifocus imaging, 3D scene depth from focus, and machine-vision-based intermodal freight train analysis. In the first part of this dissertation, we present new progress in camera calibration, with applications to omnifocus imaging and 3D scene depth from focus, and in point spread function calibration. In camera calibration, we propose five new calibration methods for cameras whose imaging model can be represented by ideal perspective projection with small distortions due to lens shape (radial distortion) or a misaligned lens-sensor configuration (decentering). In the first calibration method, we generalize the pupil-centric imaging model to handle arbitrarily rotated lens-sensor configurations, where we consider the sensor tilt to be about the physical optic axis. For such a setting, we derive an analytical solution to linear camera calibration based on the collinearity constraint relating the known world points and measured image points, assuming no radial distortion. Our second method considers the much simpler case of a Gaussian thin-lens imaging model with a non-frontal image sensor and proposes an analytical solution to the linear calibration equations derived from the collinearity constraint. In the third method, we generalize the radial alignment constraint to non-frontal sensor configurations and derive an analytical solution to the resulting linear camera calibration equations. In the fourth method, we propose the use of focal stack images of a known checkerboard scene to calibrate cameras having a non-frontal sensor. In the fifth method, we show that radial distortion is a result of the entrance pupil location changing as a function of the incident image rays, and propose a collinearity-based camera calibration method under this imaging model. Based on this model, we propose a new focus measure for omnifocus imaging and apply it to compute 3D scene depth from focus. We then propose a point spread function calibration method which computes the point spread function (PSF) of a CMOS image sensor using Hadamard patterns displayed on an LCD screen placed at a fixed distance from the sensor. In the second part of the dissertation, we describe a machine-vision-based train monitoring system, where we propose a motion-based background subtraction method to remove the background visible in the gaps of an intermodal freight train. The background-subtracted image frames are used to compute a panoramic mosaic of the train and to compute gap lengths in pixels. The gap length, converted to metric units using the calibration parameters of the video camera, allows for analyzing the fuel efficiency of the loading pattern of the given intermodal freight train.
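    The perspective-projection-with-small-distortion model these methods build on can be sketched in a few lines. This is the generic textbook pinhole model with a one-term radial distortion, not the dissertation's non-frontal-sensor derivation; all numbers are illustrative:

```python
def project(f, cx, cy, X, k1=0.0):
    """Pinhole projection of camera-frame point X = (x, y, z) with radial term k1."""
    x, y = X[0] / X[2], X[1] / X[2]   # perspective division
    r2 = x * x + y * y                # squared radial distance in normalized coords
    d = 1.0 + k1 * r2                 # one-term radial distortion factor
    return (f * d * x + cx, f * d * y + cy)

u, v = project(800.0, 320.0, 240.0, (0.1, -0.05, 2.0), k1=-0.2)
```

Note that an on-axis point (r2 = 0) is unaffected by the radial term, which is why distortion calibration needs off-axis observations.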

    Lightfield Analysis and Its Applications in Adaptive Optics and Surveillance Systems

    An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane; this can be done through linear or non-linear transfer functions. Depending on the application at hand, some models of imaging systems are easier to use than others. The most well-known models are the pinhole model, the thin-lens model, and the thick-lens model. Using light-field analysis, the connection between these different models is described, and a novel figure of merit is presented for choosing one optical model over another for a given application. After analyzing these optical systems, their use in plenoptic cameras for adaptive optics is introduced. A new technique is described that uses a plenoptic camera to extract information about a localized distorted planar wavefront. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of extractable angles, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets with high accuracy as they move through space.
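    The master-to-slave mapping step described above can be sketched minimally, assuming the precomputed relation is a plain 3x3 homography H (the lookup-table interpolation and all hardware details are omitted; the matrix entries are made up for illustration):

```python
def apply_homography(H, u, v):
    """Map a master-camera pixel (u, v) to slave (pan, tilt) via a 3x3 homography."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)   # homogeneous normalization

H = [[0.05, 0.0, -10.0],    # illustrative numbers only
     [0.0, 0.04, -8.0],
     [0.0, 0.0, 1.0]]
pan, tilt = apply_homography(H, 640, 360)   # slave pan/tilt, in degrees say
```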

    Review of Calibration Methods for Scheimpflug Camera

    The Scheimpflug camera offers a wide range of applications in close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case, because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of the GNIM is slightly lower compared with the three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. This work lays the foundation for further research on Scheimpflug cameras.
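    The Scheimpflug condition itself is easy to verify numerically with a 2D thin-lens model: back-projecting points that lie on a tilted image line through the lens yields object points that are again collinear, i.e. the plane of sharp focus is still a plane. The numbers below (focal length, sensor tilt) are arbitrary and purely illustrative:

```python
def object_conjugate(f, v, h_img):
    """Back-project an image point (axial distance v, height h_img) through a thin lens."""
    u = 1.0 / (1.0 / f - 1.0 / v)   # thin-lens equation: 1/u + 1/v = 1/f
    h_obj = -h_img * u / v          # transverse magnification (image is inverted)
    return (-u, h_obj)              # object point in lens coordinates

f = 50.0   # focal length in mm, illustrative
# Tilted sensor: axial distance grows linearly with image height (tilt slope 0.2).
pts = [object_conjugate(f, 60.0 + 0.2 * y, y) for y in (-5.0, 0.0, 5.0)]
# Scheimpflug: the in-focus object points are collinear -- the cross product vanishes.
(x0, y0), (x1, y1), (x2, y2) = pts
cross = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
```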

    Generalising the ideal pinhole model to multi-pupil imaging for depth recovery

    This thesis investigates the applicability of computer vision camera models in recovering depth information from images, and presents a novel camera model incorporating a modified pupil plane capable of performing this task accurately from a single image. Standard models, such as the ideal pinhole, suffer a loss of depth information when projecting from the world to an image plane. Recovery of this data enables reconstruction of the original scene as well as object and 3D motion reconstruction. The major contributions of this thesis are the complete characterisation of ideal pinhole model calibration and the development of a new multi-pupil imaging model which enables depth recovery. A comprehensive analysis of the calibration sensitivity of the ideal pinhole model is presented, along with a novel method of capturing calibration images which avoids singularities in image space. Experimentation reveals a higher degree of accuracy using the new calibration images. A novel camera model employing multiple pupils is proposed which, in contrast to the ideal pinhole model, recovers scene depth. The accuracy of the multi-pupil model is demonstrated and validated through rigorous experimentation. An integral property of any camera model is the location of its pupil. To this end, the new model is expanded by generalising the location of the multi-pupil plane, thus enabling superior flexibility over traditional camera models, which are confined to positioning the pupil plane so as to negate particular aberrations in the lens. A key step in the development of the multi-pupil model is the treatment of optical aberrations in the imaging system. The unconstrained location and configuration of the pupil plane enable the determination of optical distortions in the multi-pupil imaging model, and a calibration algorithm is proposed which corrects for these aberrations. This allows the multi-pupil model to be applied to a multitude of imaging systems regardless of the optical quality of the lens. Experimentation validates the multi-pupil model's accuracy in accounting for the aberrations and estimating accurate depth information from a single image. Results for object reconstruction are presented, establishing the capabilities of the proposed multi-pupil imaging model.
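    The depth cue that a multi-pupil aperture provides can be illustrated with the classic two-pinhole disparity relation, a deliberate simplification of the thesis's model: two pupils separated by a baseline b see the same point shifted by a disparity proportional to 1/Z. All values here are synthetic:

```python
def depth_from_disparity(f, baseline, x_left, x_right):
    """Depth Z from the image offset of one point seen through two pupils."""
    return f * baseline / (x_left - x_right)

f, b, Z = 50.0, 10.0, 2000.0              # mm, illustrative
x_l = f * (b / 2.0) / Z                   # image of an on-axis point, left pupil
x_r = f * (-b / 2.0) / Z                  # same point seen through the right pupil
Z_est = depth_from_disparity(f, b, x_l, x_r)   # recovers the true depth
```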

    First light of VLT/HiRISE: High-resolution spectroscopy of young giant exoplanets

    A major endeavor of this decade is the direct characterization of young giant exoplanets at high spectral resolution to determine the composition of their atmospheres and infer their formation processes and evolution. Such a goal represents a major challenge owing to their small angular separation and luminosity contrast with respect to their parent stars. Instead of designing and implementing completely new facilities, it has been proposed to leverage the capabilities of existing instruments that offer either high-contrast imaging or high-dispersion spectroscopy by coupling them using optical fibers. In this work we present the implementation and first on-sky results of the HiRISE instrument at the Very Large Telescope (VLT), which combines the exoplanet imager SPHERE with the recently upgraded high-resolution spectrograph CRIRES using single-mode fibers. The goal of HiRISE is to enable the characterization of known companions in the H band, at a spectral resolution of the order of R = λ/Δλ = 100,000, in a few hours of observing time. We present the main design choices and the technical implementation of the system, which is constituted of three major parts: the fiber injection module inside of SPHERE, the fiber bundle around the telescope, and the fiber extraction module at the entrance of CRIRES. We also detail the specific calibrations required for HiRISE and the operations of the instrument for science observations. We then detail the performance of the system in terms of astrometry, temporal stability, optical aberrations, and transmission, for which we report a peak value of ~3.9% based on sky measurements in median observing conditions. Finally, we report on the first astrophysical detection of HiRISE to illustrate its potential.
    Comment: 17 pages, 15 figures, 3 tables. Submitted to A&A on 19 September 202
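    For scale, the quoted resolution implies a tiny resolvable wavelength element in the H band; a one-line check, assuming a nominal H-band wavelength of 1.6 µm:

```python
lam = 1.6e-6    # nominal H-band wavelength in meters (assumed value)
R = 100_000     # spectral resolution R = lambda / delta_lambda
dlam = lam / R  # resolvable element: 1.6e-11 m, i.e. 16 picometers
```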

    Triangulation methods in engineering measurement

    Industrial surveying and photogrammetry are being increasingly applied to the measurement of engineering objects which have typical dimensions in the range 2-100 metres. Both techniques are examples of the principle of triangulation. By applying photogrammetric concepts to surveying methods and vice versa, a general approach is established which has a number of advantages. In particular, alternative strategies for constructing and analysing measurement networks are developed. These should help to strengthen the geometry and simplify the analysis. The primary results concern the use of non-levelled theodolites, which have applications on board floating objects, and three new suggestions for controlling and computing relative orientations in photogrammetry. These involve reciprocal observations with theodolites, the photographing of linear scales defined by three target points, and the employment of cameras which have been levelled. As a secondary result, some consideration is given to automation and instrument design. It is suggested that polarimetry could be successfully applied to improve the transfer of orientation in confined situations, such as in mining. In addition, the potential use of electronic cameras as photo-theodolites is discussed.
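    The triangulation principle both techniques share reduces, in the horizontal plane, to intersecting two bearing rays from known stations. A minimal sketch with made-up coordinates (two stations 10 m apart sighting one target):

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Intersect two horizontal rays: station p1 at bearing b1, p2 at b2 (radians from +x)."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # zero if the rays are parallel
    t1 = (dx * d2[1] - dy * d2[0]) / denom  # distance along ray 1 to the crossing
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Stations at (0, 0) and (10, 0) observe the target at 45 and 135 degrees.
target = intersect_bearings((0.0, 0.0), math.pi / 4, (10.0, 0.0), 3 * math.pi / 4)
```

In practice each bearing carries angular error, so real networks use redundant observations and a least-squares adjustment rather than a single pairwise intersection.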

    Pointing, Acquisition, and Tracking Systems for Free-Space Optical Communication Links

    Pointing, acquisition, and tracking (PAT) systems have been widely applied in many applications, from short-range (e.g. human motion tracking) to long-haul (e.g. missile guidance) systems. This dissertation extends the PAT system into new territory: free-space optical (FSO) communication system alignment, the most important missing ingredient for practical deployment. Exploiting the geometric invariances intrinsic to the rigid configuration of actuators and sensors is a key design feature. Once the configuration of the actuator and sensor is determined, the geometric invariance is fixed and can therefore be calibrated in advance. This calibrated invariance then serves as a transformation for converting sensor measurements into actuator actions. The challenge of the FSO alignment problem lies in how to point to a 3D target using only a 2D sensor. Two solutions are proposed: the first exploits the invariance, known as the linear homography, embedded in FSO applications which involve a long link length between transceivers or have planar trajectories. The second employs either an additional 2D or 1D sensor, which results in invariances known as the trifocal tensor and the radial trifocal tensor, respectively. Since these invariances are derived under the assumption that the sensor measurements are free from noise, including the uncertainty resulting from aberrations, a robust calibration algorithm is required to retrieve the optimal invariance from noisy measurements. The first solution is sufficient for most PAT systems used for FSO alignment, since a long link length is generally the case. Although PAT systems are normally divided into coarse and fine subsystems to deal with different requirements, both are proven to be governed by a linear homography. Robust calibration algorithms have been developed during this work and verified by simulations. Two prototype systems have been developed: one serves as a fine pointing subsystem, consisting of a beam steerer and an angular resolver, while the other serves as a coarse pointing subsystem, consisting of a rotary gimbal and a camera. The average pointing errors of the two prototypes were less than 170 and 700 microradians, respectively. PAT systems based on the second solution are capable of pointing to any target within the intersected field of view of both sensors, because two sensors provide the stereo vision needed to determine the depth of the target, the missing information that cannot be recovered by a single 2D sensor. They are only required when short-distance FSO communication links must be established. Two simulations were conducted to show the robustness of the calibration procedures and the pointing accuracy with respect to random noise.
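    Retrieving the linear-homography invariance from point correspondences is, in its basic form, the standard direct linear transform (DLT) fit. The sketch below is the textbook least-squares version, not the robust algorithm developed in the dissertation; the matrix H_true and the point set are synthetic:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares DLT: 3x3 H with dst ~ H @ src (homogeneous), >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
    return H / H[2, 2]

# Synthetic check: points mapped through a known H are recovered.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0), (50.0, 70.0)]
dst = []
for (x, y) in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H_est = fit_homography(src, dst)
```

With noisy measurements one would additionally normalize the coordinates and wrap the fit in an outlier-rejection loop; the dissertation's robust procedure is not reproduced here.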

    Applications of ray tracing to a pseudophakic eye model

    The calculation of IOL power using keratometry is adversely affected by recent corneal reshaping surgeries. This thesis investigates the application of ray tracing and general anterior corneal surface modeling for the purpose of improving ophthalmic measurements and, in particular, the estimation of IOL power. A new algorithm (based on a multi-step approach) for the recovery of corneal height using videokeratography is presented. The method ensures a cubic recovery with continuous curvature; skew rays are treated in post-processing. The RMS height error is measured for three simulated corneas (two of them skewed). The total errors are 6.2 x 10⁻⁴ mm ignoring the skew ray error and 1.7 x 10⁻⁴ mm accounting for it. The individual height errors are submicron in the latter case. The algorithm gives average errors of 2.5 x 10⁻⁴ mm for a set of calibration balls. The completion time is 2.3 s over all cases, using a standard desktop PC. A new method for the recovery of the internal ocular radii of curvature is investigated. The method is used to recover the posterior corneal radii (PII) and the anterior lens radii (PIII), given several anterior cornea models (PI), in simulation. The recovered surface powers are no more than 0.1 D (PII) and 0.006 D (PIII) in error relative to the true surface powers. A framework is then presented for modeling the effect of lens decenter and tilt on perceived image quality. The SQRI image quality metric is determined for a range of lens tilt and decenter values and compared with the statistical moments of the spot diagrams. The SQRI shows asymmetric degradation of imaging with tilt, for a particular decenter value, for a plane displaced -0.1 mm from best focus. For a plane displaced +0.1 mm from best focus, the SQRI is symmetric and improves regardless of the sign of the tilt. The statistical moments suggest that skew does not necessarily imply poor imaging. Finally, the modeling methods developed are tested on two clinically measured eyes. Minimizing the spot size predicts the spectacle prescription to within 0.0 D (OS) and 0.1 D (OD) of the mean spherical equivalent. Adding the prescribed lenses to the model eye estimates best focus to within 0.03 mm and 0.02 mm of the retinal plane, consistent with the better-than-6/6 VA measured for OS/OD. A VisTech VCTS 6500 contrast sensitivity chart is used to verify the eye model. A 75% match with theory is found for OS, and a 50% match for OD.
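    The diopter figures above rest on the standard single-surface power formula P = (n2 - n1) / r. A quick sketch with textbook values (not the thesis's measured data):

```python
def surface_power(n1, n2, r_m):
    """Refractive power in diopters of a spherical interface of radius r_m (meters)."""
    return (n2 - n1) / r_m

# Anterior cornea: air (n = 1.000) to corneal stroma (n = 1.376), radius 7.8 mm.
P_cornea = surface_power(1.000, 1.376, 7.8e-3)   # about 48.2 D
```

Ray-tracing approaches like the thesis's refine such paraxial estimates by propagating rays through each measured surface instead of summing idealized surface powers.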