
    A review of the one-parameter division undistortion model

    The one-parameter division undistortion model of Lenz (1987) and Fitzgibbon (2001) is a simple radial distortion model with beneficial algebraic properties that allow some problems to be treated analytically which can only be handled numerically under other distortion models. One property of this distortion model is that straight lines in the undistorted image correspond to circles in the distorted image. These circles are fully described by their center points, as the radius can be calculated from the position of the center and the distortion parameter alone. This publication collects and reviews the properties of this distortion model from several sources. Moreover, we show that the space of these centers is projectively isomorphic to the dual space of the undistorted image plane, i.e. its line space. Therefore, projectively invariant measurements on the undistorted lines are possible via the corresponding measurements on the centers of the distorted circles. As an example application, we use this to find the metric distance between two parallel straight rails with known track gauge in a single uncalibrated camera image with significant radial distortion.
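    As a quick numerical check of the line-to-circle property described above, the following sketch (plain NumPy; the line, the distortion parameter `lam`, and all numbers are arbitrary illustrative choices, not from the paper) distorts points of a straight line under the division model and fits a circle to the result:

```python
import numpy as np

def undistort(pd, lam):
    """Division model: map distorted image points (N, 2) to undistorted points."""
    r2 = np.sum(pd**2, axis=1, keepdims=True)
    return pd / (1.0 + lam * r2)

def distort(pu, lam):
    """Inverse of the division model along the ray through the distortion center."""
    ru = np.linalg.norm(pu, axis=1)
    # Solve lam*ru*rd**2 - rd + ru = 0 for rd; this root tends to ru as lam -> 0.
    rd = (1.0 - np.sqrt(1.0 - 4.0 * lam * ru**2)) / (2.0 * lam * ru)
    return pu * (rd / ru)[:, None]

# Points on a straight line in the undistorted image (the line must not pass
# through the distortion center, otherwise it stays straight).
t = np.linspace(-0.3, 0.3, 50)
line = np.stack([t, 0.2 + 0.5 * t], axis=1)

lam = -0.5                      # hypothetical barrel-distortion parameter
pd = distort(line, lam)
back = undistort(pd, lam)       # round trip recovers the original line

# Fit the algebraic circle x^2 + y^2 + D*x + E*y + F = 0 to the distorted points.
A = np.column_stack([pd[:, 0], pd[:, 1], np.ones(len(pd))])
b = -(pd[:, 0]**2 + pd[:, 1]**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
center = np.array([-D / 2.0, -E / 2.0])
radius = np.sqrt(center @ center - F)

# The distorted points lie on the fitted circle to machine precision.
residual = np.max(np.abs(np.linalg.norm(pd - center, axis=1) - radius))
```

Substituting the model into a line equation gives F = 1/lam for the fitted circle, which is exactly the property quoted in the abstract: the radius follows from the center and the distortion parameter alone.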

    Correction of image radial distortion based on division model

    This paper presents an approach for estimating and then removing image radial distortion. It works on a single image and does not require special calibration. The approach is extremely useful in many applications, particularly those in which human-made environments contain abundant lines. A division model is applied, in which a straight line in the distorted image is treated as a circular arc. The Levenberg–Marquardt (LM) iterative nonlinear least-squares method is adopted to calculate the arc’s parameters. A “Taubin fit” is first applied to obtain an initial guess of the arc’s parameters, which serves as the initial input to the LM iteration; this dramatically improves the convergence rate of the LM process in obtaining the parameters required for correcting image radial distortion. Hough entropy, as a measure, provides a quantitative evaluation of the estimated distortion based on the probability distribution in one-dimensional θ Hough space. Experimental results on both synthetic and real images demonstrate that the proposed method can robustly estimate and then remove image radial distortion with high accuracy.
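    The Taubin-then-LM pipeline described above can be sketched as follows (NumPy and SciPy assumed; the arc, noise level, and tolerances are invented for illustration, not the paper's data):

```python
import numpy as np
from scipy.optimize import least_squares

def taubin_fit(xy):
    """Taubin algebraic circle fit (SVD formulation, after Chernov)."""
    c = xy.mean(axis=0)
    X = xy - c
    Z = np.sum(X**2, axis=1)
    Zm = Z.mean()
    Z0 = (Z - Zm) / (2.0 * np.sqrt(Zm))
    _, _, Vt = np.linalg.svd(np.column_stack([Z0, X]))
    a = Vt[-1]                                  # smallest singular vector
    a0 = a[0] / (2.0 * np.sqrt(Zm))
    A = np.array([a0, a[1], a[2], -Zm * a0])
    center = -A[1:3] / (2.0 * A[0]) + c
    radius = np.sqrt(A[1]**2 + A[2]**2 - 4.0 * A[0] * A[3]) / (2.0 * abs(A[0]))
    return center, radius

def lm_refine(xy, center, radius):
    """Geometric (Levenberg-Marquardt) refinement of the Taubin initial guess."""
    def resid(p):
        return np.linalg.norm(xy - p[:2], axis=1) - p[2]
    sol = least_squares(resid, np.r_[center, radius], method="lm")
    return sol.x[:2], sol.x[2]

rng = np.random.default_rng(0)
t = np.linspace(0.2, 1.4, 80)                   # a short arc, as for a distorted line
xy = np.stack([3 + 2 * np.cos(t), -1 + 2 * np.sin(t)], axis=1)
xy += 0.005 * rng.standard_normal(xy.shape)     # small image noise

c0, r0 = taubin_fit(xy)                         # algebraic initial guess
c, r = lm_refine(xy, c0, r0)                    # LM geometric refinement
```

The Taubin fit is a one-shot linear-algebra step, so it costs almost nothing while placing the LM iteration close to the minimum of the geometric error.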

    Neural Lens Modeling

    Recent methods for 3D reconstruction and rendering increasingly benefit from end-to-end optimization of the entire image formation process. However, this approach is currently limited: effects of the optical hardware stack, and in particular of lenses, are hard to model in a unified way. This limits the quality that can be achieved for camera calibration and the fidelity of the results of 3D reconstruction. In this paper, we propose NeuroLens, a neural lens model for distortion and vignetting that can be used for point projection and ray casting and can be optimized through both operations. This means that it can (optionally) be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction, e.g., while optimizing a radiance field. To evaluate the performance of our proposed model, we create a comprehensive dataset assembled from the Lensfun database with a multitude of lenses. Using this and other real-world datasets, we show that our proposed lens model outperforms standard packages as well as recent approaches while being much easier to use and extend. The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems. Comment: To be presented at CVPR 2023. Project webpage: https://neural-lens.github.i
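    NeuroLens itself is an invertible, optimizable model trained inside a full reconstruction pipeline; purely as a toy illustration of the underlying idea, that lens distortion can be represented by a small learned network, the following NumPy sketch fits a residual MLP to a synthetic radial-distortion field (the architecture, learning rate, and data are invented for this example and are not the paper's):

```python
import numpy as np

# Tiny residual-MLP distortion model: p_distorted = p + mlp(p).
rng = np.random.default_rng(1)
W1 = 0.1 * rng.standard_normal((2, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.standard_normal((32, 2)); b2 = np.zeros(2)

def forward(p):
    h = np.tanh(p @ W1 + b1)
    return p + h @ W2 + b2, h

# Synthetic "ground truth": a one-parameter radial distortion as training data.
p = rng.uniform(-1, 1, size=(512, 2))
r2 = np.sum(p**2, axis=1, keepdims=True)
target = p * (1 - 0.1 * r2)

lr = 0.2
losses = []
for _ in range(3000):
    pred, h = forward(p)
    err = pred - target                         # (N, 2)
    losses.append(np.mean(err**2))
    # Backpropagation through the two-layer residual MLP.
    gW2 = h.T @ err / len(p); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = p.T @ dh / len(p); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Because the model is differentiable in its inputs as well as its weights, the same machinery that fits it to calibration data can pass gradients onward to a downstream reconstruction objective, which is the property the paper exploits.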

    Tracking of Animals Using Airborne Cameras


    Towed-array calibration


    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. 
The process of interpolation is also examined from the point of view of superresolution, as the two processes can be viewed as complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. Bispectrum is also a post-processing method which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. 
An additional advantage of this approach is that since speckle images are imaged in a narrowband, while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilise elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. 
In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
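    As a minimal illustration of the deconvolution setting described above (non-blind, with a known PSF, unlike the blind problem treated in the thesis), the following NumPy sketch applies Richardson-Lucy iteration to a toy two-source object blurred by a Gaussian seeing PSF (all numbers are invented for the example):

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution with a centered PSF via the FFT; also returns the OTF."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()), s=img.shape)
    return np.fft.irfft2(otf * np.fft.rfft2(img), s=img.shape), otf

def richardson_lucy(image, psf, n_iter=100):
    """Non-blind Richardson-Lucy deconvolution (known PSF, circular boundary)."""
    _, otf = fft_convolve(image, psf)
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = np.fft.irfft2(otf * np.fft.rfft2(est), s=image.shape)
        ratio = image / np.maximum(blurred, 1e-12)
        # Multiplicative update: correlate the ratio with the PSF (conjugate OTF).
        est = est * np.fft.irfft2(np.conj(otf) * np.fft.rfft2(ratio), s=image.shape)
    return est

# Toy "astronomical" object: two point sources blurred by Gaussian seeing.
n = 64
obj = np.zeros((n, n))
obj[30, 28] = 1.0
obj[34, 38] = 0.7
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2)**2 + (y - n // 2)**2) / (2.0 * 2.0**2))

blurred, _ = fft_convolve(obj, psf)
restored = richardson_lucy(blurred, psf, n_iter=100)
```

The multiplicative update preserves non-negativity and total flux, which is why Richardson-Lucy is a standard baseline in astronomical imaging; the blind variants discussed in the thesis must additionally estimate the PSF itself.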

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development well. They include biometric sample quality, privacy preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges in implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely, biometric applications on mobile platforms, cancellable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Meshfree Approximation Methods For Free-form Optical Surfaces With Applications To Head-worn Displays

    Compact and lightweight optical designs that achieve acceptable image quality, field of view, eye clearance, and eyebox size while operating across the visible spectrum are the key to the success of next-generation head-worn displays. The first part of this thesis reports on the design, fabrication, and analysis of off-axis magnifier designs. The first design is catadioptric and consists of two elements. The lens utilizes a diffractive optical element, and the mirror has a free-form surface described with an x-y polynomial. A comparison of color correction between doublets and single-layer diffractive optical elements in an eyepiece as a function of eye clearance is provided to justify the use of a diffractive optical element. The dual-element design has an 8 mm diameter eyebox, 15 mm eye clearance, 20 degree diagonal full field, and is designed to operate across the visible spectrum between 450-650 nm. 20% MTF at the Nyquist frequency with less than 3% distortion has been achieved in the dual-element head-worn display. An ideal solution for a head-worn display would be a single free-form mirror design. A single-surface mirror does not have dispersion; therefore, color correction is not required. A single-surface mirror can be made see-through by machining the appropriate surface shape on the opposite side to form a zero-power shell. The second design consists of a single off-axis free-form mirror described with an x-y polynomial, which achieves a 3 mm diameter exit pupil, 15 mm eye relief, and a 24 degree diagonal full field of view. The second design achieves 10% MTF at the Nyquist frequency set by the pixel spacing of the VGA microdisplay with less than 3% distortion. Both designs have been fabricated using diamond turning techniques. Finally, this thesis addresses the question of what the optimal surface shape is for a single mirror constrained to an off-axis magnifier configuration with multiple fields. 
Typical optical surfaces implemented in raytrace codes today are functions mapping two-dimensional vectors to real numbers. The majority of optical designs to date have relied on conic sections and polynomials as the functions of choice. The choice of conic sections is justified, since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials from the point of view of surface description can be challenged. A polynomial surface description may link a designer's understanding of the wavefront aberrations and the surface description. The limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from approximation theory. This thesis proposes and applies radial basis functions to represent free-form optical surfaces as an alternative to multivariate polynomials. We compare the polynomial descriptions to radial basis functions using the MTF criterion. The benefits of using radial basis functions for surface description are summarized in the context of specific head-worn displays. The benefits include, for example, the performance increase measured by the MTF, or the ability to increase the field of view or pupil size. Zernike polynomials are a complete and orthogonal basis over the unit circle, and they can be orthogonalized for rectangular or hexagonal pupils using Gram-Schmidt; nevertheless, practical considerations such as optimization time and the maximum number of variables available in current raytrace codes must be taken into account. For the specific case of the single off-axis magnifier with a 3 mm pupil, 15 mm eye relief, and a 24 degree diagonal full field of view, we found Gaussian radial basis functions to yield a 20% gain in the average MTF at 17 field points compared to a Zernike representation (using 66 terms) and an x-y polynomial up to and including 10th order. The radial basis function representation, being a linear combination of local basis functions, is not limited to circular apertures. 
Visualization tools such as field map plots provided by nodal aberration theory have been applied during the analysis of the off-axis systems discussed in this thesis. Full-field displays are used to establish node locations within the field of view for the dual-element head-worn display. The judicious separation of the nodes along the x-direction in the field of view results in well-behaved MTF plots. This is in contrast to the expectation of achieving better performance by restoring symmetry, i.e., collapsing the nodes to yield field-quadratic astigmatism.
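    A minimal sketch of the Gaussian radial-basis-function surface representation discussed above (NumPy; the test sag function, grid of centers, and shape parameter are illustrative stand-ins, not the thesis's actual head-worn-display surfaces):

```python
import numpy as np

def gauss_rbf_fit(centers, pts, z, eps):
    """Least-squares weights for a Gaussian RBF surface z(x, y)."""
    d2 = np.sum((pts[:, None, :] - centers[None, :, :])**2, axis=2)
    Phi = np.exp(-eps * d2)                     # (n_points, n_centers) design matrix
    w, *_ = np.linalg.lstsq(Phi, z, rcond=None)
    return w

def gauss_rbf_eval(centers, w, pts, eps):
    d2 = np.sum((pts[:, None, :] - centers[None, :, :])**2, axis=2)
    return np.exp(-eps * d2) @ w

# Illustrative smooth "free-form sag" to be represented, sampled at random points.
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(400, 2))
z = 0.5 * pts[:, 0]**2 + 0.3 * pts[:, 1]**2 + 0.1 * pts[:, 0] * pts[:, 1]

g = np.linspace(-1, 1, 7)
centers = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # 49 basis centers
w = gauss_rbf_fit(centers, pts, z, eps=2.0)

# Evaluate on held-out points: the local Gaussians reproduce the smooth sag.
test = rng.uniform(-0.9, 0.9, size=(200, 2))
truth = 0.5 * test[:, 0]**2 + 0.3 * test[:, 1]**2 + 0.1 * test[:, 0] * test[:, 1]
err = np.max(np.abs(gauss_rbf_eval(centers, w, test, eps=2.0) - truth))
```

Because the basis is a set of local bumps placed wherever the designer chooses, the representation needs no circular aperture or global orthogonality, which is the flexibility the thesis exploits against Zernike and x-y polynomial descriptions.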