
    General Defocusing Particle Tracking: fundamentals and uncertainty assessment

    General Defocusing Particle Tracking (GDPT) is a single-camera, three-dimensional particle tracking method that determines particle depth positions from the defocusing patterns of the corresponding particle images. GDPT relies on a reference set of experimental particle images, which is used to predict the depth position of measured particle images of similar shape. While several implementations of the method are possible, its accuracy is ultimately limited by intrinsic properties of the acquired data, such as the signal-to-noise ratio, the particle concentration, and the characteristics of the defocusing patterns. GDPT has been applied in different fields by different research groups; however, a deeper description and analysis of the fundamentals of the method has hitherto not been available. In this work, we first identify the fundamental elements that characterize a GDPT measurement. Afterwards, we present a standardized framework based on synthetic images to assess the performance of GDPT implementations in terms of measurement uncertainty and relative number of measured particles. Finally, we provide guidelines for assessing the uncertainty of experimental GDPT measurements, where true values are not accessible and additional image aberrations can lead to bias errors. The data were processed using DefocusTracker, an open-source GDPT software. The datasets were created using the synthetic image generator MicroSIG and have been shared in a freely accessible repository.
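    As a rough illustration of the reference-set look-up at the core of GDPT-style methods, the sketch below matches a measured particle image against a calibration stack acquired at known depths. The similarity measure (zero-mean normalized cross-correlation), the parabolic sub-step interpolation, and all function names are illustrative assumptions, not the DefocusTracker implementation.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation between two equally sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_depth(particle_img, reference_stack, reference_depths):
    """Return (depth, similarity) of the reference image most similar to the measured one.

    reference_stack  : array of shape (n_depths, h, w), calibration images at known depths
    reference_depths : array of shape (n_depths,), the corresponding z positions
    """
    scores = np.array([normalized_cross_correlation(particle_img, ref)
                       for ref in reference_stack])
    k = int(np.argmax(scores))
    # Optional parabolic interpolation between neighbouring depths for sub-step resolution
    if 0 < k < len(scores) - 1:
        denom = scores[k - 1] - 2 * scores[k] + scores[k + 1]
        if denom != 0:
            offset = 0.5 * (scores[k - 1] - scores[k + 1]) / denom
            step = reference_depths[k + 1] - reference_depths[k]
            return reference_depths[k] + offset * step, float(scores[k])
    return reference_depths[k], float(scores[k])
```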

    Robust Focusing using Orientation Code Matching

    This paper proposes a novel scheme for image focusing based on a new focus measure built from self-matching methods. A distinctive pencil-shaped profile is identified by comparing the similarity between all patterns extracted around the same position in each scene. Based on this profile, a new criterion function called Complementary Pencil Volume (CPV) is defined to evaluate how focused or defocused a scene is from the self-matching similarity rate; visually, it represents the volume of the pencil-shaped profile. Among matching methods, Orientation Code Matching (OCM) is recommended for its invariance to illumination and contrast. Several experiments using a telecentric lens demonstrate the efficiency of the proposed measures. Notably, a comparison of the OCM-based focus measure with conventional focus measures shows that it is robust against changes in illumination and contrast. Using this method, depth is measured by comparing focused and defocused regions in the scene under both high and low illumination conditions.
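    As a rough illustration of the orientation-code representation that OCM builds on, the sketch below quantizes gradient directions into discrete codes and compares two code maps. The bin count, gradient threshold, and function names are assumptions, and the paper's CPV focus measure itself is not reproduced here.

```python
import numpy as np

def orientation_codes(image, n_bins=16, grad_threshold=8.0):
    """Quantize gradient directions into n_bins orientation codes.

    Pixels whose gradient magnitude falls below grad_threshold receive the
    special code n_bins, so they are ignored during matching.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                              # gradients along rows, columns
    angle = np.arctan2(gy, gx) % (2 * np.pi)               # direction in [0, 2*pi)
    codes = np.floor(angle / (2 * np.pi / n_bins)).astype(np.int32) % n_bins
    codes[np.hypot(gx, gy) < grad_threshold] = n_bins      # low-contrast pixels
    return codes

def ocm_dissimilarity(codes_a, codes_b, n_bins=16):
    """Mean cyclic distance between two orientation-code maps (lower means more similar)."""
    valid = (codes_a < n_bins) & (codes_b < n_bins)
    if not valid.any():
        return n_bins / 4.0                                # conventional penalty: no usable pixels
    d = np.abs(codes_a[valid] - codes_b[valid])
    d = np.minimum(d, n_bins - d)                          # cyclic difference between codes
    return float(d.mean())
```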

    Practical and precise projector-camera calibration

    Projectors are important display devices for large-scale augmented reality applications. However, precisely calibrating projectors with large focus distances implies a trade-off between practicality and accuracy: one either needs a huge calibration board or a precise 3D model [12]. In this paper, we present a practical projector-camera calibration method that solves this problem. The user only needs a small calibration board to calibrate the system, regardless of the focus distance of the projector. Results show that the root-mean-squared re-projection error (RMSE) at a 450 cm projection distance is only about 4 mm, even though the system is calibrated using a small B4 (250 × 353 mm) calibration board.
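    The RMSE quoted above is the standard root-mean-squared re-projection error. The sketch below shows how such a figure is typically computed with OpenCV from calibrated parameters; the function name and argument layout are illustrative, not the authors' code, and the error is expressed in whatever units the point correspondences use.

```python
import numpy as np
import cv2

def reprojection_rmse(object_points, image_points, rvecs, tvecs, K, dist):
    """Root-mean-squared re-projection error over all calibration views."""
    squared_errors = []
    for obj, img, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
        # Project the 3D board points into the image with the calibrated model
        projected, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
        err = projected.reshape(-1, 2) - img.reshape(-1, 2)
        squared_errors.append((err ** 2).sum(axis=1))
    return float(np.sqrt(np.concatenate(squared_errors).mean()))
```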

    A Machine Vision Method for Correction of Eccentric Error: Based on Adaptive Enhancement Algorithm

    In the procedure of surface defect detection for large-aperture aspherical optical elements, it is of vital significance to accurately align the optical axis of the element with the mechanical spin axis. Therefore, a machine vision method for eccentric error correction is proposed in this paper. Focusing on the severe defocus blur of the reference crosshair image caused by the imaging characteristics of the aspherical optical element, which may cause the correction to fail, an Adaptive Enhancement Algorithm (AEA) is proposed to strengthen the crosshair image. AEA consists of the existing Guided Filter Dark Channel Dehazing Algorithm (GFA) and a proposed lightweight Multi-scale Densely Connected Network (MDC-Net). The enhancement effect of GFA is excellent but time-consuming, while that of MDC-Net is slightly inferior but runs in real time. Since AEA is executed dozens of times during each correction procedure, its real-time performance is critical. Therefore, by setting an empirical threshold on the definition evaluation function SMD2, GFA and MDC-Net are applied to highly and slightly blurred crosshair images, respectively, so as to ensure the enhancement effect while saving as much time as possible. AEA is robust in its time consumption, taking an average of 0.2721 s for GFA and 0.0963 s for MDC-Net on ten 200 × 200 pixel Region of Interest (ROI) images with different degrees of blur. With our method, the eccentric error can be reduced to within 10 µm.
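    A minimal sketch of the SMD2 definition measure and the threshold-based selection between the two enhancers described above; the threshold value and the `gfa`/`mdc_net` callables are placeholders, not the authors' implementations.

```python
import numpy as np

def smd2(gray):
    """SMD2 definition (sharpness) measure: sum over the image of
    |f(x, y) - f(x + 1, y)| * |f(x, y) - f(x, y + 1)|. Larger values mean a sharper image."""
    g = gray.astype(np.float64)
    dx = np.abs(g[:-1, :-1] - g[1:, :-1])
    dy = np.abs(g[:-1, :-1] - g[:-1, 1:])
    return float((dx * dy).sum())

def enhance_crosshair(roi, threshold, gfa, mdc_net):
    """Dispatch the ROI to the stronger-but-slower or lighter-but-faster enhancer
    depending on how blurred it is, mirroring the AEA selection rule."""
    if smd2(roi) < threshold:   # heavily blurred: use the stronger, slower GFA
        return gfa(roi)
    return mdc_net(roi)         # slightly blurred: use the lightweight MDC-Net
```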

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams, and the like. A thorough review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin-lens model has several limitations for solving different focus-related problems in computer vision. To overcome these limitations, the focus profile model is introduced as an alternative to classic concepts, such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited to solve diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration, and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
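    The thesis contrasts its focus profile model with the classic thin-lens depth-of-field limits. For reference, the sketch below computes those textbook limits using the hyperfocal-distance formulation; parameter names and the example values are illustrative, and this is not the model proposed in the thesis.

```python
def depth_of_field_limits(f_mm, N, c_mm, s_mm):
    """Classic thin-lens depth-of-field limits.

    f_mm : focal length, N : f-number, c_mm : circle-of-confusion diameter,
    s_mm : focus (subject) distance, all in millimetres.
    Returns (near_limit, far_limit); the far limit is infinite beyond the hyperfocal distance.
    """
    H = f_mm ** 2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = float('inf') if s_mm >= H else s_mm * (H - f_mm) / (H - s_mm)
    return near, far

# Example: 50 mm lens at f/2.8, 0.03 mm circle of confusion, focused at 2 m
print(depth_of_field_limits(50.0, 2.8, 0.03, 2000.0))
```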

    The Influence of Autofocus Lenses in the Camera Calibration Process

    Camera calibration is a crucial step in robotics and computer vision. Accurate camera parameters are necessary to achieve robust applications. Nowadays, the camera calibration process consists of fitting a set of data to a pin-hole model, assuming that if the reprojection error is close to zero, the camera parameters are correct. Since all camera parameters are unknown, the computed results are taken as true. However, the pin-hole model does not represent the camera behavior accurately when autofocus is considered. Real cameras with autofocus lenses change the focal length slightly to obtain sharp objects in the image, and this feature skews the calibration result if a single pin-hole model with a constant focal length is computed. In this article, a deep analysis of the camera calibration process is performed to detect and strengthen its weaknesses when autofocus lenses are used. To demonstrate that significant errors exist in the computed extrinsic parameters, the camera is mounted on a robot arm so that the true extrinsic camera parameters are known with an accuracy under 1 mm. It is also demonstrated that errors in the extrinsic camera parameters are compensated by biases in the intrinsic camera parameters. Since significant errors exist with autofocus lenses, a modification of the widely accepted camera calibration method based on images of a planar template is presented. A pin-hole model with a distance-dependent focal length is proposed to substantially improve the calibration process.
    Ricolfe Viala, C.; Esparza Peidro, A. (2021). The Influence of Autofocus Lenses in the Camera Calibration Process. IEEE Transactions on Instrumentation and Measurement, 70:1-15. https://doi.org/10.1109/TIM.2021.3055793
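    A minimal sketch of the idea of a pinhole projection whose focal length varies with object distance; the linear interpolation of per-distance focal lengths and all numeric values are illustrative assumptions, not the parameterization proposed in the article.

```python
import numpy as np

def project_with_distance_dependent_focal(points_cam, f_of_z, cx, cy):
    """Pinhole projection where the focal length depends on the object distance.

    points_cam : (n, 3) points in the camera frame (z > 0)
    f_of_z     : callable returning the focal length (pixels) for a given distance z
    cx, cy     : principal point (pixels)
    """
    pts = np.asarray(points_cam, dtype=np.float64)
    z = pts[:, 2]
    f = np.array([f_of_z(zi) for zi in z])
    u = cx + f * pts[:, 0] / z
    v = cy + f * pts[:, 1] / z
    return np.stack([u, v], axis=1)

# Example: focal lengths calibrated at a few focus distances, interpolated in between
# (hypothetical values chosen only to illustrate the varying-focal-length behaviour)
cal_z = np.array([500.0, 1000.0, 2000.0, 4000.0])      # distances in mm
cal_f = np.array([1510.0, 1495.0, 1488.0, 1484.0])     # focal lengths in pixels
f_of_z = lambda z: float(np.interp(z, cal_z, cal_f))
print(project_with_distance_dependent_focal([[100.0, -50.0, 1500.0]], f_of_z, 960.0, 540.0))
```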