
    Focal position-controlled processing head for a laser pattern generator (LPG) for flexible micro-structuring

    In micro-structuring processes, direct structuring of the substrate is in most cases not possible; the profile is therefore first produced in photoresist and then, in a second step, transferred into the substrate. The resist structuring can be performed using the flexible characteristics of a laser pattern generator (LPG). In these processes there is a favorable relationship between the equipment expense and the obtainable processing results. A reproducible result in all micro-structuring tasks requires good reproducibility of all process-relevant parameters. When a laser pattern generator is used, the focal position of the strongly focused laser beam relative to the processing surface must therefore be controlled precisely. [Continues.]

    Research on Wavelet Based Autofocus Evaluation in Micro-vision

    This paper presents the construction of two kinds of focus measure operators defined in the wavelet domain. One is based on the observation that the Discrete Wavelet Transform (DWT) coefficients in the high-frequency subbands of an in-focus image are higher than those of a defocused one. The other is based on the observation that the autocorrelation of an in-focus image filtered through the Continuous Wavelet Transform (CWT) gives a sharper profile than that of a blurred one. The wavelet basis, the scaling factor and the form of the high-frequency energy sum are the key factors in constructing such an operator. Two new focus measure operators are defined through autofocusing experiments on the micro-vision system of a workcell for micro-alignment. The performance of the two operators is evaluated quantitatively by comparison with two spatial-domain operators, the Brenner Function (BF) and the Squared Gradient Function (SGF). The focus resolution of the optimized DWT-based operator is 14% higher than that of BF and its computational cost is approximately 52% lower than BF's. The focus resolution of the optimized CWT-based operator is 41% lower than that of SGF, whereas its computational cost is approximately 36% lower than SGF's. This shows that wavelet-based autofocus measure functions can be practically used in micro-vision applications.
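    As a rough illustration of the first mechanism (not the operators defined in the paper), the sketch below scores focus by summing the energy of the high-frequency subbands of a single-level 2-D DWT. The use of PyWavelets, the 'db2' wavelet and a single decomposition level are assumptions made for the example.

        import numpy as np
        import pywt

        def dwt_focus_measure(gray, wavelet="db2"):
            """Sum of squared detail (high-frequency) DWT coefficients.

            A sharper, in-focus image concentrates more energy in the
            detail subbands, so a larger score means better focus.
            """
            _, (lh, hl, hh) = pywt.dwt2(gray.astype(float), wavelet)
            return float((lh**2).sum() + (hl**2).sum() + (hh**2).sum())

        # Usage: pick the sharpest frame of a focus stack.
        # best_frame = max(frames, key=dwt_focus_measure)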

    Automated Optical Inspection and Image Analysis of Superconducting Radio-Frequency Cavities

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. To investigate this inner surface for more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, the optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed and new variables describing the cavity surface were obtained. The accuracy of this code is up to 97% and its positive predictive value (PPV) 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron-beam welding speed and the differences in surface roughness due to the different chemical treatments. In addition, a correlation of ρ = -0.93 with a significance of 6σ between an obtained surface variable and the maximal accelerating field was found.
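    For intuition only, the sketch below shows one way such a correlation and its sigma-level significance could be computed for per-cavity data; the variable names, the synthetic values and the use of a Spearman rank correlation are assumptions for the example, not the analysis performed in the paper.

        import numpy as np
        from scipy import stats

        # Hypothetical per-cavity values standing in for the surface variable
        # extracted from the images and the measured maximum gradient.
        rng = np.random.default_rng(0)
        surface_variable = rng.normal(size=100)
        max_gradient = 30.0 - 5.0 * surface_variable + rng.normal(size=100)

        rho, p_value = stats.spearmanr(surface_variable, max_gradient)
        # Express the two-sided p-value as an equivalent Gaussian sigma level.
        sigma = stats.norm.isf(p_value / 2.0)
        print(f"rho = {rho:.2f}, significance ~ {sigma:.1f} sigma")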

    Pre-Flight Calibration of the Mars 2020 Rover Mastcam Zoom (Mastcam-Z) Multispectral, Stereoscopic Imager

    The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multispectral, and color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies that range in focal length from 26 mm (25.5°×19.1° FOV) to 110 mm (6.2°×4.2° FOV) and will acquire data at pixel scales of 148-540 μm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover's mast with a stereo baseline of 24.3±0.1 cm and a toe-in angle of 1.17±0.03° (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600×1200 active pixels and an 8-position filter wheel that contains an IR-cutoff filter for color imaging through the detectors' Bayer-pattern filters, a neutral density (ND) solar filter for imaging the sun, and 6 narrow-band geology filters (16 total filters). An associated Digital Electronics Assembly provides command and data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe the pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26th and May 9th, 2019, ∼45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly, Test and Launch Operations (ATLO) at the Jet Propulsion Laboratory and Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing using camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be within the 0.2 design requirement. We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operation at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component to the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
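    The quoted pixel scales follow from the field of view, the detector format and the range via small-angle arithmetic; the back-of-envelope sketch below roughly reproduces the numbers above (to within about 10%, since it ignores distortion and the exact effective focal lengths).

        import math

        ACTIVE_PIXELS_X = 1600  # Kodak KAI-2020 active columns

        def pixel_scale(fov_deg, range_m, n_pixels=ACTIVE_PIXELS_X):
            """Approximate footprint of one pixel at a given range (metres)."""
            ifov_rad = math.radians(fov_deg) / n_pixels
            return ifov_rad * range_m

        # Widest (26 mm, 25.5 deg) and narrowest (110 mm, 6.2 deg) settings:
        for fov_deg in (25.5, 6.2):
            print(f"{fov_deg:>4} deg FOV: "
                  f"{pixel_scale(fov_deg, 2.0) * 1e6:.0f} um at 2 m, "
                  f"{pixel_scale(fov_deg, 1000.0) * 1e2:.1f} cm at 1 km")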

    Accelerating polygon beam with peculiar features

    We report on a novel kind of accelerating beam that follows parabolic paths in free space. This accelerating peculiar polygon beam (APPB) is induced by the spectral phase symmetrization of the regular polygon beam (RPB) with five intensity peaks, and it preserves a peculiar symmetric structure during propagation. Notably, such a beam not only exhibits an autofocusing property, but also possesses two types of accelerating intensity maxima, i.e., the cusp and the spot-point structure, which do not exist in previously reported accelerating beams. We also provide a detailed insight into the theoretical origin and characteristics of this spatially accelerating beam through catastrophe theory. Moreover, an experimental scheme based on a digital micromirror device (DMD) with a binary spectral hologram is proposed to generate the target beam by precise modulation, and a longitudinal needle-like focus is observed around the focal region. The experimental results confirm the peculiar features presented in the theoretical findings. Further, the APPB is verified to exhibit a self-healing property during propagation, with either the obstructed cusp or the spot intensity maximum reconstructing after a certain distance. Hence, we believe that the APPB will facilitate applications in the areas of particle manipulation, material processing and optofluidics.

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin-lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts, such as the near and far limits of the depth-of-field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
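    For reference, the classic near and far depth-of-field limits that the thesis uses as its baseline come from the thin-lens/circle-of-confusion model; the sketch below implements the standard textbook formulas (a generic illustration under common assumptions, not the focus profile model proposed in the thesis).

        def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
            """Near/far depth-of-field limits from the classic thin-lens model.

            H is the hyperfocal distance; coc_mm is the acceptable circle of
            confusion on the sensor (0.03 mm is a common full-frame value).
            """
            H = focal_mm**2 / (f_number * coc_mm) + focal_mm
            s = focus_dist_mm
            near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
            far = s * (H - focal_mm) / (H - s) if s < H else float("inf")
            return near, far

        # Example: a 50 mm lens at f/2.8 focused at 2 m keeps roughly
        # 1.88 m to 2.14 m acceptably sharp under this model.
        near, far = dof_limits(50.0, 2.8, 2000.0)
        print(f"near ~ {near / 1000:.2f} m, far ~ {far / 1000:.2f} m")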

    An Automated System for Chromosome Analysis

    The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, and to provide a basis for statistical analysis of quantitative chromosome measurement data are described

    Focus Is All You Need: Loss Functions For Event-based Vision

    Event cameras are novel vision sensors that output pixel-level brightness changes ("events") instead of traditional video frames. These asynchronous sensors offer several advantages over traditional cameras, such as high temporal resolution, very high dynamic range, and no motion blur. To unlock the potential of such sensors, motion compensation methods have recently been proposed. We present a collection and taxonomy of twenty-two objective functions to analyze event alignment in motion compensation approaches (Fig. 1). We call them Focus Loss Functions since they have strong connections with functions used in traditional shape-from-focus applications. The proposed loss functions allow bringing mature computer vision tools to the realm of event cameras. We compare the accuracy and runtime performance of all loss functions on a publicly available dataset, and conclude that the variance, the gradient and the Laplacian magnitudes are among the best loss functions. The applicability of the loss functions is shown on multiple tasks: rotational motion, depth and optical flow estimation. The proposed focus loss functions make it possible to unlock the outstanding properties of event cameras.
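    For intuition, the three measures singled out above (variance, gradient magnitude and Laplacian magnitude) can be written in a few lines when applied to an image of warped events (IWE); the sketch below is a generic illustration, not the authors' code, and the event-warping step that produces the IWE is omitted.

        import numpy as np

        def variance_loss(iwe):
            """Variance of the image of warped events; higher means sharper."""
            return float(np.var(iwe))

        def gradient_magnitude_loss(iwe):
            """Mean squared gradient magnitude of the IWE."""
            gy, gx = np.gradient(iwe.astype(float))
            return float(np.mean(gx**2 + gy**2))

        def laplacian_magnitude_loss(iwe):
            """Mean squared 5-point finite-difference Laplacian of the IWE."""
            x = iwe.astype(float)
            lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
            return float(np.mean(lap**2))

        # Usage: warp events with candidate motion parameters into an IWE and
        # keep the parameters that maximize one of these focus scores.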

    Generation of All-in-Focus Images by Noise-Robust Selective Fusion of Limited Depth-of-Field Images

    The limited depth-of-field of some cameras prevents them from capturing perfectly focused images when the imaged scene covers a large distance range. To compensate for this problem, image fusion has been exploited to combine images captured with different camera settings, thus yielding a higher-quality all-in-focus image. Since most current approaches for image fusion rely on maximizing the spatial frequency of the composed image, the fusion process is sensitive to noise. In this paper, a new algorithm for computing the all-in-focus image from a sequence of images captured with a low depth-of-field camera is presented. The proposed approach adaptively fuses the different frames of the focus sequence in order to reduce noise while preserving image features. The algorithm consists of three stages: 1) focus measure; 2) selectivity measure; and 3) image fusion. An extensive set of experimental tests has been carried out in order to compare the proposed algorithm with state-of-the-art all-in-focus methods using both synthetic and real sequences. The obtained results show the advantages of the proposed scheme even for high levels of noise.
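    A minimal, non-robust version of this kind of selective fusion (without the noise-suppressing selectivity stage described in the paper) computes a per-pixel focus measure for every frame of the stack and takes each pixel from the frame where that measure is largest; the sketch below does this with a local-variance focus measure and assumes the focus stack is a NumPy array of grayscale frames.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def naive_all_in_focus(stack, win=7):
            """Fuse a focus stack of shape (n_frames, H, W) into one image.

            Uses local variance as a per-pixel focus measure and, for each
            pixel, selects the frame where that measure is largest.
            """
            stack = stack.astype(float)
            mean = uniform_filter(stack, size=(1, win, win))
            sq_mean = uniform_filter(stack**2, size=(1, win, win))
            local_var = sq_mean - mean**2           # focus measure per pixel
            best = np.argmax(local_var, axis=0)     # sharpest frame index
            return np.take_along_axis(stack, best[None], axis=0)[0]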