A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry
The electronics industry is one of the fastest evolving, most innovative, and most competitive industries. To meet the high consumption demand for electronic components, product quality standards must be well maintained. Automatic optical inspection (AOI) is a non-destructive technique used in the quality inspection of various products. It is considered robust and can replace human inspectors, who are subject to dullness and fatigue when performing inspection tasks. A fully automated optical inspection system consists of hardware and software. The hardware setup, comprising the image sensor and illumination, is responsible for acquiring the digital image, while the software implements an inspection algorithm that extracts features from the acquired images and classifies them as defective or non-defective according to user requirements. A sorting mechanism can then separate defective products from good ones. This article provides a comprehensive review of the AOI systems used in the electronics, micro-electronics, and opto-electronics industries. The review first explains the defects of commonly inspected electronic components, such as semiconductor wafers, flat panel displays, printed circuit boards, and light emitting diodes. Hardware setups for image acquisition are then discussed in terms of camera and lighting-source selection and configuration. The inspection algorithms used for detecting defects in electronic components are discussed in terms of the preprocessing, feature extraction, and classification tools used for this purpose. Recent articles that use deep learning algorithms are also reviewed. The article concludes by highlighting current trends and possible future research directions.
Framework of the IQONIC Project; European Union's Horizon 2020 Research and Innovation Program
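As context for the software side described above, one of the simplest AOI classification schemes is golden-template comparison: normalise the test image, compare it per pixel to a known-good reference, and flag the part when enough pixels deviate. The sketch below is illustrative only; function names and thresholds are not taken from any system in the review.

```python
import numpy as np

def inspect_part(image, reference, diff_thresh=0.2, defect_area=5):
    """Classify a part image against a golden reference (illustrative sketch)."""
    # Preprocessing: normalise both images to [0, 1]
    img = (image - image.min()) / (np.ptp(image) + 1e-12)
    ref = (reference - reference.min()) / (np.ptp(reference) + 1e-12)
    # Feature extraction: per-pixel absolute deviation from the reference
    deviating = np.abs(img - ref) > diff_thresh
    # Classification: flag the part if enough pixels deviate
    return "defective" if deviating.sum() >= defect_area else "good"
```

Real systems replace the per-pixel difference with learned features or deep networks, as the review discusses, but the acquire/extract/classify structure is the same.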
Fooling Polarization-based Vision using Locally Controllable Polarizing Projection
Polarization is a fundamental property of light that encodes abundant information about surface shape, material, illumination, and viewing geometry. The computer vision community has witnessed a blossoming of polarization-based vision applications, such as reflection removal, shape-from-polarization, transparent object segmentation, and color constancy, partly due to the emergence of single-chip mono/color polarization sensors that make polarization data acquisition easier than ever. However, is polarization-based vision vulnerable to adversarial attacks? If so, is it possible to realize these attacks in the physical world without their being perceived by human eyes? In this paper, we warn the community of the vulnerability of polarization-based vision, which can be more serious than that of RGB-based vision. By adapting a commercial LCD projector, we achieve locally controllable polarizing projection, which we successfully use to fool state-of-the-art polarization-based vision algorithms for glass segmentation and color constancy. Whereas existing physical attacks on RGB-based vision always suffer from a trade-off between attack efficacy and visual perceptibility, adversarial attacks based on polarizing projection are contact-free and visually imperceptible, since the naked human eye can rarely perceive the difference between maliciously manipulated polarized light and ordinary illumination. This poses unprecedented risks to polarization-based vision, in both the monochromatic and trichromatic domains, to which due attention should be paid and for which countermeasures should be considered.
Method and Apparatus for 3D Imaging a Workpiece
To obtain a three-dimensional virtual reconstruction of a workpiece, the workpiece is positioned on a display screen, between the screen and at least one imager. The imager acquires multiple images of the workpiece while (a) multiple light stripes are displayed and swept in a first directional orientation across the display screen, (b) multiple light stripes are displayed and swept in at least one second directional orientation across the display screen, and (c) multiple images are captured for each position of the light stripes at different exposure times. From the multiple images, the difference caused by the workpiece in the width and profile of the light stripes is determined. That difference is used to calculate a depth value (z) of the workpiece at each imager pixel position (x, y). The calculated depth value is used to reconstruct the surface shape of the workpiece. In embodiments, the described transmittance light-capture analyses are supplemented with reflectance light-capture analyses.
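The depth-from-stripe idea above can be illustrated with a toy numeric sketch: locate the centre of a projected stripe in each image column, and treat the shift between the empty-screen reference and the observation with the workpiece present as proportional to local depth. The calibration constant `k` is a placeholder, not a value from the patent, and a real system would also use the stripe-width change that the abstract describes.

```python
import numpy as np

def stripe_centres(image):
    """Intensity-weighted centroid of a horizontal stripe, per image column."""
    rows = np.arange(image.shape[0])[:, None]
    weight = image.sum(axis=0) + 1e-12
    return (rows * image).sum(axis=0) / weight

def depth_from_shift(ref_img, obs_img, k=1.0):
    """Toy model: depth z is proportional to the stripe-centre shift."""
    return k * (stripe_centres(obs_img) - stripe_centres(ref_img))
```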
Development of an electronic system for single-pixel image capture using acoustic methods
Final project, Master's Degree in Industrial Engineering. Code: SJA020. Academic year: 2018/2019.
This project describes the implementation of an acoustic imaging system in an integrated, autonomous electronic device. The device belongs to the class of so-called single-pixel devices; that is, it is able to reconstruct an image with spatial resolution using a single sensor or transducer. The key point of these techniques is the ability to modulate the source field and then recover the signal sequentially or, as in this case, by frequency multiplexing. The image is finally reconstructed by a computational algorithm.
Throughout this project, the reader will find the physical equations underlying the problem, the image reconstruction algorithm and its behavior, and its implementation in a real system and environment, which is the main part of the project. The considerations and restrictions involved in applying a mathematical model to the real world constrain the solution, forcing decisions such as component selection.
Simulation results are given and discussed, validating the reconstruction algorithm. Moreover, experimental measurements are provided, leading to a discussion of potential error sources and ways to improve the performance of the device.
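The frequency-multiplexing step mentioned above can be sketched numerically: if each source element is modulated at its own carrier frequency, the single detector records a sum of tones, and the per-pixel values fall out of an FFT of that one time signal. This is a simplified sketch of the principle, not the project's actual implementation.

```python
import numpy as np

def recover_pixels(signal, freqs, fs):
    """Recover per-pixel amplitudes from a frequency-multiplexed detector signal.

    Each pixel is assumed modulated at its own carrier in `freqs`; its image
    value is the amplitude of the corresponding FFT bin of the signal sampled
    at rate `fs`.
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal) * 2.0 / n
    bins = np.round(np.asarray(freqs) * n / fs).astype(int)
    return np.abs(spectrum[bins])
```

With carriers chosen on exact FFT bins, the recovered amplitudes equal the pixel values directly; off-bin carriers would require windowing or longer records.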
Light Field Methods for the Visual Inspection of Transparent Objects
Transparent objects play crucial roles in everyday human life; they must meet high quality requirements and therefore must be visually inspected. Developing automated visual inspection systems for complex-shaped transparent objects is still a challenging task. As a solution, this book introduces light field methods for all the main components of a visual inspection system: a novel light field sensor, suitable processing methods, and a light field illumination approach
Eurodisplay 2019
The collection includes abstracts of the reports selected by the conference program committee
Quantitative electroluminescence measurements of PV devices
Electroluminescence (EL) imaging is a fast and comparatively low-cost method for spatially resolved analysis of photovoltaic (PV) devices. A silicon CCD or InGaAs camera is used to capture the near-infrared radiation emitted from a forward-biased PV device. EL images can be used to identify defects, such as cracks and shunts, but also to map physical parameters, such as series resistance.
The lack of suitable image processing routines often prevents automated and setup-independent quantitative analysis. This thesis provides a tool-set, rather than a specific solution to address this problem. Comprehensive and novel procedures to calibrate imaging systems, to evaluate image quality, to normalize images and to extract features are presented.
For image quality measurement, the signal-to-noise ratio (SNR) is conventionally obtained from a set of EL images, and its spatial average depends on the size of the background area within the EL image. In this work the SNR is calculated both spatially resolved and as a (background-independent) averaged parameter, using only one EL image and no additional information about the imaging system.
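As a rough illustration of a single-image SNR map, the smoothed image can serve as the signal estimate and the residual after smoothing as a noise proxy. This is a simplified stand-in for the idea, not the thesis' actual procedure.

```python
import numpy as np

def box_smooth(img, k=3):
    """k x k box filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def snr_map(img, k=3):
    """Signal: local mean; noise: std of the high-frequency residual."""
    img = np.asarray(img, float)
    smooth = box_smooth(img, k)
    noise = np.std(img - smooth) + 1e-12
    return smooth / noise
```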
This thesis additionally presents methods to measure image sharpness in a spatially resolved way and introduces a new parameter describing resolvable object size. This allows images of different resolution and sharpness to be equalised, enabling artefact-free comparison.
The flat-field image scales the emitted EL signal to the detected image intensity. It is often measured by imaging a homogeneous light source, such as a red LCD screen, at close distance to the camera lens. This measurement, however, only partially removes vignetting, the main contributor to the flat field. This work quantifies the vignetting-correction quality and introduces more sophisticated vignetting measurement methods.
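Once measured, applying a flat field reduces to a per-pixel gain division, so that a uniform emitter again images as a uniform intensity. A minimal sketch, assuming the flat field is an image of a perfectly homogeneous source:

```python
import numpy as np

def flat_field_correct(raw, flat):
    """Divide out the normalised flat-field gain from a raw image."""
    gain = flat / flat.mean()               # unit-mean per-pixel gain
    return raw / np.where(gain > 0, gain, 1.0)  # guard against dead pixels
```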
Outdoor EL imaging in particular often includes perspective distortion of the measured PV device. This thesis presents methods to automatically detect and correct this distortion, including intensity correction for different irradiance angles.
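For a roughly planar module, perspective correction amounts to estimating a homography from (at least) the four module corners. A minimal direct-linear-transform sketch in NumPy follows; this is an assumed standard approach, and the thesis' own detection and correction method may differ.

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping >= 4 src points to dst points (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Solution is the null vector of A: last row of V^T from the SVD
    _, _, Vt = np.linalg.svd(np.array(A, float))
    return Vt[-1].reshape(3, 3)

def map_point(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```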
Single-time effects and hot pixels are image artefacts that can impair EL image quality and can conceivably be confused with cell defects. Their detection and removal are described in this thesis.
The methods presented enable direct pixel-by-pixel comparison for EL images of the same device taken at different measurement and exposure times, even if imaged by different contractors.
EL statistics correlating cell intensity to crack length and PV performance parameters are extracted from EL images and dark I-V curves. This allows spatially resolved performance measurement without the laborious flash tests otherwise needed to measure the light I-V curve.
This work aims to convince the EL community of certain calibration and imaging routines, which allow setup-independent, automatable, standardised, and therefore comparable results.
Recognizing the benefits of EL imaging for quality control and failure detection, this work paves the way towards cheaper and more reliable PV generation.
The code used in this work is made publicly available as a library and as an interactive graphical application for scientific image processing
High Resolution Vision-Based Servomechanism Using a Dynamic Target with Application to CNC Machines
This dissertation introduces a novel three-dimensional vision-based servomechanism with application to real-time position control of manufacturing equipment, such as Computer Numerical Control (CNC) machine tools. The proposed system directly observes the multi-dimensional position of a point on the moving tool relative to a fixed ground, thus bypassing the inaccurate kinematic model normally used to convert axis sensor readings into an estimate of the tool position. A charge-coupled device (CCD) camera is used as the position transducer, directly measuring the current position error of the tool with respect to an absolute coordinate system. Owing to the direct-sensing nature of the transducer, no geometric error compensation is required. Two new signal-processing algorithms, based on a recursive Newton-Raphson optimization routine, are developed to process the input data collected through digital imaging. The algorithms allow simultaneous high-precision position and orientation estimation from single readings. The desired displacement command of the tool in a planar environment is emulated, at one end of the kinematic chain, by an active element or active target pattern on a liquid-crystal display (LCD). At the other end of the kinematic chain, the digital camera observes the active target and provides the visual feedback information used for position control of the tool. Implementation is carried out on an XYθZ stage, which is positioned with high resolution. The introduction of the camera into the control loop yields a visual servo architecture, whose dynamic problems and stability are analyzed in depth for the case study of the single-camera, single-image-processing-thread configuration. Finally, two new command-generation protocols are explained for full implementation of the proposed structure in real-time control applications.
Command-issuing resolution does not depend on the size of the smallest element of the grid/display being imaged, but can instead be determined in accordance with the sensor's resolution
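The recursive Newton-Raphson refinement at the heart of such estimation can be illustrated in one dimension: given a cost measuring the mismatch between observed samples and a shifted template, Newton steps on finite-difference derivatives converge to a subpixel position. This is an illustrative sketch only, not the dissertation's full 2-D position/orientation estimator.

```python
import numpy as np

def newton_refine(cost, x0, h=1e-4, iters=20):
    """Scalar Newton-Raphson minimisation with finite-difference derivatives."""
    x = float(x0)
    for _ in range(iters):
        grad = (cost(x + h) - cost(x - h)) / (2.0 * h)
        hess = (cost(x + h) - 2.0 * cost(x) + cost(x - h)) / (h * h)
        if abs(hess) < 1e-12:      # flat region: stop rather than divide by ~0
            break
        x -= grad / hess           # Newton step
    return x
```

Applied to the squared mismatch between pixel samples of a target feature and a shifted intensity model, the minimiser lands well below the pixel pitch, which is the mechanism behind sub-grid command resolution.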
Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks
Automatic volume inspection for glass blow moulds
In the glass bottle mould making industry, volume control is done by measuring the amount of water needed to fill the mould. This process has several issues. Firstly, it requires a trained operator to properly seal the mould. Secondly, different operators will obtain different volume values. A further issue is the time and work the procedure requires, up to 20 minutes for a single mould, which makes it unsuitable for inspecting several moulds of the same series. These issues can be solved by automating the procedure. By using reverse engineering systems to obtain the internal cavity surfaces, comparative studies such as wear studies can be performed, enabling the optimization of the moulds. The goal of this project is to establish a system that automates the inspection of the moulds through acquisition of the moulding surfaces. The volume of the moulds and the surface deviations in specific areas can then be measured. The development of this project focused on two main areas: the development of a script that calculates the volume and inspects the surface from point clouds, to determine whether the mould is in an acceptable state; and the study of technologies capable of acquiring the mould's surface while being automatable. In this study, several cases using laser and structured light are examined to understand the abilities and limitations of these technologies. The first case used polished cast iron moulds to determine the ability to acquire the surface and obtain the volume. The ability to produce proper comparative results was then explored using a set of unpolished cast iron moulds and the same moulds once polished, to verify whether the systems used can capture the deviations between the two states.
Finally, the technologies were validated using a demo bronze mould, on which surface deviations were inspected, and a ring gauge, whose inner cylinder was used for inspection. In these cases, the laser scanner was able to obtain the volumes of the moulds as well as proper comparative results without spray. The structured light system, by contrast, proved unable to acquire the surfaces of the moulds and of the ring gauge without spray. Despite this performance, the system is quite automatable, and a state-of-the-art structured light system using blue light could serve this purpose. The laser is also a viable solution, but the cost and complexity of automating it can be higher than for the structured light system
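Once the cavity surface has been acquired and triangulated, the volume itself follows from the divergence theorem: summing signed tetrahedron volumes over the triangles of a closed, consistently wound mesh. A minimal sketch (mesh construction from the scanner's point cloud is assumed to happen upstream):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently wound triangle mesh.

    Each triangle (a, b, c) contributes the signed volume of the tetrahedron
    it forms with the origin; for a closed surface the signs cancel so the
    sum equals the enclosed volume.
    """
    v = np.asarray(vertices, float)
    total = 0.0
    for a, b, c in faces:
        total += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return abs(total)
```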