931 research outputs found

    On the implementation of the gamma function for image correction on an endoscopic camera

    Get PDF
    This paper describes part of a project that implemented the image processing for a CMOS sensor for endoscopic purposes. The sensor is a small device of 1×1 mm² and the image processing is done inside an FPGA. This part of the work describes the implementation of the gamma function with a balance between the resources needed and the accuracy. A piecewise-linear solution was used that stores the values for 31 gamma curves, with gamma ranging from 1 to 4 in 0.1 steps. The solution is 10-bit based, was coded in VHDL and is implemented in a Spartan 6 FPGA. The results show that it is an accurate solution with a small footprint in terms of used resources.
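    As a rough illustration of the approach described above, the sketch below builds a piecewise-linear approximation of a 10-bit gamma curve and stores one curve per gamma value from 1.0 to 4.0 in 0.1 steps. It is written in Python rather than VHDL, and the number of linear segments, the choice of out = in^(1/gamma), and the interpolation details are assumptions, not the paper's actual design.

    import numpy as np

    BITS = 10                          # 10-bit data path, as stated in the abstract
    FULL_SCALE = (1 << BITS) - 1       # 1023

    def gamma_breakpoints(gamma, n_segments=16):
        """Sampled points of the gamma curve; these would be the stored values.
        The encoding out = in**(1/gamma) and the 16 segments are assumptions."""
        x = np.linspace(0, FULL_SCALE, n_segments + 1)
        y = np.round(FULL_SCALE * (x / FULL_SCALE) ** (1.0 / gamma))
        return x.astype(int), y.astype(int)

    def apply_gamma(pixels, gamma, n_segments=16):
        """Piecewise-linear interpolation between stored breakpoints, mimicking
        the per-segment multiply-and-add a hardware implementation would use."""
        x, y = gamma_breakpoints(gamma, n_segments)
        return np.interp(pixels, x, y).astype(np.uint16)

    # The design stores curves for gamma = 1.0, 1.1, ..., 4.0 (31 curves).
    gammas = np.round(np.arange(1.0, 4.0 + 0.05, 0.1), 1)
    corrected = apply_gamma(np.arange(0, FULL_SCALE + 1), gammas[12])  # gamma = 2.2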

    Foveated Sampling Architectures for CMOS Image Sensors

    Get PDF
    Electronic imaging technologies face the challenge of power consumption when transmitting large amounts of image data from the acquisition imager to the display or processing devices. This is especially a concern for portable applications, and becomes more prominent in increasingly high-resolution, high-frame-rate imagers. Therefore, new sampling techniques are needed to minimize transmitted data while maximizing the conveyed image information. From this point of view, two approaches have been proposed and implemented in this thesis: a system-level approach, in which the classical 1D row-sampling CMOS imager is modified into a 2D ring-sampling pyramidal architecture using the same standard three-transistor (3T) active pixel sensor (APS); and a device-level approach, in which the classical orthogonal architecture is preserved while the APS device structure is altered, to design an expandable multiresolution image sensor. A new scanning scheme has been suggested for the pyramidal image sensor, resulting in an intrascene foveated dynamic range (FDR) similar in profile to that of the human eye: the inner rings of the imager have a higher dynamic range than the outer rings. The pyramidal imager transmits the sampled image through 8 parallel output channels, allowing higher frame rates. The human eye is known to be less sensitive to oblique contrast. Exploiting this fact, together with the typically oblique distribution of the fixed pattern noise (FPN) in the pyramidal architecture, we demonstrate lower perceived noise than with the orthogonal FPN distribution of classical CMOS imagers. The multiresolution image sensor principle is based on averaging regions of low interest from frame-sampled image kernels: one pixel is read from each kernel, while pixels in the region of interest are kept at their full resolution, as sketched below. This significantly reduces the transferred data and increases the frame rate. Such an architecture allows for programmability and expandability of multiresolution imaging applications.
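    A software sketch of the multiresolution readout idea described above: kernels outside a region of interest are reduced to their average, while the region of interest keeps full resolution. The kernel size, the rectangular ROI and the purely software averaging are illustrative assumptions; in the actual sensor this reduction happens at the device level, and only one value per low-interest kernel is transferred.

    import numpy as np

    def multiresolution_readout(frame, roi, k=4):
        """Average k x k kernels outside the region of interest (ROI), keep
        full-resolution pixels inside it. 'roi' is (row0, row1, col0, col1)."""
        h, w = frame.shape
        out = frame.astype(float).copy()
        r0, r1, c0, c1 = roi
        for r in range(0, h - h % k, k):
            for c in range(0, w - w % k, k):
                # Kernels overlapping the ROI are left at full resolution.
                if r1 > r and r0 < r + k and c1 > c and c0 < c + k:
                    continue
                out[r:r + k, c:c + k] = frame[r:r + k, c:c + k].mean()
        return out

    # Example: a 64 x 64 frame with a 16 x 16 high-resolution fovea in the centre.
    frame = np.random.randint(0, 1024, (64, 64))
    reduced = multiresolution_readout(frame, roi=(24, 40, 24, 40))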

    Advanced Image Acquisition, Processing Techniques and Applications

    Get PDF
    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution

    Near Infrared Thermal Imaging for Process Monitoring in Additive Manufacturing

    Get PDF
    This work presents the design and development of a near infrared thermal imaging system specifically designed for process monitoring of additive manufacturing. The overall aim of the work was to use in situ thermal imaging to develop methods for monitoring process parameters of additive manufacturing processes. The main motivations are the recent growth in the use of additive manufacturing and the underutilisation of near infrared camera technology in thermal imaging. The combination of these two technologies presents opportunities for unique process monitoring methods, which are demonstrated here. A thermal imaging system was designed for monitoring the electron beam melting process of an Arcam S12. With this system, a new method of dynamic emissivity correction based on tracking the melted material is shown. This allows emissivity values to be applied automatically to previously melted areas of a layer image, reducing the temperature error caused by incorrect emissivity values or the assumption of a single value for a whole image. Methods for determining material properties such as porosity and tensile strength from the in situ thermal imaging are also shown. This kind of analysis from in situ images is the groundwork for allowing part properties to be tuned at build time and could remove the need for post-build testing to determine whether a part is suitable for use. The system was also used to image electron beam welding and gas tungsten arc welding. With the electron beam welding of dissimilar metals, the thermal images were able to show the preheating effect that the melt pool had on the materials, the suspected reason for the process's success. For the gas tungsten arc welding process, analysis methods intended to predict weld quality were developed, with the aim of later integrating these into the robotic welding process. Methods for detecting the freezing point of the weld bead and for tracking slag spots were developed, both of which could be used as indicators of weld quality or defects. A machine learning algorithm was also applied to images of pipe welding from this process, with the aim of developing an image segmentation algorithm that could measure parts of the weld in process and inform other analyses, such as those above.
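    To make the emissivity-correction idea concrete, the sketch below assigns different emissivities to melted and unmelted (powder) regions of a layer and corrects the apparent temperature pixel by pixel using a single-wavelength Wien approximation. The wavelength, the emissivity values and the graybody model are illustrative assumptions, not the calibration used in the thesis.

    import numpy as np

    C2 = 1.4388e-2      # second radiation constant [m K]
    LAM = 0.9e-6        # effective NIR wavelength [m] -- an assumption

    def true_temperature(t_apparent, emissivity, lam=LAM):
        """Wien-approximation correction of apparent (emissivity = 1) temperature
        to true temperature for a graybody of known emissivity, per pixel."""
        return 1.0 / (1.0 / t_apparent + (lam / C2) * np.log(emissivity))

    def emissivity_map(melted_mask, eps_melted=0.45, eps_powder=0.7):
        """Assign emissivities pixel-wise: melted (solidified) metal vs powder.
        The mask would come from tracking the melted material, as described
        above; the emissivity values here are placeholders."""
        return np.where(melted_mask, eps_melted, eps_powder)

    # Example: correct a layer image once part of it has been melted.
    t_app = np.full((4, 4), 1200.0)                  # apparent temperatures [K]
    melted = np.zeros((4, 4), dtype=bool)
    melted[:2, :] = True                             # top half already melted
    t_true = true_temperature(t_app, emissivity_map(melted))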

    Optimized PET module for both pixelated and monolithic scintillator crystals

    Get PDF
    [eng] Time-of-Flight Positron Emission Tomography (TOF-PET) scanners demand fast and efficient photosensors and scintillators coupled to fast readout electronics. Nowadays, there are two main configurations regarding the scintillator crystal geometry: the segmented (pixelated) approach and the monolithic approach. Depending on the cost, spatial resolution and timing requirements of the PET module, one can choose between one or the other. The pixelated crystal is the most widespread configuration in TOF-PET scanners, as its coincidence time resolution is better than that of the monolithic approach. Monolithic scintillator crystals for TOF-PET have nevertheless been increasing in popularity in recent years due to their performance potential and price compared to the commonly used segmented crystals. On one hand, monolithic blocks allow the 3D position of the gamma-ray interaction inside the crystal to be determined, which makes it possible to correct the parallax error (radial astigmatism) at off-center positions within a PET scanner, resulting in an improvement of the spatial resolution of the device. On the other hand, owing to the simplicity of the crystal manufacturing process as well as of the detector design, the price is reduced compared to a regular pixelated detector. The thesis starts with the use of HRFlexToT, an ASIC developed in this group, as the readout electronics for measurements with single pixelated crystals coupled to different SiPMs. These measurements show an energy linearity error of 3% and an energy resolution below 10% at the 511 keV photopeak. Single Photon Time Resolution (SPTR) measurements performed using an FBK NUV-HD SiPM (4 mm x 4 mm pixel size) and a Hamamatsu S13360-3050CS SiPM gave 141 ps and 167 ps FWHM respectively. Coincidence Time Resolution (CTR) measurements with small cross-section pixelated crystals (LFS, 3 mm x 3 mm x 20 mm) coupled to a single Hamamatsu S13360-3050CS SiPM provide a CTR of 180 ps FWHM. Shorter crystals (LSO:Ce Ca 0.4%) coupled to a Hamamatsu S13360-3050CS or FBK NUV-HD SiPM yield CTRs of 117 ps and 119 ps respectively. Results with different monolithic crystals and SiPM sensors read out by the HRFlexToT ASIC are then presented. A Lutetium Fine Silicate (LFS) crystal of 25 mm x 25 mm x 20 mm, a small LSO:Ce Ca 0.2% crystal of 8 mm x 8 mm x 5 mm and a Lutetium-Yttrium Oxyorthosilicate (LYSO) crystal of 25 mm x 25 mm x 10 mm have been experimentally tested. After subtracting the TDC contribution (82 ps FWHM), a coincidence time resolution of 244 ps FWHM for the small LFS crystal and 333 ps FWHM for the largest LFS one is reported. Additionally, a novel time calibration correction method for CTR improvement that involves a picosecond pulsed laser is detailed. In the last part of the dissertation, a newly developed simulation framework that enables the cross-optimization of the whole PET system is explained. It takes into consideration the photon physics interaction in the scintillator crystal, the sensor response (sensor size, pixel pitch, dead area, capacitance) and the readout electronics behavior (input impedance, noise, bandwidth, summation). This framework has allowed us to study a promising new approach that helps reduce the CTR by segmenting a large-area SiPM into "m" smaller SiPMs and then summing them to recover all the signal spread over these smaller sensors. A 15% improvement in time resolution is expected by segmenting a 4 mm x 4 mm single sensor into 9 sensors of 1.3 mm x 1.3 mm with respect to the case where no segmentation is applied.
    [cat] This thesis aimed at the fabrication and evaluation of a prototype for gamma-photon detection in medical imaging, more specifically in Time-of-Flight Positron Emission Tomography (TOF-PET). The evaluation of the module began with a complete characterisation of the chip (ASIC) called HRFlexToT, a new and improved version of the earlier FlexToT chip, developed and fabricated by the Technological Unit group of the ICC at the Universitat de Barcelona. This initial evaluation of the chip ranges from the verification of the basic functionalities to the creation of an automatic test that produces the corresponding linearity plots during the electrical test. Once validated, the chip was mounted on a demonstrator board, also designed by the group's engineering team, and was then ready for the relevant measurements. Next, the optical measurements were carried out, including Single Photon Time Resolution (SPTR) and Coincidence Time Resolution (CTR) measurements. These values act as figures of merit when comparing the performance of the HRFlexToT with competing ASICs. Values of 60 ps were obtained for the SPTR and 115 ps for the CTR with segmented crystals, an improvement of around 20-30% over the predecessor version of the chip. These values are at the limit of the current state of the art, and with this in mind further measurements were started, in this case with monolithic crystals: large blocks read out by several photosensors from Hamamatsu and FBK. The ASIC was thus also tested in the so-called monolithic configuration, where the scintillator crystal is used in large blocks instead of segmented crystals, which lowers the total cost of the detector. This configuration degrades the CTR, a critical parameter for a good and efficient product. Measurements of 250 ps CTR have been obtained in this configuration, from which it can be said that the HRFlexToT is at the state of the art of readout electronics dedicated to TOF-PET with both segmented and monolithic crystals. Finally, a new simulation tool was developed, a hybrid of a physics simulator and an electronics simulator, in order to capture the complete behaviour of the detector module; a solution that had not been attempted before and cannot be found in the literature.
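    The coincidence time resolutions above are quoted "after subtracting the TDC contribution". Assuming the usual convention that independent Gaussian jitter sources add in quadrature, that correction looks like the following sketch; the measured value used in the example is hypothetical, not a number from the thesis.

    import math

    def subtract_in_quadrature(measured_fwhm_ps, tdc_fwhm_ps):
        """Remove an independent (Gaussian) jitter contribution, e.g. the TDC,
        from a measured coincidence time resolution, both in ps FWHM."""
        return math.sqrt(measured_fwhm_ps**2 - tdc_fwhm_ps**2)

    # Example with the TDC contribution quoted in the abstract (82 ps FWHM).
    ctr_measured = 257.0                                        # ps FWHM, hypothetical raw value
    ctr_intrinsic = subtract_in_quadrature(ctr_measured, 82.0)  # ~243.6 ps FWHM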

    Belle II Technical Design Report

    Full text link
    The Belle detector at the KEKB electron-positron collider collected almost 1 billion Y(4S) events in its decade of operation. Super-KEKB, an upgrade of KEKB, is under construction to increase the luminosity by two orders of magnitude during a three-year shutdown, with an ultimate goal of 8 x 10^35 cm^-2 s^-1. To exploit the increased luminosity, an upgrade of the Belle detector has been proposed, and a new international collaboration, Belle-II, is being formed. The Technical Design Report presents the physics motivation, the basic methods of the accelerator upgrade, as well as the key improvements of the detector.

    Design of a Miniature Camera System for Interior Vision Automotive Application

    Get PDF
    The purpose of this thesis is to describe the design process, goals, and analysis of an interior vision camera for a driver monitoring system. The design minimizes the overall footprint of the system by using smaller, more precise optics, as well as image sensor technologies and packaging with higher quantum efficiency (QE). As a result of this research, prototype cameras were constructed and their performance analyzed. The analysis shows that the Modulation Transfer Function (MTF) performance is stable at extreme hot and cold temperatures, while the cost is mitigated by using all-plastic lens elements. New high-QE image sensors are a potential improvement to this design. The mechanical part of the design has resulted in the filing of three patents: the first covers the athermalization spacer itself for automotive applications; the second covers the way the lens barrel interacts with the athermalization piece; and the third covers the way the imager assembly accommodates the same Bill of Materials (BOM) components at different customer-required angles.

    Digital Image Processing

    Get PDF
    This book presents several recent advances that are related to, or fall under the umbrella of, 'digital image processing', with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written in a manner that allows even a reader with basic experience and knowledge of digital image processing to properly understand the presented algorithms. At the same time, the structure of the information in this book is such that fellow scientists will be able to use it to push the development of the presented subjects even further.

    The laser mirror alignment system for the LHCb RICH detectors

    Get PDF
    The Large Hadron Collider beauty (LHCb) experiment at the Large Hadron Collider (CERN) is the next-generation B-physics experiment, designed to constrain the Cabibbo-Kobayashi-Maskawa (CKM) matrix with unprecedented precision, as well as to search for new physics. The success of the LHCb experiment relies upon excellent particle identification. The central particle identification detectors of the LHCb experiment are the Ring Imaging Cherenkov (RICH) detectors, which rely on their optics being well aligned. The optical specification for the second RICH detector (RICH2) is for the mirrors to be aligned to within 0.1 mrad, so as not to degrade the inherent 0.7 mrad resolution of the detector. As the mirrors move out of alignment over time, the performance of the RICH will deteriorate unless corrected. This thesis describes the design and characterisation of the Laser Mirror Alignment Monitoring System and its image analysis software for selected mirrors of RICH2. It also describes the results of a unique method of combining data from the Laser Mirror Alignment Monitoring System and the tracking system to recover the positions of all mirror segments in the RICH2 detector. The laser alignment monitoring system resolution has been measured to be 0.013 mrad for both θy and θx rotations, with a long-term stability of 0.014 mrad in θy and 0.006 mrad in θx. The resolution of the final mirror alignment procedure using data tracks is 0.18 mrad for θy mirror rotations and 0.12 mrad for θx mirror rotations.
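    For a sense of the angular scale involved, a laser-spot displacement on a monitoring camera can be converted into a mirror rotation with the small-angle, lever-arm relation (a mirror tilt of θ deflects the reflected beam by 2θ). The pixel pitch and lever-arm length below are purely illustrative assumptions, not the geometry of the LHCb system.

    def mirror_rotation_mrad(spot_shift_px, pixel_pitch_mm, lever_arm_mm):
        """Small-angle estimate: a mirror tilt of theta deflects the reflected
        beam by 2*theta, so the spot moves by 2 * theta * lever_arm."""
        spot_shift_mm = spot_shift_px * pixel_pitch_mm
        return 1e3 * spot_shift_mm / (2.0 * lever_arm_mm)   # rad -> mrad

    # Illustrative numbers only: a 50-pixel spot shift, 5.6 um pixels and a
    # 10 m lever arm correspond to a rotation of about 0.014 mrad.
    tilt = mirror_rotation_mrad(50, 0.0056, 10_000.0)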

    Colour depth-from-defocus incorporating experimental point spread function measurements

    Get PDF
    Depth-From-Defocus (DFD) is a monocular computer vision technique for creating depth maps from two images taken on the same optical axis with different intrinsic camera parameters. A pre-processing stage that optimally converts colour images to monochrome using a linear combination of the colour planes has been shown to improve the accuracy of the depth map. It was found that the first component formed using Principal Component Analysis (PCA), and a technique that maximises the signal-to-noise ratio (SNR), performed better than an equal weighting of the colour planes under an additive noise model. When the noise is non-isotropic, the Mean Square Error (MSE) of the depth map obtained by maximising the SNR was improved by a factor of 7.8 compared to an equal weighting and 1.9 compared to PCA. The fractal dimension (FD) of a monochrome image gives a measure of its roughness, and an algorithm was devised to maximise the FD through colour mixing. The formulation using a fractional Brownian motion (fBm) model reduced the SNR and thus produced depth maps that were less accurate than those from PCA or an equal weighting. An active DFD algorithm to reduce the image overlap problem, called Localisation through Colour Mixing (LCM), has been developed; it uses a projected colour pattern. Simulation results showed that LCM produces an MSE 9.4 times lower than equal weighting and 2.2 times lower than PCA. The Point Spread Function (PSF) of a camera system models how a point source of light is imaged. For depth maps to be created accurately using DFD, a high-precision PSF must be known. Improvements to a sub-sampled, knife-edge based technique are presented that account for non-uniform illumination of the light box, reducing the MSE by 25%. The Generalised Gaussian is presented as a model of the PSF and is shown to be up to 16 times better than the conventional Gaussian and pillbox models.
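    A minimal sketch of the PCA-based colour-to-monochrome conversion mentioned above: each pixel's (R, G, B) vector is projected onto the first principal component of the image's colour distribution, giving a single monochrome plane. This is a generic PCA projection under an assumed centring convention, not the thesis's exact pre-processing pipeline.

    import numpy as np

    def pca_monochrome(rgb):
        """Project each pixel's (R, G, B) vector onto the first principal
        component of the colour distribution, giving one monochrome plane."""
        h, w, _ = rgb.shape
        pixels = rgb.reshape(-1, 3).astype(float)
        pixels -= pixels.mean(axis=0)                 # centre the colour planes
        cov = np.cov(pixels, rowvar=False)            # 3 x 3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        first_pc = eigvecs[:, np.argmax(eigvals)]     # direction of max variance
        mono = pixels @ first_pc
        return mono.reshape(h, w)

    # Example: convert a random colour image and use the result as DFD input.
    rgb = np.random.rand(120, 160, 3)
    mono = pca_monochrome(rgb)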