
    Automated microscopic analysis of optical fibre transmission surfaces

    Outlined in this thesis is the design of a prototype device for the inspection of optical fibre endfaces. The device uses lenses of different magnifications to acquire scaled microscopic images of the endfaces for analysis. The design specifications are established from the optical transmission requirements of the fibres and the impact of defects on transmission losses in the various regions of the optical fibre endface. The specifications of the device are as follows:
    • Optical system: 3-lens automated changeover
    • Imaging system: minimum resolvable object size of 2.43 µm, maximum field of view of 0.9 mm, resolution of 740 × 560 pixels
    • Autofocus system with a focus resolution of 1.25 µm
    • Coaxial illumination system
    • 12 Mbit/s USB video acquisition hardware
    The device realises all the mechanical, optical and electronic functionality required to automate the inspection process of optical fibres. The hardware and software challenges involved in designing and building the prototype are discussed in detail in the thesis. A complete evaluation of the design is also carried out, difficulties and problems that occurred during the project are analysed, and recommendations for the improvement of the design are made.
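    As a sanity check on these figures, the stated minimum resolvable object size is consistent with sampling the maximum field of view across the sensor at the Nyquist limit; a minimal sketch of that arithmetic (an assumed relationship between the specs, not a derivation taken from the thesis):

    ```python
    # Sketch: pixel-limited resolution for the stated imaging specs.
    # Assumes the 0.9 mm field of view spans the 740-pixel axis and that
    # two pixels are needed per resolvable object (Nyquist criterion).
    fov_mm = 0.9                              # maximum field of view
    pixels = 740                              # horizontal resolution
    pixel_pitch_um = fov_mm * 1000 / pixels   # ~1.22 um per pixel
    min_object_um = 2 * pixel_pitch_um        # ~2.43 um, matching the spec
    print(f"{pixel_pitch_um:.2f} um/pixel -> {min_object_um:.2f} um minimum object")
    ```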

    Resolving Measurement Errors Inherent with Time-of-Flight Range Imaging Cameras

    Range imaging cameras measure the distance to objects in the field of view (FoV) of the camera; these cameras enable new machine vision applications in robotics, manufacturing, and human-computer interaction. Time-of-flight (ToF) range cameras operate by illuminating the scene with amplitude modulated continuous wave (AMCW) light and measuring the phase difference between the emitted and reflected modulation envelopes. Currently, ToF range cameras suffer from measurement errors that are highly scene dependent, and these errors limit the accuracy of the depth measurement. The major cause of measurement errors is multiple propagation paths from the light source to a pixel, known as multi-path interference. Multi-path interference typically arises from inter-reflections, lens flare, subsurface scattering, volumetric scattering, and translucent objects. This thesis contributes three novel methods for resolving multi-path interference: coding in time, coding in frequency, and coding in space. Time coding is implemented by replacing the single-frequency amplitude modulation with a binary sequence. Fundamental to ToF range cameras is the cross-correlation between the reflected light and a reference signal; the measured cross-correlation depends on the selection of the binary sequence. By selecting an appropriate binary sequence and applying sparse deconvolution to the measured cross-correlation, the multiple return path lengths and their amplitudes can be recovered. However, the minimal resolvable path length depends on the highest frequency in the binary sequence. Frequency coding is implemented by taking multiple measurements at different modulation frequencies. A subset of frequency coding is operating the camera in a mode analogous to stepped frequency continuous wave (SFCW). Frequency coding uses techniques from radar to resolve multiple propagation paths. The minimal resolvable path length depends on the camera's modulation bandwidth and on the spectrum estimation technique used to recover distance, and it is shown that SFCW can be used to measure the depth of objects behind a translucent sheet, while AMCW measurements cannot. Path lengths below a quarter of a wavelength of the highest modulation frequency are difficult to resolve. Spatial coding is used to resolve diffuse multi-path interference. The original technique comes from direct and global separation in computer graphics, and it is modified to operate on the complex data produced by a ToF range camera. By illuminating the scene with a pattern, the illuminated areas contain both the direct return and the scattered (global) return, while the non-illuminated regions contain only the scattered return, assuming the global component is spatially smooth. Direct and global separation with sinusoidal patterns is combined with the sinusoidal modulation signal of ToF range cameras to give a closed-form solution to multi-path interference in nine frames. With nine raw frames it is possible to implement direct and global separation at video frame rates. The RMSE of a corner is reduced from 0.0952 m to 0.0112 m. Direct and global separation correctly measures the depth of a diffuse corner and resolves subsurface scattering; however, it fails to resolve specular reflections. Finally, direct and global separation is combined with replacing the illumination and reference signals with a binary sequence.
    The combination resolves both the diffuse multi-path interference present in a corner and the sparse multi-path interference caused by mixed pixels between the foreground and background. The corner is correctly measured and the number of mixed pixels is reduced by 90%. With the development of new methods to resolve multi-path interference, ToF range cameras can measure scenes with more confidence. ToF range cameras can be built into small form factors as they require a small number of parts: a pixel array, a light source, and a lens. The small form factor, coupled with accurate range measurements, allows ToF range cameras to be embedded in cellphones and consumer electronic devices, enabling wider adoption and giving them advantages over competing range imaging technologies.
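    For context, a minimal sketch of the standard AMCW four-phase depth calculation that this line of work builds on (generic textbook formulation; the variable names are mine, not the thesis's):

    ```python
    import numpy as np

    C = 299792458.0  # speed of light, m/s

    def amcw_depth(q0, q90, q180, q270, f_mod):
        """Depth from four raw correlation samples taken at 0/90/180/270 degree
        phase offsets between the illumination and the reference signal."""
        phase = np.arctan2(q270 - q90, q0 - q180)  # phase of modulation envelope
        phase = np.mod(phase, 2 * np.pi)           # wrap into [0, 2*pi)
        return C * phase / (4 * np.pi * f_mod)     # half the round-trip path

    # Example: a 30 MHz camera observing a quarter-cycle phase shift
    print(amcw_depth(1.0, 0.5, 1.0, 1.5, 30e6))    # ~1.25 m
    ```

    Multi-path interference corrupts exactly this calculation: when several returns with different path lengths sum in one pixel, the measured phase no longer corresponds to a single distance, which is what the three coding schemes above set out to untangle.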

    Measuring and simulating haemodynamics due to geometric changes in facial expression

    The human brain has evolved to be very adept at recognising imperfections in human skin. In particular, observing someone’s facial skin appearance is important in recognising when someone is ill, or when finding a suitable mate. It is therefore a key goal of computer graphics research to produce highly realistic renderings of skin. However, the optical processes that give rise to skin appearance are complex and subtle. To address this, computer graphics research has incorporated more and more sophisticated models of skin reflectance. These models are generally based on static concentrations of the skin chromophores: melanin and haemoglobin. However, haemoglobin concentrations are far from static, as blood flow changes with both facial expression and emotional state. In this thesis, we explore how blood flow changes as a consequence of changing facial expression, with the aim of producing more accurate models of skin appearance. To build an accurate model of blood flow, we base it on real-world measurements of blood concentrations over time. We describe, in detail, the steps required to obtain blood concentrations from photographs of a subject. These steps are then used to measure blood concentration maps for a series of expressions that define a wide gamut of human expression. From this, we define a blending algorithm that allows us to interpolate these maps to generate concentrations for other expressions. This technique, however, requires specialist equipment to capture the maps in the first place. We try to rectify this problem by investigating a direct link between changes in facial geometry and haemoglobin concentrations. This requires building a unique capture device that captures both simultaneously. Our analysis hints at a direct linear connection between the two, paving the way for further investigation.
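    A minimal sketch of the kind of map blending described above, interpolating measured haemoglobin maps with convex weights (the weighting scheme here is an assumption for illustration, not the thesis's exact algorithm):

    ```python
    import numpy as np

    def blend_haemoglobin_maps(maps, weights):
        """Interpolate per-pixel haemoglobin concentration maps measured for a
        set of key expressions into a map for an intermediate expression.

        maps    -- array of shape (n_expressions, height, width)
        weights -- convex weights over the key expressions, summing to 1
        """
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()           # normalise, just in case
        return np.tensordot(weights, maps, axes=1)  # weighted sum over maps

    # Example: halfway between a neutral and a smiling expression
    neutral = np.full((4, 4), 0.30)   # toy concentration values
    smile = np.full((4, 4), 0.45)
    print(blend_haemoglobin_maps(np.stack([neutral, smile]), [0.5, 0.5]))
    ```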

    SoDaCam: Software-defined Cameras via Single-Photon Imaging

    Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam", which provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems, including exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are limited only by what is computable and by shot noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture designed for single-photon imaging.
    Comment: Accepted at ICCV 2023 (oral). Project webpage can be found at https://wisionlab.com/project/sodacam
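    A minimal sketch of the simplest photon-cube projection named above, emulating exposure bracketing by summing binary frames over windows of different lengths (my own illustration of the idea, not the paper's code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy photon-cube: 2000 binary frames of 32x32 single-photon detections,
    # standing in for the ~100 kHz binary frame stream of a SPAD array.
    photon_cube = rng.random((2000, 32, 32)) < 0.02   # ~2% detection probability

    def exposure_bracket(cube, window_lengths):
        """Emulate an exposure bracket: each 'exposure' is the photon count
        summed over a prefix of the binary frame sequence."""
        return [cube[:n].sum(axis=0) for n in window_lengths]

    short, medium, long_ = exposure_bracket(photon_cube, [100, 500, 2000])
    print(short.mean(), medium.mean(), long_.mean())  # brightness scales with window
    ```

    Other projections (flutter shutter, event emulation) follow the same pattern: a purely computational weighting or differencing of the binary frames, which is what makes the cameras software-defined.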

    Miniaturized embedded stereo vision system (MESVS)

    Stereo vision is one of the fundamental problems of computer vision, and one of the oldest and most heavily investigated areas of 3D vision. Recent advances in stereo matching methodologies, the availability of high-performance and efficient algorithms, and fast, affordable hardware have allowed researchers to develop several stereo vision systems capable of operating in real time. Although a multitude of such systems exists in the literature, the majority concentrate only on raw performance and quality rather than on factors such as dimensions and power requirements, which are of significant importance in embedded settings. In this thesis a new miniaturized embedded stereo vision system (MESVS) is presented, which fits within a 5 × 5 cm package, is power efficient, and is cost-effective. Furthermore, through application of embedded programming techniques and careful optimization, MESVS achieves real-time performance of 20 frames per second. This work discusses the various challenges involved in the design and implementation of this system and the measures taken to tackle them.
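    The stereo matching at the heart of such a system can be illustrated with a minimal sum-of-absolute-differences (SAD) block matcher; this is a generic baseline for rectified image pairs, not MESVS's actual algorithm:

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=16, block=5):
        """Per-pixel disparity by exhaustive SAD block matching on rectified
        grayscale images (left and right as 2D float arrays)."""
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                patch = left[y - r:y + r + 1, x - r:x + r + 1]
                costs = [np.abs(patch - right[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))  # disparity with lowest cost
        return disp
    ```

    Embedded implementations like MESVS restructure this brute-force search heavily (fixed-point arithmetic, incremental cost updates, memory-aware tiling) to hit real-time rates on constrained hardware.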

    Analysis, Modeling and Dynamic Optimization of 3D Time-of-Flight Imaging Systems

    The present thesis is concerned with the optimization of 3D Time-of-Flight (ToF) imaging systems. These novel cameras determine range images by actively illuminating a scene and measuring the time until the backscattered light is detected. Depth maps are constructed from multiple raw images; usually two such raw images are acquired simultaneously using special correlating sensors. This thesis covers four main contributions: A physical sensor model is presented which enables the analysis and optimization of the process of raw image acquisition. This model supports the proposal of a new ToF sensor design which employs a logarithmic photo response. Due to asymmetries of the two read-out paths, current systems need to acquire the raw images in multiple instances; this allows the correction of systematic errors. The present thesis proposes a method for dynamic calibration and compensation of these asymmetries. It facilitates the computation of two depth maps from a single set of raw images and thus increases the frame rate by a factor of two. Since not all required raw images are captured simultaneously, motion artifacts can occur. The present thesis proposes a robust method for detection and correction of such artifacts. All proposed algorithms have a computational complexity which allows real-time execution even on systems with limited resources (e.g. embedded systems). The algorithms are demonstrated using a commercial ToF camera.
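    A minimal sketch of the kind of read-out asymmetry compensation involved, using a per-pixel linear gain/offset model of my own choosing rather than the thesis's exact calibration:

    ```python
    import numpy as np

    def estimate_asymmetry(tap_a, tap_b):
        """Least-squares fit of a linear asymmetry model tap_a ~= gain * tap_b
        + offset, from acquisitions where both taps observe the same signal
        (flattened over pixels for simplicity)."""
        A = np.stack([tap_b.ravel(), np.ones(tap_b.size)], axis=1)
        gain, offset = np.linalg.lstsq(A, tap_a.ravel(), rcond=None)[0]
        return gain, offset

    def compensate_tap_b(tap_b, gain, offset):
        """Map tap B onto tap A's read-out characteristic so one exposure
        yields two usable raw images instead of requiring a repeat."""
        return gain * tap_b + offset

    # Toy example: tap B reads 5% low with a small fixed offset
    rng = np.random.default_rng(1)
    a = rng.random((64, 64))
    b = (a - 0.01) / 1.05
    gain, offset = estimate_asymmetry(a, b)
    print(gain, offset)   # ~1.05 and ~0.01
    ```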

    CMOS Image Sensors in Surveillance System Applications

    Recent technology advances in CMOS image sensors (CIS) enable their utilization in the most demanding surveillance fields: visual surveillance and intrusion detection in intelligent surveillance systems, aerial surveillance in war zones, Earth environmental surveillance by satellites in space monitoring, agricultural monitoring using wireless sensor networks and the Internet of Things, and driver assistance in the automotive field. This paper presents an overview of CMOS image sensor-based surveillance applications over the last decade, tabulating the design characteristics related to image quality, such as resolution, frame rate, dynamic range, and signal-to-noise ratio, as well as the processing technology. Different models of CMOS image sensors used in all applications have been surveyed and tabulated for every year and application.
    https://doi.org/10.3390/s2102048
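    The image-quality figures tabulated in such surveys follow standard definitions; a minimal sketch of two of them (textbook formulas, not taken from the paper):

    ```python
    import math

    def dynamic_range_db(full_well_e, read_noise_e):
        """Dynamic range: ratio of the largest storable signal (full-well
        capacity, in electrons) to the read-noise floor, in dB."""
        return 20 * math.log10(full_well_e / read_noise_e)

    def snr_db(signal_e, read_noise_e):
        """Shot-noise-limited SNR for a mean signal of signal_e electrons."""
        noise = math.sqrt(signal_e + read_noise_e ** 2)
        return 20 * math.log10(signal_e / noise)

    print(dynamic_range_db(20000, 2))   # ~80 dB
    print(snr_db(10000, 2))             # ~40 dB
    ```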

    Amorphous silicon 3D sensors applied to object detection

    Nowadays, existing 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, which were produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the fabrication and the study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to existing scientific and technological knowledge, formed the starting point. Subsequently, relevant data acquisition and suitable signal processing electronics were assembled. Various prototypes were developed for the 32- and 128-line PSD array sensors. Appropriate optical solutions were integrated to work together with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering software was implemented for the existing systems. All these components were combined to form several integrated systems for the 32- and 128-line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed involving the 128-line PSD sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 µm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Simple and not-so-simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly, accurately and at high resolution using this sensor and system platform. The nip-structure sensor system can detect primary and even derived colours of objects by proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometre-scale objects using the 32-line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, its dimensions, and its position in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
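    A minimal sketch of how a 1D position sensitive detector reports a spot position from its two terminal currents (the standard lateral-effect PSD relation, not code from the thesis):

    ```python
    def psd_position(i1, i2, length_mm):
        """Spot position on a 1D lateral-effect PSD, measured from the centre.

        i1, i2    -- photocurrents at the two ends of the sensing line
        length_mm -- active length of the detector line
        The position is proportional to the normalised current imbalance."""
        return (i2 - i1) / (i1 + i2) * length_mm / 2

    # Example: a spot slightly toward the i2 end of a 10 mm line
    print(psd_position(0.4e-6, 0.6e-6, 10.0))   # +1.0 mm from centre
    ```

    Because the position is read out as an analogue current ratio rather than by scanning discrete pixels, an array of such lines can report spot positions at high frame rates, which is what enables the high-speed 3D scanning described above.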