High-Magnification Digital Image Correlation Techniques for Aged Nuclear Fuel Cladding Testing
Nuclear fuel cladding in light water reactors, typically made of zirconium alloys, absorbs hydrogen from the water coolant during normal reactor operation, forming zirconium hydrides that embrittle the material. This embrittlement changes the mechanical behavior of the cladding, affecting how it deforms and what may cause it to fail. Because the cladding already has different properties in different material directions, mechanical testing also needs to be direction specific. In addition, to understand the effects of these microscale hydride features, measurements of the deforming cladding must themselves be made at the microscale. This dissertation describes several high-magnification innovations and advancements in digital image correlation (DIC), a non-contact method for measuring the displacement and strain of test specimens during experiments. First, a high-magnification UV lens is demonstrated to be capable of DIC measurements with improved spatial resolution and at high temperatures. Second, previously developed super-resolution imaging techniques are applied to DIC measurements of directional ring test specimens, again improving resolution and measurement quality. Third, image capture settings are optimized to balance the tradeoff between shallow depth of field and the diffraction of light, both of which blur images and degrade DIC measurements. Fourth, several test arrangements are analyzed with computer modelling to determine the best method for directional tests of the cladding. Finally, the techniques are used to perform high-magnification tension tests on hydrided ring cladding specimens
Achieving high-resolution thermal imagery in low-contrast lake surface waters by aerial remote sensing and image registration
A two-platform measurement system for realizing airborne thermography of the Lake Surface Water Temperature (LSWT) with ~0.8 m pixel resolution (sub-pixel satellite scale) is presented. It consists of a tethered Balloon Launched Imaging and Monitoring Platform (BLIMP) that records LSWT images and an autonomously operating catamaran (called ZiviCat) that measures in situ surface/near surface temperatures within the image area, thus permitting simultaneous ground-truthing of the BLIMP data. The BLIMP was equipped with an uncooled InfraRed (IR) camera. The ZiviCat was designed to measure along predefined trajectories on a lake. Since LSWT spatial variability in each image is expected to be low, a poor estimation of the common spatial and temporal noise of the IR camera (nonuniformity and shutter-based drift, respectively) leads to errors in the thermal maps obtained. Nonuniformity was corrected by applying a pixelwise two-point linear correction method based on laboratory experiments. A Probability Density Function (PDF) matching in regions of overlap between sequential images was used for the drift correction. A feature matching-based algorithm, combining blob and region detectors, was implemented to create composite thermal images, and a mean value of the overlapped images at each location was considered as a representative value of that pixel in the final map. The results indicate that a high overlapping field of view (~95%) is essential for image fusion and noise reduction over such low-contrast scenes. The in situ temperatures measured by the ZiviCat were then used for the radiometric calibration. This resulted in the generation of LSWT maps at sub-pixel satellite scale resolution that revealed spatial LSWT variability, organized in narrow streaks hundreds of meters long and coherent patches of different size, with unprecedented detail
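The pixelwise two-point correction step lends itself to a compact sketch. The following is a minimal illustration, assuming two sets of laboratory calibration frames taken against uniform blackbody targets at known temperatures; the function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def two_point_nuc(frames_cold, frames_hot, T_cold, T_hot):
    """Per-pixel gain/offset for a two-point linear nonuniformity correction.

    frames_cold/frames_hot: stacks of raw frames viewing uniform targets at
    known temperatures T_cold and T_hot (illustrative model: each pixel
    responds linearly, raw = a*T + b, with its own a and b)."""
    mean_cold = frames_cold.mean(axis=0)   # (H, W) average cold response
    mean_hot = frames_hot.mean(axis=0)
    gain = (T_hot - T_cold) / (mean_hot - mean_cold)
    offset = T_cold - gain * mean_cold
    return gain, offset

def apply_nuc(raw, gain, offset):
    """Map a raw frame to corrected temperature units."""
    return gain * raw + offset
```

After correction, a uniform scene at any intermediate temperature maps to the same value at every pixel, removing the fixed-pattern component of the camera noise.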
Vision Sensors and Edge Detection
This book reflects a selection of recent developments in the area of vision sensors and edge detection. There are two sections. The first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing
Deep learning-based improvement for the outcomes of glaucoma clinical trials
Glaucoma is the leading cause of irreversible blindness worldwide. It is a progressive optic neuropathy in which retinal ganglion cell (RGC) axon loss, probably as a consequence of damage at the optic disc, causes a loss of vision, predominantly affecting the mid-peripheral visual field (VF). Glaucoma results in a decrease in vision-related quality of life; early detection and evaluation of disease progression rates are therefore crucial for assessing the risk of functional impairment and establishing sound treatment strategies. The aim of my research is to improve glaucoma diagnosis by enhancing state-of-the-art analyses of glaucoma clinical trial outcomes using advanced analytical methods. This knowledge would also help better design and analyse clinical trials, providing evidence for re-evaluating existing medications, facilitating diagnosis and suggesting novel disease management.
To meet this objective, this thesis provides the following contributions: (i) I developed deep learning-based super-resolution (SR) techniques for optical coherence tomography (OCT) image enhancement and demonstrated that using super-resolved images improves the statistical power of clinical trials; (ii) I developed a deep learning algorithm for segmentation of retinal OCT images, showing that the methodology consistently produces more accurate segmentations than state-of-the-art networks; (iii) I developed a deep learning framework for refining the relationship between structural and functional measurements and demonstrated that the mapping is significantly improved over previous techniques; (iv) I developed a probabilistic method and demonstrated that glaucomatous disc haemorrhages are influenced by a possible systemic factor that makes both eyes bleed simultaneously; (v) I recalculated VF slopes, using the retinal nerve fiber layer thickness (RNFLT) from the super-resolved OCT as a Bayesian prior, and demonstrated that using VF rates with the Bayesian prior as the outcome measure reduces the sample size required to distinguish treatment arms in a clinical trial
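Contribution (v), using a structural measurement as a Bayesian prior on functional rates, can be illustrated with a one-parameter conjugate update. This is a hedged sketch of the general idea only (a Gaussian prior on the VF slope with known noise variance); the names and the simple linear model are assumptions, not the thesis's actual estimator:

```python
import numpy as np

def bayesian_slope(t, y, prior_mean, prior_var, noise_var=1.0):
    """Posterior mean of a linear-regression slope under a Gaussian prior.

    t: visit times, y: VF sensitivities; prior_mean/prior_var would come
    from a structural measurement such as RNFLT (illustrative only)."""
    t = t - t.mean()                       # centre to isolate the slope
    y = y - y.mean()
    s_tt = np.sum(t * t)
    post_prec = 1.0 / prior_var + s_tt / noise_var
    post_mean = (prior_mean / prior_var + np.sum(t * y) / noise_var) / post_prec
    return post_mean
```

With a diffuse prior this reduces to the least-squares slope; a tight structural prior shrinks noisy short-series slopes toward it, which is what lowers the variance of the outcome measure and hence the required sample size.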
Algorithms for the enhancement of dynamic range and colour constancy of digital images & video
One of the main objectives in digital imaging is to mimic the capabilities of the human eye and, perhaps, go beyond them in certain aspects. However, the human visual system is so versatile, so complex, and only so partially understood that no imaging technology to date has been able to reproduce its capabilities accurately. The extraordinary capabilities of the human eye thus highlight a crucial shortcoming of digital imaging, as digital photography, video recording, and computer vision applications continue to demand ever more realistic and accurate image reproduction and analysis.
For decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices, proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has emerged; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the difficulty of matching the effective colour constancy and dynamic range capabilities of the human visual system.
The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices.
The experiments conducted in this research show that the proposed algorithms surpass state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital cameras to mimic the human visual system's dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique or commercial imaging device
Multi-Frame Superresolution Optical Coherence Tomography for High Lateral Resolution 3D Imaging
We report that high lateral resolution and high image quality optical coherence tomography (OCT) imaging can be achieved by the multi-frame superresolution technique. With serial sets of slightly laterally shifted low-resolution C-scans, our multi-frame superresolution processing of these special sets at each depth layer can reconstruct a higher-resolution, higher-quality lateral image. Repeating the processing layer by layer yields an overall high lateral resolution and quality 3D image. In theory, superresolution with subsequent deconvolution processing can break the diffraction limit as well as suppress background noise. In experiments, roughly threefold lateral resolution improvement has been verified, from 24.8 to 7.81 μm and from 7.81 to 2.19 μm with sample arm optics of 0.015 and 0.05 numerical aperture, respectively, as well as a doubling of image quality in dB. The improved lateral resolution for 3D imaging of microstructures has been observed. We also demonstrate that the improved lateral resolution and image quality can further help various machine vision algorithms sensitive to resolution and noise. In combination with our previous work, an ultra-wide field-of-view, high-resolution OCT has been implemented for static non-medical applications. For in vivo 3D OCT imaging, high-quality 3D subsurface live fingerprint images have been obtained within a short scan time, showing a clear distribution of eccrine sweat glands and the internal fingerprint layer, overcoming the limitations of traditional 2D fingerprint readers and benefiting important biometric security applications
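The core shift-and-add idea behind multi-frame superresolution can be sketched in a few lines. This is a naive illustration assuming the subpixel lateral shifts between frames are already known; it omits the registration and deconvolution steps the work relies on, and the names are illustrative:

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Place each low-resolution frame onto a finer grid at its known
    (dy, dx) subpixel shift (in low-res pixels) and average overlaps."""
    h, w = lr_frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Nearest high-res cell for each shifted low-res sample
        Y = np.clip(np.round((ys + dy) * scale).astype(int), 0, H - 1)
        X = np.clip(np.round((xs + dx) * scale).astype(int), 0, W - 1)
        np.add.at(acc, (Y, X), frame)   # unbuffered accumulation
        np.add.at(cnt, (Y, X), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Applied per depth layer, this is the sense in which several shifted low-resolution C-scans can be fused into one finer lateral grid; real pipelines would follow it with interpolation of unfilled cells and deconvolution.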
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component affecting the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspectives of a depth map (geometry) and of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through either objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data were then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Therefore, occlusion becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality
Non-parametric Methods for Automatic Exposure Control, Radiometric Calibration and Dynamic Range Compression
Imaging systems are essential to a wide range of modern-day applications. With the continuous advancement of imaging systems, there is an ongoing need to adapt and improve the imaging pipeline running inside them. In this thesis, methods are presented to improve the imaging pipeline of digital cameras. We present three methods to improve important phases of the imaging process: (i) automatic exposure adjustment, (ii) radiometric calibration, and (iii) high dynamic range compression. These contributions touch the initial, intermediate, and final stages of the imaging pipeline of digital cameras.
For exposure control, we propose two methods. The first makes use of CCD-based equations to formulate the exposure control problem. To estimate the exposure time, an initial image is acquired for each wavelength channel, to which contrast adjustment techniques are applied. This helps to recover a reference cumulative distribution function of image brightness at each channel. The second method is an iterative method applicable to a broad range of imaging systems. It uses spectral sensitivity functions, such as the photopic response function, to generate a spectral power image of the captured scene. A target image is then generated from the spectral power image by applying histogram equalization. The exposure time is calculated iteratively by minimizing the squared difference between the target and the current spectral power image. We further analyze the method by performing stability and controllability analysis using a state-space representation from control theory. The applicability of the proposed method for exposure time calculation is demonstrated on real-world scenes using cameras with varying architectures.
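The iterative exposure update can be written as a simple fixed-point step. A minimal sketch, assuming the mean of the spectral power image scales roughly linearly with exposure time; the names and the plain relaxation step are illustrative, not the thesis's state-space formulation:

```python
def update_exposure(t, current_mean, target_mean, eta=0.8,
                    t_min=1e-5, t_max=1.0):
    """One iteration: move exposure time toward the value whose mean
    spectral power matches the (histogram-equalized) target image."""
    ratio = target_mean / max(current_mean, 1e-12)  # guard division by zero
    t_new = t * (1.0 + eta * (ratio - 1.0))         # relaxation step
    return min(max(t_new, t_min), t_max)            # clamp to valid range
```

Iterating this step drives the squared difference between the target and current mean power to zero; the gain eta trades convergence speed against oscillation, which is exactly the kind of behaviour a stability and controllability analysis characterizes.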
Radiometric calibration is the estimation of the non-linear mapping from the input radiance map to the output brightness values. The radiometric mapping is represented by the camera response function, with which the radiance map of the scene is estimated. Our radiometric calibration method employs an L1 cost function, taking advantage of the Weiszfeld optimization scheme. The proposed calibration works with multiple input images of the scene at varying exposures. It can also perform calibration from a single input image under a few constraints. The proposed method outperforms, quantitatively and qualitatively, various alternative methods found in the radiometric calibration literature.
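L1 fitting via a Weiszfeld-style scheme amounts to iteratively reweighted least squares with weights inversely proportional to residual magnitudes. A generic sketch on a linear model, not the thesis's parameterization of the camera response function:

```python
import numpy as np

def l1_fit_irls(A, b, n_iter=100, eps=1e-8):
    """Minimize ||A x - b||_1 by the Weiszfeld-style IRLS fixed point:
    each step solves a weighted least-squares problem with w_i = 1/|r_i|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # L2 initialization
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)      # guard tiny residuals
        Aw = A * w[:, None]                       # rows scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # normal equations
    return x
```

Unlike the L2 solution, the L1 fit is largely insensitive to a few grossly wrong samples, which is the usual motivation for an L1 cost in calibration problems.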
Finally, to realistically represent the estimated radiance maps on low dynamic range (LDR) display devices, we propose a method for dynamic range compression. Radiance maps generally have a higher dynamic range (HDR) than widely used display devices, so dynamic range compression is required before HDR images can be displayed. Our method generates a few LDR images from the HDR radiance map by clipping its values at different exposures. Using the contrast information of each generated LDR image, the method uses an energy minimization approach to estimate a probability map for each LDR image. These probability maps are then used as a label set to form the final compressed dynamic range image for the display device. The results of our method were compared qualitatively and quantitatively with those produced by widely cited and professionally used methods
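The clip-then-combine pipeline can be illustrated compactly. The sketch below replaces the thesis's energy-minimization labeling with a simple per-pixel contrast-weighted blend, so it only demonstrates the overall flow (clip the radiance map at several exposures, measure contrast, combine); all names are illustrative:

```python
import numpy as np

def clip_ldr(hdr, exposure):
    """Simulate an LDR rendering of an HDR radiance map at one exposure."""
    return np.clip(hdr * exposure, 0.0, 1.0)

def local_contrast(img):
    """Gradient magnitude as a crude stand-in for the contrast term."""
    gy, gx = np.gradient(img)
    return np.abs(gx) + np.abs(gy) + 1e-6   # epsilon keeps weights defined

def fuse_exposures(hdr, exposures):
    """Contrast-weighted blend of the clipped LDR images."""
    ldrs = np.stack([clip_ldr(hdr, e) for e in exposures])
    weights = np.stack([local_contrast(l) for l in ldrs])
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per pixel
    return np.sum(weights * ldrs, axis=0)
```

Each exposure contributes most where it is well exposed (high local contrast) and least where it is clipped to black or white, which is the intuition the probability-map labeling formalizes.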
Optical Coherence Tomography and Its Non-medical Applications
Optical coherence tomography (OCT) is a promising non-invasive non-contact 3D imaging technique that can be used to evaluate and inspect material surfaces, multilayer polymer films, fiber coils, and coatings. OCT can be used for the examination of cultural heritage objects and 3D imaging of microstructures. With subsurface 3D fingerprint imaging capability, OCT could be a valuable tool for enhancing security in biometric applications. OCT can also be used for the evaluation of fastener flushness for improving aerodynamic performance of high-speed aircraft. More and more OCT non-medical applications are emerging. In this book, we present some recent advancements in OCT technology and non-medical applications