
    Cross-Spectral Face Recognition Between Near-Infrared and Visible Light Modalities.

    In this thesis, the improvement of face recognition performance using images from the visible (VIS) and near-infrared (NIR) spectra is attempted. Face recognition systems can be adversely affected by significant illumination variation across images of the same subject. Cross-spectral face recognition systems using images collected across the VIS and NIR spectra can counter the ill effects of illumination variation by standardising both sets of images. A novel preprocessing technique is proposed that transforms faces from both modalities into a feature space with enhanced correlation. Direct matching across the modalities is not possible because of the inherent spectral differences between NIR and VIS face images. Compared to a VIS light source, NIR radiation penetrates more deeply into human skin. This, together with the greater number of scattering interactions that NIR rays undergo within the skin, can alter the apparent morphology of the human face enough to prevent a direct match with the corresponding VIS face. Several ways to bridge the gap between NIR and VIS faces have been proposed previously. Mostly data-driven, these techniques include standardised photometric normalisation techniques and subspace projections. A generative approach driven by a true physical model has not been investigated until now. In this thesis, it is proposed that a large proportion of the scattering interactions present in the NIR spectrum can be accounted for using a model of subsurface scattering. A novel subsurface scattering inversion (SSI) algorithm is developed that implements an inversion approach based on translucent surface rendering from the computer graphics field, whereby the reversal of the first-order effects of subsurface scattering is attempted.
The SSI algorithm is then evaluated against several preprocessing techniques, using various permutations of feature extraction and subspace projection algorithms. The results of this evaluation show an improvement in cross-spectral face recognition performance using SSI over existing Retinex-based approaches. The best-performing combination involves an existing photometric normalisation technique, Sequential Chain, and achieves a Rank 1 recognition rate of 92.5%. In addition, the improvement in performance obtained with non-linear projection models shows that an element of non-linearity exists in the relationship between NIR and VIS faces.
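As an illustration of the Rank 1 metric reported in this evaluation, the sketch below computes a Rank 1 recognition rate by nearest-neighbour matching in a shared feature space. The toy gallery/probe data and the Euclidean distance are illustrative assumptions, not the thesis's actual features or matcher.

```python
import numpy as np

def rank1_recognition_rate(probe_feats, gallery_feats, probe_ids, gallery_ids):
    """Fraction of probes whose nearest gallery neighbour shares their identity."""
    correct = 0
    for feat, pid in zip(probe_feats, probe_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)  # Euclidean distances
        if gallery_ids[int(np.argmin(dists))] == pid:
            correct += 1
    return correct / len(probe_ids)

# Toy example: two gallery identities and two matching probes
gallery = np.array([[0.0, 0.0], [1.0, 1.0]])
gallery_ids = ["A", "B"]
probes = np.array([[0.1, 0.0], [0.9, 1.1]])
probe_ids = ["A", "B"]
print(rank1_recognition_rate(probes, gallery, probe_ids, gallery_ids))  # 1.0
```

In a cross-spectral setting, the gallery features would come from VIS images and the probe features from preprocessed NIR images projected into the common space.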

    NOVEL TECHNOLOGIES AND APPLICATIONS FOR FLUORESCENT LAMINAR OPTICAL TOMOGRAPHY

    Laminar optical tomography (LOT) is a mesoscopic three-dimensional (3D) optical imaging technique that can achieve both a resolution of 100-200 µm and a penetration depth of 2-3 mm based on either absorption or fluorescence contrast. Fluorescence laminar optical tomography (FLOT) additionally provides a large field of view (FOV) and high acquisition speed. These advantages make FLOT suitable for 3D depth-resolved imaging in tissue engineering, neuroscience, and oncology. In this study, by incorporating the high-dynamic-range (HDR) method widely used in digital cameras, we present HDR-FLOT. HDR-FLOT moderates the limited dynamic range of the charge-coupled-device-based FLOT system, and thus increases penetration depth and improves the ability to image fluorescent samples with large concentration differences. For functional mapping of brain activity, we applied FLOT to record 3D neural activity evoked in the whisker system of mice by deflection of a single whisker in vivo. We utilized FLOT to investigate cell viability, migration, and bone mineralization within bone tissue engineering scaffolds in situ, allowing depth-resolved molecular characterization of engineered tissues in 3D. Moreover, we investigated the feasibility of a multi-modal optical imaging approach combining high-resolution optical coherence tomography (OCT) and high-sensitivity FLOT for structural and molecular imaging of colon tumors, demonstrating more accurate diagnosis, with 88.23% sensitivity and 82.35% specificity, than either modality alone. We further applied the multi-modal imaging system to monitor drug distribution and therapeutic effects during and after photo-immunotherapy (PIT), a novel targeted cancer therapy with few side effects, in situ and in vivo. A minimally invasive two-channel fluorescence fiber bundle imaging system and a two-photon microscopy system combined with a micro-prism were also developed to verify the results.
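The HDR idea borrowed from digital photography can be sketched generically: multiple exposures are merged by discarding saturated pixels and averaging the rest after normalising by exposure time, extending the usable dynamic range. This is a minimal toy sketch, not the HDR-FLOT reconstruction itself; the saturation threshold and pixel values are illustrative.

```python
import numpy as np

def hdr_fuse(images, exposure_times, saturation=0.95):
    """Merge multiple exposures into one radiance map.

    Saturated pixels are masked out; the remaining values are scaled by
    exposure time and averaged, recovering signal in both dim (deep)
    and bright regions.
    """
    images = np.asarray(images, dtype=float)
    times = np.asarray(exposure_times, dtype=float).reshape(-1, 1, 1)
    valid = images < saturation                 # mask clipped pixels
    radiance = images / times                   # normalise by exposure time
    weight_sum = valid.sum(axis=0)
    return (radiance * valid).sum(axis=0) / np.maximum(weight_sum, 1)

short = np.array([[0.1, 0.5]])   # short exposure: dim but unsaturated
long_ = np.array([[0.4, 1.0]])   # long exposure: bright pixel saturated
print(hdr_fuse([short, long_], [1.0, 4.0]))  # [[0.1 0.5]]
```

Here the dim pixel benefits from both exposures, while the saturated pixel falls back to the short exposure alone.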

    Real-time tissue viability assessment using near-infrared light

    Despite significant advances in medical imaging technologies, there currently exist no tools to effectively assist healthcare professionals during surgical procedures. In turn, procedures remain subjective and dependent on experience, resulting in avoidable failure and significant quality-of-care disparities across hospitals. Optical techniques are gaining popularity in clinical research because they are low cost, non-invasive and portable, and can retrieve both fluorescence and endogenous contrast information, providing physiological information on perfusion, oxygenation, metabolism, hydration, and sub-cellular content. Near-infrared (NIR) light is especially well suited to biological tissue and does not cause tissue damage from ionizing radiation or heat. My dissertation has focused on developing rapid imaging techniques for mapping endogenous tissue constituents to aid surgical guidance. These techniques allow, for the first time, video-rate quantitative acquisition over a large field of view (> 100 cm²) in widefield and endoscopic implementations. The optical system analysis has focused on the spatial frequency domain for its ease of quantitative measurement over large fields of view and for its recent development in real-time acquisition, single snapshot of optical properties (SSOP) imaging. Using these methods, this dissertation provides novel improvements and implementations of SSOP, including both widefield and endoscopic instrumentation capable of video-rate acquisition of optical property and sample surface profile maps. In turn, these measures generate profile-corrected maps of hemoglobin concentration that are highly beneficial for assessing perfusion and overall tissue viability. Also utilizing optical property maps, a novel technique for quantitative fluorescence imaging was demonstrated, showing a large improvement over standard and ratiometric methods.
To enable real-time feedback, rapid processing algorithms were designed using lookup tables, providing a 100x improvement in processing speed. Finally, these techniques were demonstrated in vivo to investigate their ability to detect tissue failure due to ischemia early. Both pre-clinical studies show that endogenous contrast imaging can provide early measures of future tissue viability. The goal of this work has been to provide the foundation for real-time imaging systems that quantify tissue constituents for tissue viability assessment.
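The lookup-table strategy for real-time processing can be sketched generically: an expensive model inversion is precomputed once on a dense grid, so that each pixel afterwards costs only an index search. The single-parameter forward model below is a hypothetical stand-in for the actual SSOP diffuse-reflectance model, which maps two-frequency reflectance to absorption and reduced scattering.

```python
import numpy as np

# Hypothetical forward model: reflectance as a function of absorption mu_a.
# A real SSOP pipeline would evaluate a diffusion or Monte Carlo model at
# two spatial frequencies; this toy stands in for that expensive computation.
def forward_model(mu_a):
    return np.exp(-2.0 * mu_a)

# Precompute the lookup table once (the slow step) ...
mu_a_grid = np.linspace(0.0, 1.0, 10001)
refl_grid = forward_model(mu_a_grid)

def invert_lut(measured_reflectance):
    """Recover mu_a by nearest-neighbour search in the precomputed table."""
    idx = np.abs(refl_grid - measured_reflectance).argmin()
    return mu_a_grid[idx]

# ... then inverting each pixel costs only a table search, not a model fit.
print(round(invert_lut(forward_model(0.3)), 3))  # 0.3
```

In practice the per-pixel search is also vectorised or replaced by direct indexing, which is where the reported speedups come from.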

    OPTICAL NAVIGATION TECHNIQUES FOR MINIMALLY INVASIVE ROBOTIC SURGERIES

    Minimally invasive surgery (MIS) involves small incisions in a patient's body, leading to reduced medical risk and shorter hospital stays compared to open surgeries. For these reasons, MIS has experienced increased demand across different types of surgery. MIS sometimes utilizes robotic instruments to complement human surgical manipulation and achieve higher precision than traditional surgery allows. Modern surgical robots operate within a master-slave paradigm, in which a robotic slave replicates the control gestures made at a master tool manipulated by a human surgeon. Presently, certain human errors due to hand tremors or unintended acts are moderately compensated at the tool manipulation console. However, errors due to robotic vision and its display to the surgeon are not equivalently addressed. Current vision capabilities within the master-slave robotic paradigm rest on perceptual vision through a limited binocular view, which considerably impacts the hand-eye coordination of the surgeon and provides no quantitative geometric localization for robot targeting. These limitations lead to unexpected surgical outcomes and longer operating times compared to open surgery. To improve vision capabilities in an endoscopic setting, we designed and built several image-guided robotic systems, which achieved sub-millimeter accuracy. With this improved accuracy, we developed a corresponding surgical planning method for robotic automation. As a demonstration, we prototyped an autonomous electro-surgical robot that employed quantitative 3D structural reconstruction with near-infrared registration and tissue classification methods to localize optimal targeting and suturing points for minimally invasive surgery.
Results validating the cooperative control and the registration between the vision system and the robot in a series of in vivo and in vitro experiments are presented, and the potential enhancement to autonomous robotic minimally invasive surgery offered by our technique is discussed.
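A standard building block for the kind of vision-to-robot registration described above is rigid point-set alignment; the sketch below uses the classic Kabsch/SVD method to recover a rotation and translation from corresponding fiducial points. This is a generic textbook method offered for illustration, not necessarily the registration algorithm used in this work.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform with R @ P + t ≈ Q (Kabsch method).

    P, Q are 3xN arrays of corresponding points, e.g. fiducial markers
    seen by the vision system and their positions in the robot frame.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Verify on synthetic data: rotate/translate a point set, then recover the pose
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 6))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-1.0], [2.0]])
R, t = rigid_register(P, R_true @ P + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy fiducials the same formula gives the least-squares pose, whose residual is one way to quantify the sub-millimeter accuracy claimed above.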

    Assessment and optimisation of 3D optical topography for brain imaging

    Optical topography has recently evolved into a widespread research tool for non-invasively mapping blood flow and oxygenation changes in the adult and infant cortex. The work described in this thesis has focused on assessing the potential and limitations of this imaging technique, and on developing means of obtaining images that are less artefactual and more quantitatively accurate. Due to the diffusive nature of biological tissue, the image reconstruction is an ill-posed problem, and typically under-determined owing to the limited number of optodes (sources and detectors). The problem must be regularised in order to provide meaningful solutions, which requires a regularisation parameter (λ) that has a large influence on image quality. This work has focused on three-dimensional (3D) linear reconstruction using zero-order Tikhonov regularisation and on analysis of different methods for selecting the regularisation parameter. The methods are summarised and applied to simulated data (a deblurring problem) and to experimental data obtained with the University College London (UCL) optical topography system. This thesis explores means of optimising the reconstruction algorithm to increase imaging performance by using spatially variant regularisation. The sensitivity and quantitative accuracy of the method are investigated using measurements on tissue-equivalent phantoms. Our optical topography system is based on continuous-wave (CW) measurements, and conventional image reconstruction methods cannot provide unique solutions, i.e., cannot separate tissue absorption and scattering simultaneously. Improved separation between absorption and scattering, and between the contributions of different chromophores, can be obtained by using multispectral image reconstruction. A method is proposed to select the optimal wavelengths for optical topography based on the multispectral method, which involves determining which wavelengths have overlapping sensitivities.
Finally, we assess and validate the new three-dimensional imaging tools using in vivo measurements of evoked responses in the infant brain.
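The zero-order Tikhonov reconstruction discussed above can be illustrated on a toy under-determined linear problem: the regularised solution is x = (AᵀA + λ²I)⁻¹Aᵀy, and the choice of λ trades data fit against solution norm. The matrix sizes and λ values below are illustrative only, not those of the UCL system.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Zero-order Tikhonov: x = argmin ||Ax - y||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)

# Under-determined toy problem: 2 measurements, 3 unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
x_true = np.array([1.0, 0.0, -1.0])
y = A @ x_true

x_small = tikhonov_solve(A, y, 1e-6)   # weak regularisation: fits the data
x_large = tikhonov_solve(A, y, 1e3)    # strong regularisation: shrinks to zero
print(np.linalg.norm(A @ x_small - y) < 1e-6, np.linalg.norm(x_large) < 1e-3)
```

Spatially variant regularisation, as explored in the thesis, replaces the scalar λ²I with a diagonal matrix whose entries vary with position, compensating for the depth-dependent sensitivity of the optode array.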

    A Deep Learning Framework in Selected Remote Sensing Applications

    The main research topic is designing and implementing a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are increasingly evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed and accuracy of deep learning. This emerging need inspired the choice of this topic. The studies conducted mainly focus on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their open access policy. The increasing interest in these satellites in research and applied scenarios pushed us to utilize them in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial in many kinds of monitoring in which the growth (or change) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation. Both studies can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers and forests. In the second activity, the deep learning framework was applied to multispectral Normalized Difference Vegetation Index (NDVI) extraction and to the semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data.
Sentinel-1 SAR data is of great importance for the quantity of information it contributes in the context of monitoring wetlands, rivers, forests and many other settings. In both cases, the problem was addressed with deep learning techniques, and in both cases very lean architectures were used, demonstrating that even without large computing power it is possible to obtain high-level results. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pansharpening and classification, because of several advantages: (i) the capability to approximate complex non-linear functions, (ii) ease of training, which avoids time-consuming handcrafted filter design, and (iii) a parallel computational architecture. Even though a large amount of labelled data is required for training, CNN performance motivated this architectural choice. In our Sentinel-1 and Sentinel-2 integration task, we faced and overcame the need for manually labelled data with an approach based on integrating these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the design of CNN-based solutions distinguished by their computational lightness, with a consequent substantial saving of time compared to more complex state-of-the-art deep learning solutions.
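For reference, the NDVI used in the second activity is a simple band ratio computed from NIR and red reflectance (for Sentinel-2, bands B8 and B4 at 10 m). The sketch below shows the standard formula on toy pixel values; the epsilon guard is an implementation convenience, not part of the index's definition.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Values near +1 indicate dense vegetation; values near 0 suggest
    bare soil; negative values typically correspond to water or clouds.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

# Toy pixels: vegetated (high NIR reflectance) vs. bare (NIR ~ red)
print(np.round(ndvi([0.6, 0.3], [0.1, 0.3]), 3))  # [0.714 0.   ]
```

In the fusion setting described above, such per-pixel index maps become regression targets or inputs alongside the Sentinel-1 SAR channels.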

    Assessment of new real-time in-situ optical coherence tomography instrumentation and techniques for diagnosing and monitoring oral and cutaneous lesions

    Head and neck cancer is the sixth most common cancer worldwide, with 686,328 new cases per year. Most head and neck cancers are squamous cell carcinomas of the oral cavity and oropharynx, and they carry high mortality (50% at 5 years from diagnosis), notwithstanding recent progress in treatment methods. The vast majority of oro-pharyngeal cancers are diagnosed late, with significant adverse effects on cure, morbidity and prognosis. There is general consensus that earlier diagnosis contributes to better outcome measures. Current diagnostic standards consist of clinical examination and surgical biopsy, which are associated with delayed presentation and diagnosis and greater mortality. There is an unmet need for effective diagnostic techniques to aid early identification of cancers. Optical coherence tomography (OCT) is one of a number of non-invasive real-time imaging systems introduced during the last two decades that aim to provide tissue information similar to conventional histopathological examination. The technique is similar to a B-mode ultrasound section, but employs a scanning near-infrared light source rather than ultrasound waves, generating cross-sectional images of the sample tissue in an X-Z orientation. In this study, I investigated a modified OCT oral instrument (VivoSight®, Michelson Diagnostics Ltd, Orpington, Kent, UK) with an adapted probe for intraoral use. The new oral instrument was not CE marked and was uncalibrated, and consequently a non-standard instrument. Therefore, prior to clinical application, the new instrument required calibration and comparison with the conventional instrument to assess and confirm its performance in image quality and resolution in the X, Y, and Z planes. A series of laboratory engineering standards were created and compared by scanning with both instruments in the X, Y and Z planes.
A second series of experiments was conducted using porcine tissue as a model for human tissue, confirming the similarities of fact and artefact observable when the two instruments were applied to challenging imaging scenarios, in particular the effects of dissimilar target tissue refractive indices on the OCT image. The effects (tissue dimensional changes) of fixing samples in formalin-containing media and of tissue processing were then also investigated using this non-invasive measuring technique.
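One concrete consequence of dissimilar refractive indices, mentioned above, is that OCT measures optical path length: apparent axial depth must be divided by the tissue's (group) refractive index to obtain physical depth, so mismatched indices shift apparent depths and dimensions. The sketch below shows this rescaling; the index values are illustrative, not measurements from this study.

```python
def physical_depth(optical_depth_mm, refractive_index):
    """Convert OCT optical path length (mm) to physical depth (mm)."""
    return optical_depth_mm / refractive_index

# The same 1.4 mm optical path corresponds to different physical depths
print(round(physical_depth(1.4, 1.40), 3))  # 1.0   in a medium with n = 1.40
print(round(physical_depth(1.4, 1.33), 3))  # 1.053 in a water-like medium
```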

    Spatial Augmented Reality Using Structured Light Illumination

    Spatial augmented reality is a particular kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, as a means of 3D shape measurement, structured light illumination also uses a projector as part of its system: the projector generates patterns that provide the clues needed to establish the correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionalities of both spatial augmented reality and structured light illumination. In this dissertation, we present all the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system in which two synchronized projectors operate simultaneously; it consequently outperforms the traditional structured light 3D scanning system, which includes only one projector, in the quality of its 3D reconstructions. The second is a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. The third is an augmented reality face-paint system that detects a human face in a scene and paints the face with chosen colors by projection; additionally, the system incorporates a second camera to realize 3D position tracking by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine vision camera is presented as future work. So far, the standalone camera has been built up from a bare CMOS sensor. With this customized camera, we can achieve high-dynamic-range imaging and better synchronization between the camera and projector.
The full system, which includes an HDMI transmitter, a structured light pattern generator and synchronization logic, has yet to be completed, pending a well-designed high-speed PCB.
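One common way structured light establishes the 2D-to-3D correspondence mentioned above is N-step phase shifting: the projector displays phase-shifted sinusoids, and the wrapped phase recovered at each camera pixel indexes the projector column, which triangulation then converts to depth. The sketch below implements the standard N-step formula on synthetic intensities; it is a generic illustration, not the specific patterns used by these systems.

```python
import numpy as np

def recover_phase(intensities):
    """Wrapped phase from N equally phase-shifted sinusoidal patterns.

    For patterns I_k = A + B*cos(phi + 2*pi*k/N), the standard N-step
    formula is phi = atan2(-sum I_k sin(2*pi*k/N), sum I_k cos(2*pi*k/N)).
    After unwrapping, phi identifies the projector column at each pixel.
    """
    I = np.asarray(intensities, dtype=float)
    N = I.shape[0]
    k = np.arange(N).reshape(-1, *([1] * (I.ndim - 1)))  # broadcast over pixels
    num = -(I * np.sin(2 * np.pi * k / N)).sum(axis=0)
    den = (I * np.cos(2 * np.pi * k / N)).sum(axis=0)
    return np.arctan2(num, den)

# Synthesise 4-step patterns for a known phase and recover it
phi_true = 1.2
I = np.array([5.0 + 2.0 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)])
print(round(float(recover_phase(I)), 3))  # 1.2
```

With two projectors, as in the dual-projector systems above, each projector contributes its own phase map, and disagreement between the two reconstructions is one cue for detecting multi-path interference.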