
    Segmentation, registration, and selective watermarking of retinal images

    In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images for this study because of their rich representation of different objects (blood vessels, microaneurysms, hemorrhages, exudates, etc.) that are pathologically important and closely resemble one another in shape and color. To tackle this complex subject, I developed a divide-and-conquer strategy to address the related issues step by step and to optimize the parameters of the different algorithm stages. Most, if not all, objects in our discussion are related. The algorithms for detection, registration, and protection of different objects need to consider how to differentiate the foreground from the background and must correctly characterize the features of the image objects and their geometric properties. To address these problems, I characterized the shapes of blood vessels in retinal images and proposed algorithms to extract blood vessel features. A tracing algorithm was developed for the detection of blood vessels along the vascular network. Because of noise interference and varying image quality, robust segmentation techniques were used for accurate characterization and verification of object shapes. Based on the segmentation results, a registration algorithm was developed that uses the bifurcation and cross-over points of blood vessels to establish the correspondence between images and derive the transformation that aligns them. A Region-of-Interest (ROI) based watermarking scheme was proposed for image authenticity. It uses linear segments extracted from the image as reference locations for embedding and detecting the watermark. Global and locally-randomized synchronization schemes were proposed for bit-sequence synchronization of the watermark. The scheme is robust against common image processing and geometric distortions (rotation and scaling), and it can detect alterations such as moving or removing image content.
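
    As an illustration of the registration step described above, here is a minimal sketch, assuming matched bifurcation points are already available; the function name estimate_affine and the plain least-squares affine model are my own illustrative choices, not the dissertation's code.

# Hypothetical sketch (not the dissertation's code): estimating an affine
# transform from matched vessel bifurcation points via least squares.
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Fit dst ~ src @ A.T + t from N >= 3 matched points, each array (N, 2)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix [x, y, 1]; solve two least-squares problems at once.
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # shape (3, 2)
    A = params[:2].T        # 2x2 linear part
    t = params[2]           # translation
    return A, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 512, size=(6, 2))              # synthetic bifurcations
    A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])
    t_true = np.array([12.0, -7.5])
    moved = pts @ A_true.T + t_true
    A_est, t_est = estimate_affine(pts, moved)
    print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))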

    Simplicial Complex based Point Correspondence between Images warped onto Manifolds

    The recent increase in the availability of warped images projected onto a manifold (e.g., omnidirectional spherical images), coupled with the success of higher-order assignment methods, has sparked interest in improved higher-order matching algorithms for images warped by projection. Although several existing methods currently "flatten" such 3D images so that planar graph / hypergraph matching methods can be applied, they still suffer from severe distortions and other undesired artifacts, which result in inaccurate matching. Conversely, current planar methods cannot be trivially extended to effectively match points on images warped onto manifolds. Hence, matching on these warped images persists as a formidable challenge. In this paper, we pose the assignment problem as finding a bijective map between two graph-induced simplicial complexes, which are higher-order analogues of graphs. We propose a constrained quadratic assignment problem (QAP) that matches each p-skeleton of the simplicial complexes, iterating from the highest to the lowest dimension. The accuracy and robustness of our approach are illustrated on both synthetic and real-world spherical / warped (projected) images with known ground-truth correspondences. We significantly outperform existing state-of-the-art spherical matching methods on a diverse set of datasets.
    Comment: Accepted at ECCV 202
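
    To give a flavour of pairwise quadratic assignment for point matching, the sketch below uses the classic spectral relaxation with edge-length affinities. It is a standard baseline on ordinary point sets, not the constrained simplicial-complex QAP proposed in the paper; all names and parameter values are illustrative.

# Illustrative sketch only (not the paper's algorithm): spectral relaxation of
# a pairwise QAP for point matching, with edge-length consistency affinities.
import numpy as np

def spectral_match(P, Q, sigma=0.1):
    """Greedy one-to-one matching of points P to points Q, both shaped (n, d)."""
    n = len(P)
    # Affinity between candidate assignments (i->a, j->b): similar edge lengths.
    dP = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    dQ = np.linalg.norm(Q[:, None] - Q[None, :], axis=-1)
    M = np.exp(-((dP[:, None, :, None] - dQ[None, :, None, :]) ** 2) / sigma**2)
    M = M.reshape(n * n, n * n)
    # Principal eigenvector of the affinity matrix scores each assignment.
    _, vecs = np.linalg.eigh(M)
    score = np.abs(vecs[:, -1]).reshape(n, n)
    # Greedy discretization into a bijection.
    match = {}
    while len(match) < n:
        i, a = np.unravel_index(np.argmax(score), score.shape)
        match[int(i)] = int(a)
        score[i, :] = -1
        score[:, a] = -1
    return match

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    P = rng.normal(size=(5, 3))
    perm = rng.permutation(5)
    Q = P[perm]                                  # relabelling of the same points
    print("recovered:", spectral_match(P, Q))
    print("expected :", {int(perm[k]): k for k in range(len(perm))})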

    Advanced retinal imaging: Feature extraction, 2-D registration, and 3-D reconstruction

    In this dissertation, we have studied feature extraction and multiple-view geometry in the context of retinal imaging. Specifically, this research involves three components: feature extraction, 2-D registration, and 3-D reconstruction. First, the problem of feature extraction is investigated. Features are significant in motion estimation techniques because they are the input to the algorithms. We have proposed a feature extraction algorithm for retinal images in which bifurcations/crossovers are used as features, and a modified local entropy thresholding algorithm based on a new definition of the co-occurrence matrix is proposed. Then, we consider 2-D retinal image registration, which is the problem of estimating a 2-D-to-2-D transformation. Both linear and nonlinear models are incorporated to account for motions and distortions. A hybrid registration method is introduced to take advantage of what both feature-based and area-based methods offer, along with relevant decision-making criteria. Area-based binary mutual information is proposed for translation estimation, and a feature-based hierarchical registration technique, which involves affine and quadratic transformations, is developed. After that, the 3-D retinal surface reconstruction issue is addressed. To generate a 3-D scene from 2-D images, camera projection (3-D-to-2-D transformation) techniques are investigated. We choose an affine camera model for 3-D retinal reconstruction and introduce a constrained optimization procedure that incorporates a geometric penalty function and lens distortion into the cost function. The procedure simultaneously optimizes all of the parameters: the camera parameters, the 3-D points, the physical shape of the human retina, and the lens distortion. Then, a point-based spherical fitting method is introduced. The proposed retinal imaging techniques will pave the path to a comprehensive visual 3-D retinal model for many medical applications.
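
    As a concrete illustration of area-based binary mutual information for translation estimation, here is a minimal sketch under my own assumptions (an exhaustive integer search over shifts of a binarized vessel mask); it is not the dissertation's implementation.

# Rough sketch (assumptions, not the dissertation's code): exhaustive translation
# search that maximizes mutual information between two binarized images.
import numpy as np

def mutual_information(a, b):
    """Mutual information of two equally-shaped binary arrays, in bits."""
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def estimate_translation(fixed, moving, max_shift=10):
    """Brute-force integer (dy, dx) that maximizes binary mutual information."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best:
                best, best_shift = mi, (dy, dx)
    return best_shift

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fixed = (rng.random((64, 64)) > 0.8).astype(np.uint8)     # toy vessel mask
    moving = np.roll(np.roll(fixed, -3, axis=0), 5, axis=1)   # shifted copy
    print(estimate_translation(fixed, moving))                # expect (3, -5)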

    Human retinal oximetry using hyperspectral imaging

    The aim of the work reported in this thesis was to investigate the possibility of measuring human retinal oxygen saturation using hyperspectral imaging. Hyperspectral imaging enables a direct, non-invasive, quantitative mapping of retinal oxygen saturation, whereby the absorption spectra of oxygenated and deoxygenated haemoglobin are recorded and analysed. Implementation of spectral retinal imaging thus requires ophthalmic instrumentation capable of efficiently recording the requisite spectral data cube. For this purpose, a spectral retinal imager was developed for the first time by integrating a liquid crystal tuneable filter into the illumination system of a conventional fundus camera, enabling the recording of narrow-band spectral images in time sequence from 400 nm to 700 nm. Post-processing algorithms were developed to enable accurate exploitation of spectral retinal images and to overcome the confounding problems associated with this technique due to erratic eye motion and illumination variation. Several algorithms were developed to provide semi-quantitative and quantitative oxygen saturation measurements. Accurate quantitative measurements necessitated an optical model of light propagation in the retina that takes into account the absorption and scattering of light by red blood cells. To validate the oxygen saturation measurements and algorithms, a model eye was constructed and measurements were compared with gold-standard measurements obtained by a co-oximeter. The accuracy of the oxygen saturation measurements was 3.31% ± 2.19% for oxygenated blood samples. Clinical trials on healthy and diseased subjects were analysed, and oxygen saturation measurements were compared to establish a figure of merit for certain retinal diseases. Oxygen saturation measurements were in agreement with clinician expectations in both veins (48% ± 9%) and arteries (96% ± 5%). We also present in this thesis the development of a novel clinical instrument based on IRIS to perform retinal oximetry.
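
    To make the quantitative oximetry idea concrete, here is a minimal sketch assuming a simple linear absorbance model and made-up extinction coefficients; it is not the thesis's algorithm, which additionally models scattering by red blood cells.

# Illustrative sketch under stated assumptions: least-squares unmixing of a
# measured absorbance spectrum into oxy- and deoxy-haemoglobin contributions
# to estimate oxygen saturation (SO2).
import numpy as np

def estimate_so2(absorbance, eps_hbo2, eps_hb):
    """Fit A(lambda) ~ c1*eps_HbO2 + c2*eps_Hb + offset; SO2 = c1 / (c1 + c2)."""
    E = np.column_stack([eps_hbo2, eps_hb, np.ones_like(eps_hb)])
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    c_hbo2, c_hb = max(c[0], 0.0), max(c[1], 0.0)
    return c_hbo2 / (c_hbo2 + c_hb)

if __name__ == "__main__":
    # Toy extinction spectra over a few wavelengths (arbitrary units, made up
    # for illustration; real values come from published haemoglobin tables).
    eps_hbo2 = np.array([0.9, 0.4, 0.3, 0.8, 1.2])
    eps_hb = np.array([1.1, 0.9, 0.3, 0.5, 0.6])
    true_so2 = 0.95
    measured = true_so2 * eps_hbo2 + (1 - true_so2) * eps_hb + 0.05
    print(round(estimate_so2(measured, eps_hbo2, eps_hb), 3))   # ~0.95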

    Noninvasive Assessment of Photoreceptor Structure and Function in the Human Retina

    The human photoreceptor mosaic underlies the first steps of vision; thus, even subtle defects in the mosaic can result in severe vision loss. The retina can be examined directly using clinical tools; however, these devices lack the resolution necessary to visualize the photoreceptor mosaic. The primary limiting factor of these devices is the optical aberrations of the human eye. These aberrations are surmountable with the incorporation of adaptive optics (AO) into ophthalmoscopes, enabling imaging of the photoreceptor mosaic with cellular resolution. Despite the potential of AO imaging, much work remains before this technology can be translated to the clinic. Metrics used in the analysis of AO images are not standardized and are rarely subjected to validation, limiting the ability to reliably track structural changes in photoreceptor mosaic geometry. Before measurements can be extracted, photoreceptors must be identified within the retinal image itself, which introduces error from both incorrectly identified cells and image distortion. We developed a novel method to extract measures of cell spacing from AO images that does not require identification of individual cells. In addition, we examined the sensitivity of various metrics in detecting changes in the mosaic and assessed the absolute accuracy of measurements made in the presence of image distortion. We also developed novel metrics for describing the mosaic, which may offer advantages over the more traditional metrics of density and spacing. These studies provide a valuable basis for monitoring the photoreceptor mosaic longitudinally. As part of this work, we developed software (Mosaic Analytics) that can be used to standardize analytical efforts across different research groups. In addition, one of the more salient features of the appearance of individual cone photoreceptors is that they vary considerably in their reflectance. It has been proposed that this reflectance signal could be used as a surrogate measure of cone health. As a first step toward understanding the cellular origin of these changes, we examined the reflectance properties of the rod photoreceptor mosaic. The observed variation in rod reflectivity over time suggests a common governing physiological process between rods and cones.
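
    One common way to measure spacing without identifying individual cells is to read the dominant frequency off the image's power spectrum; the sketch below illustrates that general idea on a synthetic periodic grid. It is my own assumption of such an approach, not necessarily the metric developed in this work.

# Hedged sketch: estimate the dominant periodic spacing of a (square) image
# from the radially averaged power spectrum, with no cell identification.
import numpy as np

def dominant_spacing(img):
    """Return the dominant periodic spacing (in pixels) of a square image."""
    img = img - img.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    radial = sums / np.maximum(counts, 1)        # radial average of the spectrum
    peak_r = np.argmax(radial[3:]) + 3           # skip the DC neighbourhood
    return w / peak_r                            # cycles-per-image -> pixels

if __name__ == "__main__":
    # Synthetic "mosaic": a 2-D sinusoidal grid with an 8-pixel period.
    y, x = np.mgrid[0:128, 0:128]
    mosaic = np.cos(2 * np.pi * x / 8) + np.cos(2 * np.pi * y / 8)
    print(round(dominant_spacing(mosaic), 2))    # ~8.0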

    Computerised stereoscopic measurement of the human retina

    The research described herein is an investigation into the problems of obtaining useful clinical measurements from stereo photographs of the human retina through automation of the stereometric procedure by digital stereo matching and image analysis techniques. Clinical research has indicated a correlation between physical changes to the optic disc topography (the region on the retina where the optic nerve enters the eye) and the advance of eye diseases such as hypertension and glaucoma. Stereoscopic photography of the human retina (or fundus, as it is called) and the subsequent measurement of the topography of the optic disc is of great potential clinical value as an aid in observing the pathogenesis of such disease, and to this end, accurate measurements of the various parameters that characterise the changing shape of the optic disc topography must be provided. Following a survey of current clinical methods for stereoscopic measurement of the optic disc, fundus image data acquisition, stereo geometry, limitations of resolution and accuracy, and other relevant physical constraints related to fundus imaging are investigated. A survey of digital stereo matching algorithms is presented and their strengths and weaknesses are explored, specifically as they relate to the suitability of each algorithm for the fundus image data. The selection of an appropriate stereo matching algorithm is discussed, and its application to four test data sets is presented in detail. A mathematical model of two-dimensional image formation is developed together with its corresponding auto-correlation function. In the presence of additive noise, the model is used as a tool for exploring key problems with respect to the stereo matching of fundus images. Specifically, measures for predicting correlation matching error are developed and applied. Such measures are shown to be of use in applications where the results of image correlation cannot be independently verified and meaningful quantitative error measures are required. The application of these theoretical tools to the fundus image data indicates a systematic way to measure, assess and control cross-correlation error. Conclusions drawn from this research point the way forward for stereo analysis of the optic disc and highlight a number of areas which will require further research. The development of a fully automated system for diagnostic evaluation of the optic disc topography is discussed in the light of the results obtained during this research.
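
    As an illustration of the kind of correlation-based stereo matching discussed here, below is a hedged sketch of 1-D disparity estimation by maximizing normalized cross-correlation (NCC) of a small window along a scanline; the function names and window sizes are illustrative assumptions, not the thesis's implementation.

# Hedged sketch: window-based disparity estimation via normalized cross-correlation.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally-shaped patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def disparity_at(left, right, row, col, half=5, max_disp=15):
    """Disparity of pixel (row, col): the horizontal shift with the best NCC."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(patch, cand)
        if score > best_score:
            best_score, best_d = score, d
    return best_d

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    right = rng.random((64, 64))
    left = np.roll(right, 4, axis=1)                    # left image shifted by 4 px
    print(disparity_at(left, right, row=32, col=40))    # expect 4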

    Advanced image processing techniques for detection and quantification of drusen

    Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology.
    Drusen are common features in the ageing macula, caused by accumulation of extracellular materials beneath the retinal surface and visible in retinal fundus images as yellow spots. In the ophthalmologists’ opinion, evaluating the total drusen area in a sequence of images taken during a treatment will help in understanding the progression of the disease and the effectiveness of the treatment. However, this evaluation is tedious and difficult to reproduce when performed manually. A literature review on automated drusen detection showed that the works already published were limited to techniques based on either adaptive or global thresholds, which tend to produce a significant number of false positives. The purpose of this work was to propose an alternative method to automatically quantify drusen using advanced digital image processing techniques. The methodology is based on a detection and modelling algorithm. It includes an image pre-processing step that corrects uneven illumination using smoothing-spline fitting and normalizes the contrast. Detection uses a new gradient-based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage; the detected spots are then fitted with Gaussian functions to produce a model of the image, which is used to compute the affected areas. To validate the methodology, two software applications were implemented: one for semi-automated (MD3RI) and the other for automated (AD3RI) detection of drusen. The first was developed for ophthalmologists to manually analyse and mark drusen deposits, while the other implemented the algorithms for automatic drusen quantification. Four studies involving twelve specialists were carried out to assess the accuracy of the methodology; these compared the automated method with the specialists and evaluated its repeatability. The studies were analysed with respect to several indicators based on the total affected area and on a pixel-to-pixel analysis. Due to the high variability among the graders involved in the first study, a new evaluation method, the Weighed Matching Analysis, was developed to improve the pixel-to-pixel analysis by using the statistical significance of the observations to differentiate positive and negative pixels. From the results of these studies it was concluded that the proposed methodology is capable of measuring drusen automatically in an accurate and reproducible process. The thesis also proposes new image processing algorithms for image pre-processing, image segmentation, image modelling and image comparison, which are applicable to other image processing fields.
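
    To illustrate the Gaussian modelling step, here is a minimal sketch that fits a single 2-D Gaussian to a bright spot and derives a crude affected-area estimate from the fitted widths; the function names and the 2-sigma ellipse area are my own illustrative assumptions, not the AD3RI code.

# Sketch under assumptions: fit a 2-D Gaussian to one bright spot (stand-in for
# the drusen modelling step) and estimate an affected area from its widths.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    model = amp * np.exp(-((x - x0) ** 2 / (2 * sx**2) + (y - y0) ** 2 / (2 * sy**2))) + offset
    return model.ravel()

def fit_drusen_spot(patch):
    """Fit one Gaussian to an image patch; return the parameters and an area estimate."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    p0 = [patch.max() - patch.min(), w / 2, h / 2, w / 8, h / 8, patch.min()]
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    amp, x0, y0, sx, sy, offset = popt
    area = np.pi * (2 * abs(sx)) * (2 * abs(sy))     # crude 2-sigma ellipse area
    return popt, area

if __name__ == "__main__":
    y, x = np.mgrid[0:31, 0:31]
    spot = 0.6 * np.exp(-((x - 15) ** 2 + (y - 16) ** 2) / (2 * 3.0**2)) + 0.1
    popt, area = fit_drusen_spot(spot)
    print(np.round(popt, 2), round(area, 1))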

    Deep learning for quantitative motion tracking based on optical coherence tomography

    Optical coherence tomography (OCT) is a cross-sectional imaging modality based on low-coherence light interferometry. OCT has been widely used in diagnostic ophthalmology and has found applications in other biomedical fields such as cancer detection and surgical guidance. In the Laboratory of Biophotonics Imaging and Sensing at New Jersey Institute of Technology, we developed a unique needle OCT imager based on a single fiber probe for breast cancer imaging. The needle OCT imager, with a sub-millimeter diameter, can be inserted into tissue for minimally invasive in situ breast imaging. OCT imaging provides spatial resolution similar to histology and has the potential to become a device for virtual biopsy enabling fast and accurate breast cancer diagnosis, because abnormal and normal breast tissue have different characteristics in OCT images. The morphological features of an OCT image are related to the microscopic structure of the tissue, and the speckle pattern is related to the cellular/subcellular optical properties of the tissue. In addition, the depth attenuation of the OCT signal depends on the scattering and absorption properties of the tissue. However, these OCT image features exist at different spatial scales, and it is challenging for a human observer to effectively recognize them for tissue classification. In particular, our needle OCT imager, given its simplicity and small form factor, does not have a mechanical scanner for beam steering and relies on manual scanning to generate 2D images. The non-constant translation speed of the probe in manual scanning inevitably introduces distortion artifacts in OCT imaging, which further complicates the tissue characterization task. OCT images of tissue samples provide comprehensive information about the morphology of normal and unhealthy tissue, and image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Classification of tissue images and recovery of distorted OCT images are two common tasks in tissue image analysis. In this master's thesis project, a novel deep learning approach is investigated to extract the beam scanning speed from different samples. Furthermore, a novel technique is investigated and tested to recover distorted OCT images. The long-term goal of this study is to achieve robust tissue classification for breast cancer diagnosis based on a simple single-fiber OCT instrument. The deep learning network utilized in this study is based on a Convolutional Neural Network (CNN) and a Naïve Bayes classifier. For image retrieval, we used algorithms that extract, represent and match common features between images. The CNN achieved an accuracy of 97% in tissue type and scanning speed classification, while the image retrieval algorithms produced recovered images of very high quality compared to the reference images.
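
    To make the classification component more concrete, below is a minimal, hypothetical PyTorch sketch of a small CNN that maps grayscale OCT patches to scanning-speed classes; the architecture, class count, and the name SpeedClassifier are illustrative assumptions rather than the network described in the thesis, and the Naïve Bayes stage is omitted.

# Hypothetical sketch: a small CNN classifier for grayscale OCT image patches.
import torch
import torch.nn as nn

class SpeedClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):                       # x: (batch, 1, H, W) OCT patches
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SpeedClassifier(num_classes=4)
    dummy = torch.randn(2, 1, 64, 64)           # two fake OCT patches
    logits = model(dummy)
    print(logits.shape)                         # torch.Size([2, 4])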