
    Registration of SD-OCT en-face images with color fundus photographs based on local patch matching

    Registration of multi-modal retinal images is essential for integrating the information gained from different modalities into a reliable diagnosis of retinal diseases by ophthalmologists. However, accurate multi-modal registration is challenging. We propose an algorithm for registering summed-voxel projection images (SVPIs) with color fundus photographs (CFPs) based on local patch matching. Each SVPI is evenly split into 16 local image blocks, and matching point pairs are extracted by searching for local maxima of a similarity function. These matching point pairs drive a coarse registration, after which the search region for feature matching points is refined to obtain a more accurate registration. The performance of the algorithm is tested on datasets covering 3 normal eyes and 20 eyes with age-related macular degeneration. The experiments demonstrate that the proposed method achieves accurate registration (average root mean square error of 128 μm).
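    The abstract does not include an implementation. A minimal sketch of the per-block similarity search, using zero-mean normalized cross-correlation as a stand-in for the unspecified similarity function (the function names here are illustrative, not the authors'), could look like this:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_block(block, target, stride=1):
    """Exhaustively search `target` for the offset whose window maximizes NCC with `block`."""
    bh, bw = block.shape
    th, tw = target.shape
    best_score, best_off = -2.0, (0, 0)
    for y in range(0, th - bh + 1, stride):
        for x in range(0, tw - bw + 1, stride):
            s = ncc(block, target[y:y + bh, x:x + bw])
            if s > best_score:
                best_score, best_off = s, (y, x)
    return best_off, best_score
```

    In the paper's setting, a search of this kind would run once per each of the 16 SVPI blocks against the fundus photograph, and the resulting point pairs would feed the coarse-then-refined registration.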

    Ophthalmologic Image Registration Based on Shape-Context: Application to Fundus Autofluorescence (FAF) Images

    A novel registration algorithm, developed to facilitate ophthalmologic image processing, is presented in this paper. It has been evaluated on FAF images, which present a low signal-to-noise ratio (SNR) and variations in dynamic grayscale range. These characteristics complicate the registration process and cause area-based registration techniques [1, 2] to fail. Our method is based on shape-context theory [3]. In the first step, images are enhanced by Gaussian-model-based histogram modification. Features are extracted in the next step by morphological operators, which detect an approximation of the vascular tree in both the reference and floating images. A simplified medial axis of the vessels is then calculated. From each image, a set of control points called bifurcation points (BPs) is extracted from the medial axis with a new fast algorithm. A radial histogram is formed for each BP using the medial axis, and the Chi² distance is measured between the two sets of BPs based on these radial histograms. The Hungarian algorithm is applied to assign correspondences between the BPs of the reference and floating images. The algorithmic robustness is evaluated by a mutual information criterion between the manual registration, taken as ground truth, and the automatic one.
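    The BP matching pipeline described above (radial histograms, Chi² distance, Hungarian assignment) can be sketched as follows. The bin count and helper names are assumptions rather than the authors' choices; `scipy.optimize.linear_sum_assignment` provides the Hungarian step:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def radial_histogram(point, context, bins=8):
    """Normalized histogram of directions from `point` to context points (e.g. medial-axis samples)."""
    d = context - point
    ang = np.arctan2(d[:, 1], d[:, 0])
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    h = h.astype(float)
    return h / max(h.sum(), 1.0)

def chi2(h1, h2, eps=1e-9):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def match_bifurcations(pts_ref, ctx_ref, pts_flt, ctx_flt, bins=8):
    """Assign reference BPs to floating BPs by minimizing the total chi-square cost (Hungarian)."""
    H1 = np.array([radial_histogram(p, ctx_ref, bins) for p in pts_ref])
    H2 = np.array([radial_histogram(p, ctx_flt, bins) for p in pts_flt])
    cost = np.array([[chi2(a, b) for b in H2] for a in H1])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```

    With the correspondences in hand, a geometric transformation between the reference and floating images could then be fitted.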

    Location of the optic disc in scanning laser ophthalmoscope images and validation

    In this thesis we propose two methods for optic disc (OD) localization in scanning laser ophthalmoscope (SLO) images. The methods share a locating phase but differ in the OD segmentation. We tested the algorithms on a pilot set of 50 images (1536×1536) from a Heidelberg SPECTRALIS SLO camera, annotated by four expert ophthalmologists. The second algorithm performs better than the first, achieving an accuracy of 90%. We also compared our methods with a validated OD algorithm for fundus images.

    Detection of Optic Disc Centre Point in Retinal Image

    Glaucoma and diabetic retinopathy (DR) are two of the most common retinal diseases worldwide. Glaucoma can be diagnosed by measuring the cup-to-disc ratio (CDR), defined as the ratio of the vertical diameters of the optic cup and the optic disc in a retinal fundus image. A computer-based optic disc detector is expected to assist ophthalmologists in finding the disc location, which is necessary for glaucoma and DR diagnosis. However, many currently available optic disc detection algorithms are non-automatic and only work on healthy retinal images, so there is little information on how the optic disc can be extracted computationally from the retinal images of unhealthy patients. In this research work, a method for automated detection of the optic disc in retinal colour fundus images has been developed to facilitate and assist ophthalmologists in the diagnosis of retinal diseases. The results indicate that the proposed method can be used in the development of computer-aided diagnosis systems for glaucoma and diabetic retinopathy.
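    Given binary segmentation masks of the cup and disc, the vertical CDR defined above reduces to a ratio of row extents. A minimal sketch, with hypothetical helper names:

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent in pixels of a binary region: last occupied row minus first, plus one."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks of the optic cup and optic disc."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else float("nan")
```

    For example, an elliptical cup with a vertical semi-axis of 20 px inside a disc with a vertical semi-axis of 40 px yields a CDR of about 0.5.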

    Retinal Fundus Image Analysis for Diagnosis of Glaucoma: A Comprehensive Survey

    © 2016 IEEE. The rapid development of digital imaging and computer vision has increased the potential for using image processing technologies in ophthalmology. Image processing systems are used in standard clinical practice alongside the development of medical diagnostic systems. Retinal images provide vital information about the health of the sensory part of the visual system. Retinal diseases such as glaucoma, diabetic retinopathy, age-related macular degeneration, Stargardt's disease, and retinopathy of prematurity can lead to blindness and manifest as artifacts in the retinal image. An automated system can offer standardized large-scale screening at a lower cost, which may reduce human error, provide services to remote areas, and remain free from observer bias and fatigue. Treatment for retinal diseases is available; the challenge lies in finding a cost-effective approach with high sensitivity and specificity that can be applied to large populations in a timely manner to identify those at risk in the early stages of disease. The progression of glaucoma is very often silent in its early stages. The number of people affected has been increasing, and patients are seldom aware of the disease, which can delay treatment. A review of how computer-aided approaches may be applied in the diagnosis and staging of glaucoma is presented here. The current status of computer technology is reviewed, covering localization and segmentation of the optic nerve head, pixel-level glaucomatous changes, diagnosis using 3-D data sets, and artificial neural networks for detecting the progression of glaucoma.

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches have been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms, but few approaches generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy. The pixel-level segmentation scheme involves: i) constructing a phase-invariant orientation field of the local spatial neighbourhood; ii) combining local feature maps with intensity-based measures in a structural patch context; iii) using a complex supervised learning process to interpret the combination of all the elements in the patch and reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames; at the joint level, we construct a hierarchical approach so that each individual frame is registered to the global reference intra- and inter-sequence. We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy.
    In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems, where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, yet with different geometric properties, to be differentiated both from the background and from other structures. Notably, cellular structures such as Purkinje cells, neural dendrites and interneurons all display a certain elongation along their medial axes, yet each class has a characteristic shape, captured by an orientation field, that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine. Extensive performance evaluation and validation of each of the techniques presented in this thesis is carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms against other benchmark methods. For 2D+t retinal angiography sequences, we compute the error metric ("centreline error") of our scheme against other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopic tissue stacks.
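    The ROC evaluation mentioned for the DRIVE and STARE databases can be sketched generically: sweep a threshold over a soft vessel probability map and trace (false positive rate, true positive rate) pairs against the binary ground truth. The function names are illustrative:

```python
import numpy as np

def roc_points(prob, truth, n_thresholds=101):
    """Sweep thresholds over a soft vessel map; return (FPR, TPR) pairs vs. a binary ground truth."""
    prob = prob.ravel()
    truth = truth.ravel().astype(bool)
    pos, neg = truth.sum(), (~truth).sum()
    pts = []
    for t in np.linspace(1.0, 0.0, n_thresholds):
        pred = prob >= t
        tpr = (pred & truth).sum() / pos
        fpr = (pred & ~truth).sum() / neg
        pts.append((fpr, tpr))
    return np.array(pts)

def auc(pts):
    """Area under the ROC curve by the trapezoidal rule over points sorted by FPR."""
    order = np.argsort(pts[:, 0], kind="stable")
    fpr, tpr = pts[order, 0], pts[order, 1]
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
```

    Comparing the AUC of different segmenters on the same database is the standard way the benchmark methods above are ranked.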

    Advanced retinal imaging: Feature extraction, 2-D registration, and 3-D reconstruction

    In this dissertation, we have studied feature extraction and multiple-view geometry in the context of retinal imaging. Specifically, this research involves three components: feature extraction, 2-D registration, and 3-D reconstruction. First, the problem of feature extraction is investigated. Features are important in motion estimation techniques because they are the input to the algorithms. We propose a feature extraction algorithm for retinal images that uses bifurcations/crossovers as features, based on a modified local entropy thresholding algorithm with a new definition of the co-occurrence matrix. Then we consider 2-D retinal image registration, i.e., estimating 2-D/2-D transformations. Both linear and nonlinear models are incorporated to account for motions and distortions. A hybrid registration method is introduced to take advantage of what both feature-based and area-based methods offer, along with the relevant decision-making criteria. Area-based binary mutual information is proposed for translation estimation, and a feature-based hierarchical registration technique involving affine and quadratic transformations is developed. After that, 3-D retinal surface reconstruction is addressed. To generate a 3-D scene from 2-D images, camera projection (3-D/2-D transformation) techniques are investigated. We choose an affine camera model for 3-D retinal reconstruction and introduce a constrained optimization procedure that incorporates a geometric penalty function and lens distortion into the cost function. The procedure optimizes all of the parameters simultaneously: the camera parameters, the 3-D points, the physical shape of the human retina, and the lens distortion. A point-based spherical fitting method is then introduced. The proposed retinal imaging techniques pave the path toward a comprehensive visual 3-D retinal model for many medical applications.
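    The local entropy thresholding mentioned above builds on a gray-level co-occurrence matrix of adjacent pixels. The dissertation's modified definition of that matrix is not reproduced here, so the following is only a sketch of the classic baseline: partition the co-occurrence matrix at a candidate threshold and pick the threshold that maximizes the sum of within-class entropies.

```python
import numpy as np

def cooccurrence(img):
    """Counts of horizontally and vertically adjacent gray-level pairs (256 x 256 matrix)."""
    img = img.astype(np.intp)
    C = np.zeros((256, 256))
    np.add.at(C, (img[:, :-1], img[:, 1:]), 1)  # horizontal neighbours
    np.add.at(C, (img[:-1, :], img[1:, :]), 1)  # vertical neighbours
    return C

def _entropy(block):
    """Shannon entropy (bits) of a co-occurrence quadrant, renormalized to a distribution."""
    s = block.sum()
    if s == 0:
        return 0.0
    p = block[block > 0] / s
    return float(-(p * np.log2(p)).sum())

def entropy_threshold(img):
    """Choose the threshold maximizing background + foreground co-occurrence entropies."""
    C = cooccurrence(img)
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        h = _entropy(C[:t, :t]) + _entropy(C[t:, t:])
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

    On a vessel map produced by matched filtering, a threshold chosen this way yields the binary vasculature from which bifurcations and crossovers can then be extracted.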

    FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING

    The medical equipment used to capture retinal fundus images is generally expensive. With the development of technology and the emergence of smartphones, new portable screening options have emerged, one of them being the D-Eye device. This and similar smartphone-attached devices capture retinal video of lower quality than specialized equipment, yet of sufficient quality for medical pre-screening; if necessary, individuals can then be referred for specialized screening to obtain a medical diagnosis. This dissertation contributes a framework, a tool that groups a set of methods developed for and applied to low-quality retinal videos. Three areas of intervention were defined: extracting the relevant regions from video sequences; creating mosaicing images in order to obtain a summary image of each retinal video; and developing a graphical interface to accommodate the previous contributions. To extract the relevant region from these videos (the retinal zone), two methods were proposed: one is based on classical image processing approaches such as thresholding and the Hough circle transform; the other extracts the retinal location with YOLOv4, a neural network reported in the literature to perform well on object detection. The mosaicing process was divided into two stages: in the first stage, the GLAMpoints neural network was applied to extract relevant points, from which transformations are computed to bring the overlapping regions of the images into a common reference frame; in the second stage, the transitions between images were smoothed. A graphical interface was developed to encompass all the above methods and facilitate access to and use of them.
    In addition, other features were implemented, such as comparing results with ground truth and exporting videos containing only the regions of interest.
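    The transition-smoothing stage of the mosaicing pipeline can be illustrated with simple feathered blending: each tile contributes with a weight that decays toward its borders, and overlapping contributions are averaged. This is a generic sketch, not the dissertation's exact method; the names are illustrative and the tiles are assumed to be already warped into mosaic coordinates:

```python
import numpy as np

def feather_weight(shape):
    """Per-pixel weight decaying linearly from the tile centre to zero at its borders."""
    h, w = shape
    wy = 1.0 - np.abs(np.linspace(-1, 1, h))
    wx = 1.0 - np.abs(np.linspace(-1, 1, w))
    return np.outer(wy, wx)

def blend_into(canvas, weight_sum, tile, top, left):
    """Accumulate a tile into the mosaic canvas with feathered weights."""
    h, w = tile.shape
    wgt = feather_weight((h, w))
    canvas[top:top + h, left:left + w] += tile * wgt
    weight_sum[top:top + h, left:left + w] += wgt

def normalize(canvas, weight_sum):
    """Divide accumulated intensities by accumulated weights to obtain the blended mosaic."""
    out = np.zeros_like(canvas)
    np.divide(canvas, weight_sum, out=out, where=weight_sum > 0)
    return out
```

    In practice the tiles would be the video frames warped by the transformations estimated from the GLAMpoints matches.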