Motion Correction in Optical Coherence Tomography for Multi-modality Retinal Image Registration
Optical coherence tomography (OCT) is a recently developed non-invasive imaging modality that is often used in ophthalmology. Because it acquires data sequentially in the form of A-scans, OCT is susceptible to inevitable eye movement. This often leads to misalignment, especially among consecutive B-scans, which affects downstream analysis and processing such as the registration of the OCT en face image to a color fundus image. In this paper, we propose a novel method to correct the misalignment among consecutive B-scans and thereby improve the accuracy of multi-modality retinal image registration. The method computes the decorrelation between overlapping B-scans to detect eye movement. B-scans affected by eye movement are then re-aligned to their preceding scans, while the remaining B-scans are left untouched. Our experimental results show that the proposed method improves the accuracy and success rate of registration to color fundus images.
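The decorrelation test described above can be sketched as follows; the threshold value and the Pearson-correlation formulation are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def detect_motion_bscans(volume, threshold=0.3):
    """Flag B-scans whose decorrelation with the preceding B-scan is high.

    volume: array of shape (num_bscans, depth, width) with OCT intensities.
    threshold: illustrative decorrelation cutoff (assumed, not from the paper).
    """
    flagged = []
    for i in range(1, volume.shape[0]):
        a = volume[i - 1].ravel().astype(float)
        b = volume[i].ravel().astype(float)
        # Decorrelation = 1 - Pearson correlation of consecutive B-scans;
        # a large value suggests eye movement between the two acquisitions.
        decorrelation = 1.0 - np.corrcoef(a, b)[0, 1]
        if decorrelation > threshold:
            flagged.append(i)
    return flagged
```

Flagged scans would then be re-aligned to their predecessors (e.g. by cross-correlation), while unflagged scans are left untouched.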
Analysis of Retinal Image Data to Support Glaucoma Diagnosis
The fundus camera is a widely available imaging device enabling fast and inexpensive examination of the posterior segment of the eye, the retina. Hence, many researchers focus on the development of automatic methods for assessing various retinal diseases via fundus images. This dissertation summarizes the recent state of the art in glaucoma diagnosis using the fundus camera and proposes a novel methodology for assessment of the retinal nerve fiber layer (RNFL) via texture analysis. Along with it, a method for retinal blood vessel segmentation is introduced as an additional valuable contribution to the state of the art in retinal image processing. Segmentation of the blood vessels also serves as a necessary step preceding evaluation of the RNFL via the proposed methodology. In addition, a new publicly available high-resolution retinal image database with gold-standard data is introduced, giving other researchers a novel opportunity to evaluate their segmentation algorithms.
MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences
The measurement of retinal blood flow (RBF) in capillaries can provide a powerful biomarker for the early diagnosis and treatment of ocular diseases. However, no single modality can determine capillary flow rates with high precision. Combining erythrocyte-mediated angiography (EMA) with optical coherence tomography angiography (OCTA) has the potential to achieve this goal, as EMA can measure the absolute 2D RBF of the retinal microvasculature and OCTA can provide 3D structural images of the capillaries. However, multimodal retinal image registration between these two modalities remains largely unexplored. To fill this gap, we establish MEMO, the first public multimodal EMA and OCTA retinal image dataset. A unique challenge in multimodal retinal image registration between these modalities is the relatively large difference in vessel density (VD). To address this challenge, we propose a segmentation-based deep-learning framework (VDD-Reg) and a new evaluation metric (MSD), which provide robust results despite differences in vessel density. VDD-Reg consists of a vessel segmentation module and a registration module. To train the vessel segmentation module, we further designed a two-stage semi-supervised learning framework (LVD-Seg) combining supervised and unsupervised losses. We demonstrate that VDD-Reg outperforms baseline methods quantitatively and qualitatively for cases of both small VD differences (using the CF-FA dataset) and large VD differences (using our MEMO dataset). Moreover, VDD-Reg requires as few as three annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility. Comment: Submitted to IEEE JBH
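As a toy illustration of why registering segmentations sidesteps the appearance gap between modalities, the sketch below aligns two binary vessel masks by phase correlation. This is a classical, translation-only stand-in; VDD-Reg's registration module is a deep network and is not reproduced here.

```python
import numpy as np

def register_by_vessel_masks(mask_fixed, mask_moving):
    """Estimate the (row, col) shift that aligns mask_moving to mask_fixed.

    Phase correlation on binary vessel segmentation masks: because only
    vessel geometry enters, the intensity/appearance gap between the two
    modalities (e.g. EMA vs. OCTA frames) does not matter.
    """
    f = np.fft.fft2(mask_fixed.astype(float))
    g = np.fft.fft2(mask_moving.astype(float))
    cross = f * np.conj(g)
    cross /= np.abs(cross) + 1e-8        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak coordinates to signed shifts.
    h, w = mask_fixed.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```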
Two-dimensional segmentation of the retinal vascular network from optical coherence tomography
The automatic segmentation of the retinal vascular network from ocular fundus images has been performed by several research groups. Although different approaches have been proposed for traditional imaging modalities, only a few have addressed this problem for optical coherence tomography (OCT), and these approaches focused on the optic nerve head region. Compared to color fundus photography and fluorescein angiography, two-dimensional ocular fundus reference images computed from three-dimensional OCT data present additional problems related to system lateral resolution, image contrast, and noise. Specifically, the combination of system lateral resolution and vessel diameter in the macular region renders the process particularly complex, which might partly explain the focus on the optic disc region. In this report, we describe a set of features computed from standard OCT data of the human macula that are used by a supervised-learning process (support vector machines) to automatically segment the vascular network. For a set of macular OCT scans of healthy subjects and diabetic patients, the proposed method achieves 98% accuracy, 99% specificity, and 83% sensitivity. This method was also tested on OCT data of the optic nerve head region, achieving similar results.
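A minimal sketch of the supervised-learning stage: per-pixel features fed to a support vector machine. The three-feature set below (raw intensity, Gaussian-smoothed intensity, Laplacian-of-Gaussian response) is an illustrative assumption, not the paper's exact feature set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.svm import SVC

def pixel_features(img):
    """Per-pixel feature vectors (one row per pixel). The LoG filter
    responds to line-like structures such as vessels; the smoothed
    intensity adds local context."""
    return np.stack([
        img,
        gaussian_filter(img, sigma=2),
        gaussian_laplace(img, sigma=2),
    ], axis=-1).reshape(-1, 3)

def train_vessel_svm(img, labels):
    """Fit an SVM pixel classifier (labels: 1 = vessel, 0 = background)."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(pixel_features(img), labels.ravel())
    return clf
```

In practice the classifier would be trained on manually annotated scans and applied pixel-wise to unseen OCT fundus reference images.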
A Review: Person Identification using Retinal Fundus Images
This paper reviews biometric person identification using features extracted from retinal fundus images. Retina recognition is claimed to be the best person identification method among biometric recognition systems, as the retina is practically impossible to forge; it is found to be the most stable, reliable, and secure of all biometric systems, since the retinal pattern is both unique and stable. The features used in the recognition process are either blood vessel features or non-blood-vessel features, with the vascular pattern being the most prominent feature utilized by most researchers for retina-based person identification. Processes involved in this authentication system include pre-processing, feature extraction, and feature matching. Bifurcation and crossover points are the most widely used blood vessel features; non-blood-vessel features include luminance, contrast, and corner points. This paper summarizes and compares the different retina-based authentication systems. Researchers have used publicly available databases such as DRIVE, STARE, VARIA, RIDB, ARIA, AFIO, DRIDB, and SiMES for testing their methods. Various quantitative measures such as accuracy, recognition rate, false rejection rate, false acceptance rate, and equal error rate are used to evaluate the performance of different algorithms. The DRIVE database yields 100% recognition for most of the methods; for the remaining databases, recognition accuracy exceeds 90%.
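Bifurcation and crossover detection on a skeletonized vessel map is commonly done with the crossing-number rule; a minimal sketch follows (a standard technique, not tied to any one paper in this review):

```python
import numpy as np

def minutiae_points(skeleton):
    """Classify skeleton pixels by crossing number: the count of 0->1
    transitions around the 8-neighbourhood ring. By the standard rule,
    3 transitions mark a bifurcation and 4 or more mark a crossover."""
    bifurcations, crossovers = [], []
    h, w = skeleton.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skeleton[y, x]:
                continue
            # 8 neighbours in clockwise order; repeat the first to close the ring.
            ring = [skeleton[y-1, x-1], skeleton[y-1, x], skeleton[y-1, x+1],
                    skeleton[y, x+1], skeleton[y+1, x+1], skeleton[y+1, x],
                    skeleton[y+1, x-1], skeleton[y, x-1]]
            ring.append(ring[0])
            crossings = sum(ring[i] < ring[i + 1] for i in range(8))
            if crossings == 3:
                bifurcations.append((y, x))
            elif crossings >= 4:
                crossovers.append((y, x))
    return bifurcations, crossovers
```

The resulting point sets are what feature-matching stages typically compare between an enrolled template and a probe image.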
Framework for Low-Quality Retinal Mosaicing
The medical equipment used to capture retinal fundus images is generally expensive. With technological progress and the emergence of smartphones, new portable screening options have appeared, one of them being the D-Eye device. Compared to specialized equipment, this and similar smartphone-mounted devices capture retinal video of lower quality, yet sufficient for a medical pre-screening. From this, if necessary, individuals can be referred for specialized screening in order to obtain a medical diagnosis.
This dissertation contributes a framework, a tool that groups a set of developed and explored methods applied to low-quality retinal videos. Three areas of intervention were defined: extracting the relevant regions from video sequences; creating mosaicing images in order to obtain a summary image of each retinal video; and developing a graphical interface to accommodate the previous contributions.
To extract the relevant regions (the retinal zone) from these videos, two methods were proposed. One is based on classical image processing approaches such as thresholding and the Hough circle transform. The other extracts the retinal location with a neural network, YOLOv4, one of the methods reported in the literature with good performance for object detection.
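The classical extraction route can be approximated as below. For self-containment, this sketch replaces the Hough circle transform with a simpler threshold + centroid/area circle fit; an OpenCV implementation would use `cv2.HoughCircles` instead, and the threshold value here is an illustrative guess.

```python
import numpy as np

def extract_retinal_zone(frame, thresh=30):
    """Estimate the circular retinal field of view in a grayscale frame.

    The bright retinal disc is separated from the dark surround by a
    threshold; a circle is then fitted from the foreground centroid and
    its area (area = pi * r^2). Returns (cy, cx, r), or None if no
    foreground pixel exceeds the threshold.
    """
    fg = frame > thresh
    if not fg.any():
        return None
    ys, xs = np.nonzero(fg)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt(fg.sum() / np.pi)   # radius recovered from the disc area
    return cy, cx, r
```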
The mosaicing process was divided into two stages. In the first stage, the GLAMpoints neural network was applied to extract relevant points; from these, transformations are computed to bring the overlapping common regions of the images into the same reference frame. In the second stage, a smoothing process was applied to the transitions between images.
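The second-stage smoothing can be sketched as distance-weighted feathering of two tiles already warped into a common frame. This is an assumption about the blending scheme for illustration, not necessarily the dissertation's exact method; the first stage (GLAMpoints matching and homography estimation) is taken as given.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(img_a, img_b, mask_a, mask_b):
    """Blend two mosaic tiles warped into a common reference frame.

    Each pixel is weighted by its distance to its tile's border
    (distance transform of the validity mask), so the seam between
    tiles fades gradually instead of showing a hard edge.
    """
    wa = distance_transform_edt(mask_a)
    wb = distance_transform_edt(mask_b)
    total = wa + wb
    total[total == 0] = 1.0   # avoid division by zero outside both tiles
    return (img_a * wa + img_b * wb) / total
```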
A graphical interface was developed to encompass all the above methods and facilitate access to them. In addition, other features were implemented, such as comparing results with ground truth and exporting videos containing only regions of interest.
- âŠ