Retinal vessel segmentation using textons
Segmenting vessels from retinal images, like segmentation in many other medical image domains, is a challenging task, as no single approach can be relied on to extract the vessels accurately in all cases. Yet it is the most critical stage in the automatic assessment of various diseases (e.g. glaucoma, age-related macular degeneration, diabetic retinopathy and cardiovascular disease). Our research investigates retinal image segmentation approaches based on textons, which provide a compact description of texture that can be learnt from a training set. This thesis presents a brief review of these diseases, covering their current status, future trends and the techniques used for their automatic diagnosis in routine clinical practice. The importance of retinal vessel segmentation in such applications is particularly emphasized. An extensive review of previous work on retinal vessel segmentation and salient texture analysis methods is presented. Five automatic retinal vessel segmentation methods are proposed in this thesis. The first method addresses the removal of pathological anomalies (drusen, exudates) prior to retinal vessel segmentation, which other researchers have identified as a common source of error. The results show that the modified method improves on a previously published method. The second, novel supervised segmentation method employs textons. We propose a new filter bank (MR11) that includes bar detectors for vascular feature extraction and other kernels to detect edges and photometric variations in the image. The k-means clustering algorithm is adopted for texton generation based on the vessel and non-vessel elements identified in the ground truth. The third, improved supervised method builds on the second: textons are generated by k-means clustering, and texton maps representing vessels are derived by back-projecting pixel clusters onto hand-labelled ground truth. A further step ensures that the best combinations of textons are represented in the map and subsequently used to identify vessels in the test set. The experimental results on two benchmark datasets show that our proposed method performs well compared with other published work and with the results of human experts. A further test of our system on an independent set of optical fundus images verified its consistent performance. Statistical analysis of the experimental results also reveals that it is possible to train unified textons for retinal vessel segmentation. In the fourth method, a novel scheme using a Gabor filter bank for vessel feature extraction is proposed. The method is inspired by the human visual system. Machine learning is used to optimize the Gabor filter parameters. The experimental results demonstrate that our method significantly enhances the true positive rate while maintaining a level of specificity comparable with other approaches. Finally, we propose a new unsupervised texton-based retinal vessel segmentation method using the derivative of SIFT and multi-scale Gabor filters. The lack of sufficient quantities of hand-labelled ground truth and the high level of variability in ground-truth labels amongst experts motivate this approach. The evaluation results reveal that our unsupervised segmentation method is comparable with the best supervised and state-of-the-art methods
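The texton-generation step above (clustering filter-bank responses with k-means) can be sketched as follows. The two-dimensional responses, cluster count, and deterministic initialization are illustrative assumptions, not the thesis's actual MR11 filter bank:

```python
def kmeans(points, k, iters=20):
    """Plain k-means over filter-bank response vectors (one per pixel).
    Deterministic initialization: first and last points as seeds."""
    centers = [points[0], points[-1]][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each response vector to its nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Toy 2-D responses: (bar-detector output, edge-detector output) per pixel.
# Vessel pixels respond strongly to the bar filter, background pixels do not.
vessel = [(0.9 + 0.01 * i, 0.1) for i in range(5)]
background = [(0.1, 0.05 + 0.01 * i) for i in range(5)]
textons = kmeans(vessel + background, k=2)  # one texton per class
```

In the supervised setting, the ground-truth labels identify which cluster centers represent vessel textons; unseen pixels are then labeled by nearest-texton lookup.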
GGM classifier with multi-scale line detectors for retinal vessel segmentation
Persistent changes in the diameter of retinal blood vessels may indicate chronic eye disease. Computer-assisted observation of such changes can be difficult because of interfering pathologies around the blood vessels in retinal fundus images, which lowers the sensitivity of some computerized detection methods to thin vessels. Recently, the multi-scale line detection method has proved valuable for improving sensitivity to lower-caliber vessels, largely because of its adaptive property of responding more to the length pattern of a vessel than to its width. However, the method lacks a good aggregation process for the individual line detectors. This paper introduces a supervised generalized Gaussian mixture classifier as a robust solution to the aggregation process. The classifier is built with class-conditional probability density functions as a logistic function of linear mixtures. To boost the classifier's performance, it is trained on weighted scale images modeled as Gaussian mixtures. The net effect is increased sensitivity to small vessels. The classifier's performance has been tested on three commonly available data sets: DRIVE, STARE, and CHASE_DB1. The results of the proposed method (with accuracies of 96%, 96.1% and 95% on DRIVE, STARE, and CHASE_DB1, respectively) demonstrate its competitiveness against state-of-the-art methods and its reliability for vessel segmentation
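The basic line detector that the paper builds on scores each pixel by the strongest mean intensity along an oriented line segment relative to the local window mean, and the multi-scale variant repeats this over several segment lengths. A minimal sketch, with a plain max standing in for the paper's learned GGM aggregation:

```python
import math

def line_response(img, y, x, length, n_angles=12):
    """Max over orientations of (mean along a line segment of `length`)
    minus (mean over the length x length window); bright vessels on a
    dark background give large positive responses."""
    h, w, half = len(img), len(img[0]), length // 2
    window = [img[j][i]
              for j in range(max(0, y - half), min(h, y + half + 1))
              for i in range(max(0, x - half), min(w, x + half + 1))]
    win_mean = sum(window) / len(window)
    best = float("-inf")
    for a in range(n_angles):
        t = math.pi * a / n_angles
        vals = []
        for s in range(-half, half + 1):
            j = y + round(s * math.sin(t))
            i = x + round(s * math.cos(t))
            if 0 <= j < h and 0 <= i < w:
                vals.append(img[j][i])
        best = max(best, sum(vals) / len(vals) - win_mean)
    return best

# A bright vertical vessel of width 1 in a dark 9x9 patch.
img = [[1.0 if i == 4 else 0.0 for i in range(9)] for _ in range(9)]
# Multi-scale aggregation here is a plain max over lengths; the paper
# replaces this hand-crafted step with a trained GGM classifier.
on_vessel = max(line_response(img, 4, 4, L) for L in (3, 5, 7))
off_vessel = max(line_response(img, 4, 1, L) for L in (3, 5, 7))
```

The on-vessel pixel responds strongly while the background pixel does not, which is the per-scale evidence the classifier aggregates.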
Empirical Study of Vessel Extraction Algorithms
Medical imaging creates images of the human body in order to diagnose conditions such as stenosis, aneurysm, arteriovenous malformation, thrombus, plaque and internal bleeding. Blood vessel segmentation is critical in the diagnosis of a variety of diseases: segmented blood vessels provide much useful information about vessel anatomy and location, which is important in many medical applications, including diagnosis, surgical planning, and radiation treatment. A significant amount of research has gone into vessel segmentation, and a variety of techniques have emerged as a result, including active contour, hybrid, thresholding, watershed and edge detection segmentation techniques. Magnetic resonance images of blood vessels are often degraded by noise, and inappropriately chosen techniques yield poor visibility; in other words, there is no single approach that produces a perfect result on all images. Some methods use gray-level histograms, while others integrate spatial image information, and both can produce noisy outcomes. We therefore build a medical imaging vessel visualization system using MATLAB and empirically investigate the visibility performance of vessel extraction algorithms. We implement two vessel extraction algorithms: an active contour algorithm and an edge detection algorithm. We observed that the edge detection algorithm (Sobel) produced clearer images than the active contour algorithm. This project enables the IS department to carry out more advanced research in medical imaging
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
The health and function of tissue rely on its vasculature network to provide
reliable blood perfusion. Volumetric imaging approaches, such as multiphoton
microscopy, are able to generate detailed 3D images of blood vessels that could
contribute to our understanding of the role of vascular structure in normal
physiology and in disease mechanisms. The segmentation of vessels, a core image
analysis problem, is a bottleneck that has prevented the systematic comparison
of 3D vascular architecture across experimental populations. We explored the
use of convolutional neural networks to segment 3D vessels within volumetric in
vivo images acquired by multiphoton microscopy. We evaluated different network
architectures and machine learning techniques in the context of this
segmentation problem. We show that our optimized convolutional neural network
architecture, which we call DeepVess, yielded a segmentation accuracy that was
better than both the current state-of-the-art and a trained human annotator,
while also being orders of magnitude faster. To explore the effects of aging
and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of
cortical blood vessels in young and old mouse models of Alzheimer's disease and
wild type littermates. We found little difference in the distribution of
capillary diameter or tortuosity between these groups, but did note a decrease
in the number of longer capillary segments () in aged animals as
compared to young, in both wild type and Alzheimer's disease mouse models.
Comment: 34 pages, 9 figures
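The group comparison rests on standard vessel morphometrics. As an illustration, the usual arc-over-chord tortuosity of a traced 3D centerline can be computed as below; the centerline coordinates are invented toy data, not DeepVess output:

```python
import math

def tortuosity(path):
    """Arc length divided by chord length of a centerline;
    exactly 1.0 for a perfectly straight segment."""
    arc = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    chord = math.dist(path[0], path[-1])
    return arc / chord

# Two toy 3D capillary centerlines (voxel coordinates).
straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
bent = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]
t_straight = tortuosity(straight)
t_bent = tortuosity(bent)
```

Segment length is simply the arc term, so the same traversal yields both measures compared across the age and genotype groups.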
Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier
Coronary artery centerline extraction in cardiac CT angiography (CCTA) images
is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We
propose an algorithm that extracts coronary artery centerlines in CCTA using a
convolutional neural network (CNN).
A 3D dilated CNN is trained to predict the most likely direction and radius
of an artery at any given point in a CCTA image based on a local image patch.
Starting from a single seed point placed manually or automatically anywhere in
a coronary artery, a tracker follows the vessel centerline in two directions
using the predictions of the CNN. Tracking is terminated when no direction can
be identified with high certainty.
The CNN was trained using 32 manually annotated centerlines in a training set
consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery
Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08
challenge showed that extracted centerlines had an average overlap of 93.7%
with 96 manually annotated reference centerlines. Extracted centerline points
were highly accurate, with an average distance of 0.21 mm to reference
centerline points. In a second test set consisting of 50 CCTA scans, 5,448
markers in the coronary arteries were used as seed points to extract single
centerlines. This showed strong correspondence between extracted centerlines
and manually placed markers. In a third test set containing 36 CCTA scans,
fully automatic seeding and centerline extraction led to extraction of on
average 92% of clinically relevant coronary artery segments.
The proposed method is able to accurately and efficiently determine the
direction and radius of coronary arteries. The method can be trained with
limited training data, and once trained allows fast automatic or interactive
extraction of coronary artery trees from CCTA images.
Comment: Accepted in Medical Image Analysis
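The tracking loop described above can be sketched as follows. Here `predict` is a hypothetical stand-in for the trained CNN (which in the paper classifies a discretized direction and regresses a radius from a local 3D patch), and for brevity the sketch tracks in one direction only, whereas the actual tracker proceeds in both directions from the seed:

```python
def track_centerline(predict, seed, step=0.5, min_conf=0.5, max_steps=1000):
    """Step along predicted directions from a seed point until the
    direction can no longer be identified with high certainty."""
    path = [seed]
    point = seed
    for _ in range(max_steps):
        direction, confidence = predict(point)
        if confidence < min_conf:
            break  # terminate: no direction identified with certainty
        point = tuple(c + step * d for c, d in zip(point, direction))
        path.append(point)
    return path

def predict(point):
    """Hypothetical CNN stand-in: a straight vessel along x, ending at x = 5."""
    return (1.0, 0.0, 0.0), (1.0 if point[0] < 5.0 else 0.0)

path = track_centerline(predict, (0.0, 0.0, 0.0))
```

With a 0.5 mm step the toy tracker takes ten steps and stops at the vessel end, mirroring the certainty-based termination criterion in the abstract.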
Tracking and diameter estimation of retinal vessels using Gaussian process and Radon transform
Extraction of blood vessels in retinal images is an important step for computer-aided diagnosis of
ophthalmic pathologies. We propose an approach for blood vessel tracking and diameter estimation. We hypothesize
that the curvature and the diameter of blood vessels are Gaussian processes (GPs). Local Radon transform,
which is robust against noise, is subsequently used to compute the features and train the GPs. By learning
the kernelized covariance matrix from training data, vessel direction and its diameter are estimated. In order to
detect bifurcations, multiple GPs are used and the difference between their corresponding predicted directions is
quantified. The combination of Radon features and GP results in a good performance in the presence of noise.
The proposed method successfully deals with typically difficult cases such as bifurcations and central arterial
reflex, and also tracks thin vessels with high accuracy. Experiments are conducted on the publicly available
DRIVE, STARE, CHASEDB1, and high-resolution fundus databases evaluating sensitivity, specificity, and
Matthews correlation coefficient (MCC). Experimental results on these datasets show that the proposed method
reaches an average sensitivity of 75.67%, specificity of 97.46%, and MCC of 72.18% which is comparable to the
state-of-the-art
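As a sketch of the regression idea, the snippet below fits a small Gaussian process with an RBF kernel to diameters observed along a vessel and predicts the diameter at an unobserved arc length. The kernel choice, hyperparameters, and data are illustrative assumptions; the paper learns the kernelized covariance from Radon-transform features rather than using a fixed kernel:

```python
import math

def rbf(x, z, ell=1.0):
    """Squared-exponential covariance between two arc-length positions."""
    return math.exp(-((x - z) ** 2) / (2 * ell * ell))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(xs, ys, xq, noise=1e-6):
    """GP posterior mean at query xq: k(xq, X) @ K^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(xq, a) * w for a, w in zip(xs, alpha))

# Diameters (in pixels) observed at arc lengths 0..4 along a tapering vessel.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0, 2.9, 2.8, 2.7, 2.6]
d = gp_mean(xs, ys, 2.5)  # predicted diameter between observations
```

Bifurcation detection in the paper runs several such GPs and flags points where their predicted directions diverge.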
A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head
Purpose: To develop a deep learning approach to de-noise optical coherence
tomography (OCT) B-scans of the optic nerve head (ONH).
Methods: Volume scans consisting of 97 horizontal B-scans were acquired
through the center of the ONH using a commercial OCT device (Spectralis) for
both eyes of 20 subjects. For each eye, single-frame (without signal
averaging), and multi-frame (75x signal averaging) volume scans were obtained.
A custom deep learning network was then designed and trained with 2,328 "clean
B-scans" (multi-frame B-scans), and their corresponding "noisy B-scans" (clean
B-scans + Gaussian noise) to de-noise the single-frame B-scans. The performance
of the de-noising algorithm was assessed qualitatively, and quantitatively on
1,552 B-scans using the signal to noise ratio (SNR), contrast to noise ratio
(CNR), and mean structural similarity index metrics (MSSIM).
Results: The proposed algorithm successfully denoised unseen single-frame OCT
B-scans. The denoised B-scans were qualitatively similar to their corresponding
multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR
increased from dB (single-frame) to dB
(denoised). For all the ONH tissues, the mean CNR increased from (single-frame) to (denoised). The MSSIM increased from
(single frame) to (denoised) when compared with
the corresponding multi-frame B-scans.
Conclusions: Our deep learning algorithm can denoise a single-frame OCT
B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior
quality OCT B-scans with reduced scanning times and minimal patient discomfort
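The SNR and CNR figures of merit used in this evaluation have standard definitions, sketched below on toy intensity values; the exact regions of interest and averaging protocol of the study are not reproduced:

```python
import math

def snr_db(reference, image):
    """Signal-to-noise ratio in dB: reference power over residual power."""
    p_signal = sum(r * r for r in reference) / len(reference)
    p_noise = sum((r - x) ** 2 for r, x in zip(reference, image)) / len(reference)
    return 10 * math.log10(p_signal / p_noise)

def cnr(region, background):
    """Contrast-to-noise ratio of a tissue region against background."""
    mu_r = sum(region) / len(region)
    mu_b = sum(background) / len(background)
    var_b = sum((b - mu_b) ** 2 for b in background) / len(background)
    return abs(mu_r - mu_b) / math.sqrt(var_b)

# Toy A-scan values: a "clean" multi-frame line and a denoised single frame.
clean = [10.0, 12.0, 11.0, 13.0]
denoised = [10.5, 11.5, 11.5, 12.5]
snr = snr_db(clean, denoised)
contrast = cnr([5.0, 5.0, 5.0], [1.0, 2.0, 3.0])
```

In the study these metrics, together with MSSIM, quantify how closely a denoised single-frame B-scan approaches its 75x-averaged multi-frame counterpart.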
Study of Image Local Scale Structure Using Nonlinear Diffusion
Multi-scale representation and local scale extraction of images are important in computer vision research because, in general, the structures within images are unknown. Traditionally, multi-scale analysis is based on linear diffusion (i.e. heat diffusion), which is known to distort edges. In addition, the term "scale", widely used in multi-scale and local scale analysis, has no consistent definition, which can pose difficulties in real image analysis, especially for the proper interpretation of scale as a geometric measure. In this study, to overcome the limitations of linear diffusion, we focus on multi-scale analysis based on the total variation minimization model. This model has been used in image denoising because of its ability to preserve edge structures. Based on the total variation model, we construct the multi-scale space and propose a definition of image local scale. The new definition of local scale incorporates both pixel-wise and orientation information. It can be interpreted with a clear geometrical meaning and applied in general image analysis. The potential applications of the total variation model in retinal fundus image analysis are explored. The presence of both blood vessel and drusen structures within a single fundus image makes the analysis challenging. A multi-scale model based on total variation is used, demonstrating its capability in both drusen and blood vessel detection. The performance of vessel detection is compared with publicly available methods, showing improvements both quantitatively and qualitatively. This study provides better insight into local scale and shows the potential of the total variation model in medical image analysis
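As an illustration of the edge-preserving property that motivates the total variation model, the snippet below minimizes a smoothed 1D ROF energy by gradient descent on a noisy step signal. This is a generic sketch, not the thesis's multi-scale construction; the smoothing parameter `eps` and the step size are implementation conveniences:

```python
import math

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=3000):
    """Gradient descent on the smoothed ROF energy
    0.5*sum((u-f)^2) + lam*sum(sqrt((u[i+1]-u[i])^2 + eps)),
    which flattens noise while keeping large jumps (edges)."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]       # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = lam * d / math.sqrt(d * d + eps)  # smoothed |.| derivative
            g[i] -= w
            g[i + 1] += w
        u = [u[i] - step * g[i] for i in range(n)]
    return u

noisy_step = [0.1, -0.05, 0.02, 0.95, 1.1, 1.03]  # noisy edge between i=2 and 3
u = tv_denoise_1d(noisy_step)
```

Unlike linear (heat) diffusion, which blurs the jump away as scale increases, the total variation flow flattens each plateau while the step edge survives, which is exactly the behavior the multi-scale construction exploits.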
Digital ocular fundus imaging: a review
Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques, such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis. They also compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, optic disk and the fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view and image resolution to identify the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs.
Fundação para a Ciência e Tecnologia; FEDER; Programa COMPETE