Joint segmentation and classification of retinal arteries/veins from fundus images
Objective: Automatic artery/vein (A/V) segmentation from fundus images is
required to track blood vessel changes occurring with many pathologies
including retinopathy and cardiovascular pathologies. One of the clinical
measures that quantifies vessel changes is the arterio-venous ratio (AVR) which
represents the ratio between artery and vein diameters. This measure
significantly depends on the accuracy of vessel segmentation and classification
into arteries and veins. This paper proposes a fast, novel method for semantic
A/V segmentation combining deep learning and graph propagation.
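The arterio-venous ratio described above can be sketched in a few lines. This is a simplified mean-diameter ratio for illustration only; clinical AVR is usually computed from central retinal artery/vein equivalents (e.g., Knudtson's formula), and the function name and inputs here are illustrative, not from the paper.

```python
# Simplified sketch of the arterio-venous ratio (AVR): the ratio of artery
# to vein diameters. Illustrative only; clinical AVR uses central retinal
# artery/vein equivalents rather than a plain mean of diameters.

def avr(artery_diameters, vein_diameters):
    """AVR = mean artery diameter / mean vein diameter (simplified)."""
    if not artery_diameters or not vein_diameters:
        raise ValueError("need at least one artery and one vein measurement")
    mean_a = sum(artery_diameters) / len(artery_diameters)
    mean_v = sum(vein_diameters) / len(vein_diameters)
    return mean_a / mean_v

# Diameters in pixels (illustrative); a healthy AVR is typically around 2/3.
print(avr([90.0, 100.0, 110.0], [140.0, 150.0, 160.0]))
```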
Methods: A convolutional neural network (CNN) is proposed to jointly segment
and classify vessels into arteries and veins. The initial CNN labeling is
propagated through a graph representation of the retinal vasculature, whose
nodes are defined as the vessel branches and edges are weighted by the cost of
linking pairs of branches. To efficiently propagate the labels, the graph is
simplified into its minimum spanning tree.
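The propagation step above can be sketched as follows: build a minimum spanning tree over the branch graph, then spread the initial CNN labels along tree edges. This is a minimal sketch under assumed inputs; the graph, edge costs, and propagation rule here are illustrative, not the paper's actual cost function.

```python
# Hedged sketch: propagate artery ('A') / vein ('V') labels over a minimum
# spanning tree of vessel branches. Edge costs and seed labels are made up
# for illustration; the paper's linking cost is not reproduced here.
from collections import deque

def mst_kruskal(n, edges):
    """Kruskal's algorithm. edges: list of (cost, u, v). Returns adjacency lists."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    adj = [[] for _ in range(n)]
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # keep only edges that join two components
            parent[ru] = rv
            adj[u].append(v)
            adj[v].append(u)
    return adj

def propagate_labels(adj, labels):
    """BFS from branches the CNN already labeled; None means unlabeled."""
    labels = list(labels)
    q = deque(i for i, lab in enumerate(labels) if lab is not None)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if labels[v] is None:
                labels[v] = labels[u]
                q.append(v)
    return labels

# Four branches; branch 0 is a CNN-labeled artery, branch 3 a vein.
edges = [(0.2, 0, 1), (0.9, 0, 2), (0.1, 1, 2), (0.4, 2, 3)]
adj = mst_kruskal(4, edges)
print(propagate_labels(adj, ['A', None, None, 'V']))  # ['A', 'A', 'V', 'V']
```

Restricting propagation to the MST avoids conflicting label paths through cycles in the vascular graph, which is one plausible reading of why the paper simplifies the graph before propagating.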
Results: The method achieves an accuracy of 94.8% for vessel segmentation.
The A/V classification achieves a specificity of 92.9% and a sensitivity of
93.7% on the CT-DRIVE database, compared with state-of-the-art specificity and
sensitivity of 91.7% each.
Conclusion: The results show that our method outperforms the leading previous
works on a public dataset for A/V classification and is by far the fastest.
Significance: The proposed global AVR, calculated on the whole fundus image
using our automatic A/V segmentation method, can better track vessel changes
associated with diabetic retinopathy than the standard local AVR calculated only
around the optic disc.
Comment: Preprint accepted in Artificial Intelligence in Medicine
Deep Neural Ensemble for Retinal Vessel Segmentation in Fundus Images towards Achieving Label-free Angiography
Automated segmentation of retinal blood vessels in label-free fundus images
plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies,
viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases.
The challenge remains active in medical image analysis research due to the
varied distribution of blood vessels, which vary in dimension and physical
appearance against a noisy background.
In this paper we formulate the segmentation challenge as a classification
task. Specifically, we employ unsupervised hierarchical feature learning using
an ensemble of two levels of sparsely trained denoising stacked autoencoders.
First-level training with bootstrap samples ensures decoupling, and the
second-level ensemble, formed from different network architectures, ensures
architectural variation. We show that ensemble training of autoencoders
fosters diversity in the learned dictionary of visual kernels for vessel
segmentation. A softmax classifier is used for fine-tuning each member
autoencoder, and multiple strategies are explored for two-level fusion of
ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy
of 95.33% with an impressively low standard deviation of 0.003 and a Kappa
agreement coefficient of 0.708. Comparison with other major algorithms
substantiates the high efficacy of our model.
Comment: Accepted as a conference paper at IEEE EMBC, 201
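The two-level ensemble idea from this abstract, bootstrap resampling for decoupled first-level training and a fusion rule at the second level, can be sketched with toy stand-in models. The threshold "models" and the averaging fusion below are assumptions for illustration; the paper's members are denoising stacked autoencoders, and it explores several fusion strategies.

```python
# Illustrative sketch of a two-level ensemble: level 1 trains each member on
# a bootstrap sample (decoupling); level 2 fuses member outputs. Tiny
# threshold "models" stand in for the paper's denoising stacked autoencoders.
import random

def bootstrap(data, rng):
    """Sample with replacement, same size as the original set."""
    return [rng.choice(data) for _ in data]

def train_threshold_model(sample):
    # Stand-in "training": use the mean intensity as a decision threshold.
    return sum(x for x, _ in sample) / len(sample)

def predict(threshold, x):
    # Probability that pixel intensity x belongs to a vessel.
    return 1.0 if x > threshold else 0.0

def ensemble_predict(models, x):
    # Level-2 fusion: average member probabilities (one simple strategy).
    return sum(predict(m, x) for m in models) / len(models)

rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]  # (intensity, vessel label)
models = [train_threshold_model(bootstrap(data, rng)) for _ in range(5)]
print(ensemble_predict(models, 0.85))
```

Bootstrap resampling gives each member a slightly different view of the data, which is what "decoupling" buys: member errors become less correlated, so averaging them reduces variance.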
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient in solving complicated medical tasks or for creating insights using
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
Supervised machine learning based multi-task artificial intelligence classification of retinopathies
Artificial intelligence (AI) classification holds promise as a novel and
affordable screening tool for clinical management of ocular diseases. Rural and
underserved areas, which suffer from a lack of access to experienced
ophthalmologists, may particularly benefit from this technology. Quantitative
optical coherence tomography angiography (OCTA) imaging provides excellent
capability to identify subtle vascular distortions, which are useful for
classifying retinovascular diseases. However, application of AI for
differentiation and classification of multiple eye diseases is not yet
established. In this study, we demonstrate supervised machine learning based
multi-task OCTA classification. We sought 1) to differentiate normal from
diseased ocular conditions, 2) to differentiate different ocular disease
conditions from each other, and 3) to stage the severity of each ocular
condition. Quantitative OCTA features, including blood vessel tortuosity (BVT),
blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel
density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour
irregularity (FAZ-CI) were fully automatically extracted from the OCTA images.
A stepwise backward elimination approach was employed to identify sensitive
OCTA features and optimal-feature-combinations for the multi-task
classification. For proof-of-concept demonstration, diabetic retinopathy (DR)
and sickle cell retinopathy (SCR) were used to validate the supervised machine
learning classifier. The presented AI classification methodology is applicable
and can be readily extended to other ocular diseases, holding promise to enable
a mass-screening platform for clinical deployment and telemedicine.
Comment: Supplemental material attached at the end
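The stepwise backward elimination described above can be sketched generically: start from all OCTA features and greedily drop any feature whose removal does not hurt a validation score. The toy scoring function below is an assumption standing in for the paper's classifier performance; only the feature names (BVT, BVC, VPI, BVD, FAZ-A, FAZ-CI) come from the abstract.

```python
# Hedged sketch of stepwise backward elimination over the OCTA features
# named in the abstract. The score function is a placeholder for classifier
# accuracy on a validation set; it is NOT the paper's actual criterion.

def backward_elimination(features, score):
    """Greedily drop features one at a time while the score does not decrease."""
    current = list(features)
    best = score(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in list(current):
            trial = [g for g in current if g != f]
            s = score(trial)
            if s >= best:          # removing f does not hurt: drop it
                best, current, improved = s, trial, True
                break
    return current, best

# Toy score: pretend only BVD and FAZ-A carry signal; every extra feature
# costs a small penalty (mimicking overfitting on irrelevant inputs).
def toy_score(subset):
    useful = {"BVD", "FAZ-A"}
    return len(useful & set(subset)) - 0.01 * len(subset)

features = ["BVT", "BVC", "VPI", "BVD", "FAZ-A", "FAZ-CI"]
print(backward_elimination(features, toy_score))
```

Under this toy score the procedure strips away the four uninformative features and keeps BVD and FAZ-A, which mirrors the abstract's goal of finding sensitive features and optimal feature combinations per classification task.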