Segmentation, Classification, and Quality Assessment of UW-OCTA Images for the Diagnosis of Diabetic Retinopathy
Diabetic Retinopathy (DR) is a severe complication of diabetes that can cause
blindness. Although effective treatments exist (notably laser) to slow the
progression of the disease and prevent blindness, the best treatment remains
prevention through regular check-ups (at least once a year) with an
ophthalmologist. Optical Coherence Tomography Angiography (OCTA) allows for the
visualization of the retinal vascularization and the choroid at the
microvascular level in great detail. This allows doctors to diagnose DR with
more precision. In recent years, algorithms for DR diagnosis have emerged along
with the development of deep learning and the improvement of computer hardware.
However, these usually focus on retina photography. There are no current
methods that can automatically analyze DR using Ultra-Wide OCTA (UW-OCTA). The
Diabetic Retinopathy Analysis Challenge 2022 (DRAC22) provides a standardized
UW-OCTA dataset to train and test the effectiveness of various algorithms on
three tasks: lesion segmentation, quality assessment, and DR grading. In this
paper, we present our solutions for the three tasks of the DRAC22
challenge. The obtained results are promising and placed us in the TOP 5 of
the segmentation task, the TOP 4 of the quality assessment task, and the
TOP 3 of the DR grading task. The code is available at
\url{https://github.com/Mostafa-EHD/Diabetic_Retinopathy_OCTA}
DRAC: Diabetic Retinopathy Analysis Challenge with Ultra-Wide Optical Coherence Tomography Angiography Images
Computer-assisted automatic analysis of diabetic retinopathy (DR) is of great
importance in reducing the risks of vision loss and even blindness. Ultra-wide
optical coherence tomography angiography (UW-OCTA) is a non-invasive and safe
imaging modality for DR diagnosis, but there is a lack of publicly
available benchmarks for model development and evaluation. To promote further
research and scientific benchmarking for diabetic retinopathy analysis using
UW-OCTA images, we organized a challenge named "DRAC - Diabetic Retinopathy
Analysis Challenge" in conjunction with the 25th International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI 2022). The
challenge consists of three tasks: segmentation of DR lesions, image quality
assessment and DR grading. The scientific community responded positively to the
challenge, with 11, 12, and 13 teams from geographically diverse institutes
submitting solutions to the three tasks, respectively. This paper
presents a summary and analysis of the top-performing solutions and results for
each task of the challenge. The obtained results from top algorithms indicate
the importance of data augmentation, model architecture and ensemble of
networks in improving the performance of deep learning models. These findings
have the potential to enable new developments in diabetic retinopathy analysis.
The challenge remains open for post-challenge registrations and submissions for
benchmarking future methodology developments.
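The ensemble of networks highlighted by the top solutions can be illustrated with a minimal, hypothetical sketch (the `ensemble_predict` helper below is illustrative, not taken from any challenge submission): average the softmax probabilities of several models, then take the argmax.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_per_model):
    """Probability-averaging ensemble: convert each model's logits to
    class probabilities, average across models, and take the argmax."""
    probs = np.stack([softmax(l) for l in logits_per_model])
    return probs.mean(axis=0).argmax(axis=-1)
```

Averaging probabilities (rather than raw logits or hard votes) lets a confident model outweigh an uncertain one, which is one common reason such ensembles improve over single networks.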
An optimised pixel-based classification approach for automatic white blood cells segmentation
Pixel-based classification is an automatic approach that classifies every pixel in an image, but it does not take the spatial information of the region of interest into account. Region-growing methods, on the other hand, do exploit neighbouring-pixel information; however, they require a group of pixels called 'points of interest' to initialise the growing process. In this paper, we propose an optimised pixel-based classification that cooperates with a region-growing strategy. This original segmentation scheme proceeds in two phases for the automatic recognition of white blood cells (WBC): the first is a learning step based on the colour characteristics of each pixel in the image; the second applies region growing by classifying the pixels neighbouring the points of interest extracted by the ultimate erosion technique. This process shows that the cooperation yields nucleus and cytoplasm segmentations closer to those expected in the reference images.
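A minimal sketch of the region-growing phase described above, assuming seed pixels have already been extracted (e.g. by ultimate erosion) and a per-pixel classifier `predict` is available; the function name and the grayscale simplification are illustrative, not the paper's implementation:

```python
import numpy as np

def region_grow(image, seeds, predict):
    """Grow a region from seed pixels: a 4-connected neighbour is absorbed
    whenever the per-pixel classifier assigns it the seed's class."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    label = predict(image[seeds[0]])  # class of interest (e.g. nucleus)
    stack = list(seeds)
    while stack:
        r, c = stack.pop()
        if mask[r, c] or predict(image[r, c]) != label:
            continue
        mask[r, c] = True
        # Push the 4-connected neighbours for later examination.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                stack.append((nr, nc))
    return mask
```

In the paper's colour setting, `predict` would be the classifier trained on per-pixel colour characteristics in the first phase.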
Semi-Supervised learning with Collaborative Bagged Multi-label K-Nearest-Neighbors
Over the last few years, multi-label classification has received significant attention from researchers as a way to solve problems in many fields. Manual annotation of the available datasets is time-consuming and demands a huge effort from experts, especially for multi-label applications in which each learning example is associated with several labels at once. To overcome this drawback, and to take advantage of the large amounts of unlabeled data, many semi-supervised approaches have been proposed in the literature to provide sophisticated and fast support for the automatic labeling of the unlabeled data. In this paper, a Collaborative Bagged Multi-label K-Nearest-Neighbors (CobMLKNN) algorithm is proposed, which extends the co-training paradigm with a multi-label K-Nearest-Neighbors algorithm. Experiments on ten real-world multi-label datasets show the effectiveness of CobMLKNN in improving the performance of MLKNN when learning from a small number of labeled samples by exploiting unlabeled ones.
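A rough sketch of the co-training loop behind this idea, simplified to single-label data and a plain k-NN base learner standing in for ML-KNN; the real CobMLKNN draws bootstrap bags of the labelled pool and handles several labels per example, which this illustration omits:

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=3):
    """Plain k-NN majority vote; stands in for the ML-KNN base learner."""
    out = []
    for x in X:
        d = np.linalg.norm(X_train - x, axis=1)
        votes = y_train[np.argsort(d)[:k]]
        out.append(np.bincount(votes).argmax())
    return np.array(out)

def co_train(X_l, y_l, X_u, rounds=3, k=3, per_round=2):
    # Two learners: the real algorithm starts them from bootstrap bags of
    # the labelled pool; for determinism this sketch uses two full copies.
    pools = [[X_l.copy(), y_l.copy()] for _ in range(2)]
    queue = list(range(len(X_u)))
    for _ in range(rounds):
        for src, dst in ((0, 1), (1, 0)):
            take, queue = queue[:per_round], queue[per_round:]
            if not take:
                break
            # One learner pseudo-labels a few unlabelled points...
            Xb = X_u[np.array(take)]
            yb = knn_predict(pools[src][0], pools[src][1], Xb, k)
            # ...and the other learner absorbs them as training data.
            pools[dst][0] = np.vstack([pools[dst][0], Xb])
            pools[dst][1] = np.concatenate([pools[dst][1], yb])
    def predict(X):  # final model: k-NN over both enlarged pools
        Xt = np.vstack([pools[0][0], pools[1][0]])
        yt = np.concatenate([pools[0][1], pools[1][1]])
        return knn_predict(Xt, yt, X, k)
    return predict
```

The exchange of confidently pseudo-labelled points between the two learners is what lets the small labelled set be stretched across the unlabelled pool.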
Influence of normalization and color features on super-pixel classification: application to cytological image segmentation
RANDOM FOREST BASED CLASSIFICATION OF MEDICAL X-RAY IMAGES USING A GENETIC ALGORITHM FOR FEATURE SELECTION
Detection of diabetic retinopathy using longitudinal self-supervised learning
LMT: Longitudinal mixing training, a framework for the prediction of disease progression using a single image
Longitudinal self-supervised learning using neural ordinary differential equation
Longitudinal analysis in medical imaging is crucial to investigate progressive changes in anatomical structures or disease progression over time. In recent years, a novel class of algorithms has emerged with the goal of learning disease progression in a self-supervised manner, using either pairs of consecutive images or time series of images. By capturing temporal patterns without any external labels or supervision, longitudinal self-supervised learning (LSSL) has become a promising avenue. To better understand this core method, we explore in this paper the LSSL algorithm under different scenarios. The original LSSL is embedded in an auto-encoder (AE) structure, whereas conventional self-supervised strategies are usually implemented in a Siamese-like manner. Therefore, as a first novelty, we explore the use of a Siamese-like LSSL. As a second novelty, we consider another core framework, the neural ordinary differential equation (NODE): a neural network architecture that learns the dynamics of ordinary differential equations (ODEs) through the use of neural networks. Many temporal systems, including disease progression, can be described by ODEs, and we believe there is an interesting connection to be made between LSSL and NODE. This paper aims to provide a better understanding of these core algorithms for learning disease progression under the mentioned modifications. In our experiments, we employ a longitudinal dataset named OPHDIAT, targeting diabetic retinopathy (DR) follow-up. Our results demonstrate the applicability of LSSL without a reconstruction term, as well as the potential of incorporating NODE in conjunction with LSSL.
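The NODE forward pass can be sketched as fixed-step integration of a learned vector field. In the sketch below the "dynamics network" is a fixed linear map chosen for illustration; in an LSSL+NODE setting, `f` would instead be a trained network fitted so that integrating the latent code of an early exam approximates the latent code of a later one.

```python
import numpy as np

def odeint_euler(f, z0, t0, t1, steps=100):
    """Fixed-step Euler integration of dz/dt = f(z, t): the forward pass
    of a neural ODE, here with a hand-picked vector field."""
    z, t = np.asarray(z0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        z = z + h * f(z, t)
        t += h
    return z

# Illustrative dynamics: a rotation field, standing in for a trained MLP.
W = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda z, t: W @ z
```

Integrating `z0 = [1, 0]` from `t=0` to `t=pi/2` under this field rotates the state to approximately `[0, -1]`, up to Euler discretization error; adaptive solvers replace the fixed Euler steps in practice.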
Multimodal information fusion for glaucoma and diabetic retinopathy classification