11 research outputs found

    Deep learning for diabetic retinopathy detection and classification based on fundus images: A review.

    Diabetic retinopathy is a retinal disease caused by diabetes mellitus and it is the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets that are widely used by the research community and the preprocessing techniques employed (and how these accelerate and improve the models' performance), to the development of such deep learning models for the diagnosis and grading of the disease as well as the localization of the disease's lesions. We also discuss certain models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
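A common preprocessing step reported in the fundus-image literature (illustrative only, not tied to any single reviewed method) is to extract the green channel, where vessels and lesions have the highest contrast, and equalize its histogram. A minimal numpy sketch:

```python
import numpy as np

def preprocess_fundus(rgb):
    """Illustrative fundus preprocessing: take the green channel and
    apply global histogram equalization. Real pipelines often use
    CLAHE and circular field-of-view cropping instead."""
    green = rgb[..., 1].astype(np.uint8)
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * 255).astype(np.uint8)  # intensity lookup table
    return lut[green]

# toy low-contrast synthetic "fundus" image
img = np.full((64, 64, 3), 100, dtype=np.uint8)
img[16:48, 16:48, 1] = 140  # a brighter green patch
out = preprocess_fundus(img)
```

After equalization the brighter region is stretched to the top of the intensity range, which is the contrast boost such preprocessing aims for.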

    Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones

    Abstract Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models’ predictions. In particular, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNN) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice Score ranging from 3% to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
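The abstract reports enhancement gains as increases in Dice Score; as a reference for that metric (this helper is a generic sketch, not the paper's evaluation code), the Dice similarity coefficient between a predicted and a ground-truth mask is 2·|A∩B| / (|A|+|B|):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|intersection| / (|pred| + |gt|), in [0, 1]."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# two overlapping 16-pixel squares; intersection is 9 pixels
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=np.uint8); gt[3:7, 3:7] = 1
score = dice_score(pred, gt)  # 2*9 / (16+16) = 0.5625
```

A "3 to 9% mean increase in Dice" thus means the enhanced models' masks overlap the ground truth by that much more, relative to this coefficient.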

    Segmenting 20 Types of Pollen Grains for the Cretan Pollen Dataset v1 (CPD-1)

    Pollen analysis and the classification of several pollen species is an important task in melissopalynology. The development of machine learning or deep learning based classification models depends on available datasets of pollen grains from various plant species from around the globe. In this paper, Cretan Pollen Dataset v1 (CPD-1) is presented, which is a novel dataset of grains from 20 pollen species from plants gathered in Crete, Greece. The pollen grains were prepared and stained with fuchsin, in order to be captured by a camera attached to a microscope under a ×400 magnification. In addition, a pollen grain segmentation method is presented, which segments and crops each individual pollen grain, achieving an overall detection accuracy of 92%. The final dataset comprises 4034 segmented pollen grains of 20 different pollen species, as well as the raw data and ground truth, as annotated by an expert. The developed dataset is publicly accessible, which we hope will accelerate research in melissopalynology.
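The "segment and crop each grain" step can be illustrated (the paper's actual segmentation of stained grains is more involved; this only sketches the cropping stage under the assumption that a binary foreground mask is already available) with connected-component labelling:

```python
import numpy as np
from scipy import ndimage

def crop_grains(mask, min_area=4):
    """Label connected foreground components in a binary mask and
    return one bounding-box crop per sufficiently large component."""
    labels, n = ndimage.label(mask)
    crops = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        region = mask[sl]
        if region.sum() >= min_area:  # drop speckle noise
            crops.append(region)
    return crops

# toy mask with two separated "grains"
mask = np.zeros((20, 20), dtype=np.uint8)
mask[2:6, 2:6] = 1      # grain 1: 4x4
mask[10:16, 12:18] = 1  # grain 2: 6x6
crops = crop_grains(mask)
```

Each crop then becomes one image in the dataset, which is how the 4034 individual grain images could be produced from the raw microscope captures.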

    Pollen Grain Classification Based on Ensemble Transfer Learning on the Cretan Pollen Dataset

    Pollen identification is an important task for the botanical certification of honey. It is performed via thorough microscopic examination of the pollen present in honey; a process called melissopalynology. However, manual examination of the images is hard, time-consuming and subject to inter- and intra-observer variability. In this study, we investigated the applicability of deep learning models for the classification of pollen-grain images into 20 pollen types, based on the Cretan Pollen Dataset. In particular, we applied transfer and ensemble learning methods to achieve an accuracy of 97.5%, a sensitivity of 96.9%, a precision of 97%, an F1 score of 96.89% and an AUC of 0.9995. However, in a preliminary case study, when we applied the best-performing model to honey-based pollen-grain images, we found that it performed poorly; only 0.02 better than random guessing (i.e., an AUC of 0.52). This indicates that the model should be further fine-tuned on honey-based pollen-grain images to increase its effectiveness on such data.
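The ensemble step can be sketched as soft voting: average the per-class probability outputs of several fine-tuned (transfer-learned) classifiers and take the argmax. The toy probabilities below are placeholders, not the paper's actual networks or data:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting ensemble: average per-class probabilities from
    several classifiers, then pick the highest-probability class."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=-1), avg

# two toy "models", 3 samples, 20 pollen classes
rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(20), size=3)  # rows sum to 1
p2 = rng.dirichlet(np.ones(20), size=3)
labels, avg = ensemble_predict([p1, p2])
```

Averaging calibrated probabilities, rather than hard votes, lets a confident model outweigh an uncertain one, which is one reason soft voting often edges out individual classifiers.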

    Dissecting Tumor-Immune Microenvironment in Breast Cancer at a Spatial and Multiplex Resolution

    The evaluation of the breast cancer immune microenvironment has been increasingly used in clinical practice, either by counting tumor infiltrating lymphocytes or by assessing programmed death ligand 1 expression. However, the spatiotemporal organization of the anti-breast-cancer immune response has yet to be fully explored. Multiplex in situ methods with spectral imaging have emerged to deconvolute the different elements of the tumor immune microenvironment. In this narrative review, we provide an overview of the impact that those methods have in characterizing the spatiotemporal heterogeneity of the breast cancer microenvironment in the neoadjuvant, adjuvant and metastatic settings. Multiplexing in situ can then be useful for new classifications of the tumor microenvironment and the discovery of immune-related biomarkers within their spatial niche.
    The tumor immune microenvironment (TIME) is an important player in breast cancer pathophysiology. Surrogates for antitumor immune response have been explored as predictive biomarkers to immunotherapy, though with several limitations. Immunohistochemistry for programmed death ligand 1 suffers from analytical problems, immune signatures are devoid of spatial information and histopathological evaluation of tumor infiltrating lymphocytes exhibits interobserver variability. Towards improved understanding of the complex interactions in the TIME, several emerging multiplex in situ methods are being developed and gaining much attention for protein detection. They enable the simultaneous evaluation of multiple targets in situ, detection of cell densities/subpopulations as well as estimations of functional states of the immune infiltrate. Furthermore, they can characterize the spatial organization of the TIME (by cell-to-cell interaction analyses and the evaluation of distribution within different regions of interest and tissue compartments), while digital imaging and image analysis software allow for reproducibility of the various assays. In this review, we aim to provide an overview of the different multiplex in situ methods used in cancer research with special focus on the breast cancer TIME at the neoadjuvant, adjuvant and metastatic settings. The spatial heterogeneity of the TIME and the importance of longitudinal evaluation of TIME changes under the pressure of therapy and metastatic progression are also addressed.

    IVUS Longitudinal and Axial Registration for Atherosclerosis Progression Evaluation

    Intravascular ultrasound (IVUS) imaging offers accurate cross-sectional vessel information. Registering temporal IVUS pullbacks acquired at two time points can therefore help clinicians accurately assess pathophysiological changes in the vessels, disease progression and the effect of the treatment intervention. In this paper, we present a novel two-stage registration framework for aligning pairs of IVUS pullbacks longitudinally and axially. Initially, we use a Dynamic Time Warping (DTW)-based algorithm to align the pullbacks temporally. Subsequently, an intensity-based registration method is applied, which utilizes a variant of the Harmony Search optimizer to register each matched pair of frames by maximizing their Mutual Information. The presented method is fully automated and requires only two global image-based measurements, unlike other methods that require the extraction of morphology-based features. On 42 synthetically generated pullback pairs, the method achieved an alignment error of 0.1853 frames per pullback, a rotation error of 0.93° and a translation error of 0.0161 mm. In addition, it was tested on 11 baseline/follow-up and 10 baseline/post-stent-deployment real IVUS pullback pairs from two clinical centres, achieving an alignment error of 4.3±3.9 frames for the longitudinal registration, and a distance error of 0.56±0.323 mm and a rotational error of 12.4°±10.5° for the axial registration. Although the longitudinal performance of the proposed method does not match that of the state-of-the-art, our method relies on computationally lighter steps, which is crucial in real-time applications; for the axial registration, it performs on par with or better than the state-of-the-art. The results indicate that the proposed method can support clinical decision making and diagnosis based on sequential imaging examinations.
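The first stage above is a DTW alignment. As a reference for the idea (this is the textbook DTW recurrence over 1-D per-frame features, not the paper's exact variant), the algorithm fills a cumulative-cost table and backtracks to recover which frames of one pullback match which frames of the other:

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D feature
    sequences (e.g. one scalar per IVUS frame). Returns the total
    alignment cost and the frame-matching path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from the end to recover matched frame indices
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

a = [0.0, 1.0, 2.0, 3.0]
b = [0.0, 0.0, 1.0, 2.0, 3.0]  # same profile with one repeated frame
dist, path = dtw_path(a, b)
```

Because DTW permits one frame to match several, it can absorb the differing pullback speeds and lengths between the two acquisitions, which is what makes it suitable for the temporal stage.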

    OCT sequence registration before and after percutaneous coronary intervention (stent implantation)

    To assess the progression of coronary artery disease, Optical Coherence Tomography (OCT) pullbacks acquired at different time points should be compared. However, the assessment of temporal sequences is a difficult task, as motion artifacts in the longitudinal and axial planes can degrade the quality of manual inspection. To address this challenge, the current study presents a two-stage computational framework for the longitudinal and axial registration of two OCT pullbacks. During the first stage, we focus on the accurate detection of the matching image pairs from the respective series, while during the second stage we focus on the axial registration of the matched pairs so that their common features are aligned. The dataset used includes 19 patients from two clinical centers, with two OCT pullbacks per patient: one before the stent implantation procedure and one after it. We applied our method to a synthetic dataset of OCT pullbacks, which was generated from the in-vivo OCT pullbacks to reproduce the motion artifacts across the planes. In addition, the proposed method was validated on the 19 pairs of in-vivo OCT pullbacks with annotated pre/post-stent-deployment data. The method was able to reduce the alignment error from 32.17±26.14 to 5.6±6.6 frames, the rotational error from 11.59°±11.22° to 1.18°±0.81° and the distance error from 3.07±1.52 mm to 0.46±0.44 mm. In addition, the mean Mutual Information similarity increased by 13.47% after the longitudinal registration and by a further 123.33% after the subsequent axial registration.
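The Mutual Information similarity used to report registration quality above can be estimated from a joint intensity histogram. A minimal sketch (a generic histogram-based estimator, not necessarily the authors' implementation):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information between two equally sized
    images: MI = sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((32, 32))
noise = rng.random((32, 32))
mi_same = mutual_information(img, img)    # high: identical images
mi_rand = mutual_information(img, noise)  # low: independent images
```

MI rises as the intensity patterns of the two frames become statistically dependent, so a well-registered pair scores higher than a misaligned one, which is why the reported MI increases after each registration stage indicate improved alignment.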