582 research outputs found

    Three-dimensional image classification using hierarchical spatial decomposition: A study using retinal data

    This thesis describes research in image mining, in particular volumetric image mining. It investigates volumetric representation techniques based on hierarchical spatial decomposition for classifying three-dimensional (3D) images. The aim was to establish how effective hierarchical spatial decomposition, coupled with regional homogeneity, is for volumetric data representation. The proposed methods comprise four steps: (i) decomposition, (ii) representation, (iii) single feature vector generation and (iv) classifier generation. In the decomposition step, a given image (volume) is recursively decomposed until either homogeneous regions or a predefined maximum level are reached. Regional homogeneity is measured with several proposed critical functions, each based on the histogram of a given region. Once the image is decomposed, two representation methods are proposed: (i) representing the decomposition by the regions it identifies (region-based) or (ii) representing the entire decomposition (whole image-based). In the first method, each decomposed sub-volume (region) is represented using statistical and histogram-based techniques, and feature vector generation techniques convert the set of per-sub-volume feature vectors into a single feature vector. In the whole image-based method, each image is represented by a tree: each node represents a region (sub-volume) with a single value, and each edge describes the difference between the node and its parent. A frequent sub-tree mining technique was adapted to identify a set of frequent sub-graphs; selected sub-graphs are then used to build a feature vector for each image. In both cases, a standard classifier generator is applied to the generated feature vectors to model and predict the class of each image.
Evaluation was conducted on retinal optical coherence tomography images, in terms of identifying Age-related Macular Degeneration (AMD). Two types of evaluation were used: (i) classification performance evaluation and (ii) statistical significance testing using ANalysis Of VAriance (ANOVA). The evaluation revealed that the proposed methods were effective for classifying 3D retinal images; it is consequently argued that the approaches are generic.
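The recursive decomposition described above can be sketched as an octree split: recurse until a histogram-based homogeneity test passes or a maximum level is reached, then keep one summary value per leaf region. The bin count, the "dominant bin" criterion and the per-region mean below are illustrative assumptions, not the thesis's exact critical functions:

```python
import numpy as np

def is_homogeneous(region, threshold=0.8):
    # Illustrative histogram-based criterion (a stand-in for the thesis's
    # "critical functions"): a region counts as homogeneous when its most
    # frequent intensity bin holds at least `threshold` of the voxels.
    hist, _ = np.histogram(region, bins=16, range=(0, 256))
    return hist.max() / region.size >= threshold

def decompose(volume, max_level, level=0, origin=(0, 0, 0)):
    # Recursively split a power-of-two cube into octants until a region is
    # homogeneous or the maximum level is reached; each leaf is summarised
    # as (origin, shape, mean intensity) for later feature vector generation.
    if level == max_level or min(volume.shape) < 2 or is_homogeneous(volume):
        return [(origin, volume.shape, float(volume.mean()))]
    leaves = []
    hz, hy, hx = (s // 2 for s in volume.shape)
    oz, oy, ox = origin
    for dz in (0, hz):
        for dy in (0, hy):
            for dx in (0, hx):
                sub = volume[dz:dz + hz, dy:dy + hy, dx:dx + hx]
                leaves.extend(decompose(sub, max_level, level + 1,
                                        (oz + dz, oy + dy, ox + dx)))
    return leaves
```

A uniform volume yields a single leaf, while a volume whose top half differs splits once into eight homogeneous octants; concatenating leaf statistics would give the region-based feature vector the abstract describes.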

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become the methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. (Comment: revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures; added missed papers from before Feb 1st 201)

    Digital ocular fundus imaging: a review

    Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis, and compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, the optic disk and the fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view and image resolution, identifying the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs. (Funding: Fundação para a Ciência e Tecnologia; FEDER; Programa COMPET)
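The vessel segmentation problem the review emphasizes rests on the classical observation that vessels appear dark in the green channel of a color fundus photograph. The sketch below inverts the green channel, subtracts a crude global background estimate and thresholds on the score's standard deviation; these are simplifying assumptions, while the surveyed systems use matched filters, morphology or learned classifiers instead:

```python
import numpy as np

def vessel_mask(rgb, k=1.0):
    # Toy illustration of a classical vessel-segmentation pipeline:
    # vessels are dark in the green channel, so invert it, subtract a
    # global background level and threshold. Not any specific surveyed
    # method; the median background and std threshold are assumptions.
    green = rgb[..., 1].astype(float)
    inverted = green.max() - green        # vessels become bright
    background = np.median(inverted)      # crude background estimate
    score = inverted - background
    return score > k * score.std()        # boolean vessel mask
```

Real screening systems additionally correct for non-uniform illumination and mask out the optic disk and fovea, which this sketch ignores.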

    The 7th Conference of PhD Students in Computer Science


    Deep Representation Learning with Limited Data for Biomedical Image Synthesis, Segmentation, and Detection

    Biomedical imaging requires accurate expert annotation and interpretation to aid medical staff and clinicians in automating differential diagnosis and addressing underlying health conditions. With the advent of deep learning, training with large image datasets has become the standard route to expert-level performance in non-invasive biomedical imaging tasks. However, large publicly available datasets are often lacking, which makes it harder for a deep learning model to learn intrinsic representations. Representation learning with limited data has introduced new techniques, such as Generative Adversarial Networks, semi-supervised learning and self-supervised learning, that can be applied to various biomedical applications. For example, ophthalmologists use color funduscopy (CF) and fluorescein angiography (FA) to diagnose retinal degenerative diseases. Fluorescein angiography, however, requires injecting a dye that can cause adverse reactions in patients, so a non-invasive technique is needed that can synthesize fluorescein angiography from fundus images. Similarly, color funduscopy and optical coherence tomography (OCT) are used to semantically segment the vasculature and fluid build-up in spatial and volumetric retinal imaging, which can help with the future prognosis of diseases. Although many automated techniques have been proposed for medical image segmentation, their main drawback is the model's precision in pixel-wise predictions. Another critical challenge in biomedical imaging is accurately segmenting and quantifying the dynamic behavior of calcium signals in cells. Calcium imaging is a widely used approach to studying subcellular calcium activity and cell function; however, large datasets have created a profound need for fast, accurate and standardized analyses of calcium signals.
For example, image sequences of calcium signals in colonic ICC (interstitial cells of Cajal) pacemaker cells suffer from motion artifacts and high periodic and sensor noise, making it difficult to accurately segment and quantify calcium signal events. Moreover, annotating such a large volume of calcium image stacks or videos and extracting the associated spatiotemporal maps is time-consuming and tedious. To address these problems, we propose various deep representation learning architectures that use limited labels and annotations to tackle the critical challenges in these biomedical applications. To this end, we detail our proposed semi-supervised, generative adversarial network and transformer-based architectures for individual learning tasks such as retinal image-to-image translation, vessel and fluid segmentation from fundus and OCT images, breast micro-mass segmentation, and sub-cellular calcium event tracking from videos with spatiotemporal map quantification. We also illustrate two multi-modal multi-task learning frameworks whose applications can be extended to other domains of biomedical imaging. The main idea is to incorporate each of these as individual modules in our proposed multi-modal frameworks to solve the existing challenges with (1) fluorescein angiography synthesis, (2) retinal vessel and fluid segmentation, (3) breast micro-mass segmentation and (4) dynamic quantification of calcium imaging datasets.
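The spatiotemporal maps mentioned above are commonly built by collapsing a calcium image stack along one spatial axis, with a dF/F normalisation making events comparable across cells. A minimal sketch, assuming a (time, y, x) stack, plain averaging and a low-percentile baseline, none of which are claimed to be the thesis's exact pipeline:

```python
import numpy as np

def spatiotemporal_map(stack, axis=2):
    # Collapse a (time, y, x) calcium image stack into a 2-D
    # spatiotemporal (ST) map by averaging intensity along one spatial
    # axis, a common first step before event segmentation.
    stack = np.asarray(stack, dtype=float)
    return stack.mean(axis=axis)          # shape: (time, remaining axis)

def dff(trace, baseline_percentile=10):
    # dF/F normalisation of a 1-D fluorescence trace against a
    # low-percentile baseline F0: (F - F0) / F0.
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / max(f0, 1e-9)
```

Event detection would then amount to finding connected bright regions in the ST map or threshold crossings in the dF/F traces, which is the step the proposed architectures learn from limited annotations.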