
    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: The revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    Get PDF
    This two-volume set LNCS 12962 and 12963 constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Automated segmentation of colorectal tumor in 3D MRI Using 3D multiscale densely connected convolutional neural network

    Get PDF
    The main goal of this work is to automatically segment colorectal tumors in 3D T2-weighted (T2w) MRI with reasonable accuracy. For this purpose, a novel deep learning-based algorithm suited for volumetric colorectal tumor segmentation is proposed. The proposed CNN architecture, based on a densely connected neural network, contains multiscale dense interconnectivity between layers of fine and coarse scales, leveraging multiscale contextual information to improve the flow of information throughout the network. Additionally, a 3D level-set algorithm was incorporated as a postprocessing step to refine the contours of the network-predicted segmentation. The method was assessed on T2-weighted 3D MRI of 43 patients diagnosed with locally advanced colorectal tumors (cT3/T4). Cross-validation was performed in 100 rounds by partitioning the dataset into 30 volumes for training and 13 for testing. Three performance metrics were computed to assess the similarity between the predicted segmentation and the ground truth (i.e., manual segmentation by an expert radiologist/oncologist): the Dice similarity coefficient (DSC), recall rate (RR), and average surface distance (ASD). Each metric is reported as mean ± standard deviation. The DSC, RR, and ASD were 0.8406 ± 0.0191, 0.8513 ± 0.0201, and 2.6407 ± 2.7975 before postprocessing, and 0.8585 ± 0.0184, 0.8719 ± 0.0195, and 2.5401 ± 2.402 after postprocessing, respectively. We compared the proposed method to existing volumetric medical image segmentation baselines (in particular, 3D U-Net and DenseVoxNet) on our segmentation tasks. The experimental results reveal that the proposed method achieves better performance in colorectal tumor segmentation in volumetric MRI than the other baseline techniques.
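As a rough illustration of the overlap metrics reported above, the DSC and recall rate for a pair of binary 3D segmentation masks can be computed along the following lines. This is a minimal NumPy sketch; the toy volumes and function names are illustrative, not the authors' code, and the ASD (which needs surface extraction and distance transforms) is omitted.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def recall_rate(pred, truth):
    """Recall (sensitivity): fraction of ground-truth voxels recovered."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    true_positives = np.logical_and(pred, truth).sum()
    return true_positives / truth.sum() if truth.sum() else 1.0

# Toy 3D volumes standing in for a predicted and a manual segmentation.
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True          # 8 ground-truth voxels
pred = truth.copy()
pred[1, 1, 1] = False                # one missed voxel -> 7 true positives

print(round(dice_coefficient(pred, truth), 4))  # 2*7/(7+8) = 0.9333
print(round(recall_rate(pred, truth), 4))       # 7/8 = 0.875
```

In practice these metrics are computed per patient volume and then averaged over the test split, which is how the mean ± standard deviation figures above arise.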

    On Medical Image Segmentation and on Modeling Long Term Dependencies

    Get PDF
    The delineation (segmentation) of malignant tumours in medical images is important for cancer diagnosis, the planning of targeted treatments, and the tracking of cancer progression and treatment response. However, although manual segmentation of medical images is accurate, it is time consuming, requires expert operators, and is often impractical with large datasets. This motivates the need for automated segmentation. Automated segmentation of tumours is particularly challenging, however, due to variability in tumour appearance, in image acquisition equipment and acquisition parameters, and across patients. Tumours vary in type, size, location, and quantity; the rest of the image varies due to anatomical differences between patients, prior surgery or ablative therapy, differences in contrast enhancement of tissues, and image artefacts. Furthermore, scanner acquisition protocols vary considerably between clinical sites, and image characteristics vary with the scanner model. Because of all of these variabilities, a segmentation model must be flexible enough to learn general features from the data. The advent of deep convolutional neural networks (CNNs) allowed for accurate and precise classification of highly variable images and, by extension, high-quality image segmentation. However, these models must be trained on enormous quantities of labeled data. 
This constraint is particularly challenging in the context of medical image segmentation because the number of segmentations that can be produced is limited in practice by the need to employ expert operators to do such labeling. Furthermore, the variabilities of interest in medical images appear to follow a long-tailed distribution, meaning that a particularly large amount of training data may be required to provide a sufficient sample of each type of variability to a CNN. This motivates the need to develop strategies for training these models with limited ground-truth segmentations available.

    Cancer diagnosis using deep learning: A bibliographic review

    Get PDF
    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although their diagnostic performance is considered limited. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed. These include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are called for. Artificial intelligence for cancer diagnosis is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. 
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer. Given the length of the manuscript, we restrict ourselves to a discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who plan to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
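The evaluation criteria listed in the abstract (accuracy, sensitivity, specificity, precision, F1 score, Jaccard index) all derive from the binary confusion matrix. A minimal pure-Python sketch of that derivation follows; the toy labels and the helper name `classification_metrics` are illustrative assumptions, not code from the reviewed paper.

```python
def classification_metrics(y_true, y_pred):
    """Binary-classification metrics derived from the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "jaccard": jaccard}

# Toy labels: 1 = malignant, 0 = benign.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]   # one false negative, one false positive
m = classification_metrics(y_true, y_pred)
print(m["accuracy"], m["sensitivity"], m["jaccard"])  # 0.75 0.75 0.6
```

AUC additionally requires ranking the classifier's continuous scores rather than thresholded labels, so it is not reproduced in this sketch.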