Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art
performance for automatic medical image segmentation. However, they have not
demonstrated sufficiently accurate and robust results for clinical use. In
addition, they are limited by the lack of image-specific adaptation and the
lack of generalizability to previously unseen object classes. To address these
problems, we propose a novel deep learning-based framework for interactive
segmentation by incorporating CNNs into a bounding box and scribble-based
segmentation pipeline. We propose image-specific fine-tuning to make a CNN
model adaptive to a specific test image, which can be either unsupervised
(without additional user interactions) or supervised (with additional
scribbles). We also propose a weighted loss function considering network and
interaction-based uncertainty for the fine-tuning. We applied this framework to
two applications: 2D segmentation of multiple organs from fetal MR slices,
where only two types of these organs were annotated for training; and 3D
segmentation of brain tumor core (excluding edema) and whole brain tumor
(including edema) from different MR sequences, where only tumor cores in one MR
sequence were annotated for training. Experimental results show that 1) our
model is more robust to segment previously unseen objects than state-of-the-art
CNNs; 2) image-specific fine-tuning with the proposed weighted loss function
significantly improves segmentation accuracy; and 3) our method leads to
accurate results with fewer user interactions and less user time than
traditional interactive segmentation methods.
Comment: 11 pages, 11 figures
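The abstract describes a weighted loss that accounts for network and interaction-based uncertainty without giving its form. As a minimal sketch of the idea (the function name and weighting scheme below are illustrative, not the paper's own), a per-pixel weighted cross-entropy can up-weight pixels whose labels come from user scribbles and down-weight pixels labeled by the network's own uncertain predictions:

```python
import numpy as np

def weighted_pixel_loss(probs, labels, weights, eps=1e-7):
    """Per-pixel weighted cross-entropy (illustrative sketch).

    probs   -- (N, C) predicted class probabilities, one row per pixel
    labels  -- (N,)   target label per pixel (e.g. from user scribbles
                      or from the network's own prediction)
    weights -- (N,)   per-pixel weight: e.g. high where the user
                      scribbled (certain), lower where the target is
                      the network's own uncertain prediction
    """
    n = len(labels)
    # probability assigned to the target class of each pixel
    p = np.clip(probs[np.arange(n), labels], eps, 1.0)
    # weighted mean negative log-likelihood
    return float(np.sum(weights * -np.log(p)) / np.sum(weights))
```

During fine-tuning, such a loss lets the scribbled pixels dominate the gradient while still exploiting the unlabeled majority of the image.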
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017
3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNNs) have achieved state-of-the-art performance in a wide
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performance and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
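The voxelwise voting strategy for merging several cascaded 2D-3D models can be sketched as a per-voxel majority vote over hard label maps (an illustrative reading of the abstract; the paper's exact merging rule, e.g. tie-breaking or soft averaging, may differ):

```python
import numpy as np

def voxelwise_vote(label_maps):
    """Merge the hard label maps of several models by majority vote
    per voxel (illustrative sketch). Ties go to the lower class index.

    label_maps -- list of equally-shaped integer arrays of predicted
                  class labels, one per cascaded 2D-3D model
    """
    stacked = np.stack(label_maps)          # shape: (n_models, *volume)
    n_classes = int(stacked.max()) + 1
    # per-class vote counts for every voxel
    votes = np.stack([(stacked == c).sum(axis=0)
                      for c in range(n_classes)])
    return votes.argmax(axis=0)             # most-voted class per voxel
```

Merging outputs this way reduces the variance introduced by any single architectural choice, which is the motivation the abstract gives for the ensemble.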
Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations
Machine learning and computer vision have grown considerably in recent
years. Their chief advantages lie in their automation, adaptability, and
ability to produce impressive results within seconds in a reproducible
manner, aided by advances in the computing capabilities of current
graphics processing units and by highly efficient implementations of
these techniques. Hence, in this paper, we survey the key studies
published between 2014 and 2020, showcasing the
different machine learning algorithms researchers have used to segment the
liver, hepatic-tumors, and hepatic-vasculature structures. We divide the
surveyed studies based on the tissue of interest (hepatic-parenchyma,
hepatic-tumors, or hepatic-vessels), highlighting the studies that tackle more
than one task simultaneously. Additionally, the machine learning
algorithms are classified as either supervised or unsupervised, and
further subdivided when the number of works under a given scheme is
significant. Moreover, the datasets and challenges found in the
literature and on public websites that contain masks of the
aforementioned tissues are thoroughly discussed, highlighting the
organizers' original contributions as well as those of other
researchers. The metrics used most extensively in the literature are
also reviewed, stressing their relevance to the task at hand. Finally,
critical challenges and
future directions are emphasized for innovative researchers to tackle, exposing
gaps that need addressing, such as the scarcity of studies on vessel
segmentation and why this gap needs to be closed quickly.
Comment: 41 pages, 4 figures, 13 equations, 1 table. A review paper on
liver tissue segmentation based on automated ML-based techniques
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
In this paper, we present UNet++, a new, more powerful architecture for
medical image segmentation. Our architecture is essentially a deeply-supervised
encoder-decoder network where the encoder and decoder sub-networks are
connected through a series of nested, dense skip pathways. The re-designed skip
pathways aim at reducing the semantic gap between the feature maps of the
encoder and decoder sub-networks. We argue that the optimizer would deal with
an easier learning task when the feature maps from the decoder and encoder
networks are semantically similar. We have evaluated UNet++ in comparison with
U-Net and wide U-Net architectures across multiple medical image segmentation
tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation
in microscopy images, liver segmentation in abdominal CT scans, and polyp
segmentation in colonoscopy videos. Our experiments
demonstrate that UNet++ with deep supervision achieves an average IoU gain of
3.9 and 3.4 points over U-Net and wide U-Net, respectively.
Comment: 8 pages, 3 figures, 3 tables, accepted by the 4th Deep Learning in
Medical Image Analysis (DLMIA) Workshop
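The reported gains are in Intersection over Union (IoU). For reference, the metric on binary masks is (a sketch of the standard definition; evaluation details such as per-image vs. per-dataset averaging may differ from the paper's protocol):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:          # both masks empty: define IoU = 1
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)
```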
Effective Brain Tumor Classification Using Deep Residual Network-Based Transfer Learning
Brain tumor classification is an essential task in medical image processing
that assists doctors in making accurate diagnoses and treatment plans. A Deep
Residual Network (ResNet-50) based transfer learning approach to a fully
convolutional Convolutional Neural Network (CNN) is proposed to classify
brain tumors in Magnetic Resonance Images (MRI) from the BRATS 2020 dataset.
The dataset consists of a variety of pre-operative MRI scans of brain tumors,
namely gliomas, that vary widely in appearance, shape, and histology. The
50-layer residual network deeply convolves the multiple categories of tumor
images in the classification task using convolution blocks and identity
blocks. Limitations of earlier models, such as the limited accuracy and
algorithmic complexity of the CNN-based ME-Net and the classification issues
of YOLOv2 inceptions, are resolved by the proposed model. The trained CNN
learns boundary and region tasks and extracts useful contextual information
from MRI scans at minimal computational cost. Tumor segmentation and
classification are performed in one step using a U-Net architecture, which
helps retain the spatial features of the image. Multimodality fusion is
implemented to perform the classification and regression tasks by integrating
information across the dataset. The Dice scores of the proposed model for
Enhancing Tumor (ET), Whole Tumor (WT), and Tumor Core (TC) are 0.88, 0.97,
and 0.90 on the BRATS 2020 dataset; the model also achieves 99.94% accuracy,
98.92% sensitivity, 98.63% specificity, and 99.94% precision.
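The per-region scores above use the Dice similarity coefficient. A sketch of the standard definition on binary masks (not the authors' evaluation code; the `eps` guard against empty masks is an implementation choice of this sketch):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for a binary mask of one tumor
    sub-region (e.g. ET, WT, or TC). eps avoids division by zero
    when both masks are empty, at the cost of a score fractionally
    below 1.0 for perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```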