DRINet for medical image segmentation
Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net architecture is one of the most well-known CNN architectures for semantic segmentation and has achieved remarkable success in many different medical image segmentation applications. The U-Net architecture consists of standard convolution layers, pooling layers, and upsampling layers. These convolution layers learn representative features of input images and construct segmentations based on the features. However, the features learned by standard convolution layers are not distinctive when the differences among categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), which addresses this challenging problem. The proposed DRINet consists of three blocks, namely a convolutional block with dense connections, a deconvolutional block with residual Inception modules, and an unpooling block. Our proposed architecture outperforms the U-Net in three different challenging applications, namely multi-class segmentation of cerebrospinal fluid (CSF) on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumour segmentation on MR images.
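For illustration, the following is a minimal PyTorch sketch of a densely connected convolutional block in the spirit of DRINet's first block; the growth rate, layer count, and kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a convolutional block with dense connections.
# Growth rate, layer count, and kernel size are illustrative assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # each layer sees all previous feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connection
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a batch of 2D feature maps with 64 input channels.
block = DenseBlock(in_channels=64)
y = block(torch.randn(2, 64, 128, 128))
print(y.shape)  # torch.Size([2, 112, 128, 128]) -> 64 + 4 * 12 channels
```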
Keypoint Transfer for Fast Whole-Body Segmentation
We introduce an approach for image segmentation based on sparse
correspondences between keypoints in testing and training images. Keypoints
represent automatically identified distinctive image locations, where each
keypoint correspondence suggests a transformation between images. We use these
correspondences to transfer label maps of entire organs from the training
images to the test image. The keypoint transfer algorithm includes three steps:
(i) keypoint matching, (ii) voting-based keypoint labeling, and (iii)
keypoint-based probabilistic transfer of organ segmentations. We report
segmentation results for abdominal organs in whole-body CT and MRI, as well as
in contrast-enhanced CT and MRI. Our method offers a speed-up of about three
orders of magnitude in comparison to common multi-atlas segmentation, while
achieving an accuracy that compares favorably. Moreover, keypoint transfer does
not require registration to an atlas or a training phase. Finally, the
method allows for the segmentation of scans with a highly variable field of view.
Comment: Accepted for publication at IEEE Transactions on Medical Imaging.
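As a rough illustration of the three steps, the toy NumPy sketch below matches keypoints by nearest descriptors, lets the matched training keypoints vote on organ labels, and turns the votes into transfer probabilities; the random descriptors, the k-NN matching rule, and the vote normalisation are simplified assumptions, not the paper's actual formulation.

```python
# Toy sketch of the three keypoint-transfer steps (simplified assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_test, n_train, n_organs, k = 5, 50, 3, 7

# Toy 32-D descriptors for test and training keypoints; each training
# keypoint carries the organ label of the region it was detected in.
test_desc = rng.normal(size=(n_test, 32))
train_desc = rng.normal(size=(n_train, 32))
train_labels = rng.integers(0, n_organs, size=n_train)

# Step (i): keypoint matching -- k nearest neighbours in descriptor space.
dists = np.linalg.norm(test_desc[:, None, :] - train_desc[None, :, :], axis=2)
nn_idx = np.argsort(dists, axis=1)[:, :k]

# Step (ii): voting-based keypoint labelling -- matched training keypoints
# vote for the organ label of each test keypoint.
votes = np.stack([np.bincount(train_labels[row], minlength=n_organs)
                  for row in nn_idx])

# Step (iii): probabilistic transfer -- normalised votes act as per-keypoint
# organ probabilities that would weight the transferred organ label maps.
probs = votes / votes.sum(axis=1, keepdims=True)
print(probs.round(2))
```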
Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the GI tract (esophagus, stomach, duodenum), and surrounding organs (liver, spleen, left kidney, gallbladder). We directly compared the segmentation accuracy of the proposed method to existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 vs. 0.71, 0.74 and 0.74 for the pancreas, 0.90 vs. 0.85, 0.87 and 0.83 for the stomach, and 0.76 vs. 0.68, 0.69 and 0.66 for the esophagus. We conclude that deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
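The Dice scores quoted above measure volumetric overlap between predicted and reference masks; a minimal NumPy implementation for multi-organ label maps might look as follows (the nine-label toy volumes are purely illustrative).

```python
# Dice = 2|P ∩ R| / (|P| + |R|), computed per organ label.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Dice overlap between predicted and reference masks for one label."""
    p, r = pred == label, ref == label
    denom = p.sum() + r.sum()
    # Convention: both masks empty counts as perfect agreement.
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0

# Toy 3-D label volumes with labels 0 (background) .. 8 (eight organs).
rng = np.random.default_rng(1)
pred = rng.integers(0, 9, size=(32, 32, 32))
ref = rng.integers(0, 9, size=(32, 32, 32))
for organ in range(1, 9):
    print(f"organ {organ}: Dice = {dice_score(pred, ref, organ):.3f}")
```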
DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This inhibits previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via P-ConvNet and
nearest neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet)
that samples a set of bounding boxes around each image superpixel at
different scales of contexts in a "zoom-out" fashion. Our ConvNets learn to
assign class probabilities for each superpixel region of being pancreas.
Last, we study a stacked R2-ConvNet leveraging the joint space of CT
intensities and the P-ConvNet dense probability maps. Both 3D Gaussian
smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing.
Comment: To be presented at MICCAI 2015 - 18th International Conference on
Medical Image Computing and Computer Assisted Intervention, Munich, Germany.
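The 3D Gaussian smoothing mentioned as post-processing can be sketched in a few lines with SciPy; the sigma and the 0.5 threshold below are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of 3-D Gaussian smoothing of a per-voxel pancreas probability map
# before thresholding. Sigma and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
prob = rng.random(size=(64, 64, 64)).astype(np.float32)  # toy P(pancreas) map

smoothed = gaussian_filter(prob, sigma=2.0)   # 3-D Gaussian smoothing
mask = smoothed > 0.5                         # final binary pancreas mask
print(mask.mean())  # fraction of voxels labelled pancreas
```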
Towards image-guided pancreas and biliary endoscopy: Automatic multi-organ segmentation on abdominal CT with dense dilated networks
Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust inter-patient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation based on Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs. 37), stomach (83 vs. 72), and esophagus (73 vs. 54), and marginally less accurate segmentation for the liver (92 vs. 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach, and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
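A minimal PyTorch sketch of a dilated convolution unit with a dense skip connection, in the spirit of the architecture described here, follows; the dilation schedule and channel counts are illustrative assumptions, not the paper's configuration.

```python
# Dilated convolution unit with a dense skip connection (illustrative).
import torch
import torch.nn as nn

class DilatedDenseUnit(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dense skip connection: concatenate the unit's input and output so
        # later units see the features of every earlier unit.
        return torch.cat([x, self.conv(x)], dim=1)

# Stack units with growing dilation to enlarge the receptive field without
# pooling, keeping full-resolution feature maps for segmentation.
channels, growth = 16, 16
units = []
for d in (1, 2, 4, 8):
    units.append(DilatedDenseUnit(channels, growth, dilation=d))
    channels += growth
net = nn.Sequential(*units)
print(net(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 80, 64, 64])
```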
Unpaired multi-modal segmentation via knowledge distillation
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., a 2D dilated network and a 3D U-Net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
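The two key ingredients, shared convolution kernels with modality-specific normalisation and a KL-based distillation term, can be sketched in PyTorch as follows; the symmetric KL with temperature softening is a simplified stand-in for the paper's exact loss, and all shapes, class counts, and the temperature are illustrative assumptions.

```python
# Shared kernels + modality-specific normalisation + KL distillation term
# (a simplified sketch under illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedKernelBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # shared across modalities
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm2d(out_ch),   # modality-specific statistics
            "mr": nn.BatchNorm2d(out_ch),
        })

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        return F.relu(self.norm[modality](self.conv(x)))

def kd_loss(logits_ct: torch.Tensor, logits_mr: torch.Tensor, T: float = 2.0):
    """Symmetric KL between temperature-softened class distributions of the
    two modalities (a simplified stand-in for the paper's term)."""
    p_ct = F.log_softmax(logits_ct / T, dim=1)
    p_mr = F.log_softmax(logits_mr / T, dim=1)
    return (F.kl_div(p_ct, p_mr.exp(), reduction="batchmean")
            + F.kl_div(p_mr, p_ct.exp(), reduction="batchmean"))

# Unpaired CT and MRI batches pass through the same kernels but use their
# own normalisation statistics; the KL term couples the predictions.
block = SharedKernelBlock(1, 8)
ct, mr = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
head = nn.Conv2d(8, 5, 1)  # 5 segmentation classes (illustrative)
print(kd_loss(head(block(ct, "ct")), head(block(mr, "mr"))).item())
```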