Convolutional neural network stacking for medical image segmentation in CT scans
Computed tomography (CT) data poses many challenges to medical image segmentation based on convolutional neural networks (CNNs). The main challenges in handling CT scans with CNNs are the scale of the data (a large range of Hounsfield Units) and the processing of the slices. In this paper, we consider a framework that addresses these demands regarding data preprocessing, data augmentation, and the CNN architecture itself. For this purpose, we present a data preprocessing and an augmentation method tailored to CT data. We evaluate and compare different input dimensionalities and two CNN architectures: a modified U-Net and a modified Mixed-Scale Dense Network (MS-D Net). Thus, we compare dilated convolutions for parallel multi-scale processing to the U-Net approach with traditional scaling operations across the different input dimensionalities. Finally, we combine a set of 3D modified MS-D Nets and a set of 2D modified U-Nets as a stacked CNN model that combines the different strengths of both models.
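The abstract above cites the large Hounsfield Unit range as a core difficulty of CT preprocessing. A common way to handle it, shown here as an illustrative sketch rather than the paper's actual method, is to clip intensities to a fixed HU window and rescale to [0, 1]; the window bounds below are assumptions, not values from the paper:

```python
import numpy as np

def preprocess_ct(volume, hu_min=-1000.0, hu_max=400.0):
    """Clip a CT volume to a Hounsfield Unit window and rescale to [0, 1].

    hu_min/hu_max are illustrative window bounds (air to soft tissue/bone
    boundary), not values taken from the paper.
    """
    clipped = np.clip(volume.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)
```

Windowing like this compresses the raw 12-bit HU scale into a range that standard CNN weight initializations handle well.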
Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease
We propose an automatic method using dilated convolutional neural networks
(CNNs) for segmentation of the myocardium and blood pool in cardiovascular MR
(CMR) of patients with congenital heart disease (CHD).
Ten training and ten test CMR scans cropped to an ROI around the heart were
provided in the MICCAI 2016 HVSMR challenge. A dilated CNN with a receptive
field of 131x131 voxels was trained for myocardium and blood pool segmentation
in axial, sagittal and coronal image slices. Performance was evaluated within
the HVSMR challenge.
Automatic segmentation of the test scans resulted in Dice indices of
0.80±0.06 and 0.93±0.02, average distances to boundaries of
0.96±0.31 and 0.89±0.24 mm, and Hausdorff distances of 6.13±3.76
and 7.07±3.01 mm for the myocardium and blood pool, respectively.
Segmentation took 41.5±14.7 s per scan.
In conclusion, dilated CNNs trained on a small set of CMR images of CHD
patients showing large anatomical variability provide accurate myocardium and
blood pool segmentations.
A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far training of ConvNets
for registration was supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for unsupervised affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNets designs for affine image registration and for
deformable image registration. By stacking multiple of these ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show for registration of cardiac cine MRI and registration of chest CT that
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted at Medical Image Analysis (Elsevier).
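The DLIR abstract states that the ConvNet is trained by "exploiting image similarity analogous to conventional intensity-based image registration" rather than from example registrations. One widely used similarity measure in intensity-based registration is normalized cross-correlation; the sketch below uses it as an illustrative stand-in (the actual DLIR loss may differ), with training minimizing `1 - ncc(fixed, warped)`:

```python
import numpy as np

def ncc(fixed, moving, eps=1e-8):
    """Normalized cross-correlation between two same-shape images.

    Returns values in [-1, 1]; 1.0 for images identical up to a positive
    linear intensity change. An unsupervised registration network can be
    trained to maximize this, i.e. to minimize loss = 1 - ncc.
    """
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    denom = np.sqrt((f * f).sum() * (m * m).sum()) + eps
    return float((f * m).sum() / denom)
```

Because the measure is invariant to linear intensity changes, it tolerates the contrast differences typical between scan pairs, which is why it is a common choice over plain squared differences.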