15 research outputs found

    Supervised deformable image registration using deep neural networks


    Pulmonary CT registration through supervised learning with convolutional neural networks

    Deformable image registration can be time consuming and often needs extensive parameterization to perform well on a specific application. We present a deformable registration method based on a 3-D convolutional neural network, together with a framework for training such a network. The network directly learns transformations between pairs of 3-D images. The network is trained on synthetic random transformations which are applied to a small set of representative images for the desired application. Training, therefore, does not require manually annotated ground truth information on the deformation. The framework for the generation of transformations for training uses a sequence of multiple transformations at different scales that are applied to the image. This way, complex transformations with large displacements can be modeled without folding or tearing images. The methodology is demonstrated on public data sets of inhale-exhale lung CT image pairs which come with landmarks for evaluation of the registration quality. We show that a small training set can be used to train the network, while still allowing generalization to a separate pulmonary CT data set containing data from a different patient group, acquired using a different scanner and scan protocol. This approach results in an accurate and very fast deformable registration method, without a requirement for parameterization at test time or manually annotated data for training.
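
    As a rough illustration of the training-data idea described above (not the authors' exact pipeline), the sketch below generates a smooth random deformation as a sequence of coarse-to-fine displacement fields and applies it to an image, yielding an image pair with a known transformation and no manual annotation. The grid sizes and displacement magnitudes are illustrative assumptions.

```python
# Sketch: multi-scale synthetic deformations for registration training.
# Assumed parameters (grid sizes, magnitudes) are illustrative only.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def random_displacement(shape, grid_points, max_disp, rng):
    """Smooth random field: coarse noise upsampled to the image size."""
    coarse = rng.uniform(-max_disp, max_disp, size=(3, *([grid_points] * 3)))
    factors = [s / grid_points for s in shape]
    return np.stack([zoom(c, factors, order=3) for c in coarse])

def apply_displacement(image, disp):
    """Warp an image by backward mapping through a displacement field."""
    coords = np.stack(np.meshgrid(*[np.arange(s) for s in image.shape],
                                  indexing="ij")).astype(float)
    return map_coordinates(image, coords + disp, order=1, mode="nearest")

rng = np.random.default_rng(0)
fixed = rng.random((64, 64, 64))          # stand-in for a training image
moving = fixed
# Apply a sequence of transformations from coarse/large to fine/small, so
# large total displacements stay smooth and the image does not fold or tear.
for grid_points, max_disp in [(4, 8.0), (8, 4.0), (16, 2.0)]:
    disp = random_displacement(moving.shape, grid_points, max_disp, rng)
    moving = apply_displacement(moving, disp)
# (fixed, moving) is now a training pair with a known synthetic deformation.
```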

    Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
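
    A minimal sketch of the patch-based construction described above, assuming a small 3-D CNN that maps a pair of registered patches to a scalar error; the architecture and patch size are placeholders, not the paper's:

```python
import torch
import torch.nn as nn

class PatchErrorNet(nn.Module):
    """Maps a 2-channel (fixed, warped moving) 3-D patch to one error value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3), nn.ReLU(),
            nn.Conv3d(16, 32, 3), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x))

def error_map(net, fixed, warped, patch=17, stride=4):
    """Slide the network over patches centred on a coarse grid of voxels."""
    pair = torch.stack([fixed, warped])[None]        # (1, 2, D, H, W)
    half = patch // 2
    out = torch.zeros_like(fixed)
    with torch.no_grad():
        for z in range(half, fixed.shape[0] - half, stride):
            for y in range(half, fixed.shape[1] - half, stride):
                for x in range(half, fixed.shape[2] - half, stride):
                    p = pair[..., z - half:z + half + 1,
                                  y - half:y + half + 1,
                                  x - half:x + half + 1]
                    out[z, y, x] = net(p).item()     # estimated error (mm)
    return out
```

    In practice one would batch many patches per forward pass; the triple loop only makes the per-voxel construction explicit.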

    Supervised local error estimation for nonlinear image registration using convolutional neural networks

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
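
    The training target described above, the norm of the residual deformation, can be made concrete in a few lines; for small errors the residual is commonly approximated by the difference between the known synthetic field and the field recovered by the registration (an assumption of this sketch):

```python
import numpy as np

def residual_error_map(true_disp, estimated_disp):
    """Per-pixel registration error as the norm of the residual field.

    true_disp, estimated_disp: (2, H, W) arrays with x/y displacement
    components, e.g. from a known synthetic deformation and from the
    registration under evaluation.
    """
    residual = true_disp - estimated_disp    # small-error approximation
    return np.linalg.norm(residual, axis=0)  # (H, W) error in pixels
```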

    Progressively growing convolutional networks for end-to-end deformable image registration

    Deformable image registration is often a slow process when using conventional methods. To speed up deformable registration, there is growing interest in using convolutional neural networks. They are comparatively fast and can be trained to estimate full-resolution deformation fields directly from pairs of images. Because deep learning-based registration methods often require rigid or affine pre-registration of the images, they do not perform true end-to-end image registration. To address this, we propose a progressive training method for end-to-end image registration with convolutional networks. The network is first trained to find large deformations at a low resolution using a smaller part of the full architecture. The network is then gradually expanded during training by adding higher resolution layers that allow the network to learn more fine-grained deformations from higher resolution data. By starting at a lower resolution, the network is able to learn larger deformations more quickly at the start of training, making pre-registration redundant. We apply this method to pulmonary CT data, and use it to register inhalation to exhalation images. We train the network using the CREATIS pulmonary CT data set, and apply the trained network to register the DIRLAB pulmonary CT data set. By computing the target registration error at corresponding landmarks we show that the error for end-to-end registration is significantly reduced by using progressive training, while retaining sub-second registration times.
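
    A hedged sketch of the coarse-to-fine mechanics (module shapes and channel counts are illustrative, not the paper's architecture): each stage predicts a residual displacement field at one resolution, and growing the network appends a higher-resolution stage that refines the upsampled coarse field.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    """One resolution level: predicts a residual 3-D displacement field."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),     # x, y, z displacements
        )

    def forward(self, x):
        return self.net(x)

class ProgressiveRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([Stage(2)])          # start coarse

    def grow(self):
        """Add a higher-resolution stage; called between training phases."""
        self.stages.append(Stage(2 + 3))                 # images + field

    def forward(self, fixed, moving):
        field = None
        for i, stage in enumerate(self.stages):
            factor = 2 ** (len(self.stages) - 1 - i)
            f = F.avg_pool3d(fixed, factor) if factor > 1 else fixed
            m = F.avg_pool3d(moving, factor) if factor > 1 else moving
            x = torch.cat([f, m], dim=1)
            if field is not None:
                # Upsample the coarse field; displacements double in voxels.
                field = 2 * F.interpolate(field, scale_factor=2,
                                          mode="trilinear",
                                          align_corners=False)
                x = torch.cat([x, field], dim=1)
            delta = stage(x)
            field = delta if field is None else field + delta
        return field
```

    Training would then alternate: fit the coarse stage on downsampled data, call grow(), and continue training the expanded network on higher resolution data, which is the step that, per the abstract, makes pre-registration redundant.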

    The truth is hard to make: validation of medical image registration

    An unsolved problem in medical image analysis is validation of methods. In this paper we will focus on image registration and in particular on nonlinear image registration, which is one of the hardest analysis problems to validate. The paper covers currently used methods of validation, comparative challenges and public datasets, as well as some of our own work in this area.

    Decision fusion for temporal prediction of respiratory liver motion

    Temporal prediction of respiratory motion is required due to the latencies in image-guided therapy systems. In this study we propose to combine the outcomes of four temporal prediction methods, which have different strengths and weaknesses, by taking their median. Based on 25 motion traces from ultrasound liver tracking, this decision fusion provided statistically significantly better results than the individual outcomes for latencies from 150 to 1000 ms. On average, RMS errors were reduced by at least 50% compared to assuming no motion, for all latencies. Furthermore, it was shown that time-intensive optimization of the methods' parameters for individual cases was not required, as performance with population-median parameters under decision fusion was not significantly worse.
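
    The fusion step itself is a one-liner; the sketch below uses placeholder predictions rather than the four methods from the study, but shows why the per-sample median is robust to one badly wrong predictor:

```python
import numpy as np

# Rows: predicted positions (e.g. in mm) from four hypothetical predictors.
preds = np.array([
    [1.02, 1.10, 1.21],
    [0.98, 1.12, 1.19],
    [1.40, 1.08, 1.22],   # outlier-prone method at the first sample
    [1.01, 1.11, 1.18],
])

fused = np.median(preds, axis=0)   # decision fusion: per-sample median
print(fused)                       # [1.015 1.105 1.2  ] -- outlier ignored
```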

    Learning domain-invariant representations of histological images

    Histological images present high appearance variability due to inconsistent latent parameters related to the preparation and scanning procedure of histological slides, as well as the inherent biological variability of tissues. Machine-learning models are trained with images from a limited set of domains, and are expected to generalize to images from unseen domains. Methodological design choices have to be made in order to yield domain invariance and proper generalization. In digital pathology, standard approaches focus either on ad-hoc normalization of the latent parameters based on prior knowledge, such as staining normalization, or aim at anticipating new variations of these parameters via data augmentation. Since every histological image originates from a unique data distribution, we propose to consider every histological slide of the training data as a domain, and investigate the alternative approach of domain-adversarial training to learn features that are invariant to this available domain information. We carried out a comparative analysis with staining normalization and data augmentation on two different tasks: generalization to images acquired in unseen pathology labs for mitosis detection, and generalization to unseen organs for nuclei segmentation. We report that the utility of each method depends on the type of task and the type of data variability present at training and test time. The proposed framework for domain-adversarial training is able to improve generalization performance on top of conventional methods.
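
    Domain-adversarial training of this kind is typically implemented with a gradient-reversal layer between the shared features and a domain classifier; the sketch below assumes DANN-style reversal and illustrative layer sizes, with each training slide assigned its own domain label as described above:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the gradient by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lambd * grad, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, n_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.task_head = nn.Linear(16, 2)            # e.g. mitosis vs. not
        self.domain_head = nn.Linear(16, n_domains)  # one domain per slide

    def forward(self, x):
        feat = self.features(x)
        task_logits = self.task_head(feat)
        domain_logits = self.domain_head(GradReverse.apply(feat, self.lambd))
        return task_logits, domain_logits

# loss = ce(task_logits, y_task) + ce(domain_logits, y_slide)
# The reversed gradient drives the shared features to be useful for the
# task while carrying as little slide (domain) information as possible.
```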

    Progressively trained convolutional neural networks for deformable image registration

    Deep learning-based methods for deformable image registration are attractive alternatives to conventional registration methods because of their short registration times. However, these methods often fail to estimate larger displacements in complex deformation fields, for which a multi-resolution strategy is required. In this article, we propose to train neural networks progressively to address this problem. Instead of training a large convolutional neural network on the registration task all at once, we initially train smaller versions of the network on lower resolution versions of the images and deformation fields. During training, we progressively expand the network with additional layers that are trained on higher resolution data. We show that this way of training allows a network to learn larger displacements without sacrificing registration accuracy and that the resulting network is less sensitive to large misregistrations compared to training the full network all at once. We generate a large set of ground truth training examples by applying random synthetic transformations to a training set of images, and test the network on the problem of intrapatient lung CT registration. We analyze the learned representations in the progressively growing network to assess how the progressive learning strategy influences training. Finally, we show that a progressive training procedure leads to improved registration accuracy when learning large and complex deformations.
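
    One detail worth sketching, under the assumption that expansion follows common progressive-growing practice (the abstract does not spell this out), is how a newly added higher-resolution stage can be faded in so training does not destabilize the moment new layers appear:

```python
import torch.nn.functional as F

def blended_field(coarse_field, fine_residual, alpha):
    """Blend the upsampled coarse field with the new stage's residual.

    alpha ramps linearly from 0 to 1 after the network grows, e.g.
    alpha = min(1.0, step / fade_in_steps), so the new layers take over
    gradually instead of disrupting the already-trained coarse stage.
    """
    up = 2 * F.interpolate(coarse_field, scale_factor=2,
                           mode="trilinear", align_corners=False)
    return up + alpha * fine_residual
```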

    Deformable image registration using convolutional neural networks

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
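
    A rough sketch of this output parameterization (the architecture is illustrative, and dense trilinear upsampling of the control grid stands in for true thin plate spline interpolation, which is more involved):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridRegNet(nn.Module):
    """Maps a fixed/moving pair to three coarse maps of grid displacements."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, 3, padding=1),   # x, y, z grid offsets
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, grid_offsets):
    """Densify the coarse offsets and resample the moving image."""
    dense = F.interpolate(grid_offsets, size=moving.shape[2:],
                          mode="trilinear", align_corners=False)
    # Identity sampling grid in normalized [-1, 1] coordinates; the
    # offsets are assumed to be normalized the same way.
    theta = torch.eye(3, 4).unsqueeze(0).expand(moving.size(0), -1, -1)
    id_grid = F.affine_grid(theta, list(moving.shape), align_corners=False)
    return F.grid_sample(moving, id_grid + dense.permute(0, 2, 3, 4, 1),
                         align_corners=False)
```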