Crowd disagreement about medical images is informative
Classifiers for medical image analysis are often trained with a single consensus label, obtained by combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare the mean of the annotations, reflecting consensus, with the standard deviation and other distribution moments, reflecting disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544.
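As a concrete illustration of the consensus-versus-disagreement comparison described above, the sketch below builds the mean and higher distribution moments of crowd scores and feeds them to a classifier. This is a minimal sketch with placeholder random data and an off-the-shelf logistic regression; the array shapes, feature choices, and classifier are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data: crowd scores for one visual characteristic of each
# lesion (n_lesions x n_annotators) and binary melanoma labels.
annotations = rng.uniform(0, 1, size=(200, 5))
labels = rng.integers(0, 2, size=200)  # 1 = melanoma

# Consensus feature: mean over annotators.
consensus = annotations.mean(axis=1, keepdims=True)

# Disagreement features: higher moments of the annotator distribution.
disagreement = np.column_stack([
    annotations.std(axis=1),
    skew(annotations, axis=1),
    kurtosis(annotations, axis=1),
])

for name, feats in [("consensus", consensus), ("disagreement", disagreement)]:
    auc = cross_val_score(LogisticRegression(), feats, labels,
                          scoring="roc_auc", cv=5).mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```

With real crowd annotations, comparing these two AUC values is the core of the experiment summarized above.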
Pulmonary CT registration through supervised learning with convolutional neural networks
Deformable image registration can be time consuming and often needs extensive parameterization to perform well on a specific application. We present a deformable registration method based on a 3-D convolutional neural network, together with a framework for training such a network. The network directly learns transformations between pairs of 3-D images. The network is trained on synthetic random transformations which are applied to a small set of representative images for the desired application. Training, therefore, does not require manually annotated ground truth information on the deformation. The framework for the generation of transformations for training uses a sequence of multiple transformations at different scales that are applied to the image. This way, complex transformations with large displacements can be modeled without folding or tearing images. The methodology is demonstrated on public data sets of inhale-exhale lung CT image pairs which come with landmarks for evaluation of the registration quality. We show that a small training set can be used to train the network, while still allowing generalization to a separate pulmonary CT data set containing data from a different patient group, acquired using a different scanner and scan protocol. This approach results in an accurate and very fast deformable registration method, without a requirement for parameterization at test time or manually annotated data for training.
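The training-data generation described above can be pictured as random smooth displacement fields at several scales applied to a representative image. The 2-D NumPy/SciPy sketch below conveys that idea under simplifying assumptions: the paper applies a sequence of transformations at different scales, whereas this sketch simply sums multi-scale fields, and all sizes and amplitudes are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def random_multiscale_field(shape, grids=(4, 8, 16), amps=(8.0, 4.0, 2.0),
                            rng=None):
    """Sum random smooth displacement fields from coarse to fine scales.
    Coarse grids contribute large, smooth displacements and fine grids
    small ones, keeping the overall deformation plausible."""
    rng = rng or np.random.default_rng()
    field = np.zeros((2, *shape))
    for g, amp in zip(grids, amps):
        coarse = rng.uniform(-amp, amp, size=(2, g, g))
        for d in range(2):
            field[d] += zoom(coarse[d], np.array(shape) / g, order=3)
    return field

def warp(image, field):
    """Resample an image through a dense displacement field."""
    coords = np.meshgrid(*(np.arange(s) for s in image.shape), indexing="ij")
    return map_coordinates(image, [c + f for c, f in zip(coords, field)],
                           order=1, mode="nearest")

# A training pair: a representative image and a known synthetic deformation
# of it, so no manually annotated ground truth is needed.
rng = np.random.default_rng(42)
fixed = rng.random((128, 128))
field = random_multiscale_field(fixed.shape, rng=rng)
moving = warp(fixed, field)
```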
Supervised local error estimation for nonlinear image registration using convolutional neural networks
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
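Because each training pair comes with a known deformation, the regression target is available everywhere in the image domain. A minimal 2-D NumPy sketch of the two ingredients, with illustrative names and patch size:

```python
import numpy as np

def residual_error_map(true_field, registered_field):
    """Per-pixel registration error: the norm of the residual deformation
    between the known synthetic field and the recovered field."""
    return np.linalg.norm(true_field - registered_field, axis=0)

def patch_pair(fixed, registered, center, half=16):
    """Patches around one pixel in the two registered images; the CNN
    regresses the error norm at the patch centre from this pair."""
    r, c = center
    window = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
    return np.stack([fixed[window], registered[window]])

# Toy example: a known field and a slightly perturbed "registration" result.
rng = np.random.default_rng(0)
true_field = rng.normal(size=(2, 64, 64))
recovered = true_field + 0.5 * rng.normal(size=(2, 64, 64))
error_map = residual_error_map(true_field, recovered)  # shape (64, 64)
```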
Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
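Spelled out, the deviation reported in the last sentence is the standard root-mean-square deviation between estimated and reference errors, with $\hat{e}_i$ the network's estimate and $e_i$ the gold-standard or landmark-based registration error at evaluation point $i$:

$$\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{e}_i - e_i\right)^2}$$

The 0.51 mm and 0.66 mm figures are this quantity computed against the gold-standard error maps and the landmark errors, respectively.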
A rigidity penalty term for nonrigid registration
Medical images that are to be registered for clinical application often contain both structures that deform and ones that remain rigid. Nonrigid registration algorithms that do not model properties of different tissue types may result in deformations of rigid structures. In this article a local rigidity penalty term is proposed which is included in the registration function in order to penalize the deformation of rigid objects. This term can be used for any representation of the deformation field capable of modelling locally rigid transformations. By using a B-spline representation of the deformation field, a fast algorithm can be devised. The proposed method is compared with an unconstrained nonrigid registration algorithm. It is evaluated on clinical three-dimensional CT follow-up data of the thorax and on two-dimensional DSA image sequences. The results show that nonrigid registration using the proposed rigidity penalty term is capable of nonrigidly aligning images, while keeping user-defined structures locally rigid. © 2007 American Association of Physicists in Medicine
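Schematically, and with notation assumed here rather than taken from the article, such a penalized registration function takes the form

$$\mathcal{C}(\boldsymbol{\mu}) \;=\; \mathcal{D}\bigl(F,\; M \circ \mathbf{T}_{\boldsymbol{\mu}}\bigr) \;+\; \alpha \sum_{\mathbf{x}} c(\mathbf{x})\,\mathcal{P}_{\mathrm{rigid}}\bigl(\mathbf{T}_{\boldsymbol{\mu}};\mathbf{x}\bigr),$$

where $\mathcal{D}$ is the image dissimilarity between the fixed image $F$ and the deformed moving image $M \circ \mathbf{T}_{\boldsymbol{\mu}}$, $\mathbf{T}_{\boldsymbol{\mu}}$ is the B-spline transformation with parameters $\boldsymbol{\mu}$, $c(\mathbf{x}) \in [0,1]$ marks the user-defined structures that must stay rigid, and $\mathcal{P}_{\mathrm{rigid}}$ vanishes wherever the local transformation is rigid, i.e. free of scaling and shearing.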
Automatic classification of focal liver lesions based on clinical DCE-MR and T2-weighted images: a feasibility study
Focal liver lesion classification is an important part of diagnostics. In clinical practice, T2-weighted (T2W) and dynamic contrast enhanced (DCE) MR images are used to determine the type of lesion. For automatic liver lesion classification, however, only T2W images have been exploited so far. In this feasibility study, a multi-modal approach for automatic lesion classification into five lesion classes (adenoma, cyst, haemangioma, HCC, and metastasis) is studied. Features are derived from four sets: (A) non-corrected and (B) motion-corrected DCE-MRI, (C) T2W images, and (D) B and C combined, originating from 43 patients. An extremely randomized forest is used as the classifier. The results show that motion-corrected DCE-MRI features are a valuable addition to the T2W features, and improve the accuracy in discriminating benign from malignant lesions, as well as the classification of the five lesion classes. The multi-modal approach shows promising results for automatic liver lesion classification.
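The classifier comparison in this study can be sketched with scikit-learn's extremely randomized trees. The sketch below uses placeholder random features in place of the image-derived DCE-MRI and T2W features, and balanced placeholder labels; only the feature-set comparison and the classifier choice mirror the study.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder per-lesion features for 43 patients: motion-corrected DCE-MRI
# (set B) and T2W (set C); set D is their concatenation.
X_dce = rng.random((43, 30))
X_t2w = rng.random((43, 10))
X_combined = np.hstack([X_dce, X_t2w])
y = np.repeat(np.arange(5), 9)[:43]  # 5 lesion classes, roughly balanced

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
for name, X in [("T2W only", X_t2w), ("DCE + T2W", X_combined)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```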
Nonrigid registration with tissue-dependent filtering of the deformation field
In present-day medical practice it is often necessary to nonrigidly align image data. Current registration algorithms do not generally take the characteristics of tissue into account. Consequently, rigid tissue, such as bone, can be deformed elastically, growth of tumours may be concealed, and contrast-enhanced structures may be reduced in volume. We propose a method to locally adapt the deformation field at structures that must be kept rigid, using a tissue-dependent filtering technique. This adaptive filtering of the deformation field results in locally linear transformations without scaling or shearing. The degree of filtering is related to tissue stiffness: more filtering is applied at stiff tissue locations, less at parts of the image containing nonrigid tissue. The tissue-dependent filter is incorporated in a commonly used registration algorithm, using mutual information as a similarity measure and cubic B-splines to model the deformation field. The new registration algorithm is compared with this popular method. Evaluation of the proposed tissue-dependent filtering is performed on 3D computed tomography (CT) data of the thorax and on 2D digital subtraction angiography (DSA) images. The results show that tissue-dependent filtering of the deformation field leads to improved registration results: tumour volumes and vessel widths are preserved rather than affected. © 2007 IOP Publishing Ltd
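As a rough stand-in for the adaptive filtering idea (the article's filter yields locally linear transformations without scaling or shearing; plain Gaussian smoothing only approximates that behaviour), one can blend each deformation-field component with a heavily smoothed version of itself, weighted by a tissue-stiffness map. Everything below, from the function name to the sigma value, is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tissue_dependent_filter(field, stiffness, sigma=4.0):
    """Locally adapt a dense deformation field: stiff regions (stiffness
    near 1, e.g. bone) receive heavy smoothing so their motion becomes
    near-linear locally, while soft regions (stiffness near 0) keep their
    nonrigid deformation. A simplified stand-in for the article's filter."""
    smoothed = np.stack([gaussian_filter(f, sigma) for f in field])
    return (1.0 - stiffness) * field + stiffness * smoothed

# Toy 2-D example: a noisy deformation field with a rigid block inside.
rng = np.random.default_rng(0)
field = rng.normal(size=(2, 64, 64))
stiffness = np.zeros((64, 64))
stiffness[20:40, 20:40] = 1.0   # mark a "bone" region as rigid
adapted = tissue_dependent_filter(field, stiffness)
```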
A comparison of acceleration techniques for nonrigid medical image registration
Mutual information based nonrigid registration of medical images is a popular approach. The coordinate mapping that relates the two images is found in an iterative optimisation procedure. In every iteration a computationally expensive evaluation of the mutual information's derivative is required. In this work two acceleration strategies are compared. The first technique aims at reducing the number of iterations, and, consequently, the number of derivative evaluations. The second technique reduces the computational costs per iteration by employing stochastic approximations of the derivatives. The performance of both methods is tested on an artificial registration problem, where the ground truth is known, and on a clinical problem involving low-dose CT scans and large deformations. The experiments show that the stochastic approximation approach is superior in terms of speed and robustness. However, more accurate solutions are obtained with the first technique. © Springer-Verlag Berlin Heidelberg 2006
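The second strategy, stochastic approximation of the derivatives, amounts to evaluating the cost derivative on a small random subset of voxels that is redrawn every iteration, with a decaying gain sequence to average out the sampling noise. A generic sketch of that idea (not the paper's implementation; the callable, sample count, and gain parameters are assumptions):

```python
import numpy as np

def stochastic_descent(grad_on_subset, n_params, n_voxels, n_samples=2048,
                       iters=500, a=1.0, A=50.0, alpha=0.6, rng=None):
    """Optimise registration parameters mu with stochastic approximation.
    Each iteration evaluates the derivative on a random voxel subset, so the
    per-iteration cost is nearly independent of image size; the decaying
    gain a_k = a / (A + k + 1)**alpha damps the sampling noise over time."""
    rng = rng or np.random.default_rng()
    mu = np.zeros(n_params)
    for k in range(iters):
        subset = rng.choice(n_voxels, size=n_samples, replace=False)
        g = grad_on_subset(mu, subset)  # noisy but unbiased derivative
        mu -= (a / (A + k + 1) ** alpha) * g
    return mu
```

The full-derivative alternative corresponds to passing all voxels every iteration, which is exactly the per-iteration cost the subsampling avoids.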