29 research outputs found

    Mass Transportation for Deformable Image Registration with Application to Lung CT

    Computed Tomography (CT) of the lungs plays a key role in the clinical investigation of thoracic malignancies, and has the potential to increase our knowledge about pulmonary diseases, including cancer. It enables longitudinal trials to monitor lung disease progression, and informs assessment of lung damage resulting from radiation therapy. We present a novel deformable image registration method that accommodates changes in the density of lung tissue depending on the amount of air present at different inspiration/expiration states. We investigate the Monge-Kantorovich theory of optimal mass transportation to model the appearance of lung tissue and apply it in a registration method. To validate the model, we apply our method to an inhale/exhale lung CT data set and compare it against registration using the sum of squared differences (SSD), a representative of the most popular similarity measures used in deformable image registration. The results show that the developed registration method has the potential to handle intensity distortions caused by air and tissue compression and, in addition, can provide accurate annotations of the lungs.
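    The core idea of density-aware registration can be illustrated with a toy comparison. This is a minimal 1-D sketch, not the paper's actual Monge-Kantorovich formulation: a mass-preserving similarity modulates the warped intensity by the Jacobian determinant of the deformation, so tissue compressed during exhalation is allowed to appear denser without being penalised.

    ```python
    import numpy as np

    def ssd(fixed, warped):
        """Plain sum-of-squared-differences similarity (lower is better)."""
        return float(np.sum((fixed - warped) ** 2))

    def mass_preserving_ssd(fixed, warped, jac_det):
        """SSD after modulating warped intensities by the Jacobian determinant
        of the deformation, so density changes caused by compression are
        explained rather than penalised (hypothetical 1-D illustration)."""
        return float(np.sum((fixed - warped * jac_det) ** 2))

    # Toy example: the exhale profile is the inhale profile compressed by a
    # factor of 2, which doubles local tissue density.
    inhale = np.array([1.0, 1.0, 1.0, 1.0])
    exhale_warped = np.array([2.0, 2.0, 2.0, 2.0])  # resampled to inhale grid
    jac_det = np.full(4, 0.5)                       # volume halved everywhere

    plain = ssd(inhale, exhale_warped)                          # penalises density change
    mass = mass_preserving_ssd(inhale, exhale_warped, jac_det)  # density change explained
    ```

    Here plain SSD reports a large residual even for a physically correct alignment, while the mass-preserving variant evaluates to zero.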

    D-net: Siamese based network with mutual attention for volume alignment

    Alignment of contrast-enhanced and non-contrast-enhanced imaging is essential for quantification of changes in several biomedical applications. In particular, the extraction of cartilage shape from contrast-enhanced Computed Tomography (CT) of tibiae requires accurate alignment of the bone, currently performed manually. Existing deep learning-based methods for alignment require a common template or are limited in rotation range. Therefore, we present a novel network, D-net, which estimates arbitrary rotation and translation between 3D CT scans and does not require a prior template. D-net extends the branched Siamese encoder-decoder structure with new mutual, non-local links, which efficiently capture long-range correspondences of similar features between the two branches. The 3D supervised network is trained and validated using preclinical CT scans of mouse tibiae with and without contrast enhancement in cartilage. The presented results show a significant improvement in the estimation of CT alignment, outperforming comparable current methods.
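    The mutual non-local links can be sketched as cross-branch attention: every position in one branch attends to all positions in the other, so long-range correspondences between similar features are captured. This is an illustrative NumPy sketch under assumed flattened feature maps, not the published D-net layer.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def mutual_nonlocal(feat_a, feat_b):
        """Cross-branch non-local link: each position in branch A aggregates
        features from all positions in branch B (and vice versa), weighted by
        pairwise similarity. feat_a, feat_b: (N, C) flattened feature maps."""
        affinity = feat_a @ feat_b.T                  # (N, N) pairwise similarity
        a_from_b = softmax(affinity, axis=1) @ feat_b # B-features gathered for A
        b_from_a = softmax(affinity.T, axis=1) @ feat_a
        return feat_a + a_from_b, feat_b + b_from_a   # residual connections

    rng = np.random.default_rng(0)
    fa = rng.standard_normal((8, 4))   # 8 positions, 4 channels per branch
    fb = rng.standard_normal((8, 4))
    out_a, out_b = mutual_nonlocal(fa, fb)
    ```

    Because every output position mixes information from the entire opposite branch, the link is "non-local" in the sense of long-range, rather than restricted to a convolutional neighbourhood.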

    A level-set approach to joint image segmentation and registration with application to CT lung imaging

    Automated analysis of structural imaging such as lung Computed Tomography (CT) plays an increasingly important role in medical imaging applications. Despite significant progress in the development of image registration and segmentation methods, lung registration and segmentation remain challenging tasks. In this paper, we present a novel image registration and segmentation approach, for which we develop a new mathematical formulation to jointly segment and register three-dimensional lung CT volumes. The new algorithm is based on a level-set formulation that merges a classic Chan–Vese segmentation with active dense displacement field estimation. Combining registration with segmentation has two key advantages: it eliminates the problem of initializing surface-based segmentation methods, and it incorporates prior knowledge into the registration in a mathematically justified manner, while remaining computationally attractive. We evaluate our framework on a publicly available lung CT data set to demonstrate the properties of the new formulation. The presented results show improved accuracy for our joint segmentation and registration algorithm when compared to registration and segmentation performed separately.
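    The Chan–Vese component of the formulation evolves a level-set function driven by the two region means. The following is a heavily simplified 1-D sketch of one explicit update (curvature term and the coupled registration omitted), not the paper's joint algorithm.

    ```python
    import numpy as np

    def chan_vese_step(phi, image, dt=0.5):
        """One explicit update of a simplified Chan-Vese level set:
        phi > 0 marks the object (e.g. lung), phi < 0 the background.
        The region means c1/c2 pull the interface toward a two-phase
        partition; the curvature regulariser is omitted for brevity."""
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[~inside].mean() if (~inside).any() else 0.0
        # data term: raise phi where the pixel resembles the object mean
        force = -(image - c1) ** 2 + (image - c2) ** 2
        return phi + dt * force

    # Toy 1-D "image": bright object on a dark background
    img = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
    phi = np.array([-1.0, -1.0, 0.5, 0.5, -1.0, -1.0])  # rough initial contour
    for _ in range(5):
        phi = chan_vese_step(phi, img)
    seg = phi > 0   # object pixels recovered at indices 2 and 3
    ```

    In the joint formulation, the same level-set function additionally couples to the dense displacement field, which is what removes the need for a careful surface initialization.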

    BEAN: brain extraction and alignment network for 3D fetal neurosonography

    Brain extraction (masking of extra-cranial tissue) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found at www.github.com/felipemoser/kelluwe
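    The DSC figures quoted for both tasks follow the standard overlap definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of that metric (illustrative, not the BEAN evaluation code):

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice Similarity Coefficient between two binary masks:
        DSC = 2|A n B| / (|A| + |B|); 1.0 means perfect overlap."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    # Toy predicted and reference extraction masks
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    ref  = np.array([[1, 1, 0], [0, 0, 0]])
    score = dice(pred, ref)   # 2*2 / (3+2) = 0.8
    ```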

    Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention

    Segmentation of head and neck (H&N) tumours and prediction of patient outcome are crucial for disease diagnosis and treatment monitoring. Current development of robust deep learning models is hindered by the lack of large multi-centre, multi-modal data with quality annotations. The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing methods for segmentation of the primary gross target volume on fluorodeoxyglucose (FDG)-PET and Computed Tomography images, and for prediction of progression-free survival in H&N oropharyngeal cancer. For the segmentation task, we proposed a new network based on an encoder-decoder architecture with full inter- and intra-skip connections, to take advantage of low-level and high-level semantics at full scales. Additionally, we used Conditional Random Fields as a post-processing step to refine the predicted segmentation maps. We trained multiple neural networks for tumour volume segmentation and ensembled their outputs, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation and 0.76 on the challenge testing data set. For the progression-free survival prediction task, we propose a Cox proportional hazards regression combining clinical, radiomic, and deep learning features. Our survival prediction model achieved a concordance index of 0.82 in cross-validation and 0.62 on the challenge testing data set.
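    The concordance index quoted above measures how well the model's risk scores order patients by their progression-free survival times. A self-contained sketch of Harrell's C-index with toy data (illustrative values, not the challenge results):

    ```python
    import numpy as np

    def concordance_index(times, events, risk):
        """Harrell's concordance index: among comparable patient pairs
        (the earlier time is an observed event, not a censoring), count the
        fraction where the earlier-failing patient got the higher risk score;
        tied risk scores contribute 0.5."""
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                if times[i] < times[j] and events[i]:
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    times  = np.array([2.0, 5.0, 8.0, 10.0])  # progression-free survival times
    events = np.array([1, 1, 0, 1])           # 1 = progression observed, 0 = censored
    risk   = np.array([0.9, 0.6, 0.4, 0.1])   # model risk scores
    ci = concordance_index(times, events, risk)  # perfectly ordered pairs
    ```

    A C-index of 0.5 corresponds to random ordering and 1.0 to a perfect ranking, which frames the 0.82 cross-validation result.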

    Supervoxels for graph cuts-based deformable image registration using guided image filtering

    We propose combining a supervoxel-based image representation with graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to pixel/voxel-wise graph construction, the use of graph cuts in this context has been limited mainly to two-dimensional (2-D) applications. Our work overcomes some of these limitations by posing the problem on a graph of adjacent supervoxels, reducing the number of nodes from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that a relaxed graph representation of the image, followed by guided image filtering of the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset shows that our approach compares very favorably with state-of-the-art methods in continuous and discrete image registration, achieving a target registration error of 1.16 mm on average per landmark.
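    The role of guided image filtering here is edge-aware regularisation: the deformation field is smoothed while discontinuities at the guide image's intensity edges (e.g. the lung/chest-wall interface) are kept, which is what permits sliding motion. A 1-D sketch of a guided filter applied to a displacement profile (illustrative, not the paper's 3-D implementation):

    ```python
    import numpy as np

    def box(x, r):
        """Mean filter with radius r (1-D, edge-padded)."""
        xp = np.pad(x, r, mode='edge')
        k = np.ones(2 * r + 1) / (2 * r + 1)
        return np.convolve(xp, k, mode='valid')

    def guided_filter(guide, signal, r=2, eps=1e-2):
        """Guided filter: locally fits signal ~ a*guide + b, so smoothing of
        `signal` stops at edges of `guide`. Here the CT image guides the
        displacement field, preserving the jump at a sliding interface."""
        mean_i, mean_p = box(guide, r), box(signal, r)
        corr_ip = box(guide * signal, r)
        var_i = box(guide * guide, r) - mean_i ** 2
        a = (corr_ip - mean_i * mean_p) / (var_i + eps)
        b = mean_p - a * mean_i
        return box(a, r) * guide + box(b, r)

    # Displacement with a jump aligned to a CT edge (lung vs. chest wall)
    guide = np.array([0., 0., 0., 0., 1., 1., 1., 1.])     # CT intensity edge
    disp  = np.array([0., 0.2, 0., 0.1, 2., 1.9, 2.1, 2.])  # noisy displacements
    smooth = guided_filter(guide, disp)
    ```

    Noise within each region is averaged out, while the discontinuity between the two regions survives because it coincides with an edge in the guide.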

    Multi-channel groupwise registration to construct an ultrasound-specific fetal brain atlas

    In this paper, we describe a method to construct a 3D atlas from fetal brain ultrasound (US) volumes. A multi-channel groupwise Demons registration is proposed to simultaneously register a set of images from a population to a common reference space, thereby representing the population average. Similar to the standard Demons formulation, our approach takes as input an intensity image, but with an additional channel containing phase-based features extracted from the intensity channel. The proposed multi-channel atlas construction method is evaluated using a groupwise Dice overlap and is shown to outperform standard (single-channel) groupwise diffeomorphic Demons registration. This method is then used to construct an atlas from US brain volumes collected from a population of 39 healthy fetal subjects at 23 gestational weeks. The resulting atlas exhibits high structural overlap, and correspondence between the US-based atlas and an age-matched fetal MRI-based atlas is observed.
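    The multi-channel idea can be sketched as a Demons-style force in which each channel (intensity plus a phase-based feature) contributes its own gradient-driven term, and the contributions are summed. This is a simplified 1-D illustration of one update step, not the paper's diffeomorphic groupwise algorithm.

    ```python
    import numpy as np

    def demons_step(fixed, moving, alpha=1.0):
        """One multi-channel Demons-style force. Each channel contributes
        u_c = (m - f) * grad(f) / (|grad(f)|^2 + alpha*(m - f)^2),
        and the per-channel forces are summed into one displacement update.
        fixed, moving: (C, N) arrays of C channels over N positions."""
        update = np.zeros(fixed.shape[1])
        for f, m in zip(fixed, moving):
            grad = np.gradient(f)
            diff = m - f
            denom = grad ** 2 + alpha * diff ** 2
            # zero force where both the gradient and the mismatch vanish
            update += np.where(denom > 1e-8,
                               diff * grad / np.maximum(denom, 1e-8), 0.0)
        return update

    # Two channels with the same edge shifted by one sample in the moving image
    intensity_f = np.array([0., 0., 1., 1., 1.])
    intensity_m = np.array([0., 1., 1., 1., 1.])
    phase_f, phase_m = intensity_f.copy(), intensity_m.copy()
    u = demons_step(np.stack([intensity_f, phase_f]),
                    np.stack([intensity_m, phase_m]))
    ```

    The force is nonzero only where a channel disagrees across an intensity gradient, so adding the phase channel supplies extra driving force at structures where raw US intensity is unreliable.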