
    Cone-Beam Computed Tomography and Deformable Registration-Based “Dose of the Day” Calculations for Adaptive Proton Therapy

    Purpose: The aim of this work was to evaluate the feasibility of cone-beam computed tomography (CBCT) and deformable image registration (DIR)-based "dose of the day" calculations for adaptive proton therapy. Methods: Intensity-modulated radiation therapy (IMRT) and proton therapy plans were designed for 3 head and neck patients who required replanning and hence had a replan computed tomography (CT). Proton plans were generated for different beam arrangements and optimizations: intensity-modulated proton therapy and single-field uniform dose. In-house DIR software was used to generate a deformed CT by warping the planning CT onto the daily CBCT, which had a patient geometry similar to that of the replan CT. Dose distributions on the replan CT were considered the gold standard for "dose of the day" calculations and were compared with doses on the deformed CT (our method) and directly on the calibrated CBCT and rigidly aligned planning CT (alternative methods) in terms of dose difference (DD), by calculating the percentage of voxels whose DD was smaller than 2% of the prescribed dose (DD2%-pp) and the root mean square of the DD distribution (DDRMS). Results: Using a deformed CT, the DD2%-pp within the CBCT imaging volume was 93.2% ± 0.7% for IMRT and 87% ± 3% for proton plans. In a region of higher dose gradient, DD2%-pp was 94.3% ± 0.2% for IMRT but dropped to 74% ± 4% for proton plans. A larger number of treatment beams and single-field uniform dose optimization appear to make the proton plans less sensitive to DIR errors: for example, within the treated volume, the DDRMS was reduced from 2.6% ± 0.6% to 1.0% ± 1.3% of the prescribed dose when using single-field uniform dose optimization. Conclusions: Promising results were found for DIR- and CBCT-based proton dose calculations. Proton dose calculations were, however, more sensitive to registration errors than IMRT doses, particularly in high-dose-gradient regions.
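
    The two dose-comparison metrics defined in this abstract, DD2%-pp and DDRMS, reduce to a few lines of NumPy. The sketch below is illustrative only; the function name and toy dose grids are not from the paper:

```python
import numpy as np

def dose_difference_metrics(dose_ref, dose_test, prescribed_dose, mask=None):
    """Compute DD2%-pp and DDRMS between a reference and a test dose grid.

    DD2%-pp: percentage of voxels whose dose difference is smaller than
    2% of the prescribed dose.  DDRMS: root mean square of the dose
    difference, expressed as a percentage of the prescribed dose.
    """
    dd = np.abs(dose_ref - dose_test)
    if mask is not None:
        dd = dd[mask]                              # restrict to a region of interest
    dd_pct = 100.0 * dd / prescribed_dose          # DD in % of prescription
    dd2_pp = 100.0 * float(np.mean(dd_pct < 2.0))  # DD2%-pp
    dd_rms = float(np.sqrt(np.mean(dd_pct ** 2)))  # DDRMS in % of prescription
    return dd2_pp, dd_rms

# toy example: a 60 Gy prescription with small random dose discrepancies
ref = np.full((10, 10, 10), 60.0)
test = ref + np.random.default_rng(0).normal(0.0, 0.5, ref.shape)
dd2, rms = dose_difference_metrics(ref, test, prescribed_dose=60.0)
```

    In practice the mask would select the CBCT imaging volume or a high-dose-gradient region, as in the comparisons reported above.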

    A Multi-Channel Uncertainty-Aware Multi-Resolution Network for MR to CT Synthesis

    Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in the field of medical image analysis, both for quantification and diagnostic purposes. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory, involving many challenges, including large image size and limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose the use of an uncertainty-aware multi-channel multi-resolution 3D cascade network specifically aiming for whole-body MR to CT synthesis. The mean absolute error of the synthetic CT generated with the MultiResunc network (73.90 HU) compares favourably with multiple baseline CNNs such as 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU), and deep boosted regression (77.58 HU), showing superior synthesis performance. We ultimately exploit the extrapolation properties of the MultiRes networks on sub-regions of the body.
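
    The headline metric here, mean absolute error in Hounsfield units between real and synthetic CT, is simple to reproduce. A minimal sketch, with hypothetical names and toy volumes:

```python
import numpy as np

def mae_hu(ct_real, ct_synth, mask=None):
    """Mean absolute error in Hounsfield units between a real and a
    synthetic CT volume, optionally restricted to a body mask."""
    err = np.abs(ct_real.astype(np.float64) - ct_synth.astype(np.float64))
    if mask is not None:
        err = err[mask]
    return float(err.mean())

# toy example: a soft-tissue background with a uniform 50 HU synthesis error
real = np.zeros((8, 8, 8))
synth = real + 50.0
error = mae_hu(real, synth)   # 50.0 HU
```

    A body mask matters in whole-body synthesis, since averaging over the large air background would otherwise dilute the error.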

    Longitudinal Image Registration with Temporal-order and Subject-specificity Discrimination

    Morphological analysis of longitudinal MR images plays a key role in monitoring disease progression for prostate cancer patients who are placed under an active surveillance program. In this paper, we describe a learning-based image registration algorithm to quantify changes on regions of interest between a pair of images from the same patient, acquired at two different time points. Combining intensity-based similarity and gland segmentation as weak supervision, the population-data-trained registration networks significantly lowered the target registration errors (TREs) on holdout patient data, compared with those before registration and those from an iterative registration algorithm. Furthermore, this work provides a quantitative analysis on several longitudinal-data-sampling strategies and, in turn, we propose a novel regularisation method based on maximum mean discrepancy between differently-sampled training image pairs. Based on 216 3D MR images from 86 patients, we report a mean TRE of 5.6 mm and show statistically significant differences between the different training data sampling strategies. Comment: Accepted at MICCAI 202
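
    The TRE reported above is the distance between corresponding anatomical landmarks after registration. A minimal sketch, assuming landmarks are given in voxel coordinates with a known spacing (names and numbers below are illustrative, not from the paper):

```python
import numpy as np

def mean_tre_mm(landmarks_fixed, landmarks_warped, spacing=(1.0, 1.0, 1.0)):
    """Mean target registration error in mm between corresponding landmark
    sets, given voxel coordinates and per-axis voxel spacing in mm."""
    fixed_mm = np.asarray(landmarks_fixed, float) * np.asarray(spacing, float)
    warped_mm = np.asarray(landmarks_warped, float) * np.asarray(spacing, float)
    dists = np.linalg.norm(fixed_mm - warped_mm, axis=1)  # one distance per landmark
    return float(dists.mean())

# toy example: three landmarks, each residually displaced by 3 mm along one axis
fixed = [[10, 10, 10], [20, 20, 20], [30, 30, 30]]
warped = [[13, 10, 10], [20, 23, 20], [30, 30, 33]]
tre = mean_tre_mm(fixed, warped)   # 3.0 mm
```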

    Uncertainty-aware multi-resolution whole-body MR to CT synthesis

    Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in the field of medical image analysis, both for quantification and diagnostic purposes. Especially for brain applications, convolutional neural networks (CNNs) have proven to be a valuable tool in this image translation task, achieving state-of-the-art results. Full-body image synthesis, however, remains largely uncharted territory, bearing many challenges including a limited field of view and large image size, complex spatial context, and anatomical differences between image acquisitions at different time points. We propose a novel multi-resolution cascade 3D network for end-to-end full-body MR to CT synthesis. We show that our method outperforms popular CNNs like U-Net in 2D and 3D. We further propose to include uncertainty in our network as a measure of safety and to account for intrinsic noise and misalignment in the data.

    Diffusion tensor driven image registration: a deep learning approach

    Tracking microstructural changes in the developing brain relies on accurate inter-subject image registration. However, most methods rely on either structural or diffusion data to learn the spatial correspondences between two or more images, without taking into account the complementary information provided by using both. Here we propose a deep learning registration framework which combines the structural information provided by T2-weighted (T2w) images with the rich microstructural information offered by diffusion tensor imaging (DTI) scans. We perform a leave-one-out cross-validation study where we compare the performance of our multi-modality registration model with a baseline model trained on structural data only, in terms of Dice scores and differences in fractional anisotropy (FA) maps. Our results show that in terms of average Dice scores our model performs better in subcortical regions when compared to using structural data only. Moreover, average sum-of-squared differences between warped and fixed FA maps show that our proposed model performs better at aligning the diffusion data.
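
    Both evaluation measures used here, Dice overlap between segmentations and the sum of squared differences between warped and fixed FA maps, are standard and easy to state precisely. A sketch with hypothetical names and toy data:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentations (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def fa_ssd(fa_warped, fa_fixed, mask=None):
    """Sum of squared differences between warped and fixed FA maps."""
    diff = np.asarray(fa_warped, float) - np.asarray(fa_fixed, float)
    if mask is not None:
        diff = diff[mask]
    return float(np.sum(diff ** 2))

# toy example: two partially overlapping slabs in a 10^3 volume
a = np.zeros((10, 10, 10), bool); a[:, :, :6] = True   # 600 voxels
b = np.zeros((10, 10, 10), bool); b[:, :, 4:] = True   # 600 voxels
d = dice(a, b)   # intersection is 200 voxels -> 2*200/1200 = 1/3
```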

    Areas of normal pulmonary parenchyma on HRCT exhibit increased FDG PET signal in IPF patients

    Purpose: Patients with idiopathic pulmonary fibrosis (IPF) show increased PET signal at sites of morphological abnormality on high-resolution computed tomography (HRCT). The purpose of this investigation was to investigate the PET signal at sites of normal-appearing lung on HRCT in IPF. Methods: Consecutive IPF patients (22 men, 3 women) were prospectively recruited. The patients underwent 18F-FDG PET/HRCT. The pulmonary imaging findings in the IPF patients were compared to the findings in a control population. Pulmonary uptake of 18F-FDG (mean SUV) was quantified at sites of morphologically normal parenchyma on HRCT. SUVs were also corrected for tissue fraction (TF). The mean SUV in IPF patients was compared with that in 25 controls (patients with lymphoma in remission or suspected paraneoplastic syndrome with normal PET/CT appearances). Results: The pulmonary SUV (mean ± SD) uncorrected for TF was 0.48 ± 0.14 in the controls and 0.78 ± 0.24 in normal lung regions of IPF patients (p < 0.001). The TF-corrected mean SUV was 2.24 ± 0.29 in the controls and 3.24 ± 0.84 in IPF patients (p < 0.001). Conclusion: IPF patients have increased pulmonary uptake of 18F-FDG on PET in areas of lung with a normal morphological appearance on HRCT. This may have implications for determining disease mechanisms and treatment monitoring.
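
    The tissue-fraction correction mentioned above is commonly done by estimating, from the CT density, how much of each lung voxel is tissue rather than air, and dividing the SUV by that fraction. The sketch below assumes a simple linear air/tissue mixture model (air at -1000 HU, tissue near 0 HU); the exact model used in the paper is not stated here, so treat this as illustrative:

```python
import numpy as np

def tissue_fraction_from_hu(hu):
    """Estimate per-voxel tissue fraction from CT Hounsfield units,
    assuming a linear mixture of air (-1000 HU) and tissue (~0 HU)."""
    air_fraction = np.clip(np.asarray(hu, float) / -1000.0, 0.0, 1.0)
    tf = 1.0 - air_fraction
    return np.clip(tf, 1e-3, 1.0)   # floor avoids division by zero in pure air

def tf_corrected_suv(suv, hu):
    """Divide the measured SUV by the estimated tissue fraction."""
    return np.asarray(suv, float) / tissue_fraction_from_hu(hu)

# toy example: an aerated lung voxel at -800 HU (tissue fraction 0.2), SUV 0.5
suv_tfc = tf_corrected_suv([0.5], [-800.0])   # -> 2.5
```

    This is why TF-corrected SUVs in the abstract are several times larger than the uncorrected ones: normal lung is mostly air.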

    NiftySim: A GPU-based nonlinear finite element package for simulation of soft tissue biomechanics

    Purpose NiftySim, an open-source finite element toolkit, has been designed to allow incorporation of high-performance soft tissue simulation capabilities into biomedical applications. The toolkit provides the option of execution on fast graphics processing unit (GPU) hardware, numerous constitutive models and solid-element options, membrane and shell elements, and contact modelling facilities, in a simple-to-use library. Methods The toolkit is founded on the total Lagrangian explicit dynamics (TLED) algorithm, which has been shown to be efficient and accurate for simulation of soft tissues. The base code is written in C++, and GPU execution is achieved using the nVidia CUDA framework. In most cases, interaction with the underlying solvers can be achieved through a single Simulator class, which may be embedded directly in third-party applications such as surgical guidance systems. Advanced capabilities such as contact modelling and nonlinear constitutive models are also provided, as are more experimental technologies like reduced order modelling. A consistent description of the underlying solution algorithm, its implementation with a focus on GPU execution, and examples of the toolkit's usage in biomedical applications are provided. Results Efficient mapping of the TLED algorithm to parallel hardware results in very high computational performance, far exceeding that available in commercial packages. Conclusion The NiftySim toolkit provides high-performance soft tissue simulation capabilities using GPU technology for biomechanical simulation research applications in medical image computing, surgical simulation, and surgical guidance applications.

    Symptom clusters in COVID-19: A potential clinical prediction tool from the COVID Symptom Study app

    As no one symptom can predict disease severity or the need for dedicated medical support in coronavirus disease 2019 (COVID-19), we asked whether documenting symptom time series over the first few days informs outcome. Unsupervised time series clustering over symptom presentation was performed on data collected from a training dataset of completed cases enlisted early from the COVID Symptom Study smartphone application, yielding six distinct symptom presentations. Clustering was validated on an independent replication dataset between 1 and 28 May 2020. Using the first 5 days of symptom logging, the area under the receiver operating characteristic curve (ROC-AUC) for predicting the need for respiratory support was 78.8%, substantially outperforming personal characteristics alone (ROC-AUC 69.5%). Such an approach could be used to monitor at-risk patients and predict medical resource requirements days before they are required.
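
    The ROC-AUC figures quoted above have a simple probabilistic reading: the chance that a randomly chosen patient who needed respiratory support was scored higher than one who did not. A minimal rank-based sketch (hypothetical names and toy scores, not the study's data):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one,
    with ties counted as half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()    # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# toy example: 3 negatives, 2 positives, imperfect separation
labels = [0, 0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.3]
auc = roc_auc(labels, scores)   # 4 wins out of 6 pairs -> ~0.667
```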

    SEGMA: an automatic SEGMentation Approach for human brain MRI using sliding window and random forests

    Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course
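
    The core of the sliding-window approach described above is turning each voxel's local neighbourhood into a high-dimensional feature vector that a classifier (here, a random forest) can label. A minimal sketch of the window-extraction step, with illustrative names and a toy volume:

```python
import numpy as np

def patch_features(volume, center, half_size=2):
    """Flatten the intensities in a cubic window around a voxel into a
    feature vector, as in sliding-window voxel classification."""
    z, y, x = center
    h = half_size
    patch = volume[z - h:z + h + 1, y - h:y + h + 1, x - h:x + h + 1]
    return patch.ravel()

# toy example: a 5x5x5 window yields a 125-dimensional feature vector,
# which would be fed to a multi-class random forest during training
vol = np.random.default_rng(0).random((20, 20, 20))
feat = patch_features(vol, center=(10, 10, 10))
```

    In the full method the feature vector would also include multi-scale and intensity-context features, and only labeled voxels are needed for training, which is what makes learning from partially labeled datasets possible.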

    The importance of group-wise registration in tract based spatial statistics study of neurodegeneration: a simulation study in Alzheimer's disease.

    Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of an FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template, or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity and (b) sensitivity in a simulation study, and (c) a real-world comparison of Alzheimer's disease (AD) patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and white matter integrity similar to what would be seen in AD patients, in order to provide a "ground truth" for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations.