Deep Boosted Regression for MR to CT Synthesis
Attenuation correction is an essential requirement of positron emission
tomography (PET) image reconstruction to allow for accurate quantification.
However, attenuation correction is particularly challenging for PET-MRI as
neither PET nor magnetic resonance imaging (MRI) can directly image tissue
attenuation properties. MRI-based computed tomography (CT) synthesis has been
proposed as an alternative to physics-based and segmentation-based approaches
that assign a population-based tissue density value in order to generate an
attenuation map. We propose a novel deep fully convolutional neural network
that generates synthetic CTs in a recursive manner by gradually reducing the
residuals of the previous network, increasing the overall accuracy and
generalisability, while keeping the number of trainable parameters within
reasonable limits. The model is trained on a database of 20 pre-acquired MRI/CT
pairs, and a four-fold random bootstrapped validation with an 80:20 split is
performed. Quantitative results show that the proposed framework outperforms a
state-of-the-art atlas-based approach, decreasing the Mean Absolute Error (MAE)
from 131 HU to 68 HU for the synthetic CTs and reducing the PET reconstruction
error from 14.3% to 7.2%. Comment: Accepted at SASHIMI201
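The boosted-regression idea above can be sketched numerically: each stage fits the residual left by the accumulated estimate of the previous stages, so the synthetic CT is refined recursively. This is a toy 1-D illustration with linear stand-in regressors, not the paper's fully convolutional architecture; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mri = rng.normal(size=(64,))      # toy 1-D "MRI" input
true_ct = 2.0 * mri + 1.0         # toy ground-truth "CT"

def fit_stage(x, target):
    """Least-squares fit of a simple map x -> a*x + b to the target."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return lambda v: coef[0] * v + coef[1]

estimate = np.zeros_like(true_ct)
for stage in range(3):
    residual = true_ct - estimate  # what the earlier stages missed
    f = fit_stage(mri, residual)   # new stage learns only the residual
    estimate = estimate + f(mri)   # boosted update of the running estimate

mae = np.mean(np.abs(true_ct - estimate))
```

Because each stage targets the remaining residual rather than the full CT, later stages only need to correct small errors, which is what keeps the parameter count of each stage modest.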
Fast pseudo-CT synthesis from MRI T1-weighted images using a patch-based approach
MRI-based bone segmentation is a challenging task because bone tissue and air both present low signal intensity on MR images, making it difficult to accurately delimit the bone boundaries. However, estimating bone from MRI may reduce patient radiation exposure by removing the need for a patient-specific CT acquisition in several applications. In this work, we propose a fast GPU-based pseudo-CT generation from a patient-specific MRI T1-weighted image using a group-wise patch-based approach and a limited MRI and CT atlas dictionary. For every voxel in the input MR image, we compute the similarity of the patch containing that voxel with the patches of all MR images in the database that lie in a certain anatomical neighborhood. The pseudo-CT is obtained as a local weighted linear combination of the CT values of the corresponding patches. The algorithm was implemented on a GPU. The use of patch-based techniques allows a fast and accurate estimation of the pseudo-CT from MR T1-weighted images, with accuracy similar to that of the patient-specific CT. The experimental normalized cross-correlation (NCC) reaches 0.9324±0.0048 for an atlas with 10 datasets. The high NCC values indicate how accurately our method approximates the patient-specific CT. The GPU implementation led to a substantial decrease in computational time, making the approach suitable for real applications.
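The group-wise patch-based estimation described above can be sketched in 1-D: for each voxel, compare its surrounding MR patch with atlas MR patches in a small search neighborhood, and form the pseudo-CT value as a similarity-weighted combination of the corresponding atlas CT values. Parameter names (`patch_radius`, `search_radius`, `h`) are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def pseudo_ct_1d(mr, atlas_mr, atlas_ct, patch_radius=2, search_radius=3, h=0.5):
    """Toy 1-D patch-based pseudo-CT: similarity-weighted atlas CT values."""
    n = len(mr)
    pad = patch_radius
    mr_p = np.pad(mr, pad, mode="edge")
    amr_p = np.pad(atlas_mr, pad, mode="edge")
    out = np.zeros(n)
    for i in range(n):
        patch = mr_p[i:i + 2 * pad + 1]          # patch around input voxel i
        weights, values = [], []
        for j in range(max(0, i - search_radius), min(n, i + search_radius + 1)):
            apatch = amr_p[j:j + 2 * pad + 1]    # atlas patch in the neighborhood
            d2 = np.mean((patch - apatch) ** 2)  # patch dissimilarity
            weights.append(np.exp(-d2 / (h * h)))
            values.append(atlas_ct[j])
        w = np.asarray(weights)
        out[i] = np.dot(w, values) / w.sum()     # local weighted combination
    return out

# Toy atlas in which CT intensity is a known function of MR intensity.
mr = np.linspace(0.0, 1.0, 50)
atlas_mr = mr.copy()
atlas_ct = 100.0 * atlas_mr
pct = pseudo_ct_1d(mr, atlas_mr, atlas_ct)
```

With a real atlas dictionary this inner loop runs over several MRI/CT pairs, which is what makes the per-voxel work embarrassingly parallel and well suited to the GPU implementation the abstract describes.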
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications, including segmentation, regression, image generation and
representation learning. Components of the NiftyNet pipeline,
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission
Four-dimensional Cone Beam CT Reconstruction and Enhancement using a Temporal Non-Local Means Method
Four-dimensional Cone Beam Computed Tomography (4D-CBCT) has been developed
to provide respiratory phase resolved volumetric imaging in image guided
radiation therapy (IGRT). An inadequate number of projections in each phase bin
results in low quality 4D-CBCT images with obvious streaking artifacts. In this
work, we propose two novel 4D-CBCT algorithms: an iterative reconstruction
algorithm and an enhancement algorithm, utilizing a temporal nonlocal means
(TNLM) method. We define a TNLM energy term for a given set of 4D-CBCT images.
Minimization of this term favors those 4D-CBCT images such that any anatomical
features at one spatial point at one phase can be found in a nearby spatial
point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a
total energy containing a data fidelity term and the TNLM energy term. As for
the image enhancement, 4D-CBCT images generated by the FDK algorithm are
enhanced by minimizing the TNLM function while keeping the enhanced images
close to the FDK results. A forward-backward splitting algorithm and a
Gauss-Jacobi iteration method are employed to solve the problems. The
algorithms are implemented on GPU to achieve a high computational efficiency.
The reconstruction algorithm and the enhancement algorithm generate visually
similar 4D-CBCT images, both better than the FDK results. Quantitative
evaluations indicate that, compared with the FDK results, our reconstruction
method improves the contrast-to-noise ratio (CNR) by a factor of 2.56–3.13 and our
enhancement method increases the CNR by a factor of 2.75–3.33. The enhancement method
also removes over 80% of the streak artifacts from the FDK results. The total
computation time is ~460 sec for the reconstruction algorithm and ~610 sec for
the enhancement algorithm on an NVIDIA Tesla C1060 GPU card. Comment: 20 pages, 3 figures, 2 tables
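The reconstruction objective described above can be written schematically as a data-fidelity term plus the TNLM penalty; the notation below is an assumed reconstruction from the abstract's description, not the paper's exact formulation.

```latex
% x_t : 4D-CBCT image at phase t;  A_t : forward projection operator at phase t;
% g_t : measured projections;  w_{ij} : patch-similarity weight between voxel i
% at phase t and a nearby voxel j at the neighboring phase t+1.
E(\{x_t\}) = \sum_t \left\| A_t x_t - g_t \right\|_2^2
  + \lambda \sum_t \sum_{i,j} w_{ij} \, \bigl( x_t(i) - x_{t+1}(j) \bigr)^2
```

Minimizing the second term is small exactly when anatomical features at one spatial point in one phase reappear at a nearby spatial point in the neighboring phase, which is the temporal regularity the TNLM method exploits; the enhancement variant keeps only this term plus a proximity constraint to the FDK images.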