A Spatiotemporal Volumetric Interpolation Network for 4D Dynamic Medical Images
Dynamic medical imaging is usually limited in application due to the large
radiation doses and longer image scanning and reconstruction times. Existing
methods attempt to reduce the dynamic sequence by interpolating the volumes
between the acquired image volumes. However, these methods are limited to 2D images and/or are unable to support large variations in the motion
between the image volume sequences. In this paper, we present a spatiotemporal
volumetric interpolation network (SVIN) designed for 4D dynamic medical images.
SVIN introduces dual networks: first is the spatiotemporal motion network that
leverages the 3D convolutional neural network (CNN) for unsupervised parametric
volumetric registration to derive a spatiotemporal motion field from two image volumes; the second is the sequential volumetric interpolation network, which
uses the derived motion field to interpolate image volumes, together with a new
regression-based module to characterize the periodic motion cycles in
functional organ structures. We also introduce an adaptive multi-scale
architecture to capture large volumetric anatomical motions. Experimental
results demonstrated that our SVIN outperformed state-of-the-art temporal
medical interpolation methods and natural video interpolation methods that have
been extended to support volumetric images. Our ablation study further
exemplified that our motion network was able to better represent the large
functional motion compared with the state-of-the-art unsupervised medical
registration methods.
Comment: 10 pages, 8 figures, Conference on Computer Vision and Pattern Recognition (CVPR) 202
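The core interpolation step described above — using a derived motion field to synthesize volumes between two acquired ones — can be sketched as follows. This is an illustrative, simplified version, not the SVIN implementation: it uses a given displacement field and nearest-neighbour backward warping (SVIN learns the field with a 3D CNN and would use trilinear sampling), and all sizes are toy values.

```python
import numpy as np

def warp_volume(vol, disp, t):
    """Backward-warp `vol` by the scaled displacement field t*disp.
    vol: (D, H, W) image volume; disp: (3, D, H, W) voxel displacements;
    t in [0, 1] selects the temporal position of the interpolated volume."""
    D, H, W = vol.shape
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                                indexing="ij"), axis=0).astype(float)
    coords = grid + t * disp                  # sampling locations in vol's frame
    coords = np.round(coords).astype(int)     # nearest-neighbour for brevity
    for ax, size in enumerate((D, H, W)):
        coords[ax] = np.clip(coords[ax], 0, size - 1)
    return vol[coords[0], coords[1], coords[2]]

# Toy demo: the full motion moves content by 2 voxels along the first axis,
# so at t = 0.5 the bright voxel should appear shifted by 1 voxel.
vol = np.zeros((8, 8, 8)); vol[2, 4, 4] = 1.0
disp = np.zeros((3, 8, 8, 8)); disp[0] += -2.0
mid = warp_volume(vol, disp, 0.5)             # interpolated midpoint volume
```

A sequence of interpolated volumes is obtained by sweeping `t` over (0, 1); SVIN additionally regresses a periodic motion-cycle model on top of this.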
Dynamic Cone-beam CT Reconstruction using Spatial and Temporal Implicit Neural Representation Learning (STINR)
Objective: Dynamic cone-beam CT (CBCT) imaging is highly desired in
image-guided radiation therapy to provide volumetric images with high spatial
and temporal resolutions to enable applications including tumor motion
tracking/prediction and intra-delivery dose calculation/accumulation. However,
the dynamic CBCT reconstruction is a substantially challenging spatiotemporal
inverse problem, due to the extremely limited projection samples available for each CBCT reconstruction (one projection for one CBCT volume). Approach: We
developed a simultaneous spatial and temporal implicit neural representation
(STINR) method for dynamic CBCT reconstruction. STINR mapped the unknown image
and the evolution of its motion into spatial and temporal multi-layer
perceptrons (MLPs), and iteratively optimized the neuron weighting of the MLPs
via acquired projections to represent the dynamic CBCT series. In addition to
the MLPs, we also introduced prior knowledge, in the form of principal component
analysis (PCA)-based patient-specific motion models, to reduce the complexity
of the temporal INRs to address the ill-conditioned dynamic CBCT reconstruction
problem. We used the extended cardiac torso (XCAT) phantom to simulate
different lung motion/anatomy scenarios to evaluate STINR. The scenarios
contain motion variations including motion baseline shifts, motion
amplitude/frequency variations, and motion non-periodicity. The scenarios also
contain inter-scan anatomical variations including tumor shrinkage and tumor
position change. Main results: STINR shows consistently higher image
reconstruction and motion tracking accuracy than a traditional PCA-based method
and a polynomial-fitting based neural representation method. STINR tracks the
lung tumor to an averaged center-of-mass error of <2 mm, with corresponding
relative errors of reconstructed dynamic CBCTs <10%.
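The STINR idea of mapping the image and its motion into separate MLPs can be sketched with a tiny numpy forward pass. This is a hedged illustration, not the paper's architecture: layer widths, the random stand-in PCA motion basis, and the `sample` composition are all assumptions made for brevity, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny fully connected network with tanh hidden layers."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return h @ W + b

def init(sizes):
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Spatial INR: (x, y, z) -> reference attenuation value.
spatial = init([3, 32, 32, 1])
# Temporal INR: t -> K coefficients of a PCA-based motion model.
K = 3
temporal = init([1, 16, K])
basis = rng.normal(0, 0.1, (K, 3))   # stand-in for patient-specific PCA modes

def sample(x, t):
    """Dynamic volume value at point x and time t: displace the query point
    by the PCA motion model, then evaluate the spatial INR there."""
    coeff = mlp(np.atleast_2d([t]), temporal)   # (1, K) motion coefficients
    disp = coeff @ basis                        # (1, 3) displacement
    return mlp(np.atleast_2d(x) + disp, spatial)[0, 0]

v = sample([0.1, 0.2, 0.3], 0.5)
```

In the actual method the MLP weights are iteratively fitted so that simulated projections of the represented dynamic volume match the acquired projections.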
Dynamic CBCT Imaging using Prior Model-Free Spatiotemporal Implicit Neural Representation (PMF-STINR)
Dynamic cone-beam computed tomography (CBCT) can capture
high-spatial-resolution, time-varying images for motion monitoring, patient
setup, and adaptive planning of radiotherapy. However, dynamic CBCT
reconstruction is an extremely ill-posed spatiotemporal inverse problem, as
each CBCT volume in the dynamic sequence is only captured by one or a few X-ray
projections. We developed a machine learning-based technique, prior-model-free
spatiotemporal implicit neural representation (PMF-STINR), to reconstruct
dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a
joint image reconstruction and registration approach to address the
under-sampling challenge. Specifically, PMF-STINR uses spatial implicit neural
representation to reconstruct a reference CBCT volume, and it applies temporal
INR to represent the intra-scan dynamic motion with respect to the reference
CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a
learning-based B-spline motion model to capture time-varying deformable motion
during the reconstruction. Compared with previous methods, the spatial INR, the
temporal INR, and the B-spline model of PMF-STINR are all learned on the fly
during reconstruction in a one-shot fashion, without using any patient-specific
prior knowledge or motion sorting/binning. PMF-STINR was evaluated via digital
phantom simulations, physical phantom measurements, and a multi-institutional
patient dataset featuring various imaging protocols (half-fan/full-fan, full
sampling/sparse sampling, different energy and mAs settings, etc.). The results
showed that the one-shot learning-based PMF-STINR can accurately and robustly
reconstruct dynamic CBCTs and capture highly irregular motion with high
temporal (~0.1s) resolution and sub-millimeter accuracy. It can be a promising
tool for motion management by offering richer motion information than
traditional 4D-CBCTs.
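The learning-based B-spline motion model that PMF-STINR couples with its temporal INR rests on uniform cubic B-spline interpolation of control points over time. The sketch below shows only that building block, in one dimension, with hand-picked control points; in the method itself the control points are learned on the fly during reconstruction.

```python
import numpy as np

def cubic_bspline_basis(u):
    """Uniform cubic B-spline blending weights for fractional position u in [0, 1).
    The four weights always sum to 1 (partition of unity)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def eval_motion(ctrl, t):
    """Evaluate a time-varying motion coefficient from control points `ctrl`
    (shape (N,), N >= 4) at normalized time t in [0, 1]."""
    n = len(ctrl) - 3                        # number of spline spans
    s = min(t * n, n - 1e-9)
    i, u = int(s), s - int(s)
    return cubic_bspline_basis(u) @ ctrl[i:i + 4]

ctrl = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])   # one breathing-like bump
vals = [eval_motion(ctrl, t) for t in np.linspace(0, 1, 5)]
```

Because the basis is smooth and local, a small set of control points can represent irregular, time-varying deformation at fine (~0.1 s) temporal resolution.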
A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth.
Longitudinal characterization of early brain growth in utero has been limited by a number of challenges: fetal imaging itself, the rapid change in the size, shape, and volume of the developing brain, and the consequent lack of suitable algorithms for fetal brain image analysis. There is a need for an improved digital brain atlas of the spatiotemporal maturation of the fetal brain extending over the key developmental periods. We have developed an algorithm for the construction of an unbiased four-dimensional atlas of the developing fetal brain by integrating symmetric diffeomorphic deformable registration in space with kernel regression in age. We applied this new algorithm to construct a spatiotemporal atlas from MRI of 81 normal fetuses scanned between 19 and 39 weeks of gestation and labeled the structures of the developing brain. We evaluated the use of this atlas and additional individual fetal brain MRI atlases for completely automatic multi-atlas segmentation of fetal brain MRI. The atlas is available online as a reference for anatomy and for registration and segmentation, to aid in connectivity analysis, and for groupwise and longitudinal analysis of early brain growth.
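The "kernel regression in age" component of the atlas construction can be sketched in a few lines: subjects are weighted by a Gaussian kernel over gestational age and averaged. This is a simplified stand-in — the kernel bandwidth, toy volumes, and plain averaging are assumptions; in the actual pipeline the averaging follows symmetric diffeomorphic registration of each subject into the atlas space.

```python
import numpy as np

def age_kernel_weights(ages, target_age, sigma=1.0):
    """Normalized Gaussian kernel-regression weights over subject
    gestational ages (in weeks) for a given target age."""
    w = np.exp(-0.5 * ((np.asarray(ages, float) - target_age) / sigma) ** 2)
    return w / w.sum()

def atlas_at_age(volumes, ages, target_age, sigma=1.0):
    """Weighted average of (already spatially registered) volumes,
    giving the atlas template at `target_age`."""
    w = age_kernel_weights(ages, target_age, sigma)
    return np.tensordot(w, np.asarray(volumes), axes=1)

ages = [20, 25, 30, 35]                                # weeks of gestation
vols = [np.full((4, 4, 4), a, float) for a in ages]    # toy "volumes"
atlas = atlas_at_age(vols, ages, 25.0, sigma=2.0)
```

Sweeping `target_age` over the 19-39-week range yields the four-dimensional (space + age) atlas.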
TiAVox: Time-aware Attenuation Voxels for Sparse-view 4D DSA Reconstruction
Four-dimensional Digital Subtraction Angiography (4D DSA) plays a critical
role in the diagnosis of many medical diseases, such as Arteriovenous
Malformations (AVM) and Arteriovenous Fistulas (AVF). Despite its significant
application value, the reconstruction of 4D DSA demands numerous views to
effectively model the intricate vessels and radiocontrast flow, thereby
implying a significant radiation dose. To address this high radiation issue, we
propose a Time-aware Attenuation Voxel (TiAVox) approach for sparse-view 4D DSA
reconstruction, which paves the way for high-quality 4D imaging. Additionally,
2D and 3D DSA imaging results can be generated from the reconstructed 4D DSA
images. TiAVox introduces 4D attenuation voxel grids, which reflect attenuation
properties from both spatial and temporal dimensions. It is optimized by
minimizing discrepancies between the rendered images and sparse 2D DSA images.
Without any neural network involved, TiAVox offers direct physical interpretability: the parameters of each learnable voxel represent attenuation coefficients. We validated the TiAVox approach on both clinical and
simulated datasets, achieving a peak signal-to-noise ratio (PSNR) of 31.23 for novel view synthesis using only 30 views on the clinically sourced dataset,
whereas traditional Feldkamp-Davis-Kress methods required 133 views. Similarly,
with merely 10 views from the synthetic dataset, TiAVox yielded a PSNR of 34.32
for novel view synthesis and 41.40 for 3D reconstruction. We also executed
ablation studies to corroborate the essential components of TiAVox. The code will be publicly available.
Comment: 10 pages, 8 figures
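The TiAVox optimization described above — a 4D grid of attenuation coefficients fitted by minimizing the discrepancy between rendered and measured 2D images — can be illustrated with a minimal sketch. All of the following are simplifying assumptions: a single fixed projection direction (along the depth axis), a flat target image, unit step length, and plain gradient descent; the actual method renders along arbitrary X-ray geometries.

```python
import numpy as np

# 4D attenuation grid: (time, depth, height, width); every entry is a
# learnable attenuation coefficient (toy sizes for illustration).
T, D, H, W = 4, 8, 16, 16
grid = np.zeros((T, D, H, W))

def render(grid, t, step=1.0):
    """Beer-Lambert projection along the depth axis at time index t:
    I = exp(-sum_d mu_d * step)."""
    return np.exp(-grid[t].sum(axis=0) * step)

def sgd_step(grid, t, target, lr=0.1):
    """One gradient-descent step on the per-pixel squared rendering error."""
    img = render(grid, t)
    resid = img - target                 # (H, W)
    # d(img)/d(mu_d) = -img, identical for every depth voxel on the ray.
    grad = 2 * resid * (-img)            # (H, W) gradient w.r.t. each mu_d
    grid[t] -= lr * grad[None]           # broadcast the update over depth
    return (resid ** 2).mean()

target = np.full((H, W), 0.5)            # a flat stand-in "DSA" projection
losses = [sgd_step(grid, 0, target) for _ in range(200)]
```

Since the voxel values are the attenuation coefficients themselves, the fitted grid is directly physically interpretable, which is the property the abstract highlights.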
SRflow: Deep learning based super-resolution of 4D-flow MRI data
Exploiting 4D-flow magnetic resonance imaging (MRI) data to quantify hemodynamics requires an adequate spatio-temporal vector-field resolution at a low noise level. To address this challenge, we provide a learned solution to super-resolve in vivo 4D-flow MRI data at the post-processing level. We propose a deep convolutional neural network (CNN) that learns the inter-scale relationship of the velocity vector map and leverages an efficient residual learning scheme to make it computationally feasible. A novel, direction-sensitive, and robust loss function is crucial to learning vector-field data. We present a detailed comparative study between the proposed super-resolution and the conventional cubic B-spline based vector-field super-resolution. Our method improves the peak-velocity-to-noise ratio of the flow field by 10% and 30% for in vivo cardiovascular and cerebrovascular data, respectively, for 4× super-resolution over the state-of-the-art cubic B-spline. Notably, our method offers 10× faster inference than the cubic B-spline. The proposed approach for super-resolution of 4D-flow data could improve the subsequent calculation of hemodynamic quantities.
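One way to make a loss "direction-sensitive" for vector-field data, as the abstract emphasizes, is to combine a magnitude term with a term penalizing angular misalignment. The sketch below is an assumption-laden illustration of that idea, not the paper's actual loss: the L1 magnitude term, the cosine-similarity direction term, and the weight `w_dir` are all hypothetical choices.

```python
import numpy as np

def direction_sensitive_loss(pred, true, w_dir=0.5, eps=1e-8):
    """L1 magnitude error plus a (1 - cosine similarity) direction error
    over velocity vector fields of shape (..., 3)."""
    l1 = np.abs(pred - true).mean()
    dot = (pred * true).sum(axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(true, axis=-1)
    cos = dot / (norms + eps)            # per-voxel cosine similarity
    return l1 + w_dir * (1.0 - cos).mean()

true = np.zeros((4, 4, 3)); true[..., 0] = 1.0        # uniform +x flow
rotated = np.zeros_like(true); rotated[..., 1] = 1.0  # same speed, +y flow

loss_aligned = direction_sensitive_loss(true.copy(), true)
loss_rotated = direction_sensitive_loss(rotated, true)
```

A purely magnitude-based loss would treat the rotated field as only moderately wrong; the direction term adds an explicit penalty for pointing the wrong way, which matters when downstream hemodynamic quantities depend on flow direction.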
Knowledge-driven deep learning for fast MR imaging: undersampled MR image reconstruction from supervised to un-supervised learning
Deep learning (DL) has emerged as a leading approach in accelerating MR
imaging. It employs deep neural networks to extract knowledge from available
datasets and then applies the trained networks to reconstruct accurate images
from limited measurements. Unlike natural image restoration problems, MR
imaging involves physics-based imaging processes, unique data properties, and
diverse imaging tasks. This domain knowledge needs to be integrated with
data-driven approaches. Our review will introduce the significant challenges
faced by such knowledge-driven DL approaches in the context of fast MR imaging
along with several notable solutions, which include learning neural networks
and addressing different imaging application scenarios. We also trace the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are surveyed, along with discussions of open questions and future directions, which are critical for reliable imaging systems.
Comment: 46 pages, 5 figures, 1 table