Interactive Multigrid Refinement for Deformable Image Registration
Deformable image registration establishes a spatial mapping between corresponding locations in two images and underpins important applications in radiotherapy. Although numerous methods attempt to register deformable medical images automatically, such as salient-feature-based registration (SFBR), free-form deformation (FFD), and demons, no automatic registration method is perfect, and no generic automatic algorithm has been shown to work reliably in clinical applications, because the deformation field is often complex and cannot be estimated well by current automatic deformable registration methods. This paper focuses on how to revise registration results interactively for deformable image registration: the transformed image can be revised manually and locally, in a hierarchical multigrid manner, until it registers well with the reference image. The proposed method uses multilevel B-splines to interactively revise the deformable transformation in the region where the reference image and the transformed image overlap. The resulting deformation controls the shape of the transformed image and either produces an accurate registration on its own or improves the results of other registration methods. Experimental results on clinical medical images for adaptive radiotherapy demonstrate the effectiveness of the proposed method.
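The abstract above does not include code; as a rough, generic illustration of how a hierarchical multilevel B-spline deformation composes coarse and fine control-point grids, the following 1-D sketch (all function names, spacings, and coefficient values are invented for the demo, not taken from the paper) evaluates a displacement as the sum of per-level cubic B-spline contributions:

```python
# Illustrative 1-D multilevel B-spline deformation: each level has a
# control-point grid at a finer spacing and adds a correction on top
# of the coarser levels, mirroring hierarchical multigrid refinement.

def bspline_basis(t):
    """Uniform cubic B-spline basis weights for local parameter t in [0, 1)."""
    return [
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ]

def evaluate_level(x, spacing, coeffs):
    """Displacement at position x contributed by one control-point level."""
    i = int(x // spacing)
    t = (x % spacing) / spacing
    w = bspline_basis(t)
    # control points i-1 .. i+2 influence x; clamp indices at the borders
    total = 0.0
    for k in range(4):
        idx = min(max(i - 1 + k, 0), len(coeffs) - 1)
        total += w[k] * coeffs[idx]
    return total

def multilevel_displacement(x, levels):
    """Sum displacements over coarse-to-fine (spacing, coefficients) levels."""
    return sum(evaluate_level(x, s, c) for s, c in levels)

# coarse level (spacing 8) carries a broad correction; the fine level
# (spacing 2) is zero here, i.e. no local revision has been made yet
levels = [(8.0, [0.0, 1.0, 0.0, 0.0]), (2.0, [0.0] * 8)]
print(multilevel_displacement(4.0, levels))
```

Because the basis weights always sum to one, each level's contribution is a smooth, locally supported blend of its control points; an interactive revision would only edit the fine-level coefficients near the user's correction.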
Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains
During the process of computed tomography (CT), metallic implants often cause
disruptive artifacts in the reconstructed images, impeding accurate diagnosis.
Several supervised deep learning-based approaches have been proposed for
reducing metal artifacts (MAR). However, these methods heavily rely on training
with simulated data, as obtaining paired metal artifact CT and clean CT data in
clinical settings is challenging. This limitation can lead to decreased
performance when applying these methods in clinical practice. Existing
unsupervised MAR methods, whether based on learning or not, typically operate
within a single domain, either in the image domain or the sinogram domain. In
this paper, we propose an unsupervised MAR method based on the diffusion model,
a generative model with a high capacity to represent data distributions.
Specifically, we first train a diffusion model using CT images without metal
artifacts. Subsequently, we iteratively utilize the priors embedded within the
pre-trained diffusion model in both the sinogram and image domains to restore
the degraded portions caused by metal artifacts. This dual-domain processing
empowers our approach to outperform existing unsupervised MAR methods,
including another diffusion-based MAR method, as we validate qualitatively
and quantitatively on synthetic datasets. Moreover, our method demonstrates
superior visual results compared to both supervised and unsupervised methods
on clinical datasets.
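The paper's algorithm is not spelled out in the abstract; purely as a toy analogue of the dual-domain idea, the sketch below alternates a simple image-domain prior step (mild smoothing, standing in for one reverse diffusion step) with a sinogram-domain data-consistency step that reimposes the measurements outside the metal trace. All names, the 1-D setup, and the smoothing kernel are invented for illustration:

```python
# Toy dual-domain restoration: alternate (a) an image-domain prior step
# and (b) data consistency on the trusted (metal-free) measurements.

def smooth_prior_step(x):
    """Prior step: mild neighbor averaging (a stand-in for the diffusion
    prior; purely illustrative)."""
    out = x[:]
    for i in range(1, len(x) - 1):
        out[i] = 0.25 * x[i - 1] + 0.5 * x[i] + 0.25 * x[i + 1]
    return out

def data_consistency_step(x, measured, trusted):
    """Data step: reimpose measurements where trusted[i] is True
    (False marks the corrupted metal trace)."""
    return [m if ok else xi for xi, m, ok in zip(x, measured, trusted)]

def restore(measured, trusted, iters=50):
    # initialize the corrupted trace to zero and iterate both steps
    x = [m if ok else 0.0 for m, ok in zip(measured, trusted)]
    for _ in range(iters):
        x = smooth_prior_step(x)
        x = data_consistency_step(x, measured, trusted)
    return x

truth = [1.0] * 8
trusted = [True, True, True, False, False, True, True, True]
measured = [t if ok else 9.0 for t, ok in zip(truth, trusted)]  # artifact
rec = restore(measured, trusted)
```

The corrupted entries are pulled toward values consistent with both the prior and the surrounding trusted data, which is the same interplay the paper exploits, with a learned diffusion prior in place of the smoothing step.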
An edge-directed interpolation method for fetal spine MR images
Abstract
Background
Fetal spinal magnetic resonance imaging (MRI) is a routine prenatal examination for assessing fetal development, especially when spinal malformations are suspected but ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution.
High-resolution MR images directly enhance readability and improve diagnostic accuracy. Image interpolation to higher resolution is therefore required in clinical practice, yet many methods fail to preserve edge structures. Edges carry important structural information that doctors rely on to detect suspicious findings, classify malformations, and make a correct diagnosis. Effective interpolation with well-preserved edge structures remains challenging.
Method
In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge information from the Canny edge detector to guide subsequent pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated to high-resolution (HR) images at the target factor with the bilinear method. Then edge information from the LR and HR images is fed into a twofold strategy to sharpen or soften edge structures. Finally, an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI).
Results
All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images.
Conclusions
The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may produce crisper edges, while the other three methods are sensitive to noise and artifacts.
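The abstract does not give the interpolation rule itself; as a minimal 1-D analogue of the sharpen-or-soften idea (not the paper's 2-D, four-orientation algorithm, and with a made-up edge threshold), the sketch below performs linear interpolation between neighbors except across a detected edge, where it snaps to one side so the edge is not blurred:

```python
def upsample_edge_directed(signal, edge_thresh=0.5):
    """Double the sampling rate of a 1-D signal. Smooth regions get the
    linear midpoint; across a detected edge (large jump) the new sample
    snaps to a neighbor, keeping the transition sharp. The threshold is
    illustrative only."""
    out = []
    for a, b in zip(signal, signal[1:]):
        out.append(a)
        if abs(b - a) > edge_thresh:
            out.append(b)              # edge: snap to one side, stay crisp
        else:
            out.append((a + b) / 2.0)  # smooth region: linear interpolation
    out.append(signal[-1])
    return out

step = [0.0, 0.0, 1.0, 1.0]
print(upsample_edge_directed(step))  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
```

Plain bilinear interpolation would insert a 0.5 at the jump and blur the step; the edge-directed rule keeps the transition a single-sample edge, which is the behavior the paper's metrics and ROI analysis reward.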
Diffusion Probabilistic Priors for Zero-Shot Low-Dose CT Image Denoising
Denoising low-dose computed tomography (CT) images is a critical task in
medical image computing. Supervised deep learning-based approaches have made
significant advancements in this area in recent years. However, these methods
typically require pairs of low-dose and normal-dose CT images for training,
which are challenging to obtain in clinical settings. Existing unsupervised
deep learning-based methods often require training with a large number of
low-dose CT images or rely on specially designed data acquisition processes to
obtain training data. To address these limitations, we propose a novel
unsupervised method that only utilizes normal-dose CT images during training,
enabling zero-shot denoising of low-dose CT images. Our method leverages the
diffusion model, a powerful generative model. We begin by training a cascaded
unconditional diffusion model capable of generating high-quality normal-dose CT
images from low-resolution to high-resolution. The cascaded architecture makes
the training of high-resolution diffusion models more feasible. Subsequently,
we introduce low-dose CT images into the reverse process of the diffusion model
as likelihood, combined with the priors provided by the diffusion model and
iteratively solve multiple maximum a posteriori (MAP) problems to achieve
denoising. Additionally, we propose methods to adaptively adjust the
coefficients that balance the likelihood and prior in MAP estimations, allowing
for adaptation to different noise levels in low-dose CT images. We test our
method on low-dose CT datasets of different regions with varying dose levels.
The results demonstrate that our method outperforms the state-of-the-art
unsupervised method and surpasses several supervised deep learning-based
methods. Code is available at https://github.com/DeepXuan/Dn-Dp.
Three-Dimensional Medical Image Fusion with Deformable Cross-Attention
Multimodal medical image fusion plays an instrumental role in several areas
of medical image processing, particularly in disease recognition and tumor
detection. Traditional fusion methods tend to process each modality
independently before combining the features and reconstructing the fusion
image. However, this approach often neglects the fundamental commonalities and
disparities between multimodal information. Furthermore, the prevailing
methodologies are largely confined to fusing two-dimensional (2D) medical image
slices, leading to a lack of contextual supervision in the fusion images and
subsequently, a decreased information yield for physicians relative to
three-dimensional (3D) images. In this study, we introduce an innovative
unsupervised feature mutual learning fusion network designed to rectify these
limitations. Our approach incorporates a Deformable Cross Feature Blend (DCFB)
module that facilitates the dual modalities in discerning their respective
similarities and differences. We have applied our model to the fusion of 3D MRI
and PET images obtained from 660 patients in the Alzheimer's Disease
Neuroimaging Initiative (ADNI) dataset. Through the application of the DCFB
module, our network generates high-quality MRI-PET fusion images. Experimental
results demonstrate that our method surpasses traditional 2D image fusion
methods in performance metrics such as Peak Signal to Noise Ratio (PSNR) and
Structural Similarity Index Measure (SSIM). Importantly, the capacity of our
method to fuse 3D images enhances the information available to physicians and
researchers, thus marking a significant step forward in the field. The code
will soon be available online.
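The DCFB module itself is not specified in the abstract; as a loose analogue of cross-modal feature blending, the following plain-Python sketch implements standard scaled dot-product cross-attention, where queries from one modality attend over keys/values of the other (the deformable sampling component is omitted, and all names are illustrative):

```python
import math

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention over lists of feature vectors:
    each query from modality A forms a softmax-weighted average of
    modality B's value vectors, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

With this building block, an MRI feature attending over PET features (and vice versa) lets each modality pick out the commonalities and disparities the abstract refers to, before the blended features are decoded into a fusion image.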
A Matlab Toolbox for Feature Importance Ranking
Increasing attention is being paid to feature importance ranking (FIR), in
particular when thousands of features can be extracted for intelligent
diagnosis and personalized medicine. A large number of FIR approaches have
been proposed, but few are integrated for comparison and real-life
applications. In this study, a Matlab toolbox is presented that collects a
total of 30 algorithms. The toolbox is evaluated on a database of 163
ultrasound images. For each breast mass lesion, 15 features are extracted.
To identify the optimal subset of features for classification, all
combinations of features are tested, and a linear support vector machine is
used to predict the malignancy of lesions annotated in the ultrasound
images. Finally, the effectiveness of FIR is analyzed by comparing
performance. The toolbox is online (https://github.com/NicoYuCN/matFIR). In
future work, more FIR methods, feature selection methods, and machine
learning classifiers will be integrated.
- …