Unsupervised Medical Image Translation Using Cycle-MedGAN
Image-to-image translation is a new field in computer vision with multiple
potential applications in the medical domain. However, for supervised image
translation frameworks, co-registered datasets, paired in a pixel-wise sense,
are required. This is often difficult to acquire in realistic medical
scenarios. On the other hand, unsupervised translation frameworks often result
in blurred translated images with unrealistic details. In this work, we propose
a new unsupervised translation framework, titled Cycle-MedGAN. The
proposed framework utilizes new non-adversarial cycle losses which direct the
framework to minimize the textural and perceptual discrepancies in the
translated images. Qualitative and quantitative comparisons against other
unsupervised translation approaches demonstrate the performance of the proposed
framework for PET-CT translation and MR motion correction.
Comment: Submitted to EUSIPCO 2019, 5 pages
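The non-adversarial cycle losses described above can be illustrated with a minimal NumPy sketch. In practice the feature maps would come from a pre-trained or discriminator network; the function names and the Gram-matrix style term here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gram_matrix(feats):
    # feats: (C, H, W) feature map; the Gram matrix captures texture statistics
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def perceptual_cycle_loss(feats_orig, feats_cycled):
    # L1 distance between feature maps of the original and the
    # cycle-reconstructed image penalizes perceptual discrepancy
    return float(np.mean(np.abs(feats_orig - feats_cycled)))

def style_cycle_loss(feats_orig, feats_cycled):
    # Frobenius distance between Gram matrices penalizes textural discrepancy
    g1, g2 = gram_matrix(feats_orig), gram_matrix(feats_cycled)
    return float(np.sum((g1 - g2) ** 2))

# A perfect cycle reconstruction makes both losses vanish
f = np.random.rand(4, 8, 8)
assert perceptual_cycle_loss(f, f) == 0.0
assert style_cycle_loss(f, f) == 0.0
```

Both terms operate in feature space rather than pixel space, which is what lets the cycle constraint discourage the blur that pixel-wise cycle losses tend to produce.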
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures
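The CasNet idea of progressive refinement through chained encoder-decoder pairs can be caricatured in a few lines of NumPy. The toy "pair" below is just a pooling/upsampling residual step standing in for a trained U-Net-like block; it is a structural sketch of the cascade, not the paper's architecture:

```python
import numpy as np

def enc_dec_pair(x):
    # toy encoder-decoder: 2x2 average-pool "encode", nearest-neighbor
    # "decode", blended back with the input as a residual refinement
    # (assumes even spatial dimensions; a real pair is a trained network)
    h, w = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    up = np.repeat(np.repeat(pooled, 2, axis=0), 2, axis=1)
    return 0.5 * (x + up)

def casnet(x, n_pairs=6):
    # CasNet structure: each encoder-decoder pair progressively refines
    # the output of the previous pair, end to end
    for _ in range(n_pairs):
        x = enc_dec_pair(x)
    return x

y = casnet(np.random.rand(8, 8))
assert y.shape == (8, 8)
```

The point of the cascade is that no single encoder-decoder has to produce a sharp translation in one shot; each stage only has to improve on the last.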
VORTEX: Physics-Driven Data Augmentations Using Consistency Training for Robust Accelerated MRI Reconstruction
Deep neural networks have enabled improved image quality and fast inference
times for various inverse problems, including accelerated magnetic resonance
imaging (MRI) reconstruction. However, such models require a large number of
fully-sampled ground truth datasets, which are difficult to curate, and are
sensitive to distribution drifts. In this work, we propose applying
physics-driven data augmentations for consistency training that leverage our
domain knowledge of the forward MRI data acquisition process and MRI physics to
achieve improved label efficiency and robustness to clinically-relevant
distribution drifts. Our approach, termed VORTEX, (1) demonstrates strong
improvements over supervised baselines with and without data augmentation in
robustness to signal-to-noise ratio change and motion corruption in
data-limited regimes; (2) considerably outperforms state-of-the-art purely
image-based data augmentation techniques and self-supervised reconstruction
methods on both in-distribution and out-of-distribution data; and (3) enables
composing heterogeneous image-based and physics-driven data augmentations. Our
code is available at https://github.com/ad12/meddlr.
Comment: Accepted to MIDL 202
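A physics-driven augmentation paired with a consistency objective can be sketched as follows. Here an SNR change is simulated by injecting complex Gaussian noise in k-space, and a plain inverse FFT stands in for the learned reconstruction model; the real VORTEX pipeline uses trained networks and further augmentations such as motion, so treat this as a minimal illustration of the consistency-training idea:

```python
import numpy as np

def add_complex_noise(kspace, sigma):
    # physics-driven augmentation: additive complex Gaussian noise in
    # k-space simulates an SNR change in the MRI acquisition
    noise = sigma * (np.random.randn(*kspace.shape)
                     + 1j * np.random.randn(*kspace.shape))
    return kspace + noise

def consistency_loss(recon_clean, recon_aug):
    # consistency training penalizes disagreement between reconstructions
    # of the original and the physics-augmented measurement
    return float(np.mean(np.abs(recon_clean - recon_aug) ** 2))

img = np.random.rand(16, 16)
k = np.fft.fft2(img)
k_aug = add_complex_noise(k, sigma=0.0)   # sigma=0 leaves k-space unchanged
recon = np.abs(np.fft.ifft2(k_aug))
assert consistency_loss(img, recon) < 1e-20
```

Because the augmentation acts on the measurement model rather than the image, no extra fully-sampled labels are needed, which is the source of the label efficiency claimed above.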
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Fast-MC-PET: A Novel Deep Learning-aided Motion Correction and Reconstruction Framework for Accelerated PET
Patient motion during PET is inevitable. The long acquisition time not only
increases motion and the associated artifacts but also the patient's
discomfort, so PET acceleration is desirable. However, accelerating PET
acquisition will result in reconstructed images with low SNR, and the image
quality will still be degraded by motion-induced artifacts. Most of the
previous PET motion correction methods are motion-type-specific and require
motion modeling, and may thus fail when multiple types of motion are present
together. Moreover, those methods are tailored to standard long acquisitions
and cannot be directly applied to accelerated PET. As a result, modeling-free universal
motion correction reconstruction for accelerated PET is still highly
under-explored. In this work, we propose a novel deep learning-aided motion
correction and reconstruction framework for accelerated PET, called
Fast-MC-PET. Our framework consists of a universal motion correction (UMC) and
a short-to-long acquisition reconstruction (SL-Recon) module. The UMC enables
modeling-free motion correction by estimating quasi-continuous motion from
ultra-short frame reconstructions and using this information for
motion-compensated reconstruction. Then, the SL-Recon converts the accelerated
UMC image with low counts to a high-quality image with high counts for our
final reconstruction output. Our experimental results on human studies show
that our Fast-MC-PET can enable 7-fold acceleration, using only a 2-minute
acquisition to generate high-quality reconstruction images that
outperform/match previous motion correction reconstruction methods using
standard 15-minute long-acquisition data.
Comment: Accepted at Information Processing in Medical Imaging (IPMI 2023)
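The UMC idea, estimating motion from ultra-short frame reconstructions and compensating before combining them, can be sketched with classical phase correlation standing in for the learned motion estimator. This assumes a toy rigid, cyclic-shift motion model purely for illustration; the paper's method estimates quasi-continuous deformable motion with deep networks:

```python
import numpy as np

def estimate_shift(ref, frame):
    # toy rigid-motion estimate: the peak of the circular cross-correlation
    # (computed via FFT) gives the cyclic shift aligning frame to ref
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    return np.unravel_index(np.argmax(np.abs(cross)), cross.shape)

def motion_compensated_average(frames):
    # align every ultra-short frame to the first, then average: a
    # modeling-free stand-in for motion-compensated reconstruction
    ref = frames[0]
    aligned = []
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        aligned.append(np.roll(f, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
shifted = np.roll(ref, shift=(3, 5), axis=(0, 1))   # simulated patient motion
assert np.allclose(motion_compensated_average([ref, shifted]), ref)
```

Averaging the aligned ultra-short frames recovers counts without blurring, which is the same trade-off the SL-Recon module then pushes further by mapping the low-count UMC image to a high-count one.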
Multi-modality cardiac image computing: a survey
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnosis accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges including inter-modality misalignment and finding optimal methods to integrate information from different modalities.
This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data has the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research, with many questions still to be answered in the future.