Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs
Motion artifacts are a primary source of magnetic resonance (MR) image
quality deterioration with strong repercussions on diagnostic performance.
Currently, MR motion correction is carried out either prospectively, with the
help of motion tracking systems, or retrospectively by mainly utilizing
computationally expensive iterative algorithms. In this paper, we utilize a new
adversarial framework, titled MedGAN, for the joint retrospective correction of
rigid and non-rigid motion artifacts in different body regions and without the
need for a reference image. MedGAN utilizes a unique combination of
non-adversarial losses and a new generator architecture to capture the textures
and fine-detailed structures of the desired artifact-free MR images.
Quantitative and qualitative comparisons with other adversarial techniques
demonstrate the performance of the proposed model.
Comment: 5 pages, 2 figures, under review for the IEEE International Symposium
on Biomedical Imaging.
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures.
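The loss design described above (a discriminator reused as a trainable feature extractor, plus style-transfer losses matching textures) can be sketched in a few lines of numpy. The feature maps, loss weights, and adversarial placeholder below are hypothetical stand-ins for illustration, not MedGAN's actual networks:

```python
import numpy as np

def gram_matrix(feats):
    """Channel-wise correlations of a (C, H, W) feature map; style-transfer
    losses compare Gram matrices to match textures and fine structures."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def perceptual_loss(feats_fake, feats_real):
    """Penalize discrepancy between feature maps of the translated image
    and the desired target modality (feature matching)."""
    return float(np.mean(np.abs(feats_fake - feats_real)))

def style_loss(feats_fake, feats_real):
    """Texture mismatch measured between Gram matrices."""
    return float(np.mean((gram_matrix(feats_fake) - gram_matrix(feats_real)) ** 2))

# Toy feature maps standing in for one discriminator layer (C=4, 8x8).
rng = np.random.default_rng(0)
f_fake = rng.standard_normal((4, 8, 8))
f_real = rng.standard_normal((4, 8, 8))

adversarial = 1.0  # placeholder for the GAN term
# Hypothetical weights; a real framework would tune such coefficients.
total = adversarial + 10.0 * perceptual_loss(f_fake, f_real) \
        + 100.0 * style_loss(f_fake, f_real)
print(round(total, 3))
```

In a training loop, each term would be computed from the discriminator's intermediate activations and backpropagated through the generator.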
Mathematical modeling of diurnal patterns of carbon allocation to shoot and root in Arabidopsis thaliana
We developed a mathematical model to simulate the dynamics of central carbon metabolism over complete diurnal cycles in leaves of Arabidopsis thaliana exposed to either normal (120 µmol·m⁻²·s⁻¹) or high (1200 µmol·m⁻²·s⁻¹) light intensity. The main objective was to obtain a high-resolution time series for metabolite dynamics as well as for shoot structural carbon formation (compounds with long residence times) and assimilate export from aerial organs to the sink tissue. Model development comprised a stepwise increment of complexity to finally approach the in vivo situation. The correct allocation of assimilates to either sink export or shoot structural carbon formation was a central goal of model development. Diurnal gain of structural carbon was calculated from the daily increment in total photosynthetic carbon fixation, and this was the only parameter for structural carbon formation implemented in the model. Simulations of the dynamics of central metabolite pools revealed that shoot structural carbon formation occurred solely during the light phase, not during the night. The model allowed simulation of shoot structural carbon formation as a function of central leaf carbon metabolism under different environmental conditions without structural modifications. Model simulations were performed for the accession Landsberg erecta (Ler) and its hexokinase null mutant gin2-1, which displays a slow-growth phenotype, especially at increasing light intensities. Comparison of the simulations revealed that the retarded shoot growth of the mutant resulted from increased assimilate transport to sink organs. Given the central function of hexokinase-1 in sucrose cycling and sugar signaling, our findings suggest that it plays an important role in carbon allocation to either shoot growth or assimilate export.
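The partitioning idea above can be illustrated with a deliberately simplified two-sink sketch: carbon fixed during the light phase feeds a central sugar pool drained by sink export (day and night) and by structural carbon formation (light phase only, matching the simulation result). This is not the paper's kinetic model; every rate constant here is invented for illustration:

```python
def simulate(light_umol=120.0, hours=24.0, dt=0.1):
    """Euler-integrate a toy sugar pool with two outflows.

    Returns (sugar, structural, exported) after `hours` of simulated time.
    Rate constants are hypothetical, chosen only to show the qualitative
    day/night asymmetry, not fitted to data.
    """
    steps = int(hours / dt)
    sugar, structural, exported = 1.0, 0.0, 0.0
    k_fix = 0.01 * light_umol      # fixation rate, assumed proportional to light
    k_struct, k_export = 0.3, 0.1  # hypothetical rate constants (h^-1)
    for i in range(steps):
        t = i * dt
        light_on = (t % 24.0) < 12.0               # 12 h photoperiod
        fixation = k_fix if light_on else 0.0
        growth = k_struct * sugar if light_on else 0.0  # no structural growth at night
        export = k_export * sugar                  # export continues through the night
        sugar += dt * (fixation - growth - export)
        structural += dt * growth
        exported += dt * export
    return sugar, structural, exported

print([round(v, 2) for v in simulate()])        # normal light
print([round(v, 2) for v in simulate(1200.0)])  # high light
```

Even this caricature reproduces the qualitative pattern: structural carbon accumulates only while the light is on, while export keeps draining the pool overnight.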
Resolving subcellular plant metabolism
Plant cells are characterized by a high degree of compartmentalization and a diverse proteome and metabolome. Only a very limited number of studies have addressed combined subcellular proteomics and metabolomics, which strongly limits the biochemical and physiological interpretation of large-scale ’omics data. Our study presents a methodological combination of nonaqueous fractionation, shotgun proteomics, enzyme activity assays and metabolomics to reveal subcellular diurnal dynamics of plant metabolism. Subcellular marker protein sets were identified and enzymatically validated to resolve metabolism in a four-compartment model comprising chloroplasts, cytosol, vacuole and mitochondria. These marker sets are now available for future studies that aim to monitor subcellular metabolome and proteome dynamics. Comparing subcellular dynamics in wild-type plants and HXK1-deficient gin2-1 mutants revealed a strong impact of HXK1 activity on metabolome dynamics in multiple compartments. Glucose accumulation in the cytosol of gin2-1 was accompanied by diminished vacuolar glucose levels. Subcellular dynamics of pyruvate, succinate and fumarate amounts were significantly affected in gin2-1 and coincided with differential mitochondrial proteome dynamics. Lowered mitochondrial glycine and serine amounts in gin2-1, together with reduced abundance of photorespiratory proteins, indicated an effect of the gin2-1 mutation on photorespiratory capacity. Our findings highlight the necessity of resolving plant metabolism at the subcellular level to establish causal relationships between metabolites, proteins and metabolic pathway regulation.
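The best-fit step behind marker-based compartment assignment in nonaqueous fractionation can be sketched as a nonnegative least-squares problem: marker-protein distributions across gradient fractions define each compartment's signature, and a metabolite's measured distribution is decomposed into nonnegative compartment shares. All numbers below are made up for illustration; the actual analysis pipeline in the study may differ:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: chloroplast, cytosol, vacuole, mitochondria (hypothetical
# marker-protein profiles); rows: four gradient fractions.
markers = np.array([
    [0.70, 0.05, 0.10, 0.15],
    [0.20, 0.50, 0.15, 0.15],
    [0.05, 0.30, 0.55, 0.10],
    [0.05, 0.15, 0.20, 0.60],
])

# Simulated metabolite measured across the same fractions:
# constructed as 40% cytosolic and 60% vacuolar.
metabolite = markers @ np.array([0.0, 0.4, 0.6, 0.0])

# Nonnegative least squares recovers the compartment shares.
shares, residual = nnls(markers, metabolite)
print(np.round(shares, 2))
```

Because the profiles are linearly independent and the shares are nonnegative by construction, the decomposition here is exact; real fractionation data would leave a residual that quantifies the fit.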
Global k-Space Interpolation for Dynamic MRI Reconstruction using Masked Image Modeling
In dynamic Magnetic Resonance Imaging (MRI), k-space is typically
undersampled due to limited scan time, resulting in aliasing artifacts in the
image domain. Hence, dynamic MR reconstruction requires not only modeling
spatial frequency components in the x and y directions of k-space but also
considering temporal redundancy. Most previous works rely on image-domain
regularizers (priors) to conduct MR reconstruction. In contrast, we focus on
interpolating the undersampled k-space before obtaining images with Fourier
transform. In this work, we connect masked image modeling with k-space
interpolation and propose a novel Transformer-based k-space Global
Interpolation Network, termed k-GIN. Our k-GIN learns global dependencies among
low- and high-frequency components of 2D+t k-space and uses them to interpolate
unsampled data. Further, we propose a novel k-space Iterative Refinement Module
(k-IRM) to enhance learning of the high-frequency components. We evaluate our
approach on 92 in-house 2D+t cardiac MR subjects and compare it to MR
reconstruction methods with image-domain regularizers. Experiments show that
our proposed k-space interpolation method quantitatively and qualitatively
outperforms baseline methods. Importantly, the proposed approach achieves
substantially higher robustness and generalizability in cases of
highly undersampled MR data.
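Why undersampled k-space produces aliasing, and why one can interpolate k-space before the inverse Fourier transform rather than regularize in the image domain, is visible in a toy 1D example (this sketch is standard Fourier sampling theory, not the Transformer model itself):

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20:28] = 1.0                      # simple "image": a box

k = np.fft.fft(x)                   # fully sampled k-space

mask = np.zeros(n, dtype=bool)
mask[::2] = True                    # keep every 2nd line (2x uniform undersampling)
k_under = np.where(mask, k, 0.0)    # zero-fill the missing lines

alias = np.fft.ifft(k_under).real   # zero-filled reconstruction

# Uniform 2x undersampling folds the object onto a copy shifted by n/2:
print(np.allclose(alias, 0.5 * (x + np.roll(x, n // 2))))
```

Filling in the zeroed k-space lines before the inverse FFT (which k-GIN learns to do jointly across space and time) removes the folded copy at the source instead of suppressing it afterwards in the image domain.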
Complementary Time-Frequency Domain Networks for Dynamic Parallel MR Image Reconstruction
Purpose: To introduce a novel deep learning based approach for fast and
high-quality dynamic multi-coil MR reconstruction by learning a complementary
time-frequency domain network that exploits spatio-temporal correlations
simultaneously from complementary domains.
Theory and Methods: Dynamic parallel MR image reconstruction is formulated as
a multi-variable minimisation problem, where the data is regularised in
combined temporal Fourier and spatial (x-f) domain as well as in
spatio-temporal image (x-t) domain. An iterative algorithm based on variable
splitting technique is derived, which alternates among signal de-aliasing steps
in x-f and x-t spaces, a closed-form point-wise data consistency step and a
weighted coupling step. The iterative model is embedded into a deep recurrent
neural network which learns to recover the image via exploiting spatio-temporal
redundancies in complementary domains.
Results: Experiments were performed on two datasets of highly undersampled
multi-coil short-axis cardiac cine MRI scans. Results demonstrate that our
proposed method outperforms the current state-of-the-art approaches both
quantitatively and qualitatively. The proposed model can also generalise well
to data acquired from a different scanner and data with pathologies that were
not seen in the training set.
Conclusion: The work shows the benefit of reconstructing dynamic parallel MRI
in complementary time-frequency domains with deep neural networks. The method
can effectively and robustly reconstruct high-quality images from highly
undersampled dynamic multi-coil data (yielding 15 s and 10 s scan times,
respectively) with fast reconstruction speed (2.8 s). This
could potentially facilitate achieving fast single-breath-hold clinical 2D
cardiac cine imaging.
Comment: Accepted by Magnetic Resonance in Medicine.
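The alternation between de-aliasing in the x-f domain and point-wise k-space data consistency can be sketched with classical operations; here a hand-written soft-threshold stands in for the learned de-aliasing network, and dimensions, sampling pattern and threshold are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt = 32, 16
truth = np.zeros((nx, nt))
truth[10:14, :] = 1.0                                        # static structure
truth[20, :] = 1.0 + np.sin(2 * np.pi * np.arange(nt) / nt)  # dynamic pixel

k_full = np.fft.fft(truth, axis=0)            # k-t data (FFT along x)
mask = rng.random((nx, nt)) < 0.5             # ~2x random undersampling
k_sampled = np.where(mask, k_full, 0.0)

def soft(z, lam):
    """Complex soft thresholding: promotes sparsity in the x-f domain."""
    mag = np.abs(z)
    return z * np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)

img = np.fft.ifft(k_sampled, axis=0)          # zero-filled x-t estimate
err_zf = np.linalg.norm(img.real - truth) / np.linalg.norm(truth)

for _ in range(30):
    xf = np.fft.fft(img, axis=1)              # x-t -> x-f (temporal Fourier)
    xf = soft(xf, 1.0)                        # de-aliasing step in x-f
    img = np.fft.ifft(xf, axis=1)             # back to x-t
    k = np.fft.fft(img, axis=0)
    k = np.where(mask, k_sampled, k)          # point-wise data consistency
    img = np.fft.ifft(k, axis=0)

err = np.linalg.norm(img.real - truth) / np.linalg.norm(truth)
print(err < err_zf)                           # alternation improves on zero-filling
```

The proposed method replaces the fixed threshold with learned recurrent de-aliasing blocks in both x-f and x-t domains and adds a weighted coupling step between them.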
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe
objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are
predominantly diverging from computer vision, where voxel grids, meshes, point
clouds, and implicit surface models are used. This is seen from numerous
shape-related publications in premier vision conferences as well as the growing
popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915
models). For the medical domain, we present a large collection of anatomical
shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments,
called MedShapeNet, created to facilitate the translation of data-driven vision
algorithms to medical applications and to adapt SOTA vision algorithms to
medical problems. As a unique feature, we directly model the majority of shapes
on the imaging data of real patients. As of today, MedShapeNet includes 23
datasets with more than 100,000 shapes that are paired with annotations (ground
truth). Our data is freely accessible via a web interface and a Python
application programming interface (API) and can be used for discriminative,
reconstructive, and variational benchmarks as well as various applications in
virtual, augmented, or mixed reality, and 3D printing. As examples, we present
use cases in the fields of classification of brain tumors, facial and skull
reconstructions, multi-class anatomy completion, education, and 3D printing. In
the future, we will extend the data and improve the interfaces. The project
pages are: https://medshapenet.ikim.nrw/ and
https://github.com/Jianningli/medshapenet-feedback
Comment: 16 pages.