15 research outputs found

    Hemodynamic Effects of Entry and Exit Tear Size in Aortic Dissection Evaluated with In Vitro Magnetic Resonance Imaging and Fluid-Structure Interaction Simulation

    Full text link
    Understanding the complex interplay between morphologic and hemodynamic features in aortic dissection is critical for risk stratification and for the development of individualized therapy. This work evaluates the effects of entry and exit tear size on the hemodynamics in type B aortic dissection by comparing fluid-structure interaction (FSI) simulations with in vitro 4D-flow magnetic resonance imaging (MRI). A baseline patient-specific 3D-printed model and two variants with modified tear size (smaller entry tear, smaller exit tear) were embedded into a flow- and pressure-controlled setup to perform MRI as well as 12-point catheter-based pressure measurements. The same models defined the wall and fluid domains for FSI simulations, for which boundary conditions were matched with measured data. Results showed exceptionally well-matched complex flow patterns between 4D-flow MRI and FSI simulations. Compared to the baseline model, false lumen (FL) flow volume decreased with either a smaller entry tear (-17.8 % and -18.5 %, for FSI simulation and 4D-flow MRI, respectively) or a smaller exit tear (-16.0 % and -17.3 %). The true-to-false lumen pressure difference (initially 11.0 and 7.9 mmHg, for FSI simulation and catheter-based pressure measurements, respectively) increased with a smaller entry tear (28.9 and 14.6 mmHg) and became negative with a smaller exit tear (-20.6 and -13.2 mmHg). This work establishes quantitative and qualitative effects of entry and exit tear size on hemodynamics in aortic dissection, with a particularly notable impact on FL pressurization. FSI simulations demonstrate acceptable qualitative and quantitative agreement with flow imaging, supporting their deployment in clinical studies. Comment: Judith Zimmermann and Kathrin Bäumler contributed equally.

    CG-SENSE revisited: Results from the first ISMRM reproducibility challenge

    Get PDF
    Purpose: The aim of this work is to shed light on the issue of reproducibility in MR image reconstruction in the context of a challenge. Participants had to recreate the results of "Advances in sensitivity encoding with arbitrary k-space trajectories" by Pruessmann et al. Methods: The task of the challenge was to reconstruct radially acquired multi-coil k-space data (brain/heart) following the method in the original paper, reproducing its key figures. Results were compared to consolidated reference implementations created after the challenge, accounting for the two most common programming languages used in the submissions (Matlab/Python). Results: Visually, differences between submissions were small. Pixel-wise differences originated from image orientation, assumed field-of-view, or resolution. The reference implementations were in good agreement, both visually and in terms of image similarity metrics. Discussion and Conclusion: While the description level of the published algorithm enabled participants to reproduce CG-SENSE in general, details of the implementation varied, e.g., density compensation or Tikhonov regularization. Implicit assumptions about the data led to further differences, emphasizing the importance of sufficient metadata accompanying open datasets. Defining reproducibility quantitatively turned out to be non-trivial for this image reconstruction challenge, in the absence of ground-truth results. Typical similarity measures like NMSE or SSIM were misled by image intensity scaling and outlier pixels. Thus, to facilitate reproducibility, researchers are encouraged to publish code and data alongside the original paper. Future methodological papers on MR image reconstruction might benefit from the consolidated reference implementations of CG-SENSE presented here, as a benchmark for methods comparison. Comment: Submitted to Magnetic Resonance in Medicine; 29 pages with 10 figures and 1 table.
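The abstract's point that NMSE is misled by global intensity scaling can be illustrated with a toy sketch (pure Python, invented data; the function and values are illustrative, not from the challenge):

```python
def nmse(ref, test):
    """Normalized mean squared error: sum((ref - test)^2) / sum(ref^2)."""
    num = sum((r - t) ** 2 for r, t in zip(ref, test))
    den = sum(r ** 2 for r in ref)
    return num / den

ref = [1.0, 2.0, 3.0, 4.0]
rescaled = [2.0 * v for v in ref]  # same image content, doubled intensity

print(nmse(ref, ref))       # → 0.0 (identical images)
print(nmse(ref, rescaled))  # → 1.0 (large "error" despite identical structure)
```

A global intensity rescaling leaves image structure untouched yet yields a large NMSE, which is why such metrics were unreliable for the challenge without prior normalization.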

    Myocardial Segmentation of Tagged Magnetic Resonance Images with Transfer Learning Using Generative Cine-To-Tagged Dataset Transformation

    No full text
    The use of deep learning (DL) segmentation in cardiac MRI has the potential to streamline the radiology workflow, particularly for the measurement of myocardial strain. Recent efforts in DL motion tracking models have drastically reduced the time needed to measure the heart’s displacement field and the subsequent myocardial strain estimation. However, the selection of initial myocardial reference points is not automated and still requires manual input from domain experts. Segmentation of the myocardium is a key step for initializing reference points. While high-performing myocardial segmentation models exist for cine images, this is not the case for tagged images. In this work, we developed and compared two novel DL models (nnU-net and Segmentation ResNet VAE) for the segmentation of myocardium from tagged CMR images. We implemented two methods to transform cardiac cine images into tagged images, allowing us to leverage large public annotated cine datasets. The cine-to-tagged methods included (i) a novel physics-driven transformation model, and (ii) a generative adversarial network (GAN) style transfer model. We show that pretrained models perform better (+2.8 Dice coefficient percentage points) and converge faster (6×) than models trained from scratch. The best-performing method relies on a pretraining with an unpaired, unlabeled, and structure-preserving generative model trained to transform cine images into their tagged-appearing equivalents. Our state-of-the-art myocardium segmentation network reached a Dice coefficient of 0.828 and a 95th percentile Hausdorff distance of 4.745 mm on a held-out test set. This performance is comparable to existing state-of-the-art segmentation networks for cine images.
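For reference, the Dice coefficient reported above measures the overlap between a predicted and a ground-truth mask. A minimal sketch with hypothetical pixel sets (the coordinates are invented for illustration):

```python
def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks given as sets of pixel coordinates."""
    if not mask_a and not mask_b:
        return 1.0  # convention: two empty masks agree perfectly
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

pred = {(0, 0), (0, 1), (1, 0)}   # hypothetical predicted myocardium pixels
truth = {(0, 1), (1, 0), (1, 1)}  # hypothetical ground-truth pixels

print(dice(pred, truth))  # 2 * 2 / (3 + 3) = 0.666...
print(dice(pred, pred))   # → 1.0 (perfect overlap)
```

A Dice score of 0.828, as in the abstract, thus means the predicted and reference myocardium masks share roughly 83 % of their combined area.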

    A machine learning approach to predict cellular uptake of pBAE polyplexes

    No full text
    The delivery of genetic material (DNA and RNA) to cells can cure a wide range of diseases but is limited by the delivery efficiency of the carrier system. Poly β-amino esters (pBAEs) are promising polymer-based vectors that form polyplexes with negatively charged oligonucleotides, enabling cell membrane uptake and gene delivery. pBAE backbone polymer chemistry, as well as terminal oligopeptide modifications, define cellular uptake and transfection efficiency in a given cell line, along with nanoparticle size, polydispersity, and zeta potential. Moreover, the uptake and transfection efficiency of a given polyplex formulation also vary from cell type to cell type. Therefore, finding the optimal formulation leading to high uptake in a new cell line is dictated by trial and error, and requires time and resources. Machine learning (ML) is an ideal in silico screening tool to learn the non-linearities of complex data sets, like the one presented herein, with the aim of predicting cellular internalisation of pBAE polyplexes. A library of pBAE nanoparticles was fabricated and the uptake studied in 4 different cell lines, on which various ML models were successfully trained. The best-performing models were found to be gradient-boosted trees and neural networks. The gradient-boosted trees model was then analysed using SHapley Additive exPlanations (SHAP) to interpret the model and gain insight into the important features and their impact on the predicted outcome.
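Gradient-boosted trees, the best-performing model above, fit small trees sequentially to the residuals of the current ensemble. A from-scratch sketch with depth-1 trees (stumps) and a single hypothetical descriptor; the data are invented for illustration, not the pBAE library:

```python
def fit_stump(x, resid):
    """Best single-threshold split on a 1-D feature, minimizing squared error."""
    best = None
    for thr in sorted(set(x)):
        left = [r for xi, r in zip(x, resid) if xi <= thr]
        right = [r for xi, r in zip(x, resid) if xi > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - ml) ** 2 for r in left) + sum((r - mr) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    return best[1:]  # (threshold, left_value, right_value)

def boost(x, y, n_rounds=20, lr=0.3):
    """Squared-error gradient boosting: each stump fits the current residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        thr, ml, mr = fit_stump(x, resid)
        stumps.append((thr, ml, mr))
        pred = [p + lr * (ml if xi <= thr else mr) for xi, p in zip(x, pred)]
    return base, stumps

def predict(base, stumps, xi, lr=0.3):
    out = base
    for thr, ml, mr in stumps:
        out += lr * (ml if xi <= thr else mr)
    return out

# Hypothetical data: one polymer descriptor vs. normalized cellular uptake.
x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 0.0, 1.0, 1.0]
base, stumps = boost(x, y)
print(round(predict(base, stumps, 1.0), 3), round(predict(base, stumps, 4.0), 3))  # → 0.0 1.0
```

A real screening pipeline would use a library implementation (e.g. scikit-learn, XGBoost, or LightGBM) over many physicochemical descriptors, with SHAP applied afterwards for feature attribution, as in the paper.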

    CG‐SENSE revisited: Results from the first ISMRM reproducibility challenge

    Full text link
    Purpose: The aim of this work is to shed light on the issue of reproducibility in MR image reconstruction in the context of a challenge. Participants had to recreate the results of "Advances in sensitivity encoding with arbitrary k-space trajectories" by Pruessmann et al. Methods: The task of the challenge was to reconstruct radially acquired multicoil k-space data (brain/heart) following the method in the original paper, reproducing its key figures. Results were compared to consolidated reference implementations created after the challenge, accounting for the two most common programming languages used in the submissions (Matlab/Python). Results: Visually, differences between submissions were small. Pixel-wise differences originated from image orientation, assumed field-of-view, or resolution. The reference implementations were in good agreement, both visually and in terms of image similarity metrics. Discussion and conclusion: While the description level of the published algorithm enabled participants to reproduce CG-SENSE in general, details of the implementation varied, for example, density compensation or Tikhonov regularization. Implicit assumptions about the data led to further differences, emphasizing the importance of sufficient metadata accompanying open datasets. Defining reproducibility quantitatively turned out to be nontrivial for this image reconstruction challenge, in the absence of ground-truth results. Typical similarity measures like NMSE or SSIM were misled by image intensity scaling and outlier pixels. Thus, to facilitate reproducibility, researchers are encouraged to publish code and data alongside the original paper. Future methodological papers on MR image reconstruction might benefit from the consolidated reference implementations of CG-SENSE presented here, as a benchmark for methods comparison. Keywords: CG-SENSE; MRI; NUFFT; image reconstruction; nonuniform sampling; reproducibility