    CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions

    Cardiac CINE magnetic resonance imaging is the gold standard for the assessment of cardiac function. Imaging accelerations have been shown to enable 3D CINE with left ventricular (LV) coverage in a single breath-hold. However, 3D imaging remains limited by anisotropic resolution and long reconstruction times. Recently, deep learning has shown promising results for computationally efficient reconstruction of highly accelerated 2D CINE imaging. In this work, we propose a novel 4D (3D + time) deep learning-based reconstruction network, termed 4D CINENet, for prospectively undersampled 3D Cartesian CINE imaging. CINENet is based on (3 + 1)D complex-valued spatio-temporal convolutions and multi-coil data processing. We trained and evaluated the proposed CINENet on in-house acquired 3D CINE data of 20 healthy subjects and 15 patients with suspected cardiovascular disease. The proposed CINENet outperforms iterative reconstructions in visual image quality and contrast (+67% improvement). We found good agreement in LV function (bias ± 95% confidence interval) in terms of end-systolic volume (0 ± 3.3 ml), end-diastolic volume (-0.4 ± 2.0 ml) and ejection fraction (0.1 ± 3.2%) compared to the clinical gold-standard 2D CINE, enabling single breath-hold isotropic 3D CINE in less than 10 s of scan time and ~5 s of reconstruction time.
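
    As a rough illustration of the (3 + 1)D complex-valued convolutions described above, the sketch below (PyTorch) factorizes the 4D operation into a complex 3D spatial convolution followed by a complex 1D temporal convolution, emulating complex arithmetic with pairs of real-valued kernels via (a + ib)(x + iy) = (ax - by) + i(ay + bx). The class name, tensor layout and kernel sizes are illustrative assumptions and not the published CINENet implementation.

    import torch
    import torch.nn as nn

    class ComplexConv3Plus1d(nn.Module):
        """Hypothetical (3+1)D complex convolution: 3D spatial conv, then 1D temporal conv."""

        def __init__(self, in_ch, out_ch, k_space=3, k_time=3):
            super().__init__()
            # two real-valued kernels per complex convolution (real and imaginary parts)
            self.sr = nn.Conv3d(in_ch, out_ch, k_space, padding=k_space // 2)
            self.si = nn.Conv3d(in_ch, out_ch, k_space, padding=k_space // 2)
            self.tr = nn.Conv1d(out_ch, out_ch, k_time, padding=k_time // 2)
            self.ti = nn.Conv1d(out_ch, out_ch, k_time, padding=k_time // 2)

        @staticmethod
        def _cmul(conv_r, conv_i, xr, xi):
            # complex multiplication implemented with two real convolutions
            return conv_r(xr) - conv_i(xi), conv_r(xi) + conv_i(xr)

        def forward(self, xr, xi):
            # xr, xi: real/imaginary parts of shape (batch, ch, time, z, y, x)
            b, c, t, z, y, x = xr.shape
            # spatial 3D convolution: fold the time axis into the batch axis
            sr = xr.permute(0, 2, 1, 3, 4, 5).reshape(b * t, c, z, y, x)
            si = xi.permute(0, 2, 1, 3, 4, 5).reshape(b * t, c, z, y, x)
            sr, si = self._cmul(self.sr, self.si, sr, si)
            c2 = sr.shape[1]
            sr = sr.reshape(b, t, c2, z, y, x).permute(0, 2, 1, 3, 4, 5)
            si = si.reshape(b, t, c2, z, y, x).permute(0, 2, 1, 3, 4, 5)
            # temporal 1D convolution: fold the spatial axes into the batch axis
            ur = sr.permute(0, 3, 4, 5, 1, 2).reshape(b * z * y * x, c2, t)
            ui = si.permute(0, 3, 4, 5, 1, 2).reshape(b * z * y * x, c2, t)
            ur, ui = self._cmul(self.tr, self.ti, ur, ui)
            ur = ur.reshape(b, z, y, x, c2, t).permute(0, 4, 5, 1, 2, 3)
            ui = ui.reshape(b, z, y, x, c2, t).permute(0, 4, 5, 1, 2, 3)
            return ur, ui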

    Uni-COAL: A Unified Framework for Cross-Modality Synthesis and Super-Resolution of MR Images

    Cross-modality synthesis (CMS), super-resolution (SR), and their combination (CMSR) have been extensively studied for magnetic resonance imaging (MRI). Their primary goals are to enhance imaging quality by synthesizing the desired modality and reducing the slice thickness. Despite promising synthetic results, these techniques are often tailored to specific tasks, thereby limiting their adaptability to complex clinical scenarios. Therefore, it is crucial to build a unified network that can handle various image synthesis tasks with arbitrary requirements on modality and resolution settings, so that the resources for training and deploying the models can be greatly reduced. However, none of the previous works is capable of performing CMS, SR, and CMSR with a single unified network. Moreover, these MRI reconstruction methods often treat alias frequencies improperly, resulting in suboptimal detail restoration. In this paper, we propose a Unified Co-Modulated Alias-free framework (Uni-COAL) to accomplish the aforementioned tasks with a single network. The co-modulation design of the image-conditioned and stochastic attribute representations ensures consistency between CMS and SR, while simultaneously accommodating arbitrary combinations of input/output modalities and thicknesses. The generator of Uni-COAL is also designed to be alias-free based on the Shannon-Nyquist signal processing framework, ensuring effective suppression of alias frequencies. Additionally, we leverage the semantic prior of the Segment Anything Model (SAM) to guide Uni-COAL, ensuring more faithful preservation of anatomical structures during synthesis. Experiments on three datasets demonstrate that Uni-COAL outperforms the alternatives in CMS, SR, and CMSR tasks for MR images, which highlights its generalizability to a wide range of applications.
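
    The co-modulation idea, jointly conditioning the generator on an encoded input image and a stochastic attribute code, can be sketched roughly as follows in PyTorch. This is a toy illustration under assumed names (CoModulation, cond_vec, z) and does not reproduce the actual Uni-COAL, alias-free, or SAM-guided components.

    import torch
    import torch.nn as nn

    class CoModulation(nn.Module):
        """Toy co-modulation: a per-channel scale is predicted jointly from an
        image-conditioned code and a stochastic latent, then rescales the
        generator activations (in the spirit of style/co-modulation)."""

        def __init__(self, feat_ch, cond_dim, z_dim, hidden=128):
            super().__init__()
            self.mapping = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden))
            self.affine = nn.Linear(cond_dim + hidden, feat_ch)

        def forward(self, feat, cond_vec, z):
            # feat: (B, C, H, W) generator activations
            # cond_vec: (B, cond_dim) pooled code of the conditioning image/modality
            # z: (B, z_dim) stochastic latent controlling output attributes
            w = self.mapping(z)                               # mapped style code
            style = self.affine(torch.cat([cond_vec, w], 1))  # joint (co-)modulation
            return feat * (1.0 + style).unsqueeze(-1).unsqueeze(-1)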

    Volumetric MRI Reconstruction from 2D Slices in the Presence of Motion

    Despite recent advances in acquisition techniques and reconstruction algorithms, magnetic resonance imaging (MRI) remains challenging in the presence of motion. To mitigate this, ultra-fast two-dimensional (2D) MRI sequences are often used in clinical practice to acquire thick, low-resolution (LR) 2D slices and thereby reduce in-plane motion. The resulting stacks of thick 2D slices typically provide high-quality visualizations when viewed in the in-plane direction. However, the low spatial resolution in the through-plane direction, combined with motion commonly occurring between individual slice acquisitions, gives rise to stacks with limited overall geometric integrity. As a consequence, an accurate and reliable diagnosis may be compromised when using such motion-corrupted, thick-slice MRI data. This thesis presents methods to volumetrically reconstruct geometrically consistent, high-resolution (HR) three-dimensional (3D) images from motion-corrupted, possibly sparse, low-resolution 2D MR slices. It focuses on volumetric reconstruction techniques using inverse problem formulations applicable to a broad range of clinical applications in which the associated motion patterns are inherently different but the use of thick-slice MR data is current clinical practice. In particular, volumetric reconstruction frameworks are developed based on slice-to-volume registration with inter-slice transformation regularization and robust, complete-outlier rejection in the reconstruction step, which can either avoid or efficiently deal with potential slice misregistrations. Additionally, this thesis describes efficient Forward-Backward Splitting schemes for image registration for any combination of a differentiable (not necessarily convex) similarity measure and a convex (not necessarily smooth) regularizer with a tractable proximal operator. Experiments are performed on fetal and upper abdominal MRI, and on historical, printed brain MR films associated with a uniquely long-term study dating back to the 1980s. The results demonstrate the broad applicability of the presented frameworks for achieving robust reconstructions with the potential to improve disease diagnosis and patient management in clinical practice.
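
    The Forward-Backward Splitting scheme mentioned above alternates a gradient step on the differentiable similarity measure with the proximal operator of the convex regularizer. Below is a minimal, hedged PyTorch sketch, assuming the similarity is a scalar-valued differentiable function of the transformation parameters and using l1 soft-thresholding as an example proximal operator; the function names, step size and regularizer are illustrative and not the thesis implementation.

    import torch

    def forward_backward_splitting(theta0, similarity, prox_reg, step=0.1, n_iter=100):
        """Forward step: gradient of a differentiable (possibly non-convex) similarity.
        Backward step: proximal operator of a convex (possibly non-smooth) regularizer."""
        theta = theta0.detach().clone().requires_grad_(True)
        for _ in range(n_iter):
            loss = similarity(theta)                  # scalar similarity / data term
            grad, = torch.autograd.grad(loss, theta)
            with torch.no_grad():
                theta = prox_reg(theta - step * grad, step)
            theta.requires_grad_(True)
        return theta.detach()

    def prox_l1(x, step, lam=0.01):
        # soft-thresholding: proximal operator of lam * ||x||_1 (lam is an assumed weight)
        return torch.sign(x) * torch.clamp(x.abs() - step * lam, min=0.0)

    # usage sketch: dummy quadratic similarity over 6 rigid-transformation parameters
    theta_hat = forward_backward_splitting(torch.zeros(6), lambda t: ((t - 1.0) ** 2).sum(), prox_l1)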

    Image restoration in anatomical MRI for the preclinical study of markers of brain aging (Restauration d'images en IRM anatomique pour l'étude préclinique des marqueurs du vieillissement cérébral)

    Age-related neurovascular and neurodegenerative diseases are increasing significantly. While such pathological changes show effects on the brain before clinical symptoms appear, a better understanding of the normal brain aging process will help distinguish the impact of known pathologies on regional brain structure. Furthermore, knowledge of the patterns of brain shrinkage in normal aging could lead to a better understanding of its causes and perhaps to interventions reducing the loss of brain function. Therefore, this thesis project aims to detect biomarkers of normal and pathological brain aging in a non-human primate model, the marmoset monkey (Callithrix jacchus), which possesses anatomical characteristics closer to those of humans than to those of rodents. However, the structural changes (e.g., in volumes or cortical thickness) that may occur during their adult life may be minimal at the scale of observation. In this context, it is essential to have observation techniques that offer sufficiently high contrast and spatial resolution and allow detailed assessments of the morphometric brain changes associated with aging. However, imaging small brains on a 3T MRI platform dedicated to humans is a challenging task, because the spatial resolution and contrast obtained are insufficient relative to the size of the anatomical structures observed and the scale of the expected changes with age. This thesis aims to develop image restoration methods for preclinical MR images that will improve the robustness of segmentation algorithms. Improving the spatial resolution of the images at constant signal-to-noise ratio will limit partial volume effects in voxels located at the border between two structures and allow better segmentation while increasing the reproducibility of the results. This computational imaging step is crucial for reliable longitudinal voxel-based morphometric analysis and for the identification of anatomical markers of brain aging by following volume changes in gray matter, white matter and cerebrospinal fluid.

    Learning Regularization Parameter-Maps for Variational Image Reconstruction using Deep Neural Networks and Algorithm Unrolling

    We introduce a method for the fast estimation of data-adapted, spatio-temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. Our approach is inspired by recent developments in algorithm unrolling with deep neural networks (NNs) and relies on two distinct sub-networks. The first sub-network estimates the regularization parameter-map from the input data. The second sub-network unrolls T iterations of an iterative algorithm which approximately solves the corresponding TV-minimization problem, incorporating the previously estimated regularization parameter-map. The overall network is trained end-to-end in a supervised fashion using pairs of clean and corrupted data, but crucially without requiring access to labels for the optimal regularization parameter-maps. We prove the consistency of the unrolled scheme by showing that the unrolled energy functional used for supervised learning Γ-converges, as T tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. We apply and evaluate our method on a variety of large-scale and dynamic imaging problems in which the automatic computation of such parameters has so far been challenging: 2D dynamic cardiac MRI reconstruction, quantitative brain MRI reconstruction, low-dose CT and dynamic image denoising. The proposed method consistently improves upon TV reconstructions with scalar parameters, and the obtained parameter-maps adapt well to each imaging problem and dataset, leading to the preservation of detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the proposed algorithm is entirely interpretable, since it inherits the properties of the respective iterative reconstruction method from which the network is implicitly defined.
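
    As a hedged sketch of this two-sub-network design, the PyTorch code below pairs a small CNN that predicts a positive, pixelwise regularization parameter-map with T unrolled gradient steps on a smoothed TV-denoising energy. The smoothed-TV gradient descent merely stands in for the paper's iterative TV solver, and all layer sizes, names and the step size are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LambdaMapNet(nn.Module):
        """First sub-network: predicts a spatially varying, positive
        regularization parameter-map from the corrupted input."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())  # enforce positivity

        def forward(self, y):
            return self.net(y)

    def tv_energy(x, y, lam, eps=1e-3):
        # smoothed TV-denoising energy: 0.5 * ||x - y||^2 + sum(lam * |grad x|), pixelwise lam
        dx = F.pad(x[:, :, :, 1:] - x[:, :, :, :-1], (0, 1))
        dy = F.pad(x[:, :, 1:, :] - x[:, :, :-1, :], (0, 0, 0, 1))
        return 0.5 * ((x - y) ** 2).sum() + (lam * torch.sqrt(dx ** 2 + dy ** 2 + eps)).sum()

    class UnrolledTV(nn.Module):
        """Second sub-network: T unrolled gradient steps on the TV energy using the
        predicted parameter-map; trainable end-to-end on clean/corrupted pairs."""
        def __init__(self, T=10, step=0.5):
            super().__init__()
            self.lam_net, self.T, self.step = LambdaMapNet(), T, step

        def forward(self, y):
            lam = self.lam_net(y)
            x = y.detach().clone().requires_grad_(True)
            for _ in range(self.T):
                g, = torch.autograd.grad(tv_energy(x, y, lam), x, create_graph=True)
                x = x - self.step * g   # step stays differentiable w.r.t. lam_net parameters
            return x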

    Robust Deep Learning Methods for Solving Inverse Problems in Medical Imaging

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, and detection have direct applications to medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signals for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations, while lesion/tumor detection and segmentation are obvious applications of higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements arise when solving them for medical images. Patient data is highly sensitive and usually only accessible within individual institutions. This constrains the ground truth, dataset size, and computational resources available to these institutions for training performant models. Due to the mission-critical nature of healthcare applications, requirements such as performance robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm of mainstream deep learning is often insufficient in these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as different medical imaging modalities, image quality, architecture designs, and tasks. The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates the performance degradation caused by distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth may not be accessible.
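
    To illustrate the image-to-image formulation mentioned for undersampled MR reconstruction, the PyTorch sketch below simulates single-coil undersampling with a k-space mask, forms the zero-filled image, and trains a small residual CNN to map it back towards the fully sampled image. All networks, shapes and hyperparameters here are hypothetical minimal choices, not methods from the dissertation.

    import torch
    import torch.nn as nn

    def zero_filled_recon(image, mask):
        """Simulate undersampled single-coil acquisition: mask the k-space of a fully
        sampled image and return the magnitude of the zero-filled inverse FFT."""
        kspace = torch.fft.fft2(image)
        return torch.fft.ifft2(kspace * mask).abs()

    class SimpleRefiner(nn.Module):
        """Image-to-image CNN mapping the artifact-laden zero-filled image to an
        estimate of the fully sampled image (residual learning)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 3, padding=1))

        def forward(self, zero_filled):
            return zero_filled + self.net(zero_filled)

    # one supervised training step on synthetic pairs (fully sampled image, undersampling mask)
    model, loss_fn = SimpleRefiner(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    image = torch.rand(4, 1, 128, 128)                   # stand-in for fully sampled images
    mask = (torch.rand(1, 1, 128, 128) > 0.5).float()    # random toy undersampling mask
    recon = model(zero_filled_recon(image, mask))
    opt.zero_grad()
    loss_fn(recon, image).backward()
    opt.step()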