
    Design of a multimodal rendering system

    This paper addresses the rendering of aligned regular multimodal datasets. It presents a general framework for multimodal data fusion that includes several data merging methods. We also analyze the requirements of a rendering system able to provide these different fusion methods. On the basis of these requirements, we propose a novel design for a multimodal rendering system. The design has been implemented and shown to be efficient and flexible.
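
    At the voxel level, data merging for aligned volumes can be illustrated with simple blending rules. A minimal NumPy sketch (the function names and the choice of merge rules are illustrative assumptions, not the paper's actual methods):

```python
# A minimal sketch of voxel-level data merging for two aligned volumes,
# assuming both arrays share the same grid. The merge rules shown here
# (linear blend, maximum intensity) are illustrative, not the paper's methods.
import numpy as np

def fuse_weighted(vol_a: np.ndarray, vol_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Linear blend of two co-registered volumes (alpha weights vol_a)."""
    return alpha * vol_a + (1.0 - alpha) * vol_b

def fuse_max(vol_a: np.ndarray, vol_b: np.ndarray) -> np.ndarray:
    """Maximum-intensity merge, keeping the brightest contribution per voxel."""
    return np.maximum(vol_a, vol_b)
```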

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this paper we propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful in medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, without any external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data are unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the two distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to exploit non-local spatial information, by encouraging multimodal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images.
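
    Since paired pixels are unavailable, the synthesis is driven by distribution matching. A hedged PyTorch sketch of a standard adversarial loss of this kind (module names and tensor shapes are assumptions; the paper's full model also includes feature-space and structural discriminators not shown here):

```python
# A hedged PyTorch sketch of the distribution-matching idea: with no paired
# US/MRI pixels, a discriminator learns to tell real MR images from
# synthesised ones, and the generator is trained to fool it. Module names
# and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def adversarial_losses(disc: nn.Module, real_mr: torch.Tensor, fake_mr: torch.Tensor):
    """Image-domain GAN losses enforcing statistical indistinguishability."""
    d_real = disc(real_mr)                 # discriminator logits on real MR
    d_fake = disc(fake_mr.detach())        # detach: do not update the generator here
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    g_fake = disc(fake_mr)                 # generator tries to look "real"
    g_loss = bce(g_fake, torch.ones_like(g_fake))
    return d_loss, g_loss
```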

    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space in which arithmetic operations (such as taking the mean) are well defined. Points in that space, averaged over the modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any subset of the available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than with mean-filling or other synthesis-based alternatives.
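
    The key fusion step can be sketched compactly: each available modality is embedded independently, then the embeddings are reduced to per-voxel statistics that are well defined for any non-empty subset. A minimal PyTorch illustration (averaging follows the abstract; adding the variance as a second moment, and the tensor shapes, are assumptions):

```python
# A minimal sketch of the hetero-modal fusion step: each available modality
# is embedded independently, then the embeddings are reduced to per-voxel
# moments, so any non-empty subset of modalities yields a fixed-size input
# for the segmentation head. Averaging follows the abstract; the variance
# term and the (B, C, D, H, W) shapes are assumptions.
import torch

def fuse_available(embeddings: list) -> torch.Tensor:
    """Combine a variable-length list of per-modality feature maps,
    each of shape (B, C, D, H, W)."""
    stack = torch.stack(embeddings, dim=0)        # (M, B, C, D, H, W)
    mean = stack.mean(dim=0)
    var = stack.var(dim=0, unbiased=False)        # zero when only one modality
    return torch.cat([mean, var], dim=1)          # (B, 2C, D, H, W)
```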

    Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI

    T1- and T2-weighted (T1w and T2w) images are essential for tissue classification and anatomical localization in magnetic resonance imaging (MRI) analyses. However, these anatomical data can be challenging to acquire in non-sedated neonatal cohorts, which are prone to high-amplitude movement and display lower tissue contrast than adults. As a result, one of these modalities may be missing, or of such poor quality that it cannot be used for accurate image processing, resulting in subject loss. While recent literature attempts to overcome these issues in adult populations using synthetic imaging approaches, neither the efficacy of these methods in pediatric populations nor their impact on conventional MR analyses has been evaluated. In this work, we present two novel methods to generate pseudo-T2w images: the first is based on deep learning and extends previous models to 3D imaging without requiring paired data; the second is based on nonlinear multi-atlas registration and provides a computationally lightweight alternative. We demonstrate the anatomical accuracy of pseudo-T2w images and their efficacy in existing MR processing pipelines in two independent neonatal cohorts. Critically, we show that using these pseudo-T2w methods in resting-state functional MRI analyses produces virtually identical functional connectivity results compared to those obtained from T2w images, confirming their utility in infant MRI studies for salvaging otherwise lost subject data.
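
    The registration-based alternative can be sketched in a few lines: atlas T2w images are warped into the subject's T1w space and averaged with similarity-derived weights. The softmax-over-correlation weighting below is an illustrative assumption, not necessarily the exact formulation used here:

```python
# A lightweight sketch of the registration-based alternative: atlas T2w
# images are warped into the subject's T1w space (registration not shown),
# then averaged with weights reflecting how well each warped atlas T1w
# matches the subject. The weighting scheme is an illustrative assumption.
import numpy as np

def pseudo_t2w(subject_t1w, warped_atlas_t1ws, warped_atlas_t2ws, beta=5.0):
    """Similarity-weighted average of registered atlas T2w images."""
    sims = np.array([
        np.corrcoef(subject_t1w.ravel(), a.ravel())[0, 1]
        for a in warped_atlas_t1ws
    ])
    weights = np.exp(beta * sims)
    weights /= weights.sum()                      # softmax-style normalisation
    return sum(w * t2 for w, t2 in zip(weights, warped_atlas_t2ws))
```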

    Automating the multimodal analysis of musculoskeletal imaging in the presence of hip implants

    In patients treated with hip arthroplasty, the muscular condition and the presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for the clinical evaluation of bone tissue and for orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrasts of these modalities could help better quantify fatty infiltration and so characterise muscular condition after hip replacement. In this thesis, I developed automated processing tools for the joint analysis of CT and MR images of patients with hip implants. To combine the multimodal information, a novel nonlinear registration algorithm was introduced, which imposes rigidity constraints on bony structures to ensure realistic deformation. I implemented and thoroughly validated a fully automated framework for the multimodal segmentation of healthy and pathological musculoskeletal structures, as well as implants. This framework combines the proposed registration algorithm with tailored image quality enhancement techniques and a multi-atlas-based segmentation approach, providing robustness against large anatomical variability across the population and against noise and artefacts in the images. Automating muscle segmentation enabled the derivation of a measure of fatty infiltration, the Intramuscular Fat Fraction, useful for characterising muscle atrophy. The proposed imaging biomarker was shown to correlate strongly with the atrophy radiological score currently used in clinical practice. Finally, preliminary work on multimodal metal artefact reduction, using an unsupervised deep learning strategy, showed promise for improving the postprocessing of CT and MR images heavily corrupted by metal artefacts. This work represents a step forward towards the automation of image analysis in hip arthroplasty, supporting and quantitatively informing the decision-making process regarding patient management.
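
    A hedged sketch of how such a fat-fraction biomarker might be computed: the mean of a voxel-wise fat-fraction map (e.g., from Dixon MRI) inside an automatically segmented muscle mask. The exact definition used in the thesis may differ:

```python
# A hedged sketch of an Intramuscular Fat Fraction measure: the mean of a
# voxel-wise fat-fraction map (e.g., from Dixon MRI) inside an automatically
# segmented muscle mask. The exact definition used in the thesis may differ.
import numpy as np

def intramuscular_fat_fraction(fat_fraction_map: np.ndarray,
                               muscle_mask: np.ndarray) -> float:
    """Mean fat fraction over voxels labelled as muscle (mask > 0)."""
    voxels = fat_fraction_map[muscle_mask > 0]
    return float(voxels.mean()) if voxels.size else float("nan")
```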

    Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning.

    To tackle the problem of magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP), we propose a multi-atlas information propagation scheme that jointly segments organs and generates pseudo x-ray computed tomography (CT) data from structural MR images (T1-weighted and T2-weighted). As the performance of the method strongly depends on the quality of the atlas database, composed of multiple sets of aligned MR, CT and segmented images, we also propose a robust way of registering atlas MR and CT images, which combines structure-guided registration with CT and MR image synthesis. We first evaluated the proposed framework in terms of segmentation and CT synthesis accuracy on 15 subjects with prostate cancer. The segmentations obtained with the proposed method were compared to the manual segmentations using the Dice similarity coefficient (DSC). Mean DSCs of 0.73, 0.90, 0.77 and 0.90 were obtained for the prostate, bladder, rectum and femur heads, respectively. The mean absolute error (MAE) and the mean error (ME) were computed between the reference CTs (non-rigidly aligned to the MRs) and the pseudo CTs generated with the proposed method. The MAE was on average [Formula: see text] HU and the ME [Formula: see text] HU. We then performed a dosimetric evaluation by re-calculating plans on the pseudo CTs and comparing them to the plans optimised on the reference CTs. We compared the cumulative dose-volume histograms (DVHs) obtained for the pseudo CTs to those obtained for the reference CTs in the planning target volume (PTV), located in the prostate, and in the organs at risk, at different DVH points. We obtained average differences of [Formula: see text] in the PTV for [Formula: see text], and between [Formula: see text] and 0.05% in the PTV, bladder, rectum and femur heads for Dmean and [Formula: see text]. Overall, we demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, potentially bypassing the need for a CT scan for accurate RTP.
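
    The reported metrics are straightforward to compute. A minimal NumPy sketch of the Dice similarity coefficient and of the MAE/ME between pseudo and reference CTs (the boolean evaluation mask and array conventions are our assumptions):

```python
# A minimal NumPy sketch of the reported metrics: Dice overlap between
# automatic and manual segmentations, and mean absolute / mean (signed)
# error in HU between pseudo and reference CTs. The boolean mask is our
# assumption about where the errors are evaluated.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mae_me(pseudo_ct: np.ndarray, ref_ct: np.ndarray, mask: np.ndarray):
    """MAE and ME inside a boolean mask (both CTs in Hounsfield units)."""
    diff = pseudo_ct[mask] - ref_ct[mask]
    return float(np.abs(diff).mean()), float(diff.mean())
```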

    An Investigation of Methods for CT Synthesis in MR-only Radiotherapy
