
    Anatomy-Aware Self-supervised Fetal MRI Synthesis from Unpaired Ultrasound Images

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for anomaly screening; for this, ultrasound (US) is employed. While expert sonographers are adept at reading US images, MR images are much easier for non-experts to interpret. Hence, in this paper we seek to produce images with MRI-like appearance directly from clinical US images. Our own clinical motivation is to find a way to communicate US findings to patients or clinical professionals unfamiliar with US, but in medical image analysis such a capability is potentially useful, for instance, for US-MRI registration or fusion. Our model is self-supervised and end-to-end trainable. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise an extractor to determine shared latent features, which are then used for data synthesis. Since paired data were unavailable for our study (and are rare in practice), we propose to enforce the distributions to be similar, instead of employing pixel-wise constraints, by adversarial learning in both the image domain and the latent space. Furthermore, we propose an adversarial structural constraint to regularise the anatomical structures between the two modalities during synthesis. A cross-modal attention scheme is proposed to leverage non-local spatial correlations. The feasibility of the approach to produce realistic-looking MR images is demonstrated quantitatively and with a qualitative evaluation compared to real fetal MR images. Comment: MICCAI-MLMI 201

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this paper we propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful in medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, without any external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data are unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images. Comment: IEEE Transactions on Medical Imaging 202
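The adversarial distribution matching described in both abstracts can be illustrated with the standard non-saturating GAN losses. The sketch below is a numpy toy with our own function names, not the authors' implementation (which applies convolutional discriminators to images and to the shared latent features):

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of discriminator probabilities p against a label."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def adversarial_losses(d_real, d_fake):
    """Distribution matching without paired data: the discriminator learns to
    separate real from synthesised samples, while the generator is rewarded
    when its samples are classified as real. In the papers' setup this is
    applied twice: once on images and once on the shared latent features."""
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)  # non-saturating generator loss
    return d_loss, g_loss
```

A discriminator that separates the two distributions well yields a low discriminator loss but a high generator loss, and vice versa, which is the tension that drives the synthesis network toward realistic outputs.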

    The Future of Cardiac Mapping


    Identification of cancer hallmarks in patients with non-metastatic colon cancer after surgical resection

    Colon cancer is one of the most common cancers in the world, and the therapeutic workflow depends on the TNM staging system and the presence of clinical risk factors. However, in patients with non-metastatic disease, evaluating the benefit of adjuvant chemotherapy is a clinical challenge. Radiomics can be seen as a novel non-invasive imaging biomarker able to outline the tumor phenotype and to predict patient prognosis by analyzing preoperative medical images. Radiomics might provide decisional support for oncologists, with the goal of reducing the number of arbitrary decisions in the emerging era of personalized medicine. To date, much evidence highlights the strengths of radiomics in the cancer workup, but several aspects still limit the routine use of radiomics methods. The study aimed to develop a radiomic model able to identify high-risk colon cancer by analyzing pre-operative CT scans. The study population comprised 148 patients: 108 with non-metastatic colon cancer were retrospectively enrolled from January 2015 to June 2020, and 40 patients were used as the external validation cohort. The population was divided into two groups (High-risk and No-risk) according to the presence of at least one high-risk clinical factor. All patients had baseline CT scans, and 3D cancer segmentation was performed on the portal phase by two expert radiologists using open-source software (3DSlicer v4.10.2). Among the 107 extracted radiomic features, stable features were selected by evaluating the inter-reader intraclass correlation coefficient (ICC) (cut-off ICC > 0.8). Stable features were compared between the two groups (T-test or Mann–Whitney), and the significant features were selected for univariate and multivariate logistic regression to build a predictive radiomic model. The radiomic model was then validated on the external cohort. In total, 58/108 patients were classified as High-risk and 50/108 as No-risk. A total of 35 radiomic features were stable (0.81 ≤ ICC < 0.92). Among these, 28 features differed significantly between the two groups (p < 0.05), and only 9 features were selected to build the radiomic model. The radiomic model yielded an AUC of 0.73 in the internal cohort and 0.75 in the external cohort. In conclusion, the radiomic model can be seen as a performant, non-invasive imaging tool to properly stratify patients with high-risk colon cancer.
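The ICC-based stability filter described above can be sketched as follows. This is a minimal illustration using the two-way random-effects, single-measure ICC(2,1); the study does not state which ICC variant was used, and the function names and data layout are our own:

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1) for an
    (n_subjects, k_raters) matrix of one feature's values."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def stable_features(reader_a, reader_b, cutoff=0.8):
    """Indices of features whose two-reader ICC exceeds the cut-off.
    reader_a, reader_b: (n_patients, n_features) feature matrices."""
    return [j for j in range(reader_a.shape[1])
            if icc2_1(np.column_stack([reader_a[:, j], reader_b[:, j]])) > cutoff]
```

Features surviving the cut-off would then go on to group comparison and logistic regression, as in the abstract.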

    Development and validation of real-time simulation of X-ray imaging with respiratory motion

    We present a framework that combines evolutionary optimisation, soft-tissue modelling and ray tracing on the GPU to simultaneously compute respiratory motion and X-ray imaging in real time. Our aim is to provide validated, high-fidelity building blocks that closely match both human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviour during respiration. Soft-tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface-rendering method is proposed to compute the X-ray image using the Beer-Lambert law; it is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: i) the respiration against anatomical data, and ii) the X-ray imaging against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as an interactive medical virtual environment for training percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools.
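The Beer-Lambert attenuation underlying the X-ray rendering can be sketched in a few lines for a single ray. The coefficients below are illustrative placeholders, not physical tissue values, and the function name is our own:

```python
import math

def beer_lambert(i0, segments):
    """Intensity of a ray of incident intensity i0 after crossing a list of
    (mu, thickness) segments, per the Beer-Lambert law:
    I = I0 * exp(-sum(mu_i * d_i))."""
    return i0 * math.exp(-sum(mu * d for mu, d in segments))

# Illustrative linear attenuation coefficients (cm^-1), not real tissue values:
path = [(0.2, 3.0), (0.5, 1.0)]   # e.g. 3 cm of "soft tissue", 1 cm of "bone"
attenuated = beer_lambert(100.0, path)
```

A GPU renderer evaluates this per pixel, accumulating `mu * d` along each ray through the segmented anatomy before applying the exponential.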

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for assessing the accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps of the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and visualization. We provide a detailed analysis of the measures of accuracy, robustness, reliability, and usability that need to be met during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.
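The patient-to-image transformation estimation at the core of such pipelines is often a least-squares rigid alignment of paired landmarks. A minimal point-based sketch (Kabsch-style; our own toy, not a specific method from the review) is:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i,
    estimated from paired landmarks via the Kabsch algorithm."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    s = np.eye(src.shape[1])
    s[-1, -1] = d
    r = vt.T @ s @ u.T
    return r, mu_d - r @ mu_s
```

Real iUS-to-CT registration is usually intensity- or surface-based rather than landmark-based, but the closed-form rigid solve above is the building block most initialization steps reduce to.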

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement of image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in today's advanced operating rooms. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. Satisfying all of these requisites is needed to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems in the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability and capabilities of the existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to the efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring, applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments. The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.

    Improvements in the registration of multimodal medical imaging: application to intensity inhomogeneity and partial volume corrections

    Alignment, or registration, of medical images plays a relevant role in clinical diagnostic and treatment decisions as well as in research settings. With the advent of new technologies for multimodal imaging, robust registration of functional and anatomical information is still a challenge, particularly in small-animal imaging, where certain anatomical parts, such as the brain, have less structural content than in humans. Besides, patient-dependent and acquisition artefacts affecting the images' information content further complicate registration, as is the case for the intensity inhomogeneities (IIH) seen in MRI and the partial volume effect (PVE) inherent to PET imaging. Reference methods exist for accurate image registration, but their performance deteriorates severely in situations involving little image overlap. While several approaches to IIH and PVE correction exist, these methods either do not guarantee robust registration or depend on it. This thesis focuses on overcoming the current limitations of registration to enable novel IIH and PVE correction methods.
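The reference similarity measure for multimodal registration of the kind discussed here is mutual information between the two images' intensities. A minimal histogram-based sketch (our own illustration, not the thesis's implementation) is:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in mutual information (in nats) between two same-size images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop maximises this measure over transform parameters; with little image overlap the histogram becomes sparse and the estimate unreliable, which is exactly the failure mode the thesis targets.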