1,582 research outputs found

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. In this paper we therefore propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful within medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, requiring no external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first use a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data are unavailable for our study (and rare in practice), pixel-level constraints cannot be applied. We instead propose to enforce statistically indistinguishable distributions through adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to exploit non-local spatial information by encouraging multi-modal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images.
    Comment: IEEE Transactions on Medical Imaging, 2020
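    To make the unpaired adversarial scheme concrete, the sketch below illustrates the core idea of aligning US and MRI distributions with discriminators in both the image domain and a shared latent feature space. This is a minimal illustration, not the authors' implementation; all module shapes, names, and losses are assumptions.

```python
# Minimal sketch (not the paper's code): adversarial alignment of unpaired
# US and MRI data in both the image domain and a shared latent space.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

class Encoder(nn.Module):          # US/MRI image -> shared anatomical latent
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):          # shared latent -> MR-like image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

def discriminator(cin):            # PatchGAN-style critic
    return nn.Sequential(conv_block(cin, 32), conv_block(32, 64),
                         nn.Conv2d(64, 1, 3, padding=1))

enc_us, enc_mr, dec = Encoder(), Encoder(), Decoder()
d_image, d_feat = discriminator(1), discriminator(64)
bce = nn.BCEWithLogitsLoss()

us = torch.randn(4, 1, 128, 128)   # unpaired batches of US and MRI slices
mr = torch.randn(4, 1, 128, 128)

z_us, z_mr = enc_us(us), enc_mr(mr)
fake_mr = dec(z_us)

# Generator objective: fool both critics, so synthesised images match the
# real-MRI image distribution and US latents match MRI latents.
g_loss = bce(d_image(fake_mr), torch.ones_like(d_image(fake_mr))) \
       + bce(d_feat(z_us), torch.ones_like(d_feat(z_us)))
print(fake_mr.shape, g_loss.item())
```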

    Ultrasound image processing in the evaluation of labor induction failure risk

    Labor induction is defined as the artificial stimulation of uterine contractions for the purpose of vaginal birth. Induction is prescribed for both medical and elective reasons. A labor induction is considered successful when it ends in vaginal delivery; cesarean section is one of its potential risks, occurring in about 20% of inductions. A ripe cervix (soft and distensible) is needed for a successful labor. During ripening, cervical tissue undergoes microstructural changes: collagen becomes disorganized and water content increases. These changes affect the interaction between cervical tissue and sound waves during transvaginal ultrasound scanning and are perceived as gray-level intensity variations in the echographic image. Texture analysis can be used to quantify these variations and provides a means to evaluate cervical ripening non-invasively.
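    As an illustration of the kind of texture analysis described above, the sketch below computes classic gray-level co-occurrence matrix (GLCM) statistics over an image region. The specific features and parameters the study uses are not stated in the abstract; the distances, angles, and properties chosen here are assumptions.

```python
# Minimal sketch: GLCM texture features for quantifying gray-level
# variations in an ultrasound region of interest (ROI).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(roi: np.ndarray) -> dict:
    """Compute a few classic GLCM statistics over a uint8 ROI."""
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Stand-in for a cervical ROI cropped from a transvaginal B-mode scan.
roi = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(glcm_features(roi))
```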

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using only a small amount of training data. These traits give U-net high utility within the medical imaging community and have led to its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been used in other applications. As the potential of U-net is still growing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. We also look at the image modalities and application areas in which U-net has been applied.
    Comment: 42 pages; in IEEE Access
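    For readers unfamiliar with the architecture the review surveys, the sketch below shows a deliberately tiny one-level U-Net: an encoder, a bottleneck, and a decoder joined by a skip connection. Real U-Nets use four or five levels; the depth and channel widths here are reduced for illustration and are not from the reviewed paper.

```python
# Minimal one-level U-Net sketch: encoder/decoder with a skip connection.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # Decoder sees upsampled features concatenated with the skip connection.
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        skip = self.enc(x)                    # full-resolution features
        z = self.bottleneck(self.down(skip))  # half-resolution features
        z = self.up(z)                        # back to full resolution
        z = self.dec(torch.cat([z, skip], dim=1))
        return self.head(z)                   # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```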

    MedicalSeg: a medical GUI application for image segmentation management

    In the field of medical imaging, segmentation, the division of an image into meaningful structures, is an essential pre-processing step for analysis. Many studies have addressed the general problem of evaluating image segmentation results. Much of the work in computer vision focuses on artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that they require a large dataset of ground truth validated by medical experts. Consequently, many research groups have developed segmentation approaches tailored to their specific needs. However, a generalised application for visualizing, assessing and comparing the results of different methods, and for facilitating the creation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches; and second, to generate segmented images from which ground truths can be built for later use with artificial intelligence tools. An experimental demonstration and a performance analysis are also presented.
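    The sketch below shows the kind of overlap metrics a comparison platform like MedicalSeg could use to score segmentation methods against a ground truth. Dice and Jaccard are standard in the field, but the paper's exact evaluation criteria are not listed in this abstract, so their use here is an assumption.

```python
# Minimal sketch: comparing segmentation masks against a ground truth.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True   # reference mask
methods = {"method_a": np.roll(gt, 2, axis=0),           # slightly shifted
           "method_b": np.roll(gt, 8, axis=0)}           # badly shifted
for name, pred in methods.items():
    print(f"{name}: Dice={dice(pred, gt):.3f}  IoU={jaccard(pred, gt):.3f}")
```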

    Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

    Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework that produces clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. A deep neural network was then constructed and optimized to use dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve segmentation performance. The proposed methodology was evaluated both quantitatively, with similarity metrics, and clinically, with physician reviews; external validation on an independent database was also conducted. Our work addresses some of the major limitations that restricted the clinical applicability of existing approaches, producing automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable in both the quantitative and clinical evaluations. Both novel components, the tumor volume-based training/validation/testing stratification strategy and the incorporation of voxel-wise radiomics feature images, were shown to improve segmentation performance. The results show that the proposed method is effective and robust, producing automatic lung tumor segmentations that could improve both the quality and the consistency of manual tumor delineation.
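    One common way to feed a network dual-modality data is early fusion: registered PET and CT patches stacked as input channels, as sketched below. Whether this work fuses at the input, feature, or decision level is not stated in the abstract, so the two-channel input here is an assumption.

```python
# Minimal sketch: channel-wise early fusion of registered PET and CT volumes.
import torch
import torch.nn as nn

net = nn.Sequential(                 # stand-in for the full segmentation model
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(16, 1, 1))             # per-voxel tumor logit

ct  = torch.randn(1, 1, 32, 64, 64)  # CT patch
pet = torch.randn(1, 1, 32, 64, 64)  # PET patch, resampled to the CT grid
x = torch.cat([ct, pet], dim=1)      # stack modalities as input channels
print(net(x).shape)                  # torch.Size([1, 1, 32, 64, 64])
```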

    Deep learning-based fully automatic segmentation of the maxillary sinus on cone-beam computed tomographic images

    Detection of the maxillary sinus wall is important in dental fields such as implant surgery, tooth extraction, and the diagnosis of odontogenic disease. Accurate segmentation of the maxillary sinus is required as a cornerstone for diagnosis and treatment planning. This study proposes a deep learning-based method for fully automatic segmentation of the maxillary sinus, in both clear and hazy states, on cone-beam computed tomographic (CBCT) images. A segmentation model was developed using U-Net, a convolutional neural network, with a total of 19,350 CBCT images from 90 maxillary sinuses (34 clear, 56 hazy). Post-processing to eliminate prediction errors in the U-Net segmentation results further increased accuracy. The average U-Net predictions had a dice similarity coefficient (DSC) of 0.9090 ± 0.1921 and a Hausdorff distance (HD) of 2.7013 ± 4.6154. After post-processing, the averages improved to a DSC of 0.9099 ± 0.1914 and an HD of 2.1470 ± 2.2790. The proposed deep learning model with post-processing showed good performance for both clear and hazy maxillary sinus segmentation. This model has the potential to help dental clinicians with maxillary sinus segmentation, yielding equivalent accuracy across a variety of cases.
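    A common post-processing step for cleaning CNN segmentation output is to keep only the largest connected component, discarding small spurious islands, as sketched below. The paper's actual post-processing is not detailed in the abstract, so this is an illustrative assumption.

```python
# Minimal sketch: largest-connected-component post-processing of a mask.
import numpy as np
from scipy import ndimage

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Return a binary mask containing only the largest connected blob."""
    labels, n = ndimage.label(mask)      # label connected components
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())  # voxel count per component
    sizes[0] = 0                         # ignore the background label
    return labels == sizes.argmax()

mask = np.zeros((64, 64), bool)
mask[10:40, 10:40] = True                # plausible sinus region
mask[55:58, 55:58] = True                # spurious false positive
print(largest_component(mask).sum(), "voxels kept of", mask.sum())
```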

    U-net and its variants for medical image segmentation: A review of theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis. Its traits give it high utility within the medical imaging community and have led to its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been used in other applications. Given that U-net's potential is still growing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have emerged in deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.