119 research outputs found

    3D bi-directional transformer U-Net for medical image segmentation

    As one of the most popular deep learning methods, deep convolutional neural networks (DCNNs) have been widely adopted for segmentation tasks with positive results. However, DCNN-based frameworks are known to struggle with modeling global relations within imaging features. Although several techniques have been proposed to enhance the global reasoning of DCNNs, these models either fail to match the performance of traditional fully-convolutional structures or cannot exploit the basic advantage of CNN-based networks, namely local reasoning. In this study, in contrast to current attempts to combine FCNs with global reasoning methods, we fully exploit self-attention by designing a novel attention mechanism for 3D computation and propose a new segmentation framework (named 3DTU) for three-dimensional medical image segmentation tasks. This framework processes images end-to-end and performs 3D computation on both the encoder side (which contains a 3D transformer) and the decoder side (which is based on a 3D DCNN). We tested the framework on two independent datasets consisting of 3D MRI and CT images. Experimental results demonstrate that our method outperforms several state-of-the-art segmentation methods across various metrics.
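The global-reasoning step this abstract describes, self-attention computed over an entire 3D feature volume, can be sketched in a few lines: flatten the D×H×W voxels into a token sequence and apply scaled dot-product attention so every voxel attends to every other voxel. This is a minimal NumPy illustration of the general mechanism, not the 3DTU architecture itself; the projection matrices `wq`, `wk`, `wv` are placeholders for learned weights.

```python
import numpy as np

def self_attention_3d(vol_feats, wq, wk, wv):
    """Self-attention over every voxel of a 3D feature volume.

    vol_feats: (D, H, W, C) feature volume; wq/wk/wv: (C, C) projections.
    The volume is flattened to a sequence of D*H*W tokens so each voxel
    can attend to every other voxel (global reasoning).
    """
    d, h, w, c = vol_feats.shape
    x = vol_feats.reshape(-1, c)                  # (N, C), N = D*H*W tokens
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(c)                 # (N, N) voxel-to-voxel affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for exp
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1 (softmax)
    out = attn @ v                                # (N, C) globally mixed features
    return out.reshape(d, h, w, c)

rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 3, 3, 8))
w = np.eye(8)                                     # identity projections for the demo
out = self_attention_3d(feats, w, w, w)
print(out.shape)                                  # (2, 3, 3, 8)
```

Note the quadratic (N × N) attention matrix: for realistic volumes N grows fast, which is why 3D transformer designs typically restrict or factorize attention.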

    Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture

    Brain tumor segmentation seeks to separate healthy tissue from tumorous regions, an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection process, in which patients rely on the skills and expertise of a human reader. To address this problem, a brain tumor segmentation and detection system is proposed and evaluated on the BraTS 2018 dataset. This dataset contains four MRI modalities for each patient (T1, T2, T1Gd, and FLAIR) along with a ground-truth tumor segmentation, i.e., class labels. A fully automatic methodology for segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and then processed through several techniques: subset division, narrow object region, category brain slicing, the watershed algorithm, and feature scaling. All these steps are applied before feeding the data into the U-Net model, which performs pixel-level segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets. The proposed model achieved dice coefficients of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.
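The dice coefficient reported above can be computed directly from a predicted mask and a ground-truth mask. A minimal NumPy sketch (the function name and the smoothing term `eps` are our own choices):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice overlap between two binary segmentation masks:
    2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: prediction covers a 2x2 square, truth covers a 2x3 rectangle
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 0.8
```

A dice score of 0.98, as reported for HGG-1, therefore means the predicted and reference tumor masks overlap almost perfectly.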

    Image Processing and Analysis for Preclinical and Clinical Applications

    Radiomics is one of the most successful branches of research in the field of image processing and analysis, as it provides valuable quantitative information for personalized medicine. It has the potential to discover features of the disease that cannot be appreciated with the naked eye in both preclinical and clinical studies. In general, all quantitative approaches based on biomedical images, such as positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI), have a positive clinical impact in the detection of biological processes and diseases as well as in predicting response to treatment. This Special Issue, "Image Processing and Analysis for Preclinical and Clinical Applications", addresses some gaps in this field to improve the quality of research in the clinical and preclinical environment. It consists of fourteen peer-reviewed papers covering a range of topics and applications related to biomedical image processing and analysis.
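As a concrete illustration of the kind of quantitative information radiomics extracts, here is a minimal sketch of a few first-order statistics computed over a region of interest. The particular feature set below is illustrative only; real radiomics toolkits (e.g. PyRadiomics) compute far richer, standardized feature sets including texture and shape descriptors.

```python
import numpy as np

def first_order_features(image, mask):
    """A few first-order radiomic features over the voxels inside a ROI mask
    (a minimal sketch, not a standardized feature set)."""
    vals = image[mask.astype(bool)].astype(float)
    return {
        "mean":   vals.mean(),        # average intensity in the ROI
        "std":    vals.std(),         # intensity spread
        "min":    vals.min(),
        "max":    vals.max(),
        "energy": (vals ** 2).sum(),  # sum of squared intensities
    }

img = np.arange(9, dtype=float).reshape(3, 3)
roi = np.zeros((3, 3), dtype=bool)
roi[1, 1:] = True                     # ROI covers intensities 4 and 5
feats = first_order_features(img, roi)
print(feats["mean"])                  # 4.5
```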

    Machine Learning and Quantitative Imaging for the Management of Brain Metastasis

    Significantly affecting patients’ clinical course and quality of life, a growing number of cancer cases are diagnosed with brain metastasis annually. Although a considerable percentage of cancer patients survive for several years when the disease is discovered at an early stage while still localized, once the tumour has metastasized to the brain, median survival decreases considerably. Early detection followed by precise and effective treatment of brain metastasis may lead to improved patient survival and quality of life. A main challenge in prescribing an effective treatment regimen is the variability of tumour response across patients receiving similar therapy, e.g., radiotherapy as a main treatment option for brain metastasis, due to many patient-related factors. Stratifying patients based on their predicted response and subsequently assessing their actual response to therapy are challenging yet crucial tasks. While risk assessment models built on standard clinical attributes have been proposed for patient stratification, the imaging data acquired for these patients as part of the standard of care are neither computationally analyzed nor directly incorporated into these models. Further, therapy response monitoring and assessment is a cumbersome task for patients with brain metastasis, requiring longitudinal tumour delineation on MRI volumes before and at multiple follow-up sessions after treatment. This is aggravated by the time-sensitive nature of the disease. To address these challenges, this dissertation introduces and investigates a number of machine learning frameworks and computational techniques for automatic tumour segmentation, radiotherapy outcome assessment, and therapy outcome prediction. Powered by advanced machine learning algorithms, a complex attention-guided segmentation framework is introduced and investigated for segmenting brain tumours on serial MRI.
    The experimental results demonstrate that the proposed framework can achieve dice scores of 91.5% on the baseline scans and 84.1% to 87.4% on the follow-up scans. This framework is then applied in a proposed system that follows standard clinical criteria, based on changes in tumour size after treatment, to assess tumour response to radiotherapy automatically. The system demonstrates very good agreement with expert clinicians in detecting local response, with an accuracy of over 90%. Next, innovative machine-learning-based solutions are proposed and investigated for radiotherapy outcome prediction before or early after therapy, using MRI radiomic models and novel deep learning architectures that analyze treatment-planning MRI with and without standard clinical attributes. The developed models demonstrate an accuracy of up to 82.5% in predicting radiotherapy outcome before treatment initiation. The machine learning platforms presented in this dissertation, along with the promising results obtained in the conducted experiments, are steps towards realizing important decision support tools for oncologists and radiologists and can eventually pave the way towards a personalized therapeutics paradigm for cancer patients.
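The size-change-based response assessment described above can be sketched as a simple rule over baseline and follow-up tumour measurements. The percent-change thresholds below are illustrative assumptions for the sketch, not the clinical criteria used in the dissertation (real consensus criteria such as RANO-BM are defined over tumour diameters by expert panels):

```python
def classify_response(baseline_vol, followup_vol):
    """Classify tumour response from a baseline and a follow-up volume.

    The thresholds (-65% and +20%) are hypothetical placeholders chosen
    for illustration; substitute the criteria mandated by your protocol.
    """
    if followup_vol == 0:
        return "complete response"           # tumour no longer measurable
    change = (followup_vol - baseline_vol) / baseline_vol
    if change <= -0.65:
        return "partial response"            # substantial shrinkage
    if change >= 0.20:
        return "progression"                 # substantial growth
    return "stable disease"

print(classify_response(10.0, 3.0))          # partial response (-70%)
```

In the automated system, the two volumes would come from the segmentation framework applied to the pre-treatment and follow-up MRI, which is what makes the longitudinal delineation burden tractable.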

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Selective Compression of Medical Images via Intelligent Segmentation and 3D-SPIHT Coding

    Abstract of the thesis "Selective Compression of Medical Images via Intelligent Segmentation and 3D-SPIHT Coding" by Bohan Fan, The University of Wisconsin-Milwaukee, 2018, under the supervision of Professor Zeyun Yu. With increasingly high-resolution 3D volumetric medical images widely used in clinical patient treatment, efficient image compression techniques are in great demand due to the cost of storage and transmission time. While various algorithms are available, the conflict between a high compression rate and degraded image quality can be partially resolved by the region of interest (ROI) coding technique. Instead of compressing the entire image uniformly, we can segment the image into a critical diagnosis zone (the ROI) and a background zone, applying lossless or low-rate compression to the former and high-rate compression to the latter, without losing much clinically important information. In this thesis, we explore a medical image transmission process that uses a deep learning network, 3D U-Net, to segment the region of interest of volumetric images and the 3D-SPIHT algorithm to encode the images for compression, which can potentially be used in medical data sharing scenarios. In our experiments, we train a 3D U-Net on a dataset of spine images with ground-truth labels and use the trained model to extract the vertebral bodies of test data. The segmented vertebral regions are dilated to generate the region of interest, which is coded by the 3D-SPIHT algorithm with a low compression ratio while the rest of the image (background) is coded with a high compression ratio, achieving an excellent balance of image quality in the region of interest and high compression elsewhere.
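The ROI-coding idea can be illustrated with a toy quantizer that keeps ROI voxels at full fidelity while coarsening the background. This NumPy sketch only conveys the selective-fidelity principle; actual 3D-SPIHT operates on wavelet coefficients with bit-plane coding, and the step sizes below are arbitrary choices for the demo.

```python
import numpy as np

def selective_quantize(image, roi_mask, roi_step=1, bg_step=16):
    """Quantize an image with fine steps inside the ROI and coarse steps
    outside it (a stand-in for ROI-aware rate allocation)."""
    out = np.empty_like(image)
    roi = roi_mask.astype(bool)
    # roi_step=1 leaves ROI voxels exactly as they were (lossless here)
    out[roi] = (image[roi] // roi_step) * roi_step
    # background snapped to multiples of bg_step, i.e. far fewer levels
    out[~roi] = (image[~roi] // bg_step) * bg_step
    return out

img = np.array([[100, 37], [200, 255]], dtype=np.uint8)
mask = np.array([[True, False], [False, False]])   # only the top-left pixel is ROI
out = selective_quantize(img, mask)
print(out)   # ROI pixel kept exactly; background coarsened to multiples of 16
```

Fewer distinct background levels means fewer bits after entropy coding, which is the same trade-off the thesis realizes with per-region 3D-SPIHT bitrates.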

    Recent Progress in Transformer-based Medical Image Analysis

    The transformer is primarily used in the field of natural language processing. Recently, it has been adopted in, and shows promise for, the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structure of the transformer. After that, we describe the recent progress of the transformer in the field of MIA, organizing the applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. A large number of experiments studied in this review illustrate that transformer-based methods outperform existing methods when compared across multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review, with the latest contents, detailed information, and comprehensive comparisons, may greatly benefit the broad MIA community. (Accepted by Computers in Biology and Medicine.)
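The attention mechanism this review recaps is usually deployed in multi-head form: the feature dimension is split into heads, scaled dot-product attention runs independently per head, and the results are concatenated. A minimal NumPy sketch (identity projections for brevity; real transformers learn separate query/key/value and output projection matrices per head):

```python
import numpy as np

def multi_head_attention(x, num_heads):
    """Multi-head self-attention over a token sequence x of shape (N, C).

    Each head attends over its own C/num_heads-dimensional slice of the
    features; outputs are concatenated back to (N, C).
    """
    n, c = x.shape
    assert c % num_heads == 0
    dh = c // num_heads                      # per-head feature dimension
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * dh:(h + 1) * dh]  # identity projections (demo only)
        s = q @ k.T / np.sqrt(dh)              # scaled dot-product scores
        s -= s.max(axis=-1, keepdims=True)     # stabilize the softmax
        a = np.exp(s)
        a /= a.sum(axis=-1, keepdims=True)
        heads.append(a @ v)                    # (N, dh) per-head output
    return np.concatenate(heads, axis=-1)      # (N, C)

x = np.arange(12.0).reshape(3, 4)              # 3 tokens, 4 features
out = multi_head_attention(x, num_heads=2)
print(out.shape)                               # (3, 4)
```

Splitting into heads lets different heads specialize in different pairwise relations at the same overall cost as single-head attention.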

    Recent Advances in Machine Learning Applied to Ultrasound Imaging

    Machine learning (ML) methods are pervading an increasing number of application fields because of their capacity to effectively solve a wide variety of challenging problems. The use of ML techniques in ultrasound imaging began several years ago, but scientific interest in the topic has grown sharply in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields: medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, is analyzed by classifying studies according to the human organ investigated and the methodology adopted (e.g., detection, segmentation, and/or classification), while for the latter, solutions for the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the analysis are summarized and discussed. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
