1,931 research outputs found

    Multi-modality Medical Image Segmentation with Unsupervised Domain Adaptation

    Advances in medical imaging have greatly aided accurate and fast medical diagnosis, and recent developments in deep learning have enabled efficient, cost-effective analysis of medical images. Among the different image processing tasks, medical image segmentation is one of the most crucial, because it provides the class, location, size, and shape of the subject of interest, which is invaluable for diagnostics. Nevertheless, acquiring annotations for training data usually requires costly manual effort and specialised expertise, making supervised training difficult. To overcome these problems, unsupervised domain adaptation (UDA) has been adopted to bridge knowledge between different domains. Despite the appearance dissimilarities between modalities such as MRI and CT, researchers have concluded that structural features of the same anatomy are universal across modalities, which opened up the study of multi-modality image segmentation with UDA methods. Traditional UDA research tackled the domain shift problem by minimising statistical distances between the source and target distributions in latent spaces. However, with the recent development of the generative adversarial network (GAN), adversarial UDA methods have shown outstanding performance by producing synthetic images that mitigate the domain gap when training a segmentation network for the target domain. Most existing studies focus on modifying the network architecture, but few investigate the generative adversarial training strategy. Inspired by the recent success of state-of-the-art data augmentation techniques in classification tasks, we designed a novel mix-up strategy to assist GAN training towards better synthesis of structural details, and consequently better segmentation results. In this thesis, we propose SynthMix, an add-on module with a natural yet effective training policy that can improve synthetic quality without altering the network architecture. SynthMix is a mix-up synthesis scheme designed for integration with the adversarial logic of GAN networks. Traditional GAN approaches judge an image as a whole, which can easily be dominated by discriminative features, resulting in little improvement of delicate structures in the synthesis. In contrast, SynthMix uses data augmentation to reinforce the transformation of details in local regions. Specifically, it coherently mixes up aligned real and synthetic samples at local regions to stimulate the generation of fine-grained features, which are examined by an associated inspector for domain-specific details. We evaluated our method on two segmentation benchmarks across three publicly available datasets, where it showed a significant performance gain over existing state-of-the-art approaches.
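
    The local mix-up described above can be made concrete in a few lines. Below is a minimal sketch, assuming PyTorch; the names (grid_mix, PatchInspector), the patch size, and the mixing ratio are illustrative assumptions rather than the thesis's actual implementation. Aligned patches are swapped between a real and a synthetic image, and a per-patch inspector is trained to tell which regions came from which source, so the generator is rewarded for locally convincing detail rather than a merely globally plausible image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grid_mix(real, synthetic, patch=32, p_synth=0.5):
    """Swap aligned local patches between a real and a synthetic image.

    Returns the mixed image and a per-patch mask (1 = synthetic patch,
    0 = real patch) that serves as the inspector's training target.
    """
    b, c, h, w = real.shape
    gh, gw = h // patch, w // patch
    mask = (torch.rand(b, 1, gh, gw, device=real.device) < p_synth).float()
    full = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    mixed = full * synthetic + (1.0 - full) * real
    return mixed, mask

class PatchInspector(nn.Module):
    """Per-patch domain classifier: guesses, for every local region of a
    mixed image, whether it came from the real or the synthetic sample."""
    def __init__(self, in_ch=1, patch=32):
        super().__init__()
        self.patch = patch
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1))

    def forward(self, x):
        h, w = x.shape[2], x.shape[3]
        logits = self.features(x)
        # Pool to one logit per patch of the mixing grid.
        return F.adaptive_avg_pool2d(logits, (h // self.patch, w // self.patch))

# Toy usage: the inspector must be fooled at every patch, which pushes the
# generator towards fine-grained local realism.
real = torch.randn(2, 1, 256, 256)       # target-domain sample
synthetic = torch.randn(2, 1, 256, 256)  # e.g. aligned generator output
mixed, mask = grid_mix(real, synthetic)
inspector = PatchInspector()
inspector_loss = F.binary_cross_entropy_with_logits(inspector(mixed), mask)
```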

    Data efficient deep learning for medical image analysis: A survey

    The rapid evolution of deep learning has significantly advanced the field of medical image analysis. However, despite these achievements, further enhancement of deep learning models for medical image analysis faces a significant challenge: the scarcity of large, well-annotated datasets. To address this issue, recent years have witnessed a growing emphasis on the development of data-efficient deep learning methods. This paper conducts a thorough review of data-efficient deep learning methods for medical image analysis. To this end, we categorize these methods based on the level of supervision they rely on: no supervision, inexact supervision, incomplete supervision, inaccurate supervision, and only limited supervision. We further divide these categories into finer subcategories; for example, we categorize inexact supervision into multiple instance learning and learning with weak annotations, and incomplete supervision into semi-supervised learning, active learning, and domain-adaptive learning, among others. Furthermore, we systematically summarize commonly used datasets for data-efficient deep learning in medical image analysis and investigate future research directions to conclude this survey.
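
    As a concrete instance of the incomplete-supervision category described above, the sketch below shows confidence-thresholded pseudo-labeling for semi-supervised segmentation, assuming PyTorch; the threshold, loss weighting, and function name are illustrative assumptions, not values or an algorithm the survey itself prescribes.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled, labels, unlabeled,
                         threshold=0.9, w_unsup=0.5):
    """One training step combining a supervised loss on annotated images
    with a pseudo-label loss on unannotated ones."""
    sup_loss = F.cross_entropy(model(labeled), labels)

    # Pseudo-labels come from the model's own predictions; gradients are
    # blocked so the targets stay fixed within the step.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)      # per-pixel confidence and label
        keep = conf > threshold              # trust only confident pixels

    unsup = F.cross_entropy(model(unlabeled), pseudo, reduction="none")
    unsup_loss = (unsup * keep.float()).sum() / keep.float().sum().clamp(min=1)
    return sup_loss + w_unsup * unsup_loss
```

    Only pixels the current model already labels with high confidence contribute to the unsupervised term, a common guard against confirmation bias in pseudo-labeling.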

    PnP-AdaNet: Plug-and-Play Adversarial Domain Adaptation Network with a Benchmark at Cross-modality Cardiac Segmentation

    Deep convolutional networks have demonstrated state-of-the-art performance on various medical image computing tasks, and leveraging images from different modalities for the same analysis task holds clinical benefits. However, the generalization capability of deep models to test data with different distributions remains a major challenge. In this paper, we propose PnP-AdaNet (plug-and-play adversarial domain adaptation network) for adapting segmentation networks between different modalities of medical images, e.g., MRI and CT. We propose to tackle the significant domain shift by aligning the feature spaces of the source and target domains in an unsupervised manner. Specifically, a domain adaptation module flexibly replaces the early encoder layers of the source network, while the higher layers are shared between domains. With adversarial learning, we build two discriminators whose inputs are, respectively, multi-level features and predicted segmentation masks. We have validated our domain adaptation method on cardiac structure segmentation in unpaired MRI and CT. The experimental results, with comprehensive ablation studies, demonstrate the efficacy of our proposed PnP-AdaNet. Moreover, we introduce a novel benchmark on the cardiac dataset for the task of unsupervised cross-modality domain adaptation. We will make our code and database publicly available, aiming to promote future studies on this challenging yet important research topic in medical imaging.
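
    The architecture described above splits the segmenter into swappable early layers and shared higher layers, with two domain discriminators. A minimal sketch of that idea follows, assuming PyTorch; the layer sizes, class names, and single feature level (the paper uses multi-level features) are illustrative assumptions, not the released PnP-AdaNet code.

```python
import torch
import torch.nn as nn

class SegNet(nn.Module):
    """Segmenter split into early (domain-specific) and late (shared) layers."""
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.early = nn.Sequential(          # swapped per domain
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.late = nn.Sequential(           # shared between domains
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1))

    def forward(self, x):
        feat = self.early(x)
        return self.late(feat), feat

def make_discriminator(in_ch):
    """PatchGAN-style domain discriminator over features or masks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 1))

source = SegNet()                # trained with source-domain (e.g. MRI) labels
target = SegNet()                # its early layers adapt to the target domain
target.late = source.late        # plug-and-play: share the higher layers
d_feat = make_discriminator(64)  # judges encoder features (one level here)
d_mask = make_discriminator(5)   # judges predicted segmentation masks

ct = torch.randn(2, 1, 128, 128)                 # unlabeled target-domain input
mask_logits, feat = target(ct)
# Non-saturating adversarial loss: push the target encoder to produce
# source-like features and source-like predictions.
feat_score = torch.sigmoid(d_feat(feat))
mask_score = torch.sigmoid(d_mask(torch.softmax(mask_logits, dim=1)))
adv_loss = -(feat_score.log().mean() + mask_score.log().mean())
```

    Sharing the higher layers while swapping only the early encoder is what makes the module "plug-and-play": the adapted front end can be dropped into the frozen source network at test time.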