
    End-to-End Adversarial Shape Learning for Abdomen Organ Deep Segmentation

    Automatic segmentation of abdominal organs from medical imaging has many potential applications in clinical workflows. Recently, state-of-the-art performance for organ segmentation has been achieved by deep learning models, i.e., convolutional neural networks (CNNs). However, it is challenging to train conventional CNN-based segmentation models to be aware of the shape and topology of organs. In this work, we tackle this problem by introducing a novel end-to-end shape learning architecture, the organ point-network. It takes deep learning features as inputs and generates organ shape representations as points located on the organ surface. We then present a novel adversarial shape learning objective function that optimizes the point-network to better capture shape information. We train the point-network together with a CNN-based segmentation model in a multi-task fashion, so that the shared network parameters benefit from both the shape learning and segmentation tasks. We demonstrate our method on three challenging abdominal organs: liver, spleen, and pancreas. The point-network generates surface points with fine-grained details, which proves critical for improving organ segmentation. Consequently, the deep segmentation model is improved by the introduced shape learning, with significantly better Dice scores observed for spleen and pancreas segmentation. Comment: Accepted to the International Workshop on Machine Learning in Medical Imaging (MLMI 2019).
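    As a rough sketch of the multi-task objective described in this abstract, the snippet below combines a soft Dice segmentation loss with an adversarial shape term computed on points predicted by a point-network head. All names here (PointNetworkHead, multitask_loss, the weight lam, and the point-cloud discriminator) are illustrative assumptions, not the paper's actual implementation.

    ```python
    import torch
    import torch.nn as nn

    def dice_loss(prob, target, eps=1e-6):
        # Soft Dice loss over (B, C, H, W) probability maps.
        inter = (prob * target).sum(dim=(1, 2, 3))
        denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

    class PointNetworkHead(nn.Module):
        """Hypothetical head mapping shared CNN features to surface points."""
        def __init__(self, feat_dim, num_points=512):
            super().__init__()
            self.num_points = num_points
            self.fc = nn.Sequential(
                nn.Linear(feat_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_points * 3),
            )

        def forward(self, feats):  # feats: (B, feat_dim) pooled CNN features
            # Reshape flat output into (B, num_points, 3) surface coordinates.
            return self.fc(feats).view(-1, self.num_points, 3)

    def multitask_loss(seg_prob, seg_gt, pred_points, discriminator, lam=0.1):
        # Adversarial shape term: the point-network tries to make its predicted
        # surface points score as "real" under a point-cloud discriminator
        # (assumed to return probabilities in (0, 1)).
        adv = -torch.log(discriminator(pred_points) + 1e-8).mean()
        # Shared parameters receive gradients from both tasks.
        return dice_loss(seg_prob, seg_gt) + lam * adv
    ```

    The key design point conveyed by the abstract is that the segmentation backbone and the point-network share parameters, so the adversarial shape gradient also regularizes the segmentation features.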

    Multi-modality Medical Image Segmentation with Unsupervised Domain Adaptation

    Advances in medical imaging have greatly aided accurate and fast medical diagnosis, and recent deep learning developments have enabled efficient and cost-effective analysis of medical images. Among image processing tasks, medical segmentation is one of the most crucial because it provides the class, location, size, and shape of the subject of interest, information that is invaluable for diagnostics. Nevertheless, acquiring annotations for training data usually requires expensive manual labour and specialised expertise, making supervised training difficult. To overcome these problems, unsupervised domain adaptation (UDA) has been adopted to bridge knowledge between different domains. Despite the appearance dissimilarities between modalities such as MRI and CT, researchers have concluded that structural features of the same anatomy are universal across modalities, which opened up the study of multi-modality image segmentation with UDA methods. Traditional UDA research tackled the domain shift problem by minimising the distance between the source and target feature distributions in a latent space. However, with the recent development of the generative adversarial network (GAN), adversarial UDA methods have shown outstanding performance by producing synthetic images that mitigate the domain gap when training a segmentation network for the target domain. Most existing studies focus on modifying the network architecture, but few investigate the generative adversarial training strategy. Inspired by the recent success of state-of-the-art data augmentation techniques in classification tasks, we designed a novel mix-up strategy to assist GAN training towards better synthesis of structural details and, consequently, better segmentation results.

    In this thesis, we propose SynthMix, an add-on module with a natural yet effective training policy that improves synthetic quality without altering the network architecture. SynthMix is a mix-up synthesis scheme designed to integrate with the adversarial logic of GANs. Traditional GAN approaches judge an image as a whole, so the judgement can easily be dominated by a few discriminative features, resulting in little improvement of delicate structures in the synthesis. In contrast, SynthMix uses data augmentation to reinforce detail transformation in local regions: it coherently mixes up aligned real and synthetic samples at local regions to stimulate the generation of fine-grained features, which an associated inspector examines for domain-specific details. We evaluated our method on two segmentation benchmarks across three publicly available datasets, where it showed a significant performance gain over existing state-of-the-art approaches.
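    The core idea described above is a region-wise mix-up of pixel-aligned real and synthetic images, where the patch mask doubles as a label map for a local "inspector" discriminator. The sketch below illustrates one plausible form of such a scheme; the function name, grid size, and mixing probability are assumptions for illustration and do not reproduce the thesis's exact design.

    ```python
    import torch
    import torch.nn.functional as F

    def synthmix_local_mixup(real, synth, grid=4, p=0.5):
        """Coherently swap aligned local patches between a real image and its
        pixel-aligned synthetic counterpart.

        real, synth: (B, C, H, W) tensors aligned pixel-to-pixel.
        Returns the mixed batch and the per-pixel mask (1 where synthetic),
        which a patch-level inspector discriminator can use as its target.
        """
        b, _, h, w = real.shape
        # One Bernoulli draw per grid cell decides which source that cell uses.
        cells = (torch.rand(b, 1, grid, grid, device=real.device) < p).float()
        # Upsample the cell decisions to a hard per-pixel mask.
        mask = F.interpolate(cells, size=(h, w), mode="nearest")
        # Compose the mixed image patch by patch.
        mixed = mask * synth + (1.0 - mask) * real
        return mixed, mask
    ```

    A typical call would be `mixed, mask = synthmix_local_mixup(real_batch, fake_batch)`, after which the inspector can be trained to predict `mask` from `mixed`; fooling it at every local region pushes the generator to synthesise convincing fine-grained detail rather than only globally discriminative features.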

    Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions

    Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, `Medical Imaging with Deep Learning', in 2018. This article surveys recent developments in this direction and provides a critical review of the major related aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks, and further sub-categorize it following a taxonomy based on human anatomy. This article does not assume prior knowledge of Deep Learning and makes a significant contribution by explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the `lack of appropriately annotated large-scale datasets' as the core challenge (among other challenges) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition and Machine Learning, where techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future.