23 research outputs found

    Breast Ultrasound Image Segmentation Based on Uncertainty Reduction and Context Information

    Breast cancer frequently occurs in women around the world; it was one of the most serious diseases and the second most common cancer among women in 2019. The survival rate for stage 0 and stage 1 breast cancer is close to 100%, so it is urgent to develop approaches that can detect breast cancer at an early stage. Breast ultrasound (BUS) imaging is low-cost, portable, and effective; therefore, it has become one of the most important modalities for breast cancer diagnosis. However, BUS images suffer from poor quality, low contrast, and high uncertainty. Computer-aided diagnosis (CAD) systems have been developed for breast cancer to help prevent misdiagnosis. Much previous research on BUS image segmentation is based on classic machine learning and computer vision methods, e.g., clustering, thresholding, level sets, active contours, and graph cuts. Since deep neural networks have been widely applied to natural image semantic segmentation and have achieved good results, deep learning approaches have also been applied to BUS image segmentation. However, previous methods still suffer from shortcomings. First, earlier non-deep-learning approaches depend heavily on manually selected features, such as texture, frequency, and intensity. Second, earlier deep learning approaches do not address the uncertainty and noise in BUS images and in deep learning architectures, and they do not incorporate context information such as medical knowledge about breast cancer. In this work, three approaches are proposed to measure and reduce uncertainty and noise in deep neural networks, and three further approaches are designed to incorporate medical knowledge and long-range context information into machine learning algorithms. The proposed methods are applied to breast ultrasound image segmentation. In the first part, three fuzzy uncertainty reduction architectures are designed to measure the degree of uncertainty for pixels and channels in the convolutional feature maps.
Then, medical-knowledge-constrained conditional random fields are proposed to reflect the breast layer structure and refine the segmentation results. A novel shape-adaptive convolutional operator is proposed to provide long-distance context information in the convolutional layer. Finally, a fuzzy generative adversarial network is proposed to reduce uncertainty. The new approaches are applied to four breast ultrasound image datasets: one multi-category dataset and three public datasets with pixel-wise ground truths for tumor and background. The proposed methods achieve the best performance among 15 BUS image segmentation methods on the four datasets.
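The abstract does not give the exact formulation, but the idea of scoring per-pixel uncertainty in a convolutional feature map can be sketched with a standard fuzzy-entropy measure: map activations to fuzzy memberships in [0, 1], then score each pixel by its fuzzy entropy, so that memberships near 0.5 (neither clearly foreground nor background) are flagged as most uncertain. The function name and the sigmoid membership choice here are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def fuzzy_pixel_uncertainty(feature_map):
    """Score each pixel of a feature map by fuzzy entropy.

    Activations are squashed to fuzzy memberships in [0, 1] with a
    sigmoid; memberships near 0.5 give entropy near 1 (maximally
    uncertain), memberships near 0 or 1 give entropy near 0.
    """
    mu = 1.0 / (1.0 + np.exp(-feature_map))   # sigmoid membership degree
    eps = 1e-12                               # guard against log(0)
    return -(mu * np.log2(mu + eps) + (1.0 - mu) * np.log2(1.0 - mu + eps))

fmap = np.array([[ 4.0, 0.0],
                 [-4.0, 0.1]])
u = fuzzy_pixel_uncertainty(fmap)
# The pixel with activation 0.0 (membership 0.5) scores highest.
print(u.round(3))
```

An uncertainty map like this could then be used to down-weight or refine unreliable pixels before producing the final segmentation.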

    Deep Learning for computer-aided detection and diagnosis of clustered microcalcifications on digital mammograms


    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning and assume that the training and testing data are drawn from the same distribution, an assumption that may not always hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
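Of the methodology families listed, feature alignment is the easiest to illustrate concretely. One common alignment criterion is the squared maximum mean discrepancy (MMD) between source and target feature distributions, which a network can minimize to pull the two domains together. The sketch below is a generic MMD implementation under assumed names, not code from the surveyed papers.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel.

    X and Y are (n_samples, n_features) feature batches from the
    source and target domains; a smaller value means the two feature
    distributions look more alike under the kernel embedding.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src      = rng.normal(0.0, 1.0, size=(100, 8))  # source-domain features
tgt_near = rng.normal(0.0, 1.0, size=(100, 8))  # same distribution
tgt_far  = rng.normal(3.0, 1.0, size=(100, 8))  # shifted domain
print(rbf_mmd2(src, tgt_near) < rbf_mmd2(src, tgt_far))
```

In a UDA training loop, a term like `rbf_mmd2` over intermediate activations would be added to the task loss so the feature extractor learns domain-invariant representations.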

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. The generative adversarial network (GAN) is a deep neural network (DNN) framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the published studies on GANs for medical image processing since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We use a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images) to test the new Ad CycleGAN. The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. A CycleGAN and a variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that Ad CycleGAN generates more valid images than CycleGAN or the VAE, and the synthetic images from Ad CycleGAN or CycleGAN have better quality than those from the VAE. The synthetic images from Ad CycleGAN achieve the highest classification accuracy, 99.61%.
In the experiment on COVID-19 chest X-rays, the synthetic images from Ad CycleGAN or CycleGAN again have higher quality than those generated by the VAE; however, the images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process. The synthetic images from Ad CycleGAN reach a classification accuracy of 95.31%, compared to 93.75% for CycleGAN. In conclusion, the proposed Ad CycleGAN provides a new path to synthesizing medical images with desired diagnostic or pathological patterns. It can be regarded as a conditional GAN with effective control over the synthetic image domain, and the findings offer a new way to improve DNN performance in medical image processing.
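The simplest of the quality metrics listed above (MSE, RMSE, and PSNR) follow directly from their definitions and can be sketched in a few lines. The helper names and the toy 4x4 image below are illustrative, not the evaluation code used in the study.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    diff = ref.astype(np.float64) - img.astype(np.float64)
    return float(np.mean(diff ** 2))

def rmse(ref, img):
    """Root mean squared error: just the square root of MSE."""
    return mse(ref, img) ** 0.5

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB for images with peak max_val."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

ref = np.full((4, 4), 100, dtype=np.uint8)   # flat 4x4 reference image
noisy = ref.copy()
noisy[0, 0] = 110                            # one pixel off by 10
print(mse(ref, noisy))                       # 100 / 16 = 6.25
print(rmse(ref, noisy))                      # 2.5
print(round(psnr(ref, noisy), 2))            # 40.17
```

The remaining metrics (UIQI, SCC, SAM, VIF, FID) involve windowed statistics or pretrained feature extractors and are better taken from an established image-quality library than reimplemented by hand.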