Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural networks
Mammogram inspection in search of breast tumors is a demanding task that radiologists must carry out frequently. Therefore, image analysis methods are needed for the detection and delineation of breast tumors, which convey crucial morphological information to support a reliable diagnosis. In this paper, we propose a conditional Generative Adversarial Network (cGAN) devised to segment a breast tumor within a region of interest (ROI) in a mammogram. The generative network learns to recognize the tumor area and to create the binary mask that outlines it. In turn, the adversarial network learns to distinguish between real (ground truth) and synthetic segmentations, thus pushing the generative network to create binary masks that are as realistic as possible. The cGAN works well even when the number of training samples is limited. As a consequence, the proposed method outperforms several state-of-the-art approaches. Our working hypothesis is corroborated by diverse segmentation experiments performed on INbreast and a private in-house dataset. The proposed segmentation model, working on an image crop containing the tumor as well as a significant surrounding area of healthy tissue (loose frame ROI), achieves a high Dice coefficient and Intersection over Union (IoU) of 94% and 87%, respectively. In addition, a shape descriptor based on a Convolutional Neural Network (CNN) is proposed to classify the generated masks into four tumor shapes: irregular, lobular, oval and round. The proposed shape descriptor was trained on DDSM, since it provides shape ground truth (while the other two datasets do not), yielding an overall accuracy of 80%, which outperforms the current state of the art.
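The Dice coefficient and IoU reported above are standard overlap metrics between a predicted binary mask and the ground-truth mask. A minimal sketch of how they are computed, using NumPy and toy 4x4 masks in place of real segmentations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union

# Toy masks standing in for a predicted and a ground-truth segmentation.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(dice_coefficient(pred, gt))  # 2*3 / (4+3) ≈ 0.857
print(iou(pred, gt))               # 3 / 4 = 0.75
```

Dice weights the overlap twice relative to the mask sizes, which is why it is always at least as large as IoU on the same pair of masks.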
Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images
In this study, the main objective is to develop an algorithm capable of
identifying and delineating tumor regions in breast ultrasound (BUS) and
mammographic images. The technique employs two advanced deep learning
architectures, namely U-Net and pretrained SAM, for tumor segmentation. The
U-Net model is specifically designed for medical image segmentation and
leverages its deep convolutional neural network framework to extract meaningful
features from input images. On the other hand, the pretrained SAM architecture
incorporates a mechanism to capture spatial dependencies and generate
segmentation results. Evaluation is conducted on a diverse dataset containing
annotated tumor regions in BUS and mammographic images, covering both benign
and malignant tumors. This dataset enables a comprehensive assessment of the
algorithm's performance across different tumor types. Results demonstrate that
the U-Net model outperforms the pretrained SAM architecture in accurately
identifying and segmenting tumor regions in both BUS and mammographic images.
The U-Net exhibits superior performance in challenging cases involving
irregular shapes, indistinct boundaries, and high tumor heterogeneity. In
contrast, the pretrained SAM architecture exhibits limitations in accurately
identifying tumor areas, particularly for malignant tumors and objects with
weak boundaries or complex shapes. These findings highlight the importance of
selecting appropriate deep learning architectures tailored for medical image
segmentation. The U-Net model showcases its potential as a robust and accurate
tool for tumor detection, while the pretrained SAM architecture suggests the
need for further improvements to enhance segmentation performance.
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has reached the highest incidence rate worldwide among all
malignancies since 2020. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in the deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammogram, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Automatic Detection of Thyroid Nodule Characteristics From 2D Ultrasound Images
Thyroid cancer is one of the most common types of cancer worldwide, and ultrasound (US) imaging is a modality commonly used for thyroid cancer diagnostics. The American College of Radiology Thyroid Imaging Reporting and Data System (ACR TIRADS) has been widely adopted to identify and classify US image characteristics of thyroid nodules. This paper presents novel methods for detecting the characteristic descriptors derived from TIRADS. Our methods return descriptions of the nodule margin irregularity, margin smoothness, and calcification, as well as shape and echogenicity, using conventional computer vision and deep learning techniques. We evaluate our methods on datasets of 471 US images of thyroid nodules acquired from US machines of different makes and labeled by multiple radiologists. The proposed methods achieved overall accuracies of 88.00%, 93.18%, and 89.13% in classifying nodule calcification, margin irregularity, and margin smoothness, respectively.
Further tests with limited data also show a promising overall accuracy of 90.60% for echogenicity and 100.00% for nodule
shape. This study provides an automated annotation of thyroid nodule characteristics from 2D ultrasound images. The
experimental results showed promising performance of our methods for thyroid nodule analysis. The automatic detection of correct characteristics not only offers supporting evidence for diagnosis, but also enables patient reports to be generated rapidly, thereby decreasing the workload of radiologists and enhancing productivity.
Deep Learning in Medical Image Analysis
The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.
Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions
Medical Image Analysis is currently experiencing a paradigm shift due to Deep
Learning. This technology has recently attracted so much interest from the
Medical Imaging community that it led to a specialized conference, "Medical
Imaging with Deep Learning", in 2018. This article surveys the recent
developments in this direction, and provides a critical review of the related
major aspects. We organize the reviewed literature according to the underlying
Pattern Recognition tasks, and further sub-categorize it following a taxonomy
based on human anatomy. This article does not assume prior knowledge of Deep
Learning and makes a significant contribution in explaining the core Deep
Learning concepts to the non-experts in the Medical community. Unique to this
study is the Computer Vision/Machine Learning perspective taken on the advances
of Deep Learning in Medical Imaging. This enables us to single out the "lack of
appropriately annotated large-scale datasets" as the core challenge (among
other challenges) in this research direction. We draw on the insights from the
sister research fields of Computer Vision, Pattern Recognition and Machine
Learning, where the techniques for dealing with such challenges have already
matured, to provide promising directions for the Medical Imaging community to
fully harness Deep Learning in the future.
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, which has attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion-MNIST dataset with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
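The CORF push-pull operator itself is not available in common Python libraries, so the following is only a rough illustration of the general idea the abstract describes: transforming an input image into an edge/delineation space before feeding it to a CNN. Here a difference-of-Gaussians response (via SciPy) serves as a simplified stand-in for the CORF operator; the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def delineation_map(image: np.ndarray, sigma: float = 1.0, k: float = 1.6) -> np.ndarray:
    """Crude contour-response map via difference-of-Gaussians.

    Stand-in for the CORF push-pull operator: it emphasizes edges while
    suppressing smooth (noise-dominated) regions of the image.
    """
    img = image.astype(np.float64)
    response = gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)
    # Normalize to [0, 1] so the map can be fed to a CNN like a regular image.
    response -= response.min()
    rng = response.max()
    return response / rng if rng > 0 else response

# A toy 8x8 image with a bright square; additive noise mostly perturbs the
# flat regions, while the contour response stays comparatively stable.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
dmap = delineation_map(img)
print(dmap.shape)  # (8, 8)
```

In a pipeline like the one described, this transform would be applied to both training and test images, so the CNN learns on contour responses rather than raw intensities.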
2020 Student Symposium Research and Creative Activity Book of Abstracts
The UMaine Student Symposium (UMSS) is an annual event that celebrates undergraduate and graduate student research and creative work. Students from a variety of disciplines present their achievements through video presentations. It is the ideal occasion for the community to see how UMaine students' work has an impact locally and beyond.
The 2020 Student Symposium Research and Creative Activity Book of Abstracts includes a complete list of student presenters as well as abstracts related to their work.