The main objective of this study is to develop an algorithm capable of
identifying and delineating tumor regions in breast ultrasound (BUS) and
mammographic images. The approach employs two deep learning architectures,
U-Net and the pretrained Segment Anything Model (SAM), for tumor segmentation. The
U-Net model was designed specifically for medical image segmentation and uses
its encoder-decoder convolutional architecture to extract meaningful
features from input images. The pretrained SAM architecture, by contrast, uses
transformer-based attention to capture spatial dependencies and produce
segmentation masks. Evaluation is conducted on a diverse dataset containing
annotated tumor regions in BUS and mammographic images, covering both benign
and malignant tumors. This dataset enables a comprehensive assessment of the
algorithm's performance across different tumor types. Results demonstrate that
the U-Net model outperforms the pretrained SAM architecture in accurately
identifying and segmenting tumor regions in both BUS and mammographic images.
The U-Net exhibits superior performance in challenging cases involving
irregular shapes, indistinct boundaries, and high tumor heterogeneity. In
contrast, the pretrained SAM architecture shows limitations in accurately
identifying tumor areas, particularly for malignant tumors and objects with
weak boundaries or complex shapes. These findings highlight the importance of
selecting appropriate deep learning architectures tailored for medical image
segmentation. The U-Net model shows potential as a robust and accurate tool
for tumor detection, while the pretrained SAM architecture requires further
refinement to achieve comparable segmentation performance.