117 research outputs found

    Automated pectoral muscle identification on MLO-view mammograms: Comparison of deep neural network to conventional computer vision

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/149204/1/mp13451_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/149204/2/mp13451.pd

    Minimum spanning trees and active contours for identification of the pectoral muscle in screening mammograms


    Automatic tuning of a graph-based image segmentation method for digital mammography applications


    Cellular Automata for Medical Image Processing


    Automated segmentation of radiodense tissue in digitized mammograms using a constrained Neyman-Pearson classifier

    Breast cancer is the second leading cause of cancer-related mortality among American women. Mammography screening has emerged as a reliable, non-invasive technique for early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions and radiodense (light) regions of connective and epithelial tissue. It has been established that the percentage of radiodense tissue in a patient's breast can be used as a marker for predicting breast cancer risk. This thesis presents the design, development, and validation of a novel automated algorithm for estimating the percentage of radiodense tissue in a digitized mammogram. The technique determines a dynamic threshold for segmenting radiodense indications in mammograms: both the mammographic image and the threshold are modeled as Gaussian random variables, and a constrained Neyman-Pearson criterion is used to segment radiodense tissue. Promising results have been obtained with the proposed technique. Mammograms were obtained from an existing cohort of women enrolled in the Family Risk Analysis Program at Fox Chase Cancer Center (FCCC), and the technique was validated against a set of ten images whose percentages of radiodense tissue had been estimated by a trained radiologist using previously established methods. This work is intended to support a concurrent study at FCCC exploring the association between dietary patterns and breast cancer risk.
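    As a rough illustration of the constrained-thresholding idea summarized above, the sketch below is a minimal Python example, not the thesis's actual algorithm: it fits a single Gaussian to the breast-region intensities and places the threshold at the (1 - alpha) quantile, so that the expected fraction of radiolucent pixels mislabelled as radiodense is bounded by alpha, in the spirit of a Neyman-Pearson constraint. The function names, the single-Gaussian model, and the default alpha are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def np_threshold(breast_pixels: np.ndarray, alpha: float = 0.05) -> float:
    """Dynamic threshold from a Gaussian fit to breast-region intensities.

    Illustrative assumption: intensities follow a single Gaussian, and the
    threshold is placed at the (1 - alpha) quantile so that the expected rate
    of radiolucent pixels labelled radiodense is bounded by alpha
    (a Neyman-Pearson style constraint).
    """
    mu, sigma = breast_pixels.mean(), breast_pixels.std()
    return mu + sigma * norm.ppf(1.0 - alpha)

def percent_radiodense(image: np.ndarray, breast_mask: np.ndarray,
                       alpha: float = 0.05) -> float:
    """Percentage of the breast area segmented as radiodense tissue."""
    thr = np_threshold(image[breast_mask], alpha)  # breast_mask: boolean array
    dense = (image >= thr) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()
```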

    A Decision Support System (DSS) for Breast Cancer Detection Based on Invariant Feature Extraction, Classification, and Retrieval of Masses of Mammographic Images

    This paper presents an integrated system for breast cancer detection from mammograms based on automated mass detection, classification, and retrieval, with the goal of supporting decision-making by retrieving and displaying relevant past cases as well as predicting whether an image is benign or malignant. It is hypothesized that the proposed diagnostic aid would refresh the radiologist's memory and guide them toward a precise diagnosis with concrete visualizations, rather than only suggesting a second opinion as many other CAD systems do. Toward this goal, a Graph-Based Visual Saliency (GBVS) method is used for automatic mass detection; invariant features are extracted using the Non-Subsampled Contourlet Transform (NSCT) and the eigenvalues of the Hessian matrix within a histogram of oriented gradients (HOG); and classification and retrieval are performed using Support Vector Machines (SVM), Extreme Learning Machines (ELM), and a linear combination-based similarity fusion approach. Retrieval and classification performance is evaluated on the benchmark Digital Database for Screening Mammography (DDSM) of 2,604 cases using precision-recall curves and classification accuracy. Experimental results demonstrate the effectiveness of the proposed system and show the viability of a real-time clinical application.
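    A minimal sketch of the classification and retrieval stages is given below, assuming mass regions of interest have already been detected. It is not the authors' implementation: plain HOG descriptors stand in for the paper's NSCT and Hessian-eigenvalue features, cosine similarity stands in for one of the fused similarity measures, and the function names and fusion weight are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

def describe(roi: np.ndarray) -> np.ndarray:
    """HOG descriptor of a detected mass ROI (a stand-in for the paper's
    NSCT / Hessian-eigenvalue features)."""
    return hog(roi, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def train_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Benign-vs-malignant SVM trained on the extracted descriptors."""
    return SVC(kernel="rbf", C=1.0, probability=True).fit(features, labels)

def retrieve_similar(query_feat: np.ndarray, db_feats: np.ndarray,
                     db_extra_sim: np.ndarray, weight: float = 0.5,
                     top_k: int = 5) -> np.ndarray:
    """Rank past cases by a linear combination of two similarity scores,
    mirroring the linear similarity-fusion idea described above."""
    cos = cosine_similarity(query_feat[None, :], db_feats).ravel()
    fused = weight * cos + (1.0 - weight) * db_extra_sim
    return np.argsort(fused)[::-1][:top_k]
```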

    Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images

    In this study, the main objective is to develop an algorithm capable of identifying and delineating tumor regions in breast ultrasound (BUS) and mammographic images. The technique employs two advanced deep learning architectures, U-Net and a pretrained SAM, for tumor segmentation. The U-Net model is specifically designed for medical image segmentation and leverages its deep convolutional neural network framework to extract meaningful features from input images. The pretrained SAM architecture, in contrast, incorporates a mechanism for capturing spatial dependencies and generating segmentation results. Evaluation is conducted on a diverse dataset containing annotated tumor regions in BUS and mammographic images, covering both benign and malignant tumors, which enables a comprehensive assessment of performance across tumor types. Results demonstrate that the U-Net model outperforms the pretrained SAM architecture in accurately identifying and segmenting tumor regions in both BUS and mammographic images. The U-Net exhibits superior performance in challenging cases involving irregular shapes, indistinct boundaries, and high tumor heterogeneity, whereas the pretrained SAM architecture shows limitations in accurately identifying tumor areas, particularly for malignant tumors and objects with weak boundaries or complex shapes. These findings highlight the importance of selecting deep learning architectures appropriate for medical image segmentation. The U-Net model shows potential as a robust and accurate tool for tumor detection, while the results for the pretrained SAM architecture suggest that further improvements are needed to enhance its segmentation performance.
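    The abstract does not spell out the evaluation protocol, but side-by-side comparisons of segmentation models are commonly reported as the mean Dice overlap between predicted and annotated tumor masks. The short sketch below shows how such a comparison could be computed; the metric choice and function names are assumptions, not the study's code.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def compare_models(unet_masks, sam_masks, gt_masks):
    """Mean Dice for U-Net and SAM predictions over the same annotated set."""
    return {
        "U-Net": float(np.mean([dice(p, g) for p, g in zip(unet_masks, gt_masks)])),
        "SAM": float(np.mean([dice(p, g) for p, g in zip(sam_masks, gt_masks)])),
    }
```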