
    Diagnostic accuracy of machine learning models on mammography in breast cancer classification: a meta-analysis

    In this meta-analysis, we aimed to estimate the diagnostic accuracy of machine learning models on digital mammograms and tomosynthesis in breast cancer classification and to assess the factors affecting their diagnostic accuracy. We searched for related studies in Web of Science, Scopus, PubMed, Google Scholar and Embase. The studies were screened in two stages to exclude unrelated studies and duplicates. Finally, 36 studies containing 68 machine learning models were included in this meta-analysis. The area under the curve (AUC), the hierarchical summary receiver operating characteristic (HSROC) curve, pooled sensitivity and pooled specificity were estimated using a bivariate Reitsma model. The overall AUC, pooled sensitivity and pooled specificity were 0.90 (95% CI: 0.85–0.90), 0.83 (95% CI: 0.78–0.87) and 0.84 (95% CI: 0.81–0.87), respectively. The three significant covariates identified in this study were country (p = 0.003), source (p = 0.002) and classifier (p = 0.016); the type-of-data covariate was not statistically significant (p = 0.121). In addition, Deeks' linear regression test indicated publication bias in the included studies (p = 0.002), so the results should be interpreted with caution.
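
The abstract pools sensitivity and specificity with a bivariate Reitsma model, which is usually fitted with the R package mada. As a rough illustration of the pooling idea only, the sketch below applies a simpler univariate DerSimonian-Laird random-effects pooling on the logit scale to hypothetical 2x2 study counts; it is not the bivariate model the paper used.

```python
# Simplified sketch of pooling sensitivity/specificity on the logit scale.
# The per-study counts below are hypothetical: (TP, FN, TN, FP).
import numpy as np

studies = [(80, 15, 70, 20), (95, 10, 88, 12), (60, 25, 75, 15)]

def pool_logit(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)                 # continuity-corrected proportion
    y = np.log(p / (1 - p))                             # logit transform
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)  # within-study variance
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                  # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                             # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-y_pooled))                  # back-transform to a proportion

tp, fn, tn, fp = map(np.array, zip(*studies))
print("pooled sensitivity:", round(pool_logit(tp, tp + fn), 3))
print("pooled specificity:", round(pool_logit(tn, tn + fp), 3))
```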

    Detecting Abnormal Axillary Lymph Nodes on Mammograms Using a Deep Convolutional Neural Network

    The purpose of this study was to determine the feasibility of a deep convolutional neural network (dCNN) to accurately detect abnormal axillary lymph nodes on mammograms. In this retrospective study, 107 mammographic images in mediolateral oblique projection from 74 patients were labeled to three classes: (1) "breast tissue", (2) "benign lymph nodes", and (3) "suspicious lymph nodes". Following data preprocessing, a dCNN model was trained and validated with 5385 images. Subsequently, the trained dCNN was tested on a "real-world" dataset and the performance compared to human readers. For visualization, colored probability maps of the classification were calculated using a sliding window approach. The accuracy was 98% for the training and 99% for the validation set. Confusion matrices of the "real-world" dataset for the three classes with radiological reports as ground truth yielded an accuracy of 98.51% for breast tissue, 98.63% for benign lymph nodes, and 95.96% for suspicious lymph nodes. Intraclass correlation of the dCNN and the readers was excellent (0.98), and Kappa values were nearly perfect (0.93-0.97). The colormaps successfully detected abnormal lymph nodes with excellent image quality. In this proof-of-principle study in a small patient cohort from a single institution, we found that deep convolutional networks can be trained with high accuracy and reliability to detect abnormal axillary lymph nodes on mammograms
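
The study's trained dCNN is not publicly described here, so the sketch below uses a tiny untrained PyTorch CNN as a stand-in to show how a sliding-window pass over a mammogram could produce a per-class probability grid of the kind rendered as colored maps; patch size and stride are arbitrary assumptions.

```python
# Sliding-window probability map for a 3-class patch classifier
# ("breast tissue", "benign lymph node", "suspicious lymph node").
import torch
import torch.nn as nn

NUM_CLASSES, PATCH, STRIDE = 3, 64, 32

# Tiny stand-in for the study's trained dCNN (untrained, for illustration only).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_CLASSES),
)
model.eval()

def probability_map(image: torch.Tensor) -> torch.Tensor:
    """Slide the patch classifier over a (H, W) grayscale image and return
    a (windows_y, windows_x, NUM_CLASSES) grid of class probabilities."""
    h, w = image.shape
    rows = []
    with torch.no_grad():
        for y in range(0, h - PATCH + 1, STRIDE):
            row = []
            for x in range(0, w - PATCH + 1, STRIDE):
                patch = image[y:y + PATCH, x:x + PATCH][None, None]  # (1, 1, P, P)
                row.append(torch.softmax(model(patch), dim=1)[0])
            rows.append(torch.stack(row))
    return torch.stack(rows)

# Each cell of the grid could be rendered as a colored overlay on the mammogram.
probs = probability_map(torch.rand(256, 256))
print(probs.shape)  # torch.Size([7, 7, 3])
```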

    Comparison of the histogram of oriented gradient, GLCM, and shape feature extraction methods for breast cancer classification using SVM

    Breast cancer originates from the ducts or lobules of the breast and is the second leading cause of death after cervical cancer. Early breast cancer screening is therefore required, one form of which is mammography. Mammography images can be identified automatically using computer-aided diagnosis that leverages machine learning classifiers. This study analyzes the Support Vector Machine (SVM) for classifying breast cancer and compares the performance of three feature extraction methods used with the SVM, namely the Histogram of Oriented Gradients (HOG), the Gray-Level Co-occurrence Matrix (GLCM), and shape feature extraction. The dataset consists of 320 mammogram images from MIAS, containing 203 normal images and 117 abnormal images. Each extraction method was tested with three kernels: linear, Gaussian, and polynomial. The shape-feature SVM with a linear kernel shows the best performance, with an accuracy of 98.44%, a sensitivity of 100%, and a specificity of 97.50%.
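
A minimal sketch of the comparison described above, assuming a recent scikit-image/scikit-learn install: the three feature types (HOG, GLCM statistics, shape descriptors) are each fed to SVMs with linear, Gaussian (RBF) and polynomial kernels. The random patches and labels are placeholders for MIAS ROIs; the threshold used for shape extraction is an assumption.

```python
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops
from skimage.measure import label, regionprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hog_features(img):
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def glcm_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def shape_features(img, threshold=128):
    # Largest connected component above a (hypothetical) intensity threshold.
    region = max(regionprops(label(img > threshold)), key=lambda r: r.area)
    return np.array([region.area, region.perimeter, region.eccentricity, region.solidity])

# Hypothetical dataset: 40 random 8-bit patches with binary normal/abnormal labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

for name, extractor in [("HOG", hog_features), ("GLCM", glcm_features), ("shape", shape_features)]:
    X = np.array([extractor(im) for im in images])
    for kernel in ["linear", "rbf", "poly"]:
        acc = cross_val_score(SVC(kernel=kernel), X, labels, cv=5).mean()
        print(f"{name:5s} + {kernel:6s} SVM: CV accuracy = {acc:.2f}")
```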

    Computer aided diagnosis system for breast cancer using deep learning.

    The recent rise of big data technology surrounding electronic systems and developed toolkits has given birth to new promises for Artificial Intelligence (AI). With the continuous use of data-centric systems and machines in our lives, such as social media, surveys, emails, and reports, there is no doubt that data has become the center of attention for scientists and has motivated them to provide more decision-making and operational support systems across multiple domains. With the recent breakthroughs in artificial intelligence, machine learning and deep learning models have achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. In particular, numerous applications have provided efficient solutions to assist radiologists and doctors in medical imaging analysis, which remains the essential visual representation used to construct the final observation and diagnosis. Medical research in oncology has recently been blended with knowledge gained from computer engineering and data science experts. In this context, automatic assistance, commonly known as a Computer-Aided Diagnosis (CAD) system, has become a popular area of research and development in recent decades. As a result, CAD systems have been developed using multidisciplinary knowledge and expertise and have been used to analyze patient information to assist clinicians and practitioners in their decision-making process. Detecting and investigating abnormal tumors to treat and prevent cancer remains a crucial task that radiologists and oncologists face every day. Therefore, a CAD system can be developed to provide decision support for many applications in the cancer patient care process, such as lesion detection, characterization, cancer staging, tumor assessment, recurrence, and prognosis prediction. Breast cancer is considered one of the most common types of cancer in females across the world. It has also been considered the leading cause of mortality among women, and its incidence has increased drastically every year. Early detection and diagnosis of abnormalities in screened breasts have been acknowledged as the optimal solution for assessing the risk of developing breast cancer and thus reducing the rising mortality rate. Accordingly, this dissertation proposes a new state-of-the-art CAD system for breast cancer diagnosis that is based on deep learning technology and cutting-edge computer vision techniques. Mammography screening has been recognized as the most effective tool for detecting breast lesions early and reducing the mortality rate. It helps reveal abnormalities in the breast such as mass lesions, architectural distortion, and microcalcifications. With the number of patients screened daily continuously increasing, a second-reading tool or assistance system could streamline the process of breast cancer diagnosis. Mammograms can be obtained using different modalities such as X-ray scanners and Full-Field Digital Mammography (FFDM) systems. The quality of the mammograms and the characteristics of the breast (e.g., density, size) and/or the tumors (e.g., location, size, shape) can affect the final diagnosis. Radiologists can therefore miss lesions and consequently produce false detections and diagnoses. This work was thus motivated to improve the reading of mammograms in order to increase the accuracy of these challenging tasks.
The efforts presented in this work consist of the design and implementation of new neural network models for a fully integrated CAD system dedicated to breast cancer diagnosis. The approach first detects and identifies breast lesions automatically from entire mammograms using a model-fusion methodology. The second step focuses only on mass lesions: the proposed system segments the detected bounding boxes of the mass lesions to mask their background. A new neural network architecture for mass segmentation was proposed and integrated with a new data enhancement and augmentation technique. Finally, a third stage uses a stacked ensemble of neural networks to classify and diagnose the pathology (malignant or benign), the Breast Imaging Reporting and Data System (BI-RADS) assessment score (from 2 to 6), and/or the shape (round, oval, lobulated, or irregular) of the segmented breast lesions. A further contribution applied the first stage of the CAD system to a retrospective analysis and comparison of the model on prior mammograms from a private dataset, joining the learning of the detection and classification model with an image-to-image mapping between prior and current screening views. Each step of the CAD system was evaluated and tested on public and private datasets, and the results were compared fairly against benchmark mammography datasets. The integrated framework for the CAD system was also tested for deployment and showcasing. The performance of the CAD system for the detection and identification of breast masses reached an overall accuracy of 97%. The segmentation of breast masses, evaluated together with the previous stage, achieved an overall performance of 92%. Finally, the classification and diagnosis step that defines the outcome of the CAD system reached an overall pathology classification accuracy of 96%, a BI-RADS categorization accuracy of 93%, and a shape classification accuracy of 90%. The results given in this dissertation indicate that our suggested integrated framework might surpass current deep learning approaches when all the proposed automated steps are used. A limitation of the proposed work is the long training time of the different methods, which is due to the high computational cost of the developed neural networks and their large number of trainable parameters. Future work can include combining different mammography datasets and reducing the long training time of the deep learning models. Moreover, the CAD system could be upgraded with annotated datasets that cover additional breast cancer lesions such as calcifications and architectural distortion. The proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms. Next, the work focused only on mass lesions and segmented the detected ROIs to remove the tumor background and highlight the contours, texture, and shape of the lesions. Finally, the diagnostic decision was predicted to classify the pathology of the lesions and to investigate other characteristics such as the tumor grading assessment and shape type. The dissertation presents a CAD system to assist doctors and experts in identifying the risk of breast cancer.
Overall, the proposed CAD method incorporates advances in image processing, deep learning, and image-to-image translation for a biomedical application.
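
The dissertation's third stage uses a stacked ensemble of neural networks; the exact models are not given here, so the sketch below illustrates the stacking idea with generic scikit-learn base classifiers and a logistic-regression meta-learner over their predicted probabilities. The random feature vectors stand in for descriptors of segmented mass lesions, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))          # hypothetical lesion descriptors
y = rng.integers(0, 2, size=200)        # 0 = benign, 1 = malignant (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners' class probabilities become the meta-learner's inputs.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```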

    Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images

    The main objective of this study is to develop an algorithm capable of identifying and delineating tumor regions in breast ultrasound (BUS) and mammographic images. The technique employs two advanced deep learning architectures, U-Net and a pretrained Segment Anything Model (SAM), for tumor segmentation. The U-Net model is specifically designed for medical image segmentation and leverages its deep convolutional neural network framework to extract meaningful features from input images. The pretrained SAM architecture, on the other hand, incorporates a mechanism to capture spatial dependencies and generate segmentation results. Evaluation is conducted on a diverse dataset containing annotated tumor regions in BUS and mammographic images, covering both benign and malignant tumors. This dataset enables a comprehensive assessment of the algorithm's performance across different tumor types. The results demonstrate that the U-Net model outperforms the pretrained SAM architecture in accurately identifying and segmenting tumor regions in both BUS and mammographic images. U-Net exhibits superior performance in challenging cases involving irregular shapes, indistinct boundaries, and high tumor heterogeneity. In contrast, the pretrained SAM architecture shows limitations in accurately identifying tumor areas, particularly for malignant tumors and objects with weak boundaries or complex shapes. These findings highlight the importance of selecting deep learning architectures appropriate for medical image segmentation. The U-Net model shows its potential as a robust and accurate tool for tumor detection, while the pretrained SAM architecture requires further improvements to enhance its segmentation performance.
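
The abstract does not name its evaluation metric, but comparisons like this are commonly scored with the Dice coefficient; the sketch below shows that metric applied to placeholder masks standing in for U-Net and SAM outputs against an annotated ground truth.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Random placeholder masks; in practice these would be the models' predictions
# and the annotated tumor region.
rng = np.random.default_rng(42)
ground_truth = rng.random((256, 256)) > 0.7
unet_mask = rng.random((256, 256)) > 0.7
sam_mask = rng.random((256, 256)) > 0.7

print("U-Net Dice:", round(dice(unet_mask, ground_truth), 3))
print("SAM   Dice:", round(dice(sam_mask, ground_truth), 3))
```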

    Classification of Mammogram Images by Using SVM and KNN

    Breast cancer is a fairly diverse illness that affects a large percentage of women in the West. A mammogram is an X-ray-based evaluation of a woman's breasts to see whether she has cancer. Mammography is one of the earliest pre-screening diagnostic procedures for breast cancer, and it is well known that breast cancer recovery rates are significantly increased by early identification. Mammogram analysis is typically delegated to skilled radiologists at medical facilities. Human error, however, is always a possibility: observer fatigue commonly leads to errors, resulting in intra-observer and inter-observer variability, and image quality also affects the sensitivity of mammographic screening. The goal of developing automated techniques for the detection and grading of breast cancer images is to reduce these types of variability and standardize diagnostic procedures. This study presents the classification of breast cancer images into benign (a growing but non-harmful tumor) and malignant (an aggressive tumor that can cause death) classes using a two-way classification algorithm. Two-way classification data mining algorithms are used because there are not many abnormal mammograms. The first algorithm, k-means, divides a given dataset into a predetermined number of clusters. The second, a Support Vector Machine (SVM), is used to identify the optimal classification function separating members of the two classes in the training data.
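
The abstract does not say exactly how the k-means and SVM stages are combined, so the sketch below shows one plausible arrangement under that assumption: k-means partitions the feature vectors into a fixed number of clusters, the cluster id is appended as an extra feature, and an SVM then learns the benign/malignant decision function. The features and labels are random placeholders for mammogram-derived descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 16))        # hypothetical mammogram feature vectors
y = rng.integers(0, 2, size=150)      # 0 = benign, 1 = malignant (placeholder labels)

# Step 1: unsupervised grouping into a predetermined number of clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: supervised SVM on the features augmented with the cluster id
# (one plausible way to chain the two algorithms, not the paper's stated design).
X_aug = np.column_stack([X, clusters])
svm = SVC(kernel="rbf").fit(X_aug, y)
print("training accuracy:", svm.score(X_aug, y))
```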

    Breast cancer diagnosis: a survey of pre-processing, segmentation, feature extraction and classification

    Machine learning methods have been of interest in medicine for many years and have achieved successful results in various fields of medical science. This paper examines the effects of using machine learning algorithms in the diagnosis and classification of breast cancer from mammography imaging data. Cancer diagnosis is the identification of images as cancerous or non-cancerous, and it involves image preprocessing, feature extraction, classification, and performance analysis. This article studies 93 different references published in previous years in this field and tries to find an effective way to diagnose and classify breast cancer. Based on the results of this survey, it can be concluded that most of today's successful methods focus on deep learning. Finding a new method therefore requires an overview of existing deep learning methods in order to make comparisons and case studies.

    Mass Classification of Breast Cancer Using CNN and Faster R-CNN Model Comparison

    Breast cancer is a frightening disease that threatens the female population worldwide. Early detection is a preventive solution for diagnosing cancer or tumors in the female breast area. Today, machine learning technology for managing medical images has become an innovative trend in the health sector, as it can accelerate disease diagnosis based on the accuracy values obtained. The primary purpose of this research is to compare two deep learning models to build a prediction system for early-stage breast cancer. This research utilizes a sequential Convolutional Neural Network (CNN) model and a Faster Region-based Convolutional Neural Network (Faster R-CNN) model to classify breast image data as normal or abnormal. The dataset comes from the Mammographic Image Analysis Society (MIAS) and consists of 322 mammograms, 123 abnormal and 199 normal. The experimental results of this study show that the classification accuracies of the CNN and Faster R-CNN models are 91.26% and 63.89%, respectively. Based on these results, the sequential CNN model has better accuracy than the Faster R-CNN model because it does not require unique characteristics to detect breast cancer.
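
The paper's exact network is not specified here; the sketch below shows the kind of small sequential Keras CNN that could be used for the binary normal/abnormal MIAS classification, with the architecture, input size and hyperparameters being illustrative assumptions rather than the authors' model.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),       # grayscale mammogram (assumed size)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then look something like:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```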