23 research outputs found

    Classification of Breast Cancer Using Deep Learning and Mammogram Images

    Breast cancer is the second leading cause of cancer deaths among US women, so it is important for doctors to detect and diagnose breast cancer as early as possible. Mammography has been used for about 30 years, and there have been rapid developments in digital mammography technology and computer-aided systems to improve breast imaging. Deep learning techniques are being developed to provide a more effective tool for the classification of breast cancer. We adopt a transfer learning approach and fine-tune a pre-trained convolutional neural network for accurate classification of breast masses in screening mammograms. The model is retrained and tested on the CBIS-DDSM (Curated Breast Imaging Subset of the Digital Database for Screening Mammography) dataset, achieving a training accuracy of 71.1% and a test accuracy of 68.7%.
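    The core idea behind this kind of transfer learning is to keep a pre-trained feature extractor frozen and train only a new classification head on the target task. The sketch below illustrates that step in isolation, with random synthetic vectors standing in for CNN features (the study itself fine-tunes a full network on CBIS-DDSM mammograms; nothing here is from its implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features produced by a frozen, pre-trained CNN backbone.
# Synthetic data for illustration only.
n, d = 200, 64
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)          # benign=0 / malignant=1 labels

# New classification head: logistic regression trained by gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / n              # cross-entropy gradient wrt weights
    grad_b = np.mean(p - y)                 # gradient wrt bias
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy of the new head: {acc:.2f}")
```

Only the head's parameters (`w`, `b`) are updated, which is why this style of retraining is far cheaper than training a full network from scratch.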

    Classification of hyper-scale multimodal imaging datasets

    Algorithms that classify hyper-scale multi-modal datasets, comprising millions of images, into constituent modality types can help researchers quickly retrieve and classify diagnostic imaging data, accelerating clinical outcomes. This research aims to demonstrate that a deep neural network trained on a hyper-scale dataset (4.5 million images) of heterogeneous multi-modal data can achieve significant modality classification accuracy (96%). By combining 102 medical imaging datasets, a dataset of 4.5 million images was created. ResNet-50, ResNet-18, and VGG16 models were trained to classify these images by the imaging modality used to capture them (Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and X-ray) across many body locations. The classification accuracy of the models was then tested on unseen data. The best-performing model achieved a classification accuracy of 96% on unseen data, which matches or exceeds the accuracy of more complex implementations using EfficientNets or Vision Transformers (ViTs). The model achieved a balanced accuracy of 86%. This research shows it is possible to train deep learning (DL) convolutional neural networks (CNNs) with hyper-scale multimodal datasets composed of millions of images. Such models can find use in real-world applications with image volumes in the hyper-scale range, such as medical imaging repositories or national healthcare institutions. Further research can expand this classification capability to include 3D scans.
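    The gap between the reported 96% overall accuracy and 86% balanced accuracy is what one expects when classes are imbalanced: overall accuracy is dominated by the majority modality, while balanced accuracy averages per-class recall. A minimal sketch of the distinction, using a hypothetical confusion matrix (the counts are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 4-class confusion matrix (CT, MRI, PET, X-ray).
# Rows = true class, columns = predicted class; imbalance is deliberate.
cm = np.array([
    [950,  20,  10,  20],   # CT  (majority class)
    [ 30, 560,   5,   5],   # MRI
    [ 10,   5,  70,  15],   # PET (minority class)
    [ 15,   5,  10, 270],   # X-ray
])

overall_acc = np.trace(cm) / cm.sum()              # weighted toward CT
per_class_recall = np.diag(cm) / cm.sum(axis=1)    # recall per modality
balanced_acc = per_class_recall.mean()             # unweighted class average

print(f"overall accuracy:  {overall_acc:.3f}")
print(f"balanced accuracy: {balanced_acc:.3f}")
```

Here the minority PET class drags the balanced score below the overall one, mirroring the paper's 96% vs 86% pattern.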

    Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs

    Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task due to the small size of MCs and the amount of information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) on the automatic classification of complete DBT images for the presence or absence of MCs (without any prior identification of regions). Four popular deep CNNs are trained and compared with a new architecture proposed by us, with the networks trained to classify DBT cases by absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken as input. DBT data were considered both without and with preprocessing, to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC, 91.17%, was obtained with a newly implemented network, CNN-a. This CNN was also the fastest, making it a very interesting model to consider in other studies. With this work, encouraging outcomes were achieved, with results similar to other studies on the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
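    The AUC used to evaluate these models has a simple probabilistic reading: it is the probability that a randomly chosen positive case (MCs present) receives a higher score than a randomly chosen negative one. A small sketch of that Mann-Whitney formulation, with toy scores that are purely illustrative:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC as the probability that a random positive case is scored
    above a random negative one (Mann-Whitney U formulation);
    ties count as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy classifier scores for six cases (illustrative values only).
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
print(auc_score(labels, scores))  # 1 of the 9 positive/negative pairs is misordered
```

A perfectly ranked classifier scores 1.0; random scoring hovers around 0.5, which is why AUC is a robust summary when the positive class is rare.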

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammogram, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.

    Covid-19 detection from chest x-ray images: comparison of well-established convolutional neural networks models

    Coronavirus disease 2019 (Covid-19) is a pandemic disease that has already killed hundreds of thousands of people and infected millions more. At its peak, Covid-19 can lead to pneumonia and, in extreme cases, death. Covid-19 provides radiological cues that can be easily detected on chest X-rays and that distinguish it from other types of pneumonic disease. Several recent CNN studies have focused only on developing binary classifiers that distinguish Covid-19 from normal chest X-rays. However, no previous study has compared the performance of established pre-trained CNN models on a multi-class task covering Covid-19, pneumonia, and normal chest X-rays. Therefore, this study formulates an automated system to detect Covid-19 from chest X-ray images using four established and powerful CNN models (AlexNet, GoogleNet, ResNet-18, and SqueezeNet) and compares the performance of each model. A total of 21,252 chest X-ray images from various sources were pre-processed and used for the transfer learning-based classification task, which included Covid-19, bacterial pneumonia, viral pneumonia, and normal chest X-ray images. In conclusion, all models successfully classified Covid-19 and other pneumonias with an accuracy of more than 78.5%, and the test results revealed that GoogleNet outperformed the other models, achieving an accuracy of 91.0%, precision of 85.6%, sensitivity of 85.3%, and an F1 score of 85.4%.
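    The metrics reported for this comparison (accuracy, precision, sensitivity, F1) all fall out of a multi-class confusion matrix; the macro-averaged variants treat each class equally. A sketch of how they are computed, using a made-up 4-class matrix (Covid-19, bacterial pneumonia, viral pneumonia, normal) that is not taken from the study:

```python
import numpy as np

# Illustrative 4-class confusion matrix; rows = true class, cols = predicted.
cm = np.array([
    [ 90,  3,  5,  2],   # Covid-19
    [  4, 80, 12,  4],   # bacterial pneumonia
    [  6, 10, 78,  6],   # viral pneumonia
    [  2,  2,  5, 91],   # normal
])

accuracy = np.trace(cm) / cm.sum()
precision = np.diag(cm) / cm.sum(axis=0)     # per class: TP / predicted-as-class
sensitivity = np.diag(cm) / cm.sum(axis=1)   # per-class recall: TP / actual-class
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy:          {accuracy:.3f}")
print(f"macro precision:   {precision.mean():.3f}")
print(f"macro sensitivity: {sensitivity.mean():.3f}")
print(f"macro F1:          {f1.mean():.3f}")
```

Confusions between bacterial and viral pneumonia (the off-diagonal 12 and 10) lower the pneumonia classes' scores while leaving Covid-19 detection largely intact, the kind of behavior such a comparison is designed to expose.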

    Spatially localized sparse approximations of deep features for breast mass characterization

    We propose a deep feature-based sparse approximation classification technique for classifying breast masses into benign and malignant categories in film-screen mammograms. This is a significant application, as breast cancer is a leading cause of death in the modern world, and improvements in diagnosis may help to decrease mortality rates for large populations. While deep learning techniques have produced remarkable results in computer-aided diagnosis of breast cancer, several aspects of this field remain under-studied. In this work, we investigate the applicability of deep-feature-generated dictionaries to sparse approximation-based classification. To this end, we construct dictionaries from deep features and compute sparse approximations of regions of interest (ROIs) of breast masses for classification. Furthermore, we propose block and patch decomposition methods to construct overcomplete dictionaries suitable for sparse coding. The effectiveness of our deep feature spatially localized ensemble sparse analysis (DF-SLESA) technique is evaluated on a merged dataset of mass ROIs from the CBIS-DDSM and MIAS datasets. Experimental results indicate that dictionaries of deep features yield more discriminative sparse approximations of mass characteristics than dictionaries of imaging patterns or dictionaries learned by unsupervised machine learning techniques such as K-SVD. Of note, the proposed block and patch decomposition strategies may help to simplify the sparse coding problem and to find tractable solutions. The proposed technique achieves performance competitive with state-of-the-art techniques for benign/malignant breast mass classification, using 10-fold cross-validation on merged datasets of film-screen mammograms.
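    Sparse approximation over an overcomplete dictionary means representing a signal as a combination of only a few dictionary atoms. One standard solver for this is Orthogonal Matching Pursuit, sketched below with a random dictionary standing in for the deep-feature atoms (the paper's actual dictionaries are built from CNN features of mass ROIs; this is only the generic technique, not their implementation):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select up to k dictionary
    atoms (columns of D) and least-squares fit their coefficients."""
    residual = x.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit coefficients on the chosen atoms, update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(1)
# Stand-in overcomplete dictionary of unit-norm atoms (random, illustrative).
D = rng.normal(size=(32, 100))
D /= np.linalg.norm(D, axis=0)

# A signal that truly is a combination of 3 atoms.
x = 2.0 * D[:, 5] - 1.5 * D[:, 40] + 0.7 * D[:, 77]
alpha = omp(D, x, k=3)
print("recovered support:", np.nonzero(alpha)[0])
```

For classification, one typically compares the reconstruction residuals obtained from class-specific dictionaries and assigns the class whose dictionary approximates the ROI best.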

    Deep learning model for fully automated breast cancer detection system from thermograms

    Breast cancer is one of the most common diseases among women worldwide and one of the leading causes of death among women, so early detection is necessary to save lives. Thermography is an effective diagnostic technique that uses infrared imaging for breast cancer detection. In this paper, we propose a fully automatic breast cancer detection system. First, a U-Net network is used to automatically extract and isolate the breast area from the rest of the body, which otherwise acts as noise in the breast cancer detection model. Second, we propose a two-class deep learning model, trained from scratch, for classifying normal and abnormal breast tissue from thermal images; it also extracts additional characteristics from the dataset that help train the network and improve the efficiency of the classification process. The proposed system is evaluated on real data (the benchmark DMR-IR database) and achieves accuracy of 99.33%, sensitivity of 100%, and specificity of 98.67%. The proposed system is expected to be a helpful tool for physicians in clinical use.

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.

    Breast Cancer Classification from Histopathological Images Using Transfer Learning and Deep Neural Networks

    Early diagnosis of breast cancer is the most reliable and practical approach to managing the disease. Computer-aided detection and computer-aided diagnosis are software technologies designed to assist doctors in detecting or diagnosing cancer and to reduce mortality through faster medical image analysis. Recently, medical image analysis has used convolutional neural networks to evaluate vast amounts of data for cancer cell detection and image classification. In this thesis, we apply transfer learning from the pre-trained deep neural networks ResNet18, Inception-V3Net, and ShuffleNet to both binary and multiclass classification of breast cancer from histopathological images. Transfer learning with a fine-tuned network makes training much faster and less complicated than training a network from scratch with randomly initialized weights. Our approach is applied to image-based breast cancer classification using histopathological images from the public BreakHis dataset. The highest average accuracy for binary classification of benign versus malignant cases was 97.11% for ResNet18, followed by 96.78% for ShuffleNet and 95.65% for Inception-V3Net. For multiclass classification across eight cancer classes, the average accuracies of the pre-trained networks were as follows: ResNet18 achieved 94.17%, Inception-V3Net 92.76%, and ShuffleNet 92.27%.

    Multi-class Breast Cancer Classification Using CNN Features Hybridization

    Breast cancer has become the leading cause of cancer mortality among women worldwide, and timely diagnosis of such cancer is in constant demand among researchers. This research sheds light on improving the design of computer-aided detection (CAD) systems for earlier breast cancer classification. The design of CAD tools using deep learning is becoming popular and robust in biomedical classification systems. However, deep learning gives inadequate performance on multilabel classification problems, especially when the dataset has an uneven distribution of output targets, a problem prevalent in publicly available breast cancer datasets. To overcome this, the paper integrates the learning and discrimination abilities of multiple convolutional neural networks, namely the VGG16, VGG19, ResNet50, and DenseNet121 architectures, for breast cancer classification. Accordingly, a fusion of hybrid deep features (FHDF) approach is proposed to capture more potential information and attain improved classification performance. The research utilizes digital mammogram images for earlier breast tumor detection. The proposed approach is evaluated on three public breast cancer datasets: the Mammographic Image Analysis Society (MIAS), the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and the INbreast database. The results are compared with base convolutional neural network (CNN) architectures and a late fusion approach. For the MIAS, CBIS-DDSM, and INbreast datasets, the proposed FHDF approach achieves maximum accuracies of 98.706%, 97.734%, and 98.834% in classifying three classes of breast cancer severity.
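    The basic mechanics of this kind of feature fusion are straightforward: extract one feature vector per backbone, normalize each so no single network dominates, and concatenate them into one descriptor for a downstream classifier. A minimal sketch with random vectors standing in for the backbone outputs (the dimensions are typical for these architectures but the data is synthetic, and this is not the paper's FHDF pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in feature vectors for one mammogram from four backbones
# (VGG16, VGG19, ResNet50, DenseNet121 in the paper; random here).
feats = {
    "vgg16":       rng.normal(size=512),
    "vgg19":       rng.normal(size=512),
    "resnet50":    rng.normal(size=2048),
    "densenet121": rng.normal(size=1024),
}

# Hybrid fusion: L2-normalise each backbone's features so no single
# network dominates, then concatenate into one descriptor.
fused = np.concatenate(
    [v / np.linalg.norm(v) for v in feats.values()]
)
print(fused.shape)  # (4096,) = 512 + 512 + 2048 + 1024
```

In contrast, a late fusion approach, which the paper uses as a baseline, would combine each backbone's class predictions rather than its features.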