13 research outputs found

    Modified fuzzy rough set technique with stacked autoencoder model for magnetic resonance imaging based breast cancer detection

    Get PDF
    Breast cancer is the most common cancer in women, and early detection reduces the mortality rate. Magnetic resonance imaging (MRI) is effective for analyzing breast cancer, but abnormalities are difficult to identify and manual detection in MRI images is inefficient; therefore, a deep learning-based system is implemented in this manuscript. Initially, visual quality is improved using region growing and adaptive histogram equalization (AHE); the breast lesion is then segmented by Otsu thresholding with a morphological transform. Next, features are extracted from the segmented lesion, and a modified fuzzy rough set technique is proposed to reduce their dimensionality, which decreases system complexity and computational time. The selected features are fed to a stacked autoencoder that classifies lesions as benign or malignant. The proposed model attained 99% and 99.22% classification accuracy on the benchmark datasets, higher than the comparative classifiers: decision tree, naïve Bayes, random forest, and k-nearest neighbor (KNN). These results indicate that the proposed model screens and detects breast lesions effectively, assisting clinicians in therapeutic intervention and timely treatment.
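
    A minimal sketch of the enhancement and segmentation stages described above (region growing omitted for brevity), assuming scikit-image rather than the authors' implementation:

        # Sketch only: AHE enhancement followed by Otsu thresholding with
        # morphological cleanup, as outlined in the abstract. scikit-image is
        # an assumption; the paper does not name its implementation library.
        import numpy as np
        from skimage import exposure, filters, morphology

        def segment_lesion(mri_slice: np.ndarray) -> np.ndarray:
            """Return a binary lesion mask for a single 2-D MRI slice."""
            # Normalize intensities, then apply adaptive histogram equalization.
            img = exposure.rescale_intensity(mri_slice.astype(float), out_range=(0.0, 1.0))
            enhanced = exposure.equalize_adapthist(img, clip_limit=0.02)

            # Otsu threshold separates lesion candidates from the background.
            mask = enhanced > filters.threshold_otsu(enhanced)

            # Morphological transform: opening removes speckle, closing fills holes.
            mask = morphology.binary_opening(mask, morphology.disk(3))
            mask = morphology.binary_closing(mask, morphology.disk(3))
            return morphology.remove_small_objects(mask, min_size=64)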

    Local edge computing for radiological image reconstruction and computer-assisted detection: A feasibility study

    Get PDF
    Computational requirements for data processing at different stages of the radiology value chain are increasing. Cone beam computed tomography (CBCT) is a diagnostic imaging technique used in dental and extremity imaging that involves a highly demanding image reconstruction task. In turn, artificial intelligence (AI) assisted diagnostics are becoming increasingly popular, further increasing the use of computation resources. Furthermore, the need for fully independent imaging units outside radiology departments, with remotely performed diagnostics, emphasizes the need for wireless connectivity between the imaging unit and the hospital infrastructure. In this feasibility study, we propose an approach based on a distributed edge-cloud computing platform consisting of small-scale local edge nodes and edge servers together with traditional cloud resources to perform data processing tasks in radiology. We are interested in the use of local computing resources with Graphics Processing Units (GPUs), in our case the Jetson Xavier NX, for hosting the algorithms for two use cases, namely image reconstruction in CBCT and AI-assisted cancer detection from mammographic images. In particular, we wanted to determine the technical requirements of a local edge computing platform for these two tasks, and whether CBCT image reconstruction and breast cancer detection are possible in a diagnostically acceptable time frame. We validated the use cases and the proposed edge computing platform in two stages. First, the algorithms were validated use-case-wise by comparing the computing performance of the edge nodes against a reference setup (a regular workstation). Second, we performed a qualitative evaluation of the edge computing platform by running the algorithms as nanoservices. Our results, obtained through real-life prototyping, indicate that it is technically feasible to run both the reconstruction and the AI-assisted image analysis functions within a diagnostically acceptable computing time. Furthermore, based on the qualitative evaluation, we confirmed that the local edge computing capacity can be scaled up and down during runtime by adding or removing edge devices, without the need for manual reconfiguration. We also found all previously implemented software components to be transferable as such. Overall, the results are promising and will help in developing future applications, e.g., in mobile imaging scenarios where such a platform is beneficial.
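
    A rough illustration of the kind of timing check implied by "diagnostically acceptable time frame": each function is benchmarked on the edge node and compared against a fixed time budget. The benchmark helper and the run_reconstruction placeholder are illustrative assumptions, not the authors' code:

        # Hypothetical timing harness for an edge node (e.g. a Jetson-class device).
        import time
        from statistics import mean, stdev

        def benchmark(task, runs: int = 10, budget_s: float = 60.0) -> dict:
            """Time a callable several times and report whether it fits the budget."""
            durations = []
            for _ in range(runs):
                start = time.perf_counter()
                task()  # e.g. CBCT reconstruction or AI inference for one study
                durations.append(time.perf_counter() - start)
            return {
                "mean_s": mean(durations),
                "stdev_s": stdev(durations) if runs > 1 else 0.0,
                "within_budget": max(durations) <= budget_s,
            }

        # Usage (run_reconstruction is a placeholder for the real workload):
        # report = benchmark(lambda: run_reconstruction("scan_0001"), budget_s=120.0)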

    Are better AI algorithms for breast cancer detection also better at predicting risk? A paired case–control study

    Get PDF
    Background: There is increasing evidence that artificial intelligence (AI) breast cancer risk evaluation tools using digital mammograms are highly informative for 1–6 years following a negative screening examination. We hypothesized that algorithms previously shown to work well for cancer detection will also work well for risk assessment, and that performance of algorithms for detection and risk assessment is correlated. Methods: To evaluate our hypothesis, we designed a case-control study using paired mammograms at diagnosis and at the previous screening visit. The study included n = 3386 women from the OPTIMAM registry, which includes mammograms from women diagnosed with breast cancer in the English breast screening program 2010–2019. Cases were diagnosed with invasive breast cancer or ductal carcinoma in situ at screening and were selected if they had a mammogram available at the screening examination that led to detection and a paired mammogram at their previous screening visit, 3 years prior to detection, when no cancer was detected. Controls without cancer were matched 1:1 to cases based on age (year), screening site, and mammography machine type. Risk assessment was conducted using a deep-learning model designed for breast cancer risk assessment (Mirai) and three open-source deep-learning algorithms designed for breast cancer detection. Discrimination was assessed using a matched area under the curve (AUC) statistic. Results: Overall performance using the paired mammograms followed the same order by algorithm for risk assessment (AUC range 0.59–0.67) and detection (AUC 0.81–0.89), with Mirai performing best for both. There was also a correlation in performance for risk and detection within algorithms by cancer size, with much greater accuracy for large cancers (30 mm+; detection AUC: 0.88–0.92; risk AUC: 0.64–0.74) than smaller cancers (0 to < 10 mm; detection AUC: 0.73–0.86; risk AUC: 0.54–0.64). Mirai was relatively strong for risk assessment of smaller cancers (0 to < 10 mm; Mirai risk AUC: 0.64 (95% CI 0.57 to 0.70); other algorithms AUC 0.54–0.56). Conclusions: Improvements in risk assessment could stem from enhancing cancer detection capabilities for smaller cancers. Other state-of-the-art AI detection algorithms with high performance for smaller cancers might achieve relatively high performance for risk assessment.
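
    One standard way to estimate a matched AUC for 1:1 case-control pairs is the fraction of matched pairs in which the case scores higher than its control, with ties counted as one half; the abstract does not spell out the exact estimator, so this formulation is an assumption:

        import numpy as np

        def matched_auc(case_scores: np.ndarray, control_scores: np.ndarray) -> float:
            """case_scores[i] and control_scores[i] belong to the same matched pair."""
            wins = (case_scores > control_scores).astype(float)
            ties = (case_scores == control_scores).astype(float)
            return float(np.mean(wins + 0.5 * ties))

        # Example: three pairs -> (1 + 0.5 + 0) / 3 = 0.5
        # matched_auc(np.array([0.9, 0.5, 0.2]), np.array([0.4, 0.5, 0.7]))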

    Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis

    Full text link
    In the last few years, deep learning classifiers have shown promising results in image-based medical diagnosis. However, interpreting the outputs of these models remains a challenge. In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output, i.e. the location of a lesion. Alternatively, segmentation or detection models can be trained with pixel-wise annotations indicating the locations of malignant lesions. Unfortunately, acquiring such labels is labor-intensive and requires medical expertise. To overcome this difficulty, weakly-supervised localization can be utilized. These methods allow neural network classifiers to output saliency maps highlighting the regions of the input most relevant to the classification task (e.g. malignant lesions in mammograms) using only image-level labels (e.g. whether the patient has cancer or not) during training. When applied to high-resolution images, existing methods produce low-resolution saliency maps. This is problematic in applications in which suspicious lesions are small in relation to the image size. In this work, we introduce a novel neural network architecture to perform weakly-supervised segmentation of high-resolution images. The proposed model selects regions of interest via coarse-level localization, and then performs fine-grained segmentation of those regions. We apply this model to breast cancer diagnosis with screening mammography, and validate it on a large clinically-realistic dataset. Measured by the Dice similarity score, our approach outperforms existing methods by a large margin in terms of localization performance of benign and malignant lesions, relatively improving performance by 39.6% and 20.0%, respectively. Code and the weights of some of the models are available at https://github.com/nyukat/GLAM. Comment: The last two authors contributed equally. Accepted to Medical Imaging with Deep Learning (MIDL) 2021.
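
    For reference, the Dice similarity score used here to measure localization quality compares a binarized saliency map against a ground-truth lesion mask; a minimal sketch follows (the threshold value is illustrative, not taken from the paper):

        import numpy as np

        def dice_score(saliency: np.ndarray, truth: np.ndarray, thr: float = 0.5) -> float:
            """Dice overlap between a thresholded saliency map and a binary lesion mask."""
            pred = saliency >= thr
            truth = truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 1.0 if denom == 0 else 2.0 * intersection / denom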

    Spatially localized sparse approximations of deep features for breast mass characterization

    Get PDF
    We propose a deep feature-based sparse approximation classification technique for classifying breast masses into benign and malignant categories in film-screen mammograms. This is a significant application, as breast cancer is a leading cause of death in the modern world and improvements in diagnosis may help to decrease mortality rates for large populations. While deep learning techniques have produced remarkable results in the field of computer-aided diagnosis of breast cancer, several aspects of this field remain under-studied. In this work, we investigate the applicability of deep-feature-generated dictionaries to sparse approximation-based classification. To this end, we construct dictionaries from deep features and compute sparse approximations of regions of interest (ROIs) of breast masses for classification. Furthermore, we propose block and patch decomposition methods to construct overcomplete dictionaries suitable for sparse coding. The effectiveness of our deep feature spatially localized ensemble sparse analysis (DF-SLESA) technique is evaluated on a merged dataset of mass ROIs from the CBIS-DDSM and MIAS datasets. Experimental results indicate that dictionaries of deep features yield more discriminative sparse approximations of mass characteristics than dictionaries of imaging patterns or dictionaries learned by unsupervised machine learning techniques such as K-SVD. Of note, the proposed block and patch decomposition strategies may help to simplify the sparse coding problem and to find tractable solutions. The proposed technique achieves performance competitive with state-of-the-art techniques for benign/malignant breast mass classification, using 10-fold cross-validation on merged datasets of film-screen mammograms.
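
    The classification step follows the general sparse-representation idea: encode a test ROI's deep-feature vector over class-wise dictionaries and assign the class whose atoms give the smallest reconstruction residual. The sketch below assumes an OMP solver and simple dictionary shapes; it is not the exact DF-SLESA implementation:

        # Illustrative sparse-representation classification over class-wise
        # dictionaries of deep features (not the authors' DF-SLESA code).
        import numpy as np
        from sklearn.decomposition import sparse_encode

        def classify_roi(feature, dictionaries, n_nonzero=10):
            """feature: (d,) deep-feature vector; dictionaries: {label: (n_atoms, d) array}."""
            labels = list(dictionaries)
            D = np.vstack([dictionaries[c] for c in labels])  # stack all class atoms
            code = sparse_encode(feature[None, :], D, algorithm="omp",
                                 n_nonzero_coefs=n_nonzero)[0]
            residuals, start = {}, 0
            for c in labels:
                n_atoms = dictionaries[c].shape[0]
                recon = code[start:start + n_atoms] @ dictionaries[c]  # class-only reconstruction
                residuals[c] = np.linalg.norm(feature - recon)
                start += n_atoms
            return min(residuals, key=residuals.get)  # smallest residual wins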