
    Automated Detection and Classification of Breast Cancer Nuclei with Deep Convolutional Neural Network

    Cancerous tissue contains heterogeneous regions of various types. This study analyzed and classified the morphological features of the nucleus and cytoplasm of tumor cells, using invasive ductal breast cancer histopathology images from the public Databiox dataset. Detection and classification were automated with a deep learning algorithm: residual blocks with short skip connections were employed, with hidden layers preserving spatial information, and a ResNet-based convolutional neural network was adapted to perform end-to-end segmentation of breast cancer nuclei. Nuclei regions were identified through color and tubular-structure morphological features. Based on the segmented and extracted images, benign and malignant breast cancer cells were classified to identify tumors. The results indicated that the proposed method could successfully segment and classify breast tumors with an average Dice score of 90.68%, sensitivity of 98.64%, specificity of 98.68%, and accuracy of 98.82%.
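    The Dice score reported above is the standard overlap metric for segmentation masks. A minimal sketch of its definition (this is the textbook metric, not the paper's implementation; the toy masks are invented for illustration):

    ```python
    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        """Dice similarity coefficient between two binary masks:
        2 * |A intersect B| / (|A| + |B|)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

    # Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    target = np.array([[1, 0, 0], [0, 1, 1]])
    print(round(dice_score(pred, target), 3))  # → 0.667
    ```

    The same formula extends unchanged to 3D volumes, since it only counts voxels.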

    A Food Recommender Based on Frequent Sets of Food Mining Using Image Recognition

    Food recommendation is an important everyday service. To build such a system, we collected food images shared or reviewed on social networks, reflecting what people actually choose in daily life. On the representation-learning side, we proposed a scalable architecture that integrates different deep neural networks (DNNs) using a per-DNN reliability score, allowing the integrated system to select the most suitable recognition result among DNNs that were constructed independently. The frequent sets of foods extracted from the recognized images were then fed to the Apriori data mining algorithm for the recommendation step. In this study, we evaluated the feasibility of the proposed method.
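    Frequent-set mining with Apriori, as used for the recommendation step, can be sketched as follows. This is a minimal illustration of the generic algorithm with invented meal data, not the authors' implementation:

    ```python
    from itertools import combinations
    from collections import Counter

    def apriori(transactions, min_support):
        """Return frequent itemsets (frozenset -> support) with
        support >= min_support, grown level by level."""
        n = len(transactions)
        current = {frozenset([i]) for t in transactions for i in t}
        frequent = {}
        k = 1
        while current:
            counts = Counter()
            for t in transactions:
                tset = set(t)
                for cand in current:
                    if cand <= tset:          # candidate contained in transaction
                        counts[cand] += 1
            survivors = {c: cnt / n for c, cnt in counts.items()
                         if cnt / n >= min_support}
            frequent.update(survivors)
            # join surviving k-itemsets into (k+1)-itemset candidates
            keys = list(survivors)
            current = {a | b for a, b in combinations(keys, 2)
                       if len(a | b) == k + 1}
            k += 1
        return frequent

    # Invented "meals recognized from images"
    meals = [{"rice", "curry"}, {"rice", "curry", "naan"},
             {"rice", "naan"}, {"curry", "naan"}]
    freq = apriori(meals, min_support=0.5)
    ```

    A recommender could then suggest the remaining items of a frequent set once a user has chosen part of it (e.g. suggest "curry" after "rice").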

    A Medical Analysis for Colorectal Lymphomas using 3D MRI Images and Deep Residual Boltzmann CNN Mechanism

    In today's technological world, healthcare is crucial, yet people find it difficult to devote time to their wellbeing, and lifestyle diseases can progress into life-threatening conditions and reach critical stages. Colorectal lymphoma is the third leading cause of cancer death worldwide. Tumor volume is often estimated with Magnetic Resonance Imaging during medical diagnosis, particularly at advanced stages. This research study proceeds in multiple stages. In the initial stage, an automated method calculates the volume of colorectal lymphomas from 3D MRI images: features are extracted with Iterative Multilinear Component Analysis, segmentation uses a CNN-based multiscale phase level set, and a logical frustum model is then utilized for 3D simulation of the colon lymphoma to render the medical data. The next stage addresses segmentation and classification of lymph nodes as normal or abnormal: a semi-supervised fuzzy clustering algorithm performs segmentation, while a bee herd optimization algorithm with scale-down is employed to improve the classifier's detection rate. Finally, classification is performed with a deep residual Boltzmann CNN. Our proposed methodology gives better results and diagnostic predictions for lymphomas, with an accuracy of 97.7%, sensitivity of 95.7%, and specificity of 95.8%, superior to the traditional approach.
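    Once a 3D segmentation mask has been produced, the basic volume estimate reduces to voxel counting scaled by voxel size. A minimal sketch of that final step only (standard practice, not the paper's multilinear-analysis pipeline; the mask and spacing values are invented):

    ```python
    import numpy as np

    def lesion_volume_ml(mask, voxel_spacing_mm):
        """Volume of a binary 3D segmentation mask in millilitres,
        given per-axis voxel spacing in mm (1 mL = 1000 mm^3)."""
        voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
        return mask.sum() * voxel_vol_mm3 / 1000.0

    # Toy 10x10x10 volume with a 4x4x4 "lesion" (64 voxels)
    mask = np.zeros((10, 10, 10), dtype=bool)
    mask[2:6, 2:6, 2:6] = True
    print(lesion_volume_ml(mask, (1.0, 1.0, 2.5)))  # 64 * 2.5 mm^3 → 0.16
    ```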

    Explainable deep learning models in medical image analysis

    Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even beaten human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show which features most influence a model's decision. Most literature reviews of this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users. (Comment: Preprint submitted to J. Imaging, MDPI)

    A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?

    Artificial intelligence (AI) models are increasingly finding applications in the field of medicine, and concerns have been raised about the explainability of the decisions these models make. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models currently being used in healthcare. The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards for relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the uses of these XAI models and their implications. We present a comprehensive examination of XAI methodologies as well as an explanation of how trustworthy AI can be derived from describing AI models for healthcare fields. The discussion of this work will contribute to the formalization of the XAI field. (Comment: 15 pages, 3 figures, accepted for publication in the IEEE Transactions on Artificial Intelligence)

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis. (Comment: Submitted for publication. Comments welcome by email to first author)

    Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps

    Colorectal polyps are known to be potential precursors to colorectal cancer, one of the leading causes of cancer-related deaths on a global scale. Early detection and prevention of colorectal cancer are primarily enabled through manual screenings, in which a patient's intestines are visually examined. Such a procedure can be challenging and exhausting for the person performing the screening, which has motivated numerous studies on automatic systems aimed at supporting physicians during the examination. Recently, such automatic systems have seen significant improvement as a result of an increasing amount of publicly available colorectal imagery and advances in deep learning research for image recognition. Specifically, decision support systems (DSSs) based on Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on both detection and segmentation of colorectal polyps. However, to be helpful in a medical context, CNN-based models must not only be precise; interpretability and uncertainty in their predictions must also be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. Furthermore, we propose a novel method for estimating the uncertainty associated with important features in the input, and demonstrate how interpretability and uncertainty can be modeled in DSSs for semantic segmentation of colorectal polyps. Results indicate that deep models use the shape and edge information of polyps to make their predictions, and that inaccurate predictions show a higher degree of uncertainty than precise ones.
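    The general idea behind this kind of uncertainty estimate, several stochastic forward passes whose per-pixel variance flags uncertain regions, can be sketched with Monte Carlo dropout on a toy linear "model". This illustrates the generic technique only, not the paper's proposed method; all shapes, names, and data are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_dropout_predict(x, weights, n_samples=50, drop_p=0.5):
        """Monte Carlo dropout: keep dropout active at test time,
        average many stochastic forward passes, and use the
        per-pixel variance as an uncertainty estimate."""
        preds = []
        for _ in range(n_samples):
            mask = rng.random(weights.shape) > drop_p       # random dropout mask
            logits = x @ (weights * mask / (1 - drop_p))    # inverted-dropout scaling
            preds.append(1 / (1 + np.exp(-logits)))         # sigmoid probabilities
        preds = np.stack(preds)
        return preds.mean(axis=0), preds.var(axis=0)        # mean map, uncertainty map

    x = rng.standard_normal((4, 8))    # 4 "pixels", 8 features each
    w = rng.standard_normal((8, 1))    # toy single-output "segmentation head"
    mean_map, var_map = mc_dropout_predict(x, w)
    ```

    In a real segmentation DSS the linear map would be a full CNN with dropout layers left enabled at inference; the variance map would then highlight the pixels, often polyp boundaries, where the model is least certain.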