20 research outputs found

    Deep Active Learning for Automatic Mitotic Cell Detection on HEp-2 Specimen Medical Images

    Identifying Human Epithelial Type 2 (HEp-2) mitotic cells is a crucial step in anti-nuclear antibody (ANA) testing, the standard protocol for detecting connective tissue diseases (CTD). Due to the low throughput and labor subjectivity of manual ANA screening, a reliable HEp-2 computer-aided diagnosis (CAD) system is needed. Automatic detection of mitotic cells in microscopic HEp-2 specimen images is an essential step to support the diagnosis process and enhance the throughput of this test. This work proposes a deep active learning (DAL) approach to overcome the cell-labeling challenge. Moreover, deep learning detectors are tailored to identify mitotic cells directly in entire microscopic HEp-2 specimen images, avoiding the segmentation step. The proposed framework is validated on the I3A Task-2 dataset over 5-fold cross-validation trials. Using the YOLO predictor, promising mitotic cell prediction results are achieved, with averages of 90.011% recall, 88.307% precision, and 81.531% mAP, whereas the Faster R-CNN predictor obtains averages of 86.986% recall, 85.282% precision, and 78.506% mAP. Employing the DAL method over four labeling rounds effectively enhances the accuracy of the data annotation and hence improves prediction performance. The proposed framework could be practically applied to support medical personnel in making rapid and accurate decisions about the presence of mitotic cells.
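
    The abstract does not spell out the DAL query strategy, so the following is only a minimal uncertainty-sampling sketch in Python; the function names and per-image box confidences are hypothetical stand-ins, assuming that images whose best detection is least confident are queued for expert labeling in each round.

```python
def least_confidence(box_scores):
    """Uncertainty of one image: 1 minus its best detection confidence."""
    return 1.0 - (max(box_scores) if box_scores else 0.0)

def select_for_labeling(unlabeled_preds, budget):
    """Rank unlabeled images by uncertainty; return the top-`budget` ids."""
    ranked = sorted(unlabeled_preds.items(),
                    key=lambda kv: least_confidence(kv[1]),
                    reverse=True)
    return [image_id for image_id, _ in ranked[:budget]]

# Toy labeling round: per-image box confidences from the current detector.
predictions = {"img_001": [0.97, 0.88], "img_002": [0.52], "img_003": []}
print(select_for_labeling(predictions, budget=2))  # ['img_003', 'img_002']
```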

    Artificial Intelligence for Medical Diagnostics—Existing and Future AI Technology!

    We would like to express our gratitude to all authors who contributed to the Special Issue “Artificial Intelligence Advances for Medical Computer-Aided Diagnosis” by providing their excellent and recent research findings for AI-based medical diagnosis [...]

    Dual-Sensor Signals Based Exact Gaussian Process-Assisted Hybrid Feature Extraction and Weighted Feature Fusion for Respiratory Rate and Uncertainty Estimations

    Accurately estimating respiratory rate (RR) has become essential for patients and the elderly. Hence, we propose a novel method that uses exact Gaussian process regression (EGPR)-assisted hybrid feature extraction and feature fusion based on photoplethysmography and electrocardiogram signals to improve the reliability of RR and uncertainty estimations. First, we obtain power spectral features and use a multi-phase feature model to compensate for insufficient input data. Then, we combine four different feature sets and choose high-weight features using robust neighbor component analysis. The proposed EGPR algorithm provides a confidence interval representing the uncertainty; combined with hybrid feature extraction and weighted feature fusion, it forms a reliable model for accurate RR estimation. Furthermore, the proposed EGPR methodology is likely the only one currently available that provides highly stable variation and confidence intervals. The proposed EGPR-MF (0.993 breaths per minute, bpm) and EGPR-feature fusion (1.064 bpm) models show the lowest mean absolute errors compared with the other models.
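
    As a rough illustration of how an exact GP yields both a point estimate and a confidence interval, here is a minimal sketch using scikit-learn's GaussianProcessRegressor; the fused features and RR labels are synthetic stand-ins, not the paper's PPG/ECG pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-ins for the fused PPG/ECG features (X) and reference RR labels (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))            # 80 windows, 4 fused features
y = 15 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=80)  # bpm

# Exact GP regression: the posterior std gives the uncertainty estimate.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

X_new = rng.normal(size=(5, 4))
rr_mean, rr_std = gpr.predict(X_new, return_std=True)
for m, s in zip(rr_mean, rr_std):
    print(f"RR ≈ {m:.2f} bpm, 95% CI ±{1.96 * s:.2f}")
```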

    ArCAR: A Novel Deep Learning Computer-Aided Recognition for Character-Level Arabic Text Representation and Recognition

    Arabic text classification is the process of categorizing diverse Arabic content into its proper category. In this paper, a novel deep learning Arabic text computer-aided recognition (ArCAR) system is proposed to represent and recognize Arabic text at the character level. Each character of the input Arabic text is quantized as a 1D vector, and the vectors are stacked into a 2D array that forms the ArCAR input. The ArCAR system is validated over 5-fold cross-validation tests for two applications: Arabic text document classification and Arabic sentiment analysis. For document classification, the ArCAR system achieves its best performance on the Alarabiya-balance dataset, with overall accuracy, recall, precision, and F1-score of 97.76%, 94.08%, 94.16%, and 94.09%, respectively. Meanwhile, ArCAR performs well for Arabic sentiment analysis, achieving its best performance on the hotel Arabic reviews dataset (HARD) balance dataset with overall accuracy and F1-score of 93.58% and 93.23%, respectively. The proposed ArCAR seems to provide a practical solution for accurate Arabic text representation, understanding, and classification.
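
    For illustration, a character-level quantization like the one described (one 1D one-hot vector per character, stacked into a 2D array) might look like the following Python sketch; the alphabet, padding length, and function names are assumptions, not the paper's exact encoding.

```python
import numpy as np

# Hypothetical character alphabet: the 28 Arabic letters plus a space.
ALPHABET = list("ابتثجحخدذرزسشصضطظعغفقكلمنهوي ")
CHAR_INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def quantize(text, max_len=32):
    """One-hot encode `text` character by character into a 2D array.

    Characters outside the alphabet map to an all-zero row, and texts are
    padded/truncated to `max_len`, mirroring a fixed-size CNN input.
    """
    arr = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for row, ch in enumerate(text[:max_len]):
        col = CHAR_INDEX.get(ch)
        if col is not None:
            arr[row, col] = 1.0
    return arr

sample = quantize("النص العربي")
print(sample.shape)  # (32, 29): one row per character position
```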

    A Novel Deep Learning ArCAR System for Arabic Text Recognition with Character-Level Representation

    AI-based text classification assigns Arabic content to its proper category. With the increasing volume of Arabic text in our social life, traditional machine learning approaches face challenges from the complexity of Arabic morphology and the subtle variations of the language. This work proposes a model to represent and recognize Arabic text at the character level based on the capability of a deep convolutional neural network (CNN). The system was validated using five-fold cross-validation tests for Arabic text document classification and demonstrates its capability to classify Arabic text at the character level. For document classification, the ArCAR system achieves its best performance on the AlKhaleej-balance dataset with an accuracy of 97.76%. The proposed ArCAR seems to provide a practical solution for accurate Arabic text representation, both for understanding and as a classification system.
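
    Since the validation protocol is five-fold cross-validation, a minimal sketch of such an evaluation loop is shown below; the encoded inputs, labels, and the dummy majority-class classifier are placeholders standing in for the actual ArCAR CNN.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy stand-ins: 100 encoded documents and their class labels.
X = np.random.rand(100, 32, 29)   # (docs, chars, alphabet), as sketched above
y = np.random.randint(0, 5, 100)  # 5 hypothetical document categories

# Five-fold cross-validation: each fold trains on 80% of the documents
# and is scored on the held-out 20%.
accuracies = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    # model.fit(X[train_idx], y[train_idx]) would train the CNN here;
    # a dummy majority-class "prediction" keeps the sketch self-contained.
    majority = np.bincount(y[train_idx]).argmax()
    accuracies.append(np.mean(y[test_idx] == majority))

print(f"mean accuracy over 5 folds: {np.mean(accuracies):.3f}")
```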

    Deep Learning Cascaded Feature Selection Framework for Breast Cancer Classification: Hybrid CNN with Univariate-Based Approach

    With the help of machine learning, many of the problems that have plagued mammography in the past have been solved. Effective prediction models need many normal and tumor samples, yet for medical applications such as breast cancer diagnosis it is difficult to gather labeled training data and construct effective learning frameworks. Transfer learning is an emerging strategy that has recently been used to tackle the scarcity of medical data by transferring pre-trained convolutional network knowledge into the medical domain. Despite the good reputation of transfer learning based on pre-trained Convolutional Neural Networks (CNNs) for medical imaging, several hurdles still stand in the way of prominent breast cancer classification performance. In this paper, we attempt to solve the Feature Dimensionality Curse (FDC) problem of the deep features derived from transfer-learning pre-trained CNNs. This problem arises from the high dimensionality of the extracted deep features relative to the small number of available medical data samples. Therefore, a novel deep learning cascaded feature selection framework is proposed based on pre-trained deep convolutional networks as well as a univariate-based paradigm. Deep learning models of AlexNet, VGG, and GoogleNet are randomly selected and used to extract shallow and deep features from the INbreast mammograms, whereas the univariate strategy helps to overcome the dimensionality curse and multicollinearity issues for the extracted features. The key features selected via the univariate approach are statistically significant (p-value ≤ 0.05) and can efficiently train the classification models. Using such optimal features, the proposed framework achieves a promising evaluation performance of 98.50% accuracy, 98.06% sensitivity, 98.99% specificity, and 98.98% precision. Such performance could help develop a practical and reliable computer-aided diagnosis (CAD) framework for breast cancer classification.
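
    A univariate selection step like the one described (keeping only features whose ANOVA F-test p-value is ≤ 0.05) can be sketched as follows; the feature matrix is a synthetic stand-in for the CNN-derived deep features, not the INbreast data.

```python
import numpy as np
from sklearn.feature_selection import f_classif

# Toy stand-ins for deep features extracted from a pre-trained CNN:
# many more feature dimensions than samples, as the abstract describes.
rng = np.random.default_rng(1)
features = rng.normal(size=(60, 4096))     # 60 mammograms, 4096-D features
labels = rng.integers(0, 2, size=60)       # benign vs. malignant
features[:, :10] += labels[:, None] * 2.0  # make 10 features informative

# Univariate ANOVA F-test per feature; keep those with p-value <= 0.05.
_, p_values = f_classif(features, labels)
selected = np.where(p_values <= 0.05)[0]
reduced = features[:, selected]
print(f"kept {selected.size} of {features.shape[1]} features")
```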

    ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images

    Early detection of breast cancer is an essential procedure to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed by fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of the vision transformer encoder (ViT). Accurate and precise high-level deep features are generated via the backbone ensemble network, while the transformer encoder predicts the breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-class classification). To build the proposed CAD system, the benchmark public multi-class INbreast dataset is used. Meanwhile, private real breast cancer images are collected and annotated by expert radiologists to validate the prediction performance of the proposed ETECADx framework. Promising evaluation results are achieved using the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for the binary and 4.6% for the multi-class approach. The proposed hybrid ETECADx shows further improvement when the ViT-based ensemble backbone network is used, by 8.1% and 6.2% for binary and multi-class diagnosis, respectively. For validation on the real breast images, the proposed CAD system provides encouraging prediction accuracies of 97.16% for the binary and 89.40% for the multi-class approach. ETECADx can predict the breast lesions of a single mammogram in an average of 0.048 s. Such promising performance could assist practical CAD applications by providing a second supporting opinion in distinguishing various breast cancer malignancies.
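
    The exact ETECADx architecture is not given here, so the sketch below only illustrates the general pattern of feeding ensemble backbone features through a self-attention transformer encoder for classification; all layer sizes, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class FusionTransformerClassifier(nn.Module):
    """Fused CNN features -> self-attention encoder -> class logits."""

    def __init__(self, feat_dims=(1024, 1280), d_model=256, n_classes=2):
        super().__init__()
        # Project each backbone's feature vector to a shared width, so the
        # ensemble features become a short token sequence for the encoder.
        self.projections = nn.ModuleList(nn.Linear(d, d_model) for d in feat_dims)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):
        # feats: one (batch, dim) tensor per backbone.
        tokens = torch.stack([p(f) for p, f in zip(self.projections, feats)], dim=1)
        encoded = self.encoder(tokens)           # self-attention across backbones
        return self.head(encoded.mean(dim=1))    # pooled logits per class

model = FusionTransformerClassifier()
logits = model([torch.randn(8, 1024), torch.randn(8, 1280)])
print(logits.shape)  # (8, 2): binary "Approach A" style output
```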

    PLDPNet: End-to-end hybrid deep learning framework for potato leaf disease prediction

    Agricultural productivity plays a vital role in global economic development and growth. When crops are affected by diseases, it adversely impacts a nation’s economic resources and agricultural output. Early detection of crop diseases can minimize losses for farmers and enhance production. In this study, we propose a new hybrid deep learning model, PLDPNet, designed to automatically predict potato leaf diseases. The PLDPNet framework encompasses image collection, pre-processing, segmentation, feature extraction and fusion, and classification. We employ an ensemble approach that combines deep features from two well-established models (VGG19 and Inception-V3) to generate more powerful features, and the hybrid approach leverages a vision transformer for final prediction. To train and evaluate PLDPNet, we utilize the public potato leaf dataset with three classes: early blight, late blight, and healthy leaves. Leveraging the strength of segmentation and feature fusion, the proposed approach achieves an overall accuracy of 98.66% and an F1-score of 96.33%. A comprehensive validation study conducted on apple (4 classes) and tomato (10 classes) datasets achieves impressive accuracies of 96.42% and 94.25%, respectively. These experimental findings confirm that the proposed hybrid framework provides more effective and accurate detection and prediction of potato crop diseases, making it a promising candidate for practical applications.
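
    As a rough sketch of the described feature fusion, the snippet below concatenates pooled deep features from pre-trained VGG19 and Inception-V3 backbones; the input sizes and the helper function are illustrative assumptions, not PLDPNet's actual pipeline.

```python
import numpy as np
import tensorflow as tf

# Pre-trained backbones as fixed feature extractors (ImageNet weights),
# pooled to one vector per image; input sizes follow each network's default.
vgg = tf.keras.applications.VGG19(include_top=False, pooling="avg",
                                  input_shape=(224, 224, 3))
inception = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                              input_shape=(299, 299, 3))

def fused_features(images_224, images_299):
    """Concatenate per-image deep features from both backbones."""
    f_vgg = vgg.predict(tf.keras.applications.vgg19.preprocess_input(images_224))
    f_inc = inception.predict(
        tf.keras.applications.inception_v3.preprocess_input(images_299))
    return np.concatenate([f_vgg, f_inc], axis=1)  # (n, 512 + 2048)

# Toy batch of leaf images, already resized for each backbone.
batch = fused_features(np.random.rand(2, 224, 224, 3) * 255,
                       np.random.rand(2, 299, 299, 3) * 255)
print(batch.shape)  # (2, 2560)
```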
