
    A Novel Hybrid Model for Automatic Non-Small Cell Lung Cancer Classification Using Histopathological Images

    Background/Objectives: Despite recent advances in research, cancer remains a significant public health concern and a leading cause of death. Among all cancer types, lung cancer is the most common cause of cancer-related deaths, with most cases linked to non-small cell lung cancer (NSCLC). Accurate classification of NSCLC subtypes is essential for developing treatment strategies. Medical professionals regard tissue biopsy as the gold standard for the identification of lung cancer subtypes. However, since biopsy images have very high resolutions, manual examination is time-consuming and depends on the pathologist's expertise. Methods: In this study, we propose a hybrid model to assist pathologists in the classification of NSCLC subtypes from histopathological images. This model processes deep, textural, and contextual features obtained by using EfficientNet-B0, local binary pattern (LBP), and a vision transformer (ViT) encoder as feature extractors, respectively. In the proposed method, each feature matrix is flattened separately and then combined to form a comprehensive feature vector. The feature vector is given as input to machine learning classifiers to identify the NSCLC subtype. Results: We set up 13 different training scenarios to test 4 different classifiers: support vector machine (SVM), logistic regression (LR), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGBoost). Among these scenarios, we obtained the highest classification accuracy (99.87%) with the combination of EfficientNet-B0 + LBP + ViT Encoder + SVM. The proposed hybrid model significantly enhanced the classification accuracy of NSCLC subtypes. Conclusions: The integration of deep, textural, and contextual features assisted the model in capturing subtle information from the images, thereby reducing the risk of misdiagnosis and facilitating more effective treatment planning.
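The flatten-and-concatenate fusion step described above can be sketched in a few lines. This is a minimal illustration, assuming each extractor yields a 2-D feature matrix per image; the toy values and shapes below are stand-ins, not from the paper.

```python
def flatten(matrix):
    """Row-major flatten of a 2-D feature matrix into a 1-D list."""
    return [v for row in matrix for v in row]

def fuse_features(deep, textural, contextual):
    """Concatenate the flattened matrices into one comprehensive vector."""
    return flatten(deep) + flatten(textural) + flatten(contextual)

# Toy matrices standing in for real extractor outputs
deep = [[0.1, 0.2], [0.3, 0.4]]    # e.g. EfficientNet-B0 deep features
textural = [[1, 0], [0, 1]]        # e.g. LBP texture rows
contextual = [[0.5, 0.6]]          # e.g. ViT encoder tokens

vector = fuse_features(deep, textural, contextual)
print(len(vector))  # 4 + 4 + 2 = 10
```

In practice the fused vector would then be passed to a classifier such as the SVM that gave the paper's best result.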

    Automated Detection of Neurological and Mental Health Disorders Using EEG Signals and Artificial Intelligence: A Systematic Review

    Mental and neurological disorders significantly impact global health. This systematic review examines the use of artificial intelligence (AI) techniques to automatically detect these conditions using electroencephalography (EEG) signals. Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we reviewed 74 carefully selected studies published between 2013 and August 2024 that used machine learning (ML), deep learning (DL), or both to detect neurological and mental health disorders automatically from EEG signals. Studies were sourced from major databases, including Scopus, Web of Science, ScienceDirect, PubMed, and IEEE Xplore. Epilepsy, depression, and Alzheimer's disease were the most studied conditions meeting our evaluation criteria, with 32, 12, and 10 studies identified on these topics, respectively. The number of studies meeting our criteria for stress, schizophrenia, Parkinson's disease, and autism spectrum disorders was comparatively small: 6, 4, 3, and 3, respectively. The least represented conditions were seizure, stroke, and anxiety disorders, with one study each, plus one study examining Alzheimer's disease and epilepsy together. Support vector machines (SVM) were most widely used among ML methods, while convolutional neural networks (CNNs) dominated DL approaches. DL methods generally outperformed traditional ML, as they yielded higher performance on large EEG datasets. We observed that the complex decision process during feature extraction from EEG signals in ML-based models significantly impacted results, while DL-based models handled this more efficiently. AI-based EEG analysis shows promise for automated detection of neurological and mental health conditions. Future research should focus on multi-disease studies, standardizing datasets, improving model interpretability, and developing clinical decision support systems to assist in the diagnosis and treatment of these disorders.

    Developing an EEG-Based Emotion Recognition Using Ensemble Deep Learning Methods and Fusion of Brain Effective Connectivity Maps

    The objective of this paper is to develop a novel emotion recognition system from electroencephalogram (EEG) signals using effective connectivity and deep learning methods. Emotion recognition is an important task for various applications such as human-computer interaction and mental health diagnosis. The paper aims to improve the accuracy and robustness of emotion recognition by combining different effective connectivity (EC) methods and pre-trained convolutional neural networks (CNNs), as well as long short-term memory (LSTM). EC methods measure information flow in the brain during emotional states using EEG signals. We used three EC methods: transfer entropy (TE), partial directed coherence (PDC), and direct directed transfer function (dDTF). We estimated a fused image from these methods for each five-second window of 32-channel EEG signals. Then, we applied six pre-trained CNNs to classify the images into four emotion classes based on the two-dimensional valence-arousal model. We used the leave-one-subject-out cross-validation strategy to evaluate the classification results. We also used an ensemble model to select the best results from the best pre-trained CNNs using the majority voting approach. Moreover, we combined the CNNs with LSTM to improve recognition performance. We achieved average accuracies and F-scores of 98.76% and 98.86% on the DEAP dataset and 98.66% and 98.88% on the MAHNOB-HCI dataset, respectively. Our results show that fused images can increase the accuracy and that an ensemble and combination of pre-trained CNNs and LSTM can achieve high accuracy for automated emotion recognition. Our model outperformed other state-of-the-art systems using the same datasets for four-class emotion classification. © 2013 IEEE
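The per-window fusion of the three connectivity maps can be illustrated with a toy sketch. The abstract does not specify the fusion rule, so the element-wise mean used here is an assumption, and the 2-channel matrices are placeholders for real 32-channel TE/PDC/dDTF estimates.

```python
def fuse_maps(*maps):
    """Element-wise mean of equally sized connectivity matrices (assumed fusion rule)."""
    n = len(maps)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(m[i][j] for m in maps) / n for j in range(cols)]
            for i in range(rows)]

te = [[0.0, 0.5], [0.25, 0.0]]    # toy transfer-entropy map (2 channels)
pdc = [[0.0, 1.0], [0.5, 0.0]]    # toy partial directed coherence map
ddtf = [[0.0, 0.0], [0.75, 0.0]]  # toy direct directed transfer function map

fused = fuse_maps(te, pdc, ddtf)
print(fused)  # [[0.0, 0.5], [0.5, 0.0]]
```

The fused matrix plays the role of the "image" that is then fed to the pre-trained CNNs.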

    ConcatNeXt: An automated blood cell classification with a new deep convolutional neural network

    Examining peripheral blood smears is valuable in clinical settings, yet manual identification of blood cells proves time-consuming. To address this, an automated blood cell image classification system is crucial. Our objective is to develop a precise automated model for detecting various blood cell types, leveraging a novel deep learning architecture. We harnessed a publicly available dataset of 17,092 blood cell images categorized into eight classes. Our innovation lies in ConcatNeXt, a new convolutional neural network. In the spirit of Geoffrey Hinton's approach, we adapted ConvNeXt by substituting the Gaussian error linear unit with a rectified linear unit and layer normalization with batch normalization. We introduced depth concatenation blocks to fuse information effectively and incorporated a patchify layer. Integrating ConcatNeXt with nested patch-based deep feature engineering, featuring downstream iterative neighborhood component analysis and support vector machine-based functions, establishes a comprehensive approach. ConcatNeXt achieved notable validation and test accuracies of 97.43% and 97.77%, respectively. The ConcatNeXt-based feature engineering model further elevated accuracy to 98.73%. Gradient-weighted class activation maps were employed to provide interpretability, offering valuable insights into model decision-making. Our proposed ConcatNeXt and nested patch-based deep feature engineering models excel in blood cell image classification, showcasing remarkable classification performances. These innovations mark significant strides in computer vision-based blood cell analysis. © The Author(s) 2024
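The depth-concatenation idea behind ConcatNeXt can be sketched conceptually: feature maps from parallel branches are stacked along the channel axis so later layers see all of them. The channel-major nested lists below stand in for real tensors; this is an illustration of the operation, not the paper's architecture.

```python
def depth_concat(*feature_maps):
    """Concatenate channel-major [C][H][W] feature maps along the channel axis."""
    merged = []
    for fmap in feature_maps:
        merged.extend(fmap)
    return merged

branch_a = [[[1, 2], [3, 4]]]                    # 1 channel, 2x2
branch_b = [[[5, 6], [7, 8]], [[0, 0], [0, 0]]]  # 2 channels, 2x2

out = depth_concat(branch_a, branch_b)
print(len(out))  # 3 channels
```

In a deep learning framework this corresponds to channel-axis concatenation (e.g. `torch.cat(..., dim=1)`), with the ReLU and batch normalization substitutions applied inside each branch.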

    Application of Patient-Specific Computational Fluid Dynamics in Coronary and Intra-Cardiac Flow Simulations: Challenges and Opportunities

    The emergence of new cardiac diagnostics and therapeutics has given rise to the challenging field of virtual design and testing of technologies in a patient-specific environment. Given the recent advances in medical imaging, computational power, and mathematical algorithms, patient-specific cardiac models can be produced from cardiac images faster and more efficiently than ever before. The emergence of patient-specific computational fluid dynamics (CFD) has paved the way for the new field of computer-aided diagnostics. This article provides a review of CFD methods, challenges, and opportunities in coronary and intra-cardiac flow simulations. It includes a review of market products and clinical trials. Key components of patient-specific CFD are covered briefly, including image segmentation, geometry reconstruction, mesh generation, fluid-structure interaction, and solver techniques.

    ExHyptNet: An explainable diagnosis of hypertension using EfficientNet with PPG signals

    Background Hypertension is a crucial health indicator because it provides subtle details about a patient's cardiac health. Photoplethysmography (PPG) signals are a critical biological marker used for the early detection and diagnosis of hypertension. Objective Existing hypertension detection models cannot explain their predictions, making them unreliable for clinicians. The proposed study aims to develop an explainable and effective hypertension detection (ExHyptNet) model using PPG signals. Methods The proposed ExHyptNet model is an ensemble of multi-level feature analyses used to detect and explain hypertension predictions. In the feature extraction stage, recurrence plots and the EfficientNetB3 architecture are employed to extract deep features from the PPG signals. Then, features are explained using a Gradient-weighted Class Activation Mapping (Grad-CAM) explainer in the explainability stage. In the last stage, XGBoost and extremely randomized trees (ERT) classifiers are used to perform the qualitative and quantitative analyses for evaluating the performance of the proposed ExHyptNet model. Results The performance of the ExHyptNet model is evaluated on two public PPG datasets, PPG-BP and MIMIC-II, using holdout, stratified 10-fold cross-validation, and leave-one-subject-out validation techniques. The developed model yielded a 100% detection rate for the classification of normal and multi-stage hypertension classes under all three validation techniques. The proposed work also presents a detailed ablation study covering hyper-parameters, pre-trained models, and the detection of several PPG categories. Conclusion The developed ExHyptNet model performed better than existing automated hypertension detection systems. Our proposed model is practically useful to clinicians for real-time hypertension detection, as it is validated on two public PPG datasets using different validation techniques.
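The recurrence-plot step in the feature extraction stage can be sketched directly. A minimal binary recurrence plot is shown below, assuming a 1-D PPG segment; the threshold `eps` and the toy samples are illustrative choices, not values from the paper.

```python
def recurrence_plot(signal, eps):
    """Binary recurrence matrix: R[i][j] = 1 when |x_i - x_j| <= eps, else 0."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

ppg = [0.0, 0.2, 0.9, 0.15]        # toy PPG samples
rp = recurrence_plot(ppg, eps=0.1)
print(rp[0])  # [1, 0, 0, 0] -- the diagonal entry is always 1
```

The resulting matrix is what gets rendered as an image and passed to EfficientNetB3.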

    Automated detection of atrial fibrillation using long short-term memory network with RR interval signals

    Atrial Fibrillation (AF), either permanent or intermittent (paroxysmal AF), increases the risk of cardioembolic stroke. Accurate diagnosis of AF is obligatory for initiation of effective treatment to prevent stroke. Long-term cardiac monitoring improves the likelihood of diagnosing paroxysmal AF. We used a deep learning system to detect AF beats in Heart Rate (HR) signals. The data was partitioned with a sliding window of 100 beats. The resulting signal blocks were directly fed into a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The system was validated and tested with data from the MIT-BIH Atrial Fibrillation Database. It achieved 98.51% accuracy with 10-fold cross-validation (20 subjects) and 99.77% with blindfold validation (3 subjects). The proposed system structure is straightforward, because there is no need for information reduction through feature extraction. All the complexity resides in the deep learning system, which gets the entire information from a signal block. This setup leads to robust performance for unknown data, as measured with the blindfold validation. The proposed Computer-Aided Diagnosis (CAD) system can be used for long-term monitoring of the human heart. To the best of our knowledge, the proposed system is the first to incorporate deep learning for AF beat detection.
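The 100-beat windowing described above can be sketched as follows. The window size matches the abstract; the stride (non-overlapping windows) is an assumption, since the abstract does not state it.

```python
def sliding_windows(rr_intervals, size=100, stride=100):
    """Partition a beat-interval sequence into fixed-size blocks for the LSTM."""
    return [rr_intervals[i:i + size]
            for i in range(0, len(rr_intervals) - size + 1, stride)]

rr = list(range(250))               # toy stand-in for 250 RR intervals
blocks = sliding_windows(rr)
print(len(blocks), len(blocks[0]))  # 2 100
```

Each block would then be fed, without further feature extraction, into the LSTM network.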

    Black-white hole pattern: an investigation on the automated chronic neuropathic pain detection using EEG signals

    Electroencephalography (EEG) signals provide information about brain activity. This study bridges neuroscience and machine learning by introducing an astronomy-inspired feature extraction model. In this work, we developed a novel feature extraction function, the black-white hole pattern (BWHPat), which dynamically selects the most suitable pattern from 14 options. We developed BWHPat within a four-phase feature engineering model involving multileveled feature extraction, feature selection, classification, and cortex map generation. Textural and statistical features are extracted in the first phase, while the tunable q-factor wavelet transform (TQWT) aids in multileveled feature extraction. The second phase employs iterative neighborhood component analysis (INCA) for feature selection, and the k-nearest neighbors (kNN) classifier is applied for classification, yielding channel-specific results. A new cortex map generation model highlights the most active channels using median and intersection functions. Our BWHPat-driven model consistently achieved over 99% classification accuracy across three scenarios using the publicly available EEG pain dataset. Furthermore, a semantic cortex map precisely identifies pain-affected brain regions. This study contributes to EEG signal classification and neuroscience: the BWHPat pattern establishes a unique link between astronomy and feature extraction, enhancing the understanding of brain activity.
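The "select the most suitable pattern from 14 options" idea can be sketched as a simple best-of-N selection. The scoring function below is a placeholder (the paper's actual selection criterion is not given in the abstract), and the pattern names are illustrative.

```python
def select_best_pattern(patterns, score):
    """Return the candidate pattern with the highest score."""
    return max(patterns, key=score)

candidates = ["pattern_%d" % k for k in range(14)]
toy_scores = {p: i for i, p in enumerate(candidates)}  # stand-in accuracies

best = select_best_pattern(candidates, toy_scores.get)
print(best)  # pattern_13
```

In the real pipeline the score would come from validation performance of the features each candidate pattern extracts.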

    Effect of Frailty on Cardiovascular Clinical Trials: A Systematic Review and Meta-analysis

    Background: Patients with cardiovascular (CV) diseases are increasingly frail but rarely represented in trials. Understanding effect modification by frailty on CV trials is critical as it could help define treatment strategies in frail patients. Objectives: This meta-analysis aims to assess the implications of frailty on CV outcomes in clinical trials. Methods: Randomized controlled trials examining the effects of frailty in the context of CV trials were included (CRD42024528279). Outcomes included a composite of major adverse cardiac events (MACE), all-cause mortality, CV mortality, hospitalizations, and frailty-specific outcomes (physical, quality of life, and frailty scores). HRs and 95% CIs were pooled for clinical endpoints, and standardized mean differences (SMDs) were calculated for frailty-specific outcomes. Results: Thirty unique randomized controlled trials were included with a pooled total of 87,711 participants. Frail patients had a significantly increased risk of MACE (HR: 2.33 [95% CI: 1.87-2.91], P < 0.001, I² = 83%), all-cause mortality (HR: 2.34 [95% CI: 1.80-3.05], P < 0.01, I² = 75%), CV mortality (HR: 1.76 [95% CI: 1.60-1.93], P < 0.001, I² = 0%), and hospitalizations (HR: 2.38 [95% CI: 1.65-3.43], P < 0.001, I² = 92%) compared to nonfrail patients. In the frailest group, trial interventions decreased MACE (HR: 0.81 [95% CI: 0.74-0.88], P < 0.001, I² = 0%) and hospitalization (HR: 0.81 [95% CI: 0.72-0.90], P < 0.001, I² = 0%) risks with no significant difference in mortality risk (P > 0.05) compared with the control group. Trial interventions significantly improved physical (SMD: 0.15, 0.04-0.26) and quality of life (SMD: 0.15, 0.09-0.21) but not frailty scores (P > 0.05). Conclusions: While frailty prognosticated a higher risk of CV events and mortality, it did not reduce treatment efficacy. CV trial interventions appear beneficial even in the frailest group.
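The pooled hazard ratios reported above rest on inverse-variance weighting of study-level log hazard ratios. A minimal fixed-effect sketch is shown below; the paper's random-effects pooling additionally incorporates a between-study variance term, and the toy study values are illustrative.

```python
import math

def pool_log_hr(log_hrs, std_errs):
    """Inverse-variance weighted mean of study-level log hazard ratios."""
    weights = [1.0 / se ** 2 for se in std_errs]
    return sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)

# Toy studies: HRs of 2.0 and 2.5 with equal precision
log_hrs = [math.log(2.0), math.log(2.5)]
ses = [0.2, 0.2]
print(round(math.exp(pool_log_hr(log_hrs, ses)), 2))  # 2.24
```

With equal weights the pooled HR reduces to the geometric mean of the study HRs; unequal standard errors shift the pooled estimate toward the more precise studies.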