11 research outputs found

    Mobile Health in Remote Patient Monitoring for Chronic Diseases: Principles, Trends, and Challenges

    Chronic diseases are becoming more widespread. Treating and monitoring these diseases requires frequent hospital visits, which increases the burden on both hospitals and patients. Advancements in wearable sensors and communication protocols are now enriching the healthcare system in ways that will reshape healthcare services in the near future. Remote patient monitoring (RPM) is the foremost of these advancements. RPM systems collect patient vital signs using invasive and noninvasive techniques and send them to physicians in real time; these data can help physicians make the right decision at the right time. The main objective of this paper is to outline research directions on remote patient monitoring, explain the role of AI in building RPM systems, and give an overview of the state of the art of RPM, its advantages, its challenges, and its probable future directions. For studying the literature, five databases were chosen (i.e., ScienceDirect, IEEE Xplore, Springer, PubMed, and science.gov). We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), a standard methodology for systematic reviews and meta-analyses. A total of 56 articles were reviewed based on a combination of selected search terms including RPM, data mining, clinical decision support system, electronic health record, cloud computing, internet of things, and wireless body area network. The results of this study confirm the effectiveness of RPM in improving healthcare delivery, increasing diagnosis speed, and reducing costs. To this end, we also present a chronic disease monitoring system as a case study to provide enhanced solutions for RPMs. This research work was partially supported by the Sejong University Research Faculty Program (2021-2023).
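    The RPM pipeline this abstract describes (collect vital signs, forward them in real time, alert the physician) can be sketched minimally in Python. The vital-sign names and normal ranges below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

# Hypothetical normal ranges for two vital signs (illustrative only).
RANGES = {"heart_rate": (60, 100), "spo2": (94, 100)}

@dataclass
class Reading:
    """One vital-sign sample from a wearable sensor."""
    vital: str
    value: float

def triage(readings):
    """Flag every reading outside its normal range for physician review."""
    return [r.vital for r in readings
            if not RANGES[r.vital][0] <= r.value <= RANGES[r.vital][1]]

print(triage([Reading("heart_rate", 118), Reading("spo2", 97)]))  # ['heart_rate']
```

A real RPM system would stream such readings over a body-area network to a clinical backend; the threshold check stands in for that decision-support step.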

    Comprehensive Survey of Using Machine Learning in the COVID-19 Pandemic

    Since December 2019, the world has faced the rapid spread of coronavirus disease (COVID-19). With the accelerating number of infected cases, the World Health Organization (WHO) declared COVID-19 a pandemic that puts a heavy burden on healthcare sectors in almost every country. The potential of artificial intelligence (AI) in this context is difficult to ignore. AI companies have been racing to develop innovative tools that help arm the world against this pandemic and minimize the disruption it may cause. The main objective of this study is to survey the decisive role of AI as a technology used to fight the COVID-19 pandemic. Five significant applications of AI for COVID-19 were found, including (1) COVID-19 diagnosis using various data types (e.g., images, sound, and text); (2) estimation of the possible future spread of the disease based on the current confirmed cases; (3) association between COVID-19 infection and patient characteristics; (4) vaccine development and drug interaction; and (5) development of supporting applications. This study also introduces a comparison between current COVID-19 datasets. Based on the limitations of the current literature, this review highlights the open research challenges that could inspire the future application of AI in COVID-19. This work was supported by a 2021 Incheon National University Research Grant. This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A4A4079299).

    Machine Learning Based Diagnostic Paradigm in Viral and Non-Viral Hepatocellular Carcinoma

    © 2024 The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Viral and non-viral hepatocellular carcinoma (HCC) is becoming predominant in developing countries. A major issue linked to the HCC-related mortality rate is the late diagnosis of cancer development. Although traditional approaches to diagnosing HCC have become the gold standard, several limitations remain, due to which confirmation of cancer progression takes a longer period. The recent emergence of artificial intelligence tools with the capacity to analyze biomedical datasets is assisting traditional diagnostic approaches for early diagnosis with certainty. Here we present a review of traditional HCC diagnostic approaches versus the use of artificial intelligence (machine learning and deep learning) for HCC diagnosis. An overview of cancer-related databases is given, along with the use of AI in histopathology-, radiology-, biomarker-, and electronic health record (EHR)-based HCC diagnosis. Peer reviewed

    An effective approach for plant leaf diseases classification based on a novel DeepPlantNet deep learning model

    Introduction: Recently, plant disease detection and diagnosis procedures have become a primary agricultural concern. Early detection of plant diseases enables farmers to take preventative action, stopping the disease's transmission to other plant sections. Plant diseases are a severe hazard to food safety, but because essential infrastructure is missing in various places around the globe, quick disease diagnosis remains difficult. Depending on how severe the infections are, the plant may experience anything from minor damage to total devastation. Thus, early detection of plant diseases is necessary to optimize output and prevent such destruction. Physical examination for plant diseases produces low accuracy, requires a lot of time, and cannot accurately anticipate the disease. Creating an automated method capable of accurately classifying plant diseases is vital to deal with these issues. Method: This research proposes an efficient, novel, and lightweight DeepPlantNet deep learning (DL)-based architecture for predicting and categorizing plant leaf diseases. The proposed DeepPlantNet model comprises 28 learned layers, i.e., 25 convolutional layers (Conv) and three fully connected (FC) layers. The framework employs Leaky ReLU (LReLU), batch normalization (BN), fire modules, and a mix of 3×3 and 1×1 filters, making it a novel plant disease classification framework. The proposed DeepPlantNet model can categorize plant disease images into many classifications. Results: The proposed approach categorizes the plant diseases into the following ten groups: Apple_Black_rot (ABR), Cherry_(including_sour)_Powdery_mildew (CPM), Grape_Leaf_blight_(Isariopsis_Leaf_Spot) (GLB), Peach_Bacterial_spot (PBS), Pepper_bell_Bacterial_spot (PBBS), Potato_Early_blight (PEB), Squash_Powdery_mildew (SPM), Strawberry_Leaf_scorch (SLS), bacterial tomato spot (TBS), and maize common rust (MCR). The proposed framework achieved an average accuracy of 98.49% and 99.85% in the case of eight-class and three-class classification schemes, respectively. Discussion: The experimental findings demonstrated the DeepPlantNet model's superiority to the alternatives. The proposed technique can reduce financial and agricultural output losses by quickly and effectively assisting professionals and farmers in identifying plant leaf diseases.
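    As a rough illustration of the building blocks the abstract names (fire modules mixing 1×1 and 3×3 filters, batch normalization, and Leaky ReLU), here is a minimal SqueezeNet-style fire module in PyTorch. The channel sizes are arbitrary, and this is a sketch of the general technique, not the actual DeepPlantNet architecture:

```python
import torch
import torch.nn as nn

class FireModule(nn.Module):
    """Squeeze with a 1x1 conv, then expand with parallel 1x1 and 3x3 convs."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(2 * expand_ch)   # BN over the concatenated output
        self.act = nn.LeakyReLU(0.1)              # LReLU as in the abstract

    def forward(self, x):
        s = self.act(self.squeeze(x))
        # concatenating the two expand branches doubles the channel count
        return self.act(self.bn(torch.cat([self.expand1(s), self.expand3(s)], dim=1)))

x = torch.randn(1, 64, 56, 56)
out = FireModule(64, 16, 64)(x)
print(out.shape)  # torch.Size([1, 128, 56, 56])
```

Stacking such modules with pooling and a few FC layers yields a lightweight classifier in the spirit of the described 28-layer design.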

    A Holistic Approach to Identify and Classify COVID-19 from Chest Radiographs, ECG, and CT-Scan Images Using ShuffleNet Convolutional Neural Network

    Early and precise COVID-19 identification and analysis are pivotal in reducing the spread of COVID-19. Medical imaging techniques, such as chest X-ray or chest radiographs, computed tomography (CT) scan, and electrocardiogram (ECG) trace images are the most widely known for early discovery and analysis of the coronavirus disease (COVID-19). Deep learning (DL) frameworks for identifying COVID-19 positive patients in the literature are limited to one data format, either ECG or chest radiograph images. Moreover, using several data types to recover abnormal patterns caused by COVID-19 could potentially provide more information and restrict the spread of the virus. This study presents an effective COVID-19 detection and classification approach using the Shufflenet CNN by employing three types of images, i.e., chest radiograph, CT-scan, and ECG-trace images. For this purpose, we performed extensive classification experiments with the proposed approach using each type of image. With the chest radiograph dataset, we performed three classification experiments at different levels of granularity, i.e., binary, three-class, and four-class classifications. In addition, we performed a binary classification experiment with the proposed approach by classifying CT-scan images into COVID-positive and normal. Finally, utilizing the ECG-trace images, we conducted three experiments at different levels of granularity, i.e., binary, three-class, and five-class classifications. We evaluated the proposed approach with the baseline COVID-19 Radiography Database, SARS-CoV-2 CT-scan, and ECG images dataset of cardiac and COVID-19 patients. The average accuracy of 99.98% for COVID-19 detection in the three-class classification scheme using chest radiographs, optimal accuracy of 100% for COVID-19 detection using CT scans, and average accuracy of 99.37% for five-class classification scheme using ECG trace images have proved the efficacy of our proposed method over the contemporary methods. 
The optimal accuracy of 100% for COVID-19 detection using CT scans and the accuracy gain of 1.54% (in the case of five-class classification using ECG trace images) over the previous approach, which utilized ECG images for the first time, make a major contribution to improving the COVID-19 prediction rate in early stages. Experimental findings demonstrate that the proposed framework outperforms contemporary models. For example, the proposed approach outperforms state-of-the-art DL approaches such as SqueezeNet, AlexNet, and DarkNet19, achieving accuracies of 99.98% (proposed method), 98.29%, 98.50%, and 99.67%, respectively.
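    The multi-granularity experiments described above (binary, three-class, and four-class chest-radiograph schemes) amount to relabeling the same data at coarser levels before training or scoring. A minimal sketch, with class names and groupings assumed for illustration rather than taken from the paper's datasets:

```python
def coarsen(labels, scheme):
    """Map four fine-grained chest-radiograph labels (covid, normal,
    viral_pneumonia, lung_opacity -- assumed names) onto coarser schemes."""
    if scheme == "binary":   # covid vs. everything else
        return ["covid" if l == "covid" else "non_covid" for l in labels]
    if scheme == "three":    # merge the two non-covid abnormal classes
        return [l if l in ("covid", "normal") else "abnormal" for l in labels]
    return list(labels)      # four-class scheme: unchanged

y = ["covid", "lung_opacity", "normal", "viral_pneumonia"]
print(coarsen(y, "binary"))  # ['covid', 'non_covid', 'non_covid', 'non_covid']
print(coarsen(y, "three"))   # ['covid', 'abnormal', 'normal', 'abnormal']
```

Evaluating one trained backbone against each relabeled target set gives the per-granularity accuracies the abstract reports.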

    Polycystic Ovary Syndrome Detection Machine Learning Model Based on Optimized Feature Selection and Explainable Artificial Intelligence

    Polycystic ovary syndrome (PCOS) has been classified as a severe health problem common among women globally. Early detection and treatment of PCOS reduce the possibility of long-term complications, such as increased chances of developing type 2 diabetes and gestational diabetes. Therefore, effective and early PCOS diagnosis will help healthcare systems reduce the disease's problems and complications. Machine learning (ML) and ensemble learning have recently shown promising results in medical diagnostics. The main goal of our research is to provide model explanations to ensure efficiency, effectiveness, and trust in the developed model through local and global explanations. Feature selection methods are combined with different types of ML models (logistic regression (LR), random forest (RF), decision tree (DT), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), XGBoost, and the AdaBoost algorithm) to obtain the optimal feature subset and the best model. Stacking ML models that combine the best base ML models with a meta-learner are proposed to improve performance. Bayesian optimization is used to optimize the ML models. Combining SMOTE (Synthetic Minority Oversampling Technique) and ENN (Edited Nearest Neighbour) addresses the class imbalance. The experiments were conducted on a benchmark PCOS dataset with two split ratios, 70:30 and 80:20. The results showed that stacking ML with RFE feature selection recorded the highest accuracy, at 100%, compared to other models.
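    The stacking idea above (base learners whose predictions feed a meta-learner) can be sketched with scikit-learn on synthetic imbalanced data. The estimator choices here are illustrative, and the paper's SMOTE-ENN resampling and Bayesian optimization steps are omitted for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic imbalanced binary problem standing in for the PCOS dataset.
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)

# Base learners' out-of-fold predictions become inputs to the meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression())
stack.fit(Xtr, ytr)
print(round(stack.score(Xte, yte), 2))
```

In a fuller pipeline, imbalanced-learn's SMOTEENN would resample the training split before fitting, and a Bayesian optimizer would tune each estimator's hyperparameters.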

    A Clinical Decision Support System for Edge/Cloud ICU Readmission Model Based on Particle Swarm Optimization, Ensemble Machine Learning, and Explainable Artificial Intelligence

    ICU readmission is usually associated with an increased number of hospital deaths. Predicting readmission helps to reduce such risks by avoiding early discharge, providing appropriate intervention, and planning for patient placement after ICU discharge. ICU scores such as the Simplified Acute Physiology Score (SAPS) and Acute Physiology and Chronic Health Evaluation (APACHE) can help predict mortality or evaluate illness severity, but they are ineffective in predicting ICU readmission. This study introduces a clinical monitoring fog-computing-based system for remote prognosis and monitoring of intensive care patients. This proposed monitoring system uses the advantages of machine learning (ML) approaches for generating a real-time alert signal to doctors, supplying e-healthcare, accelerating decision-making, and monitoring and controlling health systems. The proposed system includes three main layers. First, the data acquisition layer collects the vital signs and lab tests describing the patient's health condition in real time. Then, the fog computing layer processes these data. The results are then sent to the cloud layer, which offers sizable storage space for patient healthcare data. Demographic data, lab tests, and vital signs were aggregated from the MIMIC-III dataset for 10,465 patients. Two feature selection methods, the genetic algorithm (GA) and particle swarm optimization (PSO), are used to choose the optimal feature subset from the dataset. Moreover, different traditional ML models, ensemble learning models, and the proposed stacking models are applied to the full features and the selected features to predict readmission after 30 days of ICU discharge. The proposed stacking models recorded the highest performance compared to other models. The proposed stacking ensemble model with features selected by PSO achieved promising results (accuracy = 98.42%, precision = 98.42%, recall = 98.42%, and F1-score = 98.42%), compared to the full features. We also provide model explanations to ensure efficiency, effectiveness, and trust in the developed model through local and global explanations.
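    PSO-based feature selection, as used in this study, searches over binary feature masks. A self-contained NumPy sketch of the technique, using toy data and a simple correlation-based fitness (both assumptions for illustration, not the paper's MIMIC-III setup or fitness function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 features, only the first three carry signal (assumed setup).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Score a feature subset: mean |correlation| with the label,
    minus a small penalty per selected feature; 0 for the empty set."""
    if mask.sum() == 0:
        return 0.0
    cols = X[:, mask.astype(bool)]
    corrs = [abs(np.corrcoef(c, y)[0, 1]) for c in cols.T]
    return float(np.mean(corrs)) - 0.01 * mask.sum()

# Binary PSO: real-valued positions are thresholded at 0 to get masks.
n_particles, n_feats, iters = 20, X.shape[1], 30
pos = rng.normal(size=(n_particles, n_feats))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness((p > 0).astype(int)) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # inertia + cognitive pull toward pbest + social pull toward gbest
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fits = np.array([fitness((p > 0).astype(int)) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

best_mask = (gbest > 0).astype(int)
print(best_mask)
```

In the study's pipeline, the fitness would instead be a readmission classifier's validation score, and the winning mask would feed the stacking ensemble.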

    Predicting CTS Diagnosis and Prognosis Based on Machine Learning Techniques

    Carpal tunnel syndrome (CTS) is a clinical disease that occurs due to compression of the median nerve in the carpal tunnel. Determining the severity of carpal tunnel syndrome is essential to provide appropriate therapeutic interventions. Machine learning (ML)-based modeling can be used to classify diseases, make decisions, and create new therapeutic interventions. It is also used in medical research to implement predictive models. However, despite the growth in medical research based on ML and deep learning (DL), CTS research is still relatively scarce. While a few studies have developed models to predict the diagnosis of CTS, no ML model has been presented to classify the severity of CTS based on comprehensive clinical data. Therefore, this study developed new classification models for determining CTS severity using ML algorithms. The study included 80 patients with other diseases whose symptoms overlap with CTS, such as cervical radiculopathy, de Quervain tendinopathy, and peripheral neuropathy, and 80 CTS patients who underwent ultrasonography (US)-guided median nerve hydrodissection. CTS severity was classified into mild, moderate, and severe grades. We aggregated the data from the CTS patients and the patients with overlapping symptoms, and the dataset was randomly split into training and test data at 70% and 30%, respectively. The proposed model achieved promising results of 0.955, 0.963, and 0.919 in terms of classification accuracy, precision, and recall, respectively. In addition, we developed a machine learning model that predicts the probability of a patient improving after the hydrodissection injection process based on data aggregated at three different months (one, three, and six). The proposed model achieved an accuracy of 0.912 after six months, 0.901 after three months, and 0.877 after one month. The overall performance for predicting the prognosis after six months outperforms the predictions after one and three months. We utilized statistical tests (a significance test, Spearman's correlation test, and a two-way ANOVA test) to determine the effect of the injection process in CTS treatment. Our data-driven decision support tools can be used to help determine which patients to operate on, in order to avoid the associated risks and expenses of surgery.
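    The Spearman's correlation test mentioned above can be run with SciPy. The severity grades and improvement scores below are synthetic placeholders constructed for illustration, not the study's patient data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical severity grades (1=mild, 2=moderate, 3=severe) and a
# post-injection improvement score built to decrease with severity.
severity = rng.integers(1, 4, size=30)
improvement = 4 - severity + rng.normal(scale=0.5, size=30)

# Spearman's rho tests for a monotonic association between the two.
rho, p = stats.spearmanr(severity, improvement)
print(round(rho, 2), p < 0.05)
```

A negative rho with a small p-value would indicate that more severe cases tend to improve less, which is the kind of effect the study's statistical tests probe.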
