7 research outputs found

    Unleashing Modified Deep Learning Models in Efficient COVID19 Detection

    Full text link
    The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent Deep Learning breakthroughs may improve COVID-19 prediction and forecasting as a tool for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, of which 5,427 were COVID-19 cases and 2,628 non-COVID. The 9,544 X-ray samples included 4,044 COVID-19 patients and 5,500 non-COVID cases. The most accurate models are MobileNetV3 (97.872%), DenseNet201 (97.567%), and GoogleNet Inception V1 (97.643%). The high accuracy indicates that these models make many correct predictions, and the remaining metrics are also high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison of the predictive models, which this study improves by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and it demonstrates how Deep Learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make educated decisions to reduce COVID-19 and other contagious diseases.
    CCS Concepts: COVID, Deep Learning, Image Processing
    Keywords: COVID, Deep Learning, DenseNet201, MobileNet, ResNet, DenseNet, GoogleNet, Image Processing, Disease Detection
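
    As a rough illustration of the transfer-learning setup such studies typically use (a minimal sketch, not the authors' code; the data layout, epoch count, and learning rate are assumptions), fine-tuning a pretrained DenseNet201 for binary COVID/non-COVID classification might look like this:

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained DenseNet201
# for binary COVID/non-COVID CT classification, assuming images are arranged
# in one subfolder per class under data/train (a hypothetical path).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Replace the ImageNet classifier head with a 2-way output layer.
model = models.densenet201(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count is a placeholder
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```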

    Identifying Long-Term Deposit Customers : A Machine Learning Approach

    Get PDF
    The majority of revenue in the banking sector is usually generated from customers' long-term deposits. It is therefore important for banks to understand customer characteristics in order to increase product sales. To aid this, marketing strategies are employed to target potential customers and let them interact with the banks directly, generating a large amount of data on customer characteristics and demographics. In recent years, it has been shown that various data analysis, feature selection, and machine learning techniques can be employed to analyze customer characteristics as well as the variables that significantly impact customer decisions. These methods can be used to identify consumers in different categories and predict whether a customer will subscribe to a long-term deposit, allowing the marketing strategy to be more successful. In this study, we have taken an R programming approach to analyze financial transaction data and gain insight into how business processes can be improved, using data mining techniques to find interesting trends and make more data-driven decisions. We have applied statistical analyses such as Exploratory Data Analysis (EDA), Principal Component Analysis (PCA), Factor Analysis, and Correlations to the given data set. The study's goal is to use at least three typical classification algorithms among Logistic Regression, Random Forest, Support Vector Machine, and K-Nearest Neighbors, and then build predictive models of customers signing up for long-term deposits. The best accuracy was obtained from Logistic Regression, at 90.64%, with a sensitivity of 99.05%. Results were analyzed using the accuracy, sensitivity, and specificity scores of these algorithms.
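
    The study itself uses R; purely as an illustration, a Python equivalent of the core workflow (exploratory PCA followed by a logistic-regression model scored on accuracy, sensitivity, and specificity) could look like the sketch below. The file name and target column are hypothetical.

```python
# Minimal sketch (the study used R; this is a rough Python equivalent with a
# hypothetical CSV and target column): exploratory PCA plus logistic regression
# evaluated on accuracy, sensitivity, and specificity.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("bank_marketing.csv")            # hypothetical file
X = pd.get_dummies(df.drop(columns=["deposit"]))  # hypothetical target column
y = df["deposit"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

# Exploratory PCA: how much variance do the leading components explain?
pca = PCA(n_components=5).fit(X_tr_s)
print("explained variance:", pca.explained_variance_ratio_)

clf = LogisticRegression(max_iter=1000).fit(X_tr_s, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te_s)).ravel()
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```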

    CBD2023: A Hypercomplex Bangla Handwriting Character Recognition Data for Hierarchical Class Expansion

    No full text
    Object recognition technology has made significant strides, but recognizing handwritten Bangla characters (including symbols, compound forms, etc.) remains a challenging problem due to the prevalence of cursive writing and many ambiguous characters. The complexity and variability of the Bangla script and individuals' unique handwriting styles make it difficult to achieve satisfactory performance for practical applications, and the best existing recognizers are far less effective than those developed for English alphanumeric characters. Compared to other major languages, there are limited options for recognizing handwritten Bangla characters. This research describes a new dataset to improve the accuracy and effectiveness of handwriting recognition systems for the Bengali language, spoken by over 200 million people worldwide. The dataset aims to support the investigation and recognition of Bangla handwritten characters, focusing on enlarging the set of recognized character classes. To achieve this, a new challenging dataset for handwriting recognition is introduced, collected from numerous students' handwriting at two institutions.
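
    For context on how such a dataset might be consumed, here is a minimal baseline sketch (not part of the paper): a small CNN trained on per-class image folders. The directory layout, image size, and architecture are assumptions.

```python
# Minimal sketch (assumptions: the dataset is exported as one image folder per
# character class; image size and architecture are placeholders): a small CNN
# baseline for Bangla handwritten character classification.
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("cbd2023/train", transform=tfm)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
num_classes = len(train_ds.classes)  # grows as the class hierarchy is expanded

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x16x16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64x8x8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
    nn.Linear(256, num_classes),
)
opt = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for x, y in train_dl:  # one pass shown; loop over epochs in practice
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```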

    Mining Significant Features of Diabetes through Employing Various Classification Methods

    No full text
    Diabetes is a chronic disease that occurs when blood glucose becomes very high. It is responsible for a number of serious complications in an affected patient's body. However, early detection of this harmful disease can prevent critical outcomes such as death and minimize the chance of losing valuable organs. The aim of this study is to construct a predictive model by examining several machine learning techniques, namely Decision Tree, K-Nearest Neighbour, Naive Bayes, Support Vector Machine, Logistic Regression, eXtreme Gradient Boosting, Multi-Layer Perceptron, and Random Forest, on two different datasets of diabetes patients: the Pima Indian diabetes dataset and the Sylhet Diabetes Hospital dataset. Several popular and effective feature subset selection procedures have also been utilized to eliminate unnecessary attributes. Analysis of the results shows that Random Forest delivers the highest accuracy (97.5%), F-measure (97.5%), and Area under the Receiver Operating Characteristic Curve (99.80%) with the Gain Ratio Attribute Evaluation feature subset selection technique on the Sylhet hospital dataset. On the other hand, for the Pima Indian dataset, Logistic Regression delivers the highest accuracy (77.7%) and F-measure (77%) with Information Gain Attribute Evaluation, and the highest Area under the Receiver Operating Characteristic Curve (83%) with both Correlation-based Feature Selection Subset Evaluation and Correlation Attribute Evaluation. In this study, the 10-fold cross-validation technique has been used for performance measurement.
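
    A minimal sketch of this kind of pipeline follows (the paper uses WEKA's attribute evaluators; here Gain Ratio is approximated with scikit-learn's mutual information, and the file and target names are assumptions):

```python
# Minimal sketch (not the paper's exact WEKA setup): Gain Ratio is approximated
# with scikit-learn's mutual information; a Random Forest is then scored with
# 10-fold cross-validation on accuracy, F-measure, and ROC AUC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline

df = pd.read_csv("diabetes.csv")  # hypothetical file, e.g. the Pima dataset
X, y = df.drop(columns=["Outcome"]), df["Outcome"]  # assumed target column

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=6)),  # drop weak attributes
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
scores = cross_validate(pipe, X, y, cv=10, scoring=["accuracy", "f1", "roc_auc"])
for metric in ("test_accuracy", "test_f1", "test_roc_auc"):
    print(metric, scores[metric].mean().round(3))
```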

    Fanconi Anemia: Examining Guidelines for Testing All Patients with Hand Anomalies Using a Machine Learning Approach

    No full text
    Background: This study investigated the questionable necessity of genetic testing for Fanconi anemia in children with hand anomalies. The current UK guidelines suggest that every child with radial ray dysplasia or a thumb anomaly should undergo further cost-intensive investigation for Fanconi anemia. In this study, we reviewed the patient numbers and referral patterns, as well as the financial and service provision implications of the UK guidelines. Methods: Over three years, every patient with a thumb or radial ray anomaly referred to our service was tested for Fanconi anemia. CART analysis and machine learning techniques using the Waikato Environment for Knowledge Analysis (WEKA) were applied to evaluate single clinical features predicting Fanconi anemia. Results: Youden Index and Predictive Summary Index (PSI) scores suggested no clinical significance of hand anomalies associated with Fanconi anemia. CART analysis and attribute evaluation with WEKA showed no single feature predictive of Fanconi anemia. Furthermore, none of the positive Fanconi anemia patients in this study had an isolated upper limb anomaly without presenting other features of Fanconi anemia. Conclusion: This study does not support Fanconi anemia testing for isolated hand abnormalities in the absence of other features associated with this blood disease.
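
    The two screening statistics the study relies on have simple closed forms: Youden's J = sensitivity + specificity - 1, and PSI = PPV + NPV - 1. A minimal sketch of both, plus a CART fit (placeholder counts and features, not study data; scikit-learn's DecisionTreeClassifier stands in for the CART implementation):

```python
# Minimal sketch (not the study's WEKA workflow): Youden's J and the Predictive
# Summary Index computed from a 2x2 table for a single clinical feature, plus a
# CART fit. All counts and feature values below are placeholders, not study data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical 2x2 table: feature present/absent vs. Fanconi anemia yes/no.
tp, fp, fn, tn = 3, 40, 2, 55

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value

youden_j = sensitivity + specificity - 1  # J = Se + Sp - 1
psi = ppv + npv - 1                       # PSI = PPV + NPV - 1
print(f"Youden J = {youden_j:.3f}, PSI = {psi:.3f}")

# CART on a feature matrix (sklearn's DecisionTreeClassifier implements CART).
# X_features and y_fanconi are placeholders for the clinical dataset.
X_features = [[1, 0], [0, 1], [1, 1], [0, 0]]
y_fanconi = [1, 0, 1, 0]
cart = DecisionTreeClassifier(max_depth=3).fit(X_features, y_fanconi)
```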

    ML-CKDP: Machine learning-based chronic kidney disease prediction with smart web application

    No full text
    Chronic kidney diseases (CKDs) are a significant public health issue with potential for severe complications such as hypertension, anemia, and renal failure. Timely diagnosis is crucial for effective management. Leveraging machine learning within healthcare offers promising advancements in predictive diagnostics. In this paper, we developed a machine learning-based chronic kidney disease prediction (ML-CKDP) model with dual objectives: to enhance dataset preprocessing for CKD classification and to develop a web-based application for CKD prediction. The proposed model involves a comprehensive data preprocessing protocol: converting categorical variables to numerical values, imputing missing data, and normalizing via Min-Max scaling. Feature selection is executed using a variety of techniques, including Correlation, Chi-Square, Variance Threshold, Recursive Feature Elimination, Sequential Forward Selection, Lasso Regression, and Ridge Regression, to refine the datasets. The model employs seven classifiers to predict CKDs: Random Forest (RF), AdaBoost (AdaB), Gradient Boosting (GB), XGBoost (XGB), Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). The effectiveness of the models is assessed by measuring their accuracy, analyzing confusion matrix statistics, and calculating the Area Under the Curve (AUC) for the classification of positive cases. Random Forest (RF) and AdaBoost (AdaB) achieve 100% accuracy across various validation methods, including 70:30 and 80:20 data splits and K-Fold cross-validation with K set to 10 and 15. RF and AdaB consistently reach perfect AUC scores of 100% across multiple datasets under different splitting ratios. Moreover, Naive Bayes (NB) stands out for its efficiency, recording the lowest training and testing times across all datasets and split ratios. Additionally, we present a real-time web-based application to operationalize the model, enhancing accessibility for healthcare practitioners and stakeholders. Web app link: https://rajib-research-kedney-diseases-prediction.onrender.com
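
    A minimal sketch of the preprocessing-plus-classification pipeline the abstract describes (imputation, Min-Max scaling, chi-square feature selection, then a Random Forest), with hypothetical file, column, and parameter choices; this is not the authors' implementation:

```python
# Minimal sketch (file, column names, and k are assumptions): imputation,
# Min-Max scaling, and chi-square feature selection feeding a Random Forest,
# evaluated with a 70:30 split and 10-fold cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("ckd.csv")  # hypothetical file
# Convert categorical columns to numeric codes, as the paper describes.
df = df.apply(lambda c: pd.factorize(c)[0] if c.dtype == "object" else c)
X, y = df.drop(columns=["class"]), df["class"]  # hypothetical target column

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=10)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
print("70:30 accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
print("10-fold AUC:   ", cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean())
```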

    Machine Learning-Based Rainfall Prediction: Unveiling Insights and Forecasting for Improved Preparedness

    No full text
    Rainfall prediction plays a crucial role in raising awareness about the potential dangers associated with rain and enabling individuals to take proactive measures for their safety. This study aims to utilize machine learning algorithms to accurately predict rainfall, considering the significant impact of scarce or extreme rainfall on both rural and urban life. The complex nature of rainfall, influenced by various atmospheric, oceanic, and geographical factors, makes it a challenging phenomenon to forecast. This research employs data preprocessing techniques, outlier analysis, correlation analysis, feature selection, and several machine learning algorithms such as Naive Bayes (NB), Decision Tree, Support Vector Machine (SVM), Random Forest, and Logistic Regression. The study focuses on developing the most accurate rainfall prediction model by combining machine learning and feature selection techniques. The Artificial Neural Network (ANN) achieves a maximum accuracy of 90% and 91% before and after feature selection, respectively. Furthermore, k-means clustering and Principal Component Analysis (PCA) are applied to examine regional rainfall patterns in Australia. Lastly, to make the proposed machine learning model simpler and more usable for the general public, we have developed a web-based application using Flask. Overall, this research demonstrates the effectiveness of different machine-learning techniques in predicting rainfall using Australian weather data.
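
    As an illustration of that final step, a Flask prediction endpoint of the kind described might look like the sketch below (not the authors' app; the model file and feature list are assumptions):

```python
# Minimal sketch (not the authors' app): a Flask endpoint serving a pickled
# rainfall classifier. The model file and feature names are placeholders.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
with open("rain_model.pkl", "rb") as f:  # hypothetical trained model
    model = pickle.load(f)

# Assumed subset of input features; order must match the training data.
FEATURES = ["Humidity3pm", "Pressure9am", "Rainfall", "WindGustSpeed"]

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    row = [[payload[name] for name in FEATURES]]
    rain_tomorrow = int(model.predict(row)[0])  # 1 = rain expected tomorrow
    return jsonify({"rain_tomorrow": rain_tomorrow})

if __name__ == "__main__":
    app.run(debug=True)
```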