    ICU Patients’ Pattern Recognition and Correlation Identification of Vital Parameters Using Optimized Machine Learning Models

    Early detection of patient deterioration in the Intensive Care Unit (ICU) can play a crucial role in improving patient outcomes. Conventional severity scales currently used to predict patient deterioration are based on a number of factors, most of which require multiple investigations. Recent advancements in machine learning (ML) within the healthcare domain offer the potential to alleviate the burden of continuous patient monitoring. In this study, we propose an optimized ML model designed to leverage variations in vital signs observed during the final 24 hours of an ICU stay for outcome prediction. Further, we elucidate the relative contributions of distinct vital parameters to these outcomes. The dataset, compiled in real time, encompasses six pivotal vital parameters: systolic (0) and diastolic (1) blood pressure, pulse rate (2), respiratory rate (3), oxygen saturation (SpO2) (4), and temperature (5). Of these, systolic blood pressure emerges as the most significant predictor of mortality. Using fivefold cross-validation, several ML classifiers are used to categorize the last 24 hours of time-series data after ICU admission into three groups: recovery, death, and intubation. Notably, the optimized Gradient Boosting classifier exhibited the highest performance in detecting mortality, achieving an area under the receiver operating characteristic curve (AUC) of 0.95. By integrating this ML software with electronic health records, adverse outcomes could be flagged early, potentially several hours before the onset of hemodynamic instability.
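    To illustrate the kind of evaluation described above, the sketch below runs fivefold cross-validation of a scikit-learn Gradient Boosting classifier on six vital-sign features and reports a one-vs-rest AUC for the mortality class. It is a minimal sketch, not the authors' code: the placeholder data, the label encoding, and the assumption that each 24-hour time series has already been summarised into one feature vector per stay are all illustrative.

        # Minimal sketch (not the authors' code): fivefold cross-validation of a
        # Gradient Boosting classifier on six vital-sign features. Placeholder
        # data stands in for the summarised last-24-hour vital-sign records.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import StratifiedKFold

        # Columns follow the paper's indexing: systolic BP (0), diastolic BP (1),
        # pulse rate (2), respiratory rate (3), SpO2 (4), temperature (5).
        # Labels (assumed encoding): 0 = recovery, 1 = death, 2 = intubation.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 6))      # placeholder feature matrix
        y = rng.integers(0, 3, size=600)   # placeholder outcome labels

        aucs = []
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        for train_idx, test_idx in cv.split(X, y):
            clf = GradientBoostingClassifier(random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            # One-vs-rest AUC for detecting the mortality class (label 1).
            prob_death = clf.predict_proba(X[test_idx])[:, 1]
            aucs.append(roc_auc_score(y[test_idx] == 1, prob_death))

        print(f"Mean mortality AUC over 5 folds: {np.mean(aucs):.3f}")
        print("Per-feature importances:", clf.feature_importances_)

    Feature importances from such a model are one way to gauge each vital parameter's relative contribution, in the spirit of the paper's finding that systolic blood pressure is the strongest mortality predictor.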

    An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images

    Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each condition requires a unique, patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with CNNs to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work that provides a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in a clinical setting.
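    The sketch below illustrates two preprocessing steps of the kind evaluated in the study: cropping the circular retinal region of interest and enhancing contrast before the image reaches a CNN. It is a minimal sketch under stated assumptions, not the paper's exact ensemble; the OpenCV functions used, the background threshold, the CLAHE parameters, and the fundus.jpg path are illustrative.

        # Minimal sketch (not the paper's exact pipeline): region-of-interest
        # cropping and CLAHE contrast enhancement for a fundus image before it
        # is passed to a CNN. Thresholds, CLAHE settings, and the input path
        # are illustrative assumptions.
        import cv2
        import numpy as np

        def crop_fundus_roi(img: np.ndarray) -> np.ndarray:
            """Crop to the bounding box of the circular retinal disc."""
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            ys, xs = np.where(gray > 10)   # drop the black background
            return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

        def enhance_contrast(img: np.ndarray) -> np.ndarray:
            """Apply CLAHE to the lightness channel in LAB colour space."""
            l, a, b = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2LAB))
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

        img = cv2.imread("fundus.jpg")     # hypothetical input image
        roi = crop_fundus_roi(img)
        preprocessed = cv2.resize(enhance_contrast(roi), (224, 224))
        # `preprocessed` would then be fed to the CNN classifier.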

    Automatic Facial Expression Recognition Using DCNN

    The face conveys a wide range of information about identity, age, sex, and race, as well as emotional and mental state. Facial expressions play a crucial role in social interactions and are commonly used in the behavioral interpretation of emotions. Automatic facial expression recognition is an interesting and challenging problem in computer vision because of its potential applications, such as Human-Computer Interaction (HCI), behavioral science, and video games. In this paper, a novel method for automatically recognizing facial expressions using Deep Convolutional Neural Network (DCNN) features is proposed. The proposed model focuses on recognizing the facial expressions of an individual from a single image. The feature extraction time is significantly reduced through the use of a general-purpose graphics processing unit (GPGPU). From an evaluation on two publicly available facial expression datasets, we found that DCNN features achieve state-of-the-art recognition rates.
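    As a rough illustration of the DCNN-feature approach, the sketch below extracts penultimate-layer features from a pretrained CNN for a single face image and hands them to a separate classifier. The ResNet-18 backbone, the LinearSVC head, and the GPU check are assumptions for illustration, not the authors' architecture.

        # Minimal sketch (not the authors' implementation): a pretrained CNN as
        # a feature extractor for a single face image, with the features fed to
        # a separate classifier. Backbone, head, and paths are assumptions.
        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image
        from sklearn.svm import LinearSVC

        device = "cuda" if torch.cuda.is_available() else "cpu"  # GPGPU when available

        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()   # expose the 512-d penultimate features
        backbone.eval().to(device)

        preprocess = T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        def extract_features(path: str) -> torch.Tensor:
            """Return a 512-d DCNN feature vector for one face image."""
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            with torch.no_grad():
                return backbone(img).squeeze(0).cpu()

        # Hypothetical usage: extract features for labelled face crops, then
        # train a linear classifier on top of them.
        # feats = torch.stack([extract_features(p) for p in image_paths]).numpy()
        # clf = LinearSVC().fit(feats, expression_labels)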

    Multi-Scale Convolutional Neural Network for Accurate Corneal Segmentation in Early Detection of Fungal Keratitis

    Microbial keratitis is an infection of the cornea of the eye that is commonly caused by prolonged contact lens wear, corneal trauma, pre-existing systemic disorders, and other ocular surface disorders. It can result in severe visual impairment if improperly managed. According to the latest World Vision Report, at least 4.2 million people worldwide suffer from corneal opacities caused by infectious agents such as fungi, bacteria, protozoa, and viruses. In patients with fungal keratitis (FK), overt symptoms are often not evident until an advanced stage. Furthermore, it has been reported that clearly discriminating between bacterial keratitis and FK is challenging even for trained corneal experts, and FK is misdiagnosed in more than 30% of cases. However, if diagnosed early, vision impairment can be prevented through cost-effective interventions. In this work, we propose a multi-scale convolutional neural network (MS-CNN) for accurate segmentation of the corneal region to enable early FK diagnosis. The proposed approach consists of a deep neural pipeline for corneal region segmentation followed by a ResNeXt model to differentiate between FK and non-FK classes. The model, trained on the segmented region-of-interest images, achieved a diagnostic accuracy of 88.96%. The features learnt by the model indicate that it can correctly identify dominant corneal lesions for detecting FK.
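    The sketch below outlines a generic two-stage pipeline of the kind described above: a segmentation network isolates the corneal region, and a ResNeXt-50 classifier labels the masked crop as FK or non-FK. It is a minimal sketch, not the paper's MS-CNN: the DeepLabV3 stand-in for segmentation, the 0.5 mask threshold, and the input sizes are illustrative assumptions.

        # Minimal sketch (not the paper's MS-CNN): segmentation of the corneal
        # region followed by ResNeXt-50 classification of the masked crop as
        # FK or non-FK. The DeepLabV3 stand-in, threshold, and sizes are
        # illustrative assumptions.
        import torch
        import torch.nn as nn
        import torchvision.models as models
        from torchvision.models.segmentation import deeplabv3_resnet50

        device = "cuda" if torch.cuda.is_available() else "cpu"

        # Stage 1: binary corneal segmentation (one output channel = cornea mask).
        seg_model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                                       num_classes=1).to(device).eval()

        # Stage 2: ResNeXt-50 classifier with a two-way head (FK vs non-FK).
        clf = models.resnext50_32x4d(weights=None)
        clf.fc = nn.Linear(clf.fc.in_features, 2)
        clf = clf.to(device).eval()

        def predict_fk(image: torch.Tensor) -> torch.Tensor:
            """image: (3, H, W) slit-lamp photograph, already normalised."""
            x = image.unsqueeze(0).to(device)
            with torch.no_grad():
                mask = torch.sigmoid(seg_model(x)["out"]) > 0.5   # corneal ROI mask
                roi = x * mask                                    # keep the cornea only
                roi = nn.functional.interpolate(roi, size=(224, 224), mode="bilinear")
                return clf(roi).softmax(dim=1)                    # [P(non-FK), P(FK)]

        # Hypothetical usage on a dummy image tensor:
        # probs = predict_fk(torch.rand(3, 480, 640))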