    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome for moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Similar to regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
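
    The study's head-to-head design, fitting a regression model and an ML model on the same low-dimensional predictors and scoring both discrimination (AUC) and calibration (slope/intercept), can be sketched as below. This is a minimal illustration on synthetic data, not the IMPACT-II/CENTER-TBI pipeline, and the joint slope/intercept fit is a common simplification of the recalibration assessment.

    ```python
    # Minimal sketch: regression vs. an ML model on identical predictors,
    # scored on discrimination (AUC) and calibration (slope/intercept).
    # Data are synthetic; the study developed models on IMPACT-II and
    # validated externally on CENTER-TBI.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("random forest", RandomForestClassifier(random_state=0))]:
        model.fit(X_dev, y_dev)
        p = np.clip(model.predict_proba(X_val)[:, 1], 1e-6, 1 - 1e-6)
        # Calibration: regress the observed outcome on the log-odds of the
        # predicted risk; slope near 1 and intercept near 0 indicate good fit.
        cal = LogisticRegression().fit(np.log(p / (1 - p)).reshape(-1, 1), y_val)
        print(f"{name}: AUC={roc_auc_score(y_val, p):.3f}, "
              f"slope={cal.coef_[0][0]:.2f}, intercept={cal.intercept_[0]:.2f}")
    ```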

    GARMENT EMPLOYEE PRODUCTIVITY PREDICTION USING RANDOM FOREST

    Clothing is a basic human need, and beyond its functional role, the garment business is commercially significant: about 75 million people worldwide are directly involved in textiles, clothing, and footwear. A common problem in this industry is that the actual productivity of apparel employees sometimes fails to reach the targets set by management to meet production deadlines on time, resulting in large losses. This study aims to predict the productivity of garment employees with data mining techniques that apply machine learning, seeking the model with the minimum MAE. Experiments were conducted with random forest, linear regression, and neural network models, evaluated by correlation coefficient, MAE, and RMSE. On the garment worker productivity dataset, the random forest algorithm obtained the smallest MAE at 0.0787, compared with 0.1081 for linear regression and 0.1218 for the neural network.
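
    As a rough illustration of this comparison, the sketch below trains the same three model families and reports MAE and RMSE on a held-out split. The data, sizes, and hyperparameters are synthetic stand-ins, not the paper's actual dataset or settings.

    ```python
    # Sketch of the three-model comparison on a generic tabular regression
    # task. Data are synthetic; the real study used the garment worker
    # productivity dataset, and its reported MAEs are not reproduced here.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    X, y = make_regression(n_samples=1200, n_features=12, noise=0.1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    models = {
        "random forest": RandomForestRegressor(random_state=0),
        "linear regression": LinearRegression(),
        "neural network": MLPRegressor(max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f}, "
              f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.4f}")
    ```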

    Prognostic models in COVID-19 infection that predict severity: a systematic review.

    Current evidence on COVID-19 prognostic models is inconsistent and clinical applicability remains controversial. We performed a systematic review to summarize and critically appraise the available studies that have developed, assessed and/or validated prognostic models of COVID-19 predicting health outcomes. We searched six bibliographic databases to identify published articles that investigated univariable and multivariable prognostic models predicting adverse outcomes in adult COVID-19 patients, including intensive care unit (ICU) admission, intubation, high-flow nasal therapy (HFNT), extracorporeal membrane oxygenation (ECMO) and mortality. We identified and assessed 314 eligible articles from more than 40 countries: 152 of these studies addressed mortality, 66 progression to severe or critical illness, 35 mortality and ICU admission combined, and 17 ICU admission only, while the remaining 44 studies reported prediction models for mechanical ventilation (MV) or a combination of multiple outcomes. The sample size of included studies varied from 11 to 7,704,171 participants, with a mean age ranging from 18 to 93 years. There were 353 prognostic models investigated, with area under the curve (AUC) ranging from 0.44 to 0.99. A large proportion of studies (61.5%, 193 out of 314) performed internal or external validation or replication. In 312 (99.4%) studies, prognostic models were reported to be at high risk of bias due to uncertainties and challenges surrounding methodological rigor, sampling, handling of missing data, failure to deal with overfitting, and heterogeneous definitions of COVID-19 and severity outcomes. While several clinical prognostic models for COVID-19 have been described in the literature, they are limited in generalizability and/or applicability due to deficiencies in addressing fundamental statistical and methodological concerns. Future large, multi-centric and well-designed prospective prognostic studies are needed to clarify the remaining uncertainties.

    Synthetic Observational Health Data with GANs: from slow adoption to a boom in medical research and ultimately digital twins?

    After being collected for patient care, Observational Health Data (OHD) can further benefit patient well-being by sustaining the development of health informatics and medical research. Vast potential is unexploited because of the fiercely private nature of patient-related data and the regulations that protect it. Generative Adversarial Networks (GANs) have recently emerged as a groundbreaking way to learn generative models that produce realistic synthetic data. They have revolutionized practices in multiple domains such as self-driving cars, fraud detection, digital twin simulations in industrial sectors, and medical imaging. The digital twin concept could readily apply to modelling and quantifying disease progression. In addition, GANs possess many capabilities relevant to common problems in healthcare: lack of data, class imbalance, rare diseases, and preserving privacy. Unlocking open access to privacy-preserving OHD could be transformative for scientific research. In the midst of COVID-19, the healthcare system is facing unprecedented challenges, many of which are data-related for the reasons stated above. Considering these facts, publications concerning GANs applied to OHD seemed to be severely lacking. To uncover the reasons for this slow adoption, we broadly reviewed the published literature on the subject. Our findings show that the properties of OHD were initially challenging for existing GAN algorithms (unlike medical imaging, for which state-of-the-art models were directly transferable) and that the evaluation of synthetic data lacked clear metrics. We find more publications on the subject than expected, starting slowly in 2017 and since then at an increasing rate. The difficulties of OHD remain, and we discuss issues relating to evaluation, consistency, benchmarking, data modelling, and reproducibility.
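
    For readers unfamiliar with the setup this review surveys, the following is a minimal tabular GAN sketch in PyTorch: a generator maps noise to synthetic rows while a discriminator learns to separate real from synthetic. Dedicated OHD generators (e.g., medGAN, CTGAN) add autoencoders or conditional sampling to handle the mixed discrete/continuous, sparse nature of health records; all dimensions, hyperparameters, and the stand-in "patient table" here are illustrative.

    ```python
    # Minimal tabular GAN sketch (PyTorch): generator maps noise to synthetic
    # rows, discriminator learns to tell real from synthetic. Sizes and data
    # are illustrative stand-ins, not taken from the reviewed literature.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 8  # assumed dimensions

    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real_table = torch.randn(512, data_dim)  # stand-in for real patient records

    for step in range(200):
        real = real_table[torch.randint(0, 512, (64,))]
        fake = G(torch.randn(64, latent_dim))
        # Discriminator step: label real rows 1, synthetic rows 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        # Generator step: try to make the discriminator label fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    synthetic_rows = G(torch.randn(100, latent_dim)).detach()  # shareable stand-in data
    ```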

    Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects

    In the era of big data, artificial intelligence (AI) algorithms have the potential to revolutionize healthcare by improving patient outcomes and reducing healthcare costs. AI algorithms have frequently been used in health care for predictive modelling, image analysis, and drug discovery. Moreover, as recommender systems, these algorithms have shown promising impacts on personalized healthcare provision. A recommender system learns the behaviour of the user and predicts (recommends) their current preferences based on their previous preferences. Implementing AI as a recommender system improves this prediction accuracy and helps address the cold-start and data-sparsity problems. However, most of the methods and algorithms are tested in simulated settings, which cannot recapitulate the influencing factors of the real world. This review article systematically reviews prevailing methodologies in recommender systems and discusses AI algorithms as recommender systems specifically in the field of healthcare. It also discusses the most cutting-edge academic and practical contributions in the literature, identifies performance evaluation metrics, challenges in the implementation of AI as a recommender system, and the acceptance of AI-based recommender systems by clinicians. The findings of this article direct researchers and professionals to comprehend currently developed recommender systems and the future of medical devices integrated with real-time recommender systems for personalized healthcare.
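
    The core mechanism described above, learning user preferences from past interactions to predict current ones, is often implemented as matrix factorization over a sparse user-item matrix. The sketch below is a generic, self-contained example on random data; it is not drawn from the article, and a healthcare recommender would layer clinical context and safety constraints on top.

    ```python
    # Matrix-factorization sketch of collaborative filtering on random data:
    # factorize a sparse user-item matrix, then recommend unseen items with
    # the highest reconstructed scores. All sizes/hyperparameters are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 6, size=(20, 15)).astype(float)  # 0 means "unseen"
    mask = ratings > 0

    k, lr, reg = 4, 0.01, 0.1                  # latent dim, step size, L2 penalty
    U = rng.normal(scale=0.1, size=(20, k))    # user factors
    V = rng.normal(scale=0.1, size=(15, k))    # item factors

    for epoch in range(200):
        err = (ratings - U @ V.T) * mask       # error on observed entries only
        U += lr * (err @ V - reg * U)          # gradient step on squared error
        V += lr * (err.T @ U - reg * V)

    scores = U @ V.T
    unseen = np.where(~mask[0])[0]             # items user 0 has not interacted with
    top = unseen[np.argsort(-scores[0, unseen])[:3]]
    print("recommend items:", top)
    ```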

    Advancing prognostic precision in pulmonary embolism: A clinical and laboratory-based artificial intelligence approach for enhanced early mortality risk stratification

    Background: Acute pulmonary embolism (PE) is a critical medical emergency that necessitates prompt identification and intervention. Accurate prognostication of early mortality is vital for recognizing patients at elevated risk of unfavourable outcomes and administering suitable therapy. Machine learning (ML) algorithms hold promise for enhancing the precision of early mortality prediction in PE patients. Objective: To devise an ML algorithm for early mortality prediction in PE patients by employing clinical and laboratory variables. Methods: This study utilized diverse oversampling techniques to improve the performance of various machine learning models, including artificial neural networks (ANN), support vector machines (SVM), decision trees (DT), random forests (RF), and AdaBoost, for early mortality prediction. Appropriate oversampling methods were chosen for each model based on algorithm characteristics and dataset properties. Predictor variables included four lab tests, eight physiological time series indicators, and two general descriptors. Evaluation used metrics such as accuracy, F1 score, precision, recall, and area under the curve (AUC), together with receiver operating characteristic (ROC) curves, providing a comprehensive view of the models' predictive abilities. Results: The findings indicated that the RF model with random oversampling exhibited superior performance among the five models assessed, achieving elevated accuracy and precision alongside high recall for predicting the death class. The oversampling approaches effectively equalized the sample distribution among the classes and enhanced the models' performance. Conclusions: The suggested ML technique can efficiently prognosticate mortality in patients afflicted with acute PE. The RF model with random oversampling can aid healthcare professionals in making well-informed decisions regarding the treatment of patients with acute PE. The study underscores the significance of oversampling methods in managing imbalanced data and emphasizes the potential of ML algorithms in refining early mortality prediction for PE patients.
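
    A minimal sketch of the best-performing configuration reported above: random oversampling of the minority (death) class followed by a random forest. The synthetic features stand in for the study's fourteen clinical and laboratory variables; RandomOverSampler from the imbalanced-learn package is one common implementation, though the paper's exact tooling is not stated here.

    ```python
    # Sketch of the reported best configuration: random oversampling of the
    # minority (death) class, then a random forest. Synthetic features stand
    # in for the study's 14 clinical/laboratory predictors.
    from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=14,
                               weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Oversample only the training fold so the test set keeps its true imbalance.
    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

    rf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
    print(classification_report(y_te, rf.predict(X_te)))
    print("AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
    ```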

    Artificial intelligence in cancer imaging: Clinical challenges and applications

    Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.