2,136 research outputs found

    A CNN-LSTM for predicting mortality in the ICU

    An accurate mortality prediction is crucial in healthcare, as it provides an empirical risk estimate for prognostic decision making, patient stratification and hospital benchmarking. The prediction methods currently used in practice are severity-of-disease scoring systems, which usually involve a fixed set of admission attributes and summarized physiological data. These systems are prone to bias and require substantial manual effort, which calls for an updated approach that addresses these shortcomings. Clinical observation notes record highly subjective data on the patient that can potentially support higher discrimination, and deep learning models can automatically extract and select features without human input. This thesis investigates whether combining a deep learning model with clinical notes can predict mortality with higher accuracy. A custom architecture, called CNN-LSTM, is conceptualized for mapping the multiple notes compiled during a hospital stay to a mortality outcome. It employs both convolutional and recurrent layers, with the former capturing semantic relationships in individual notes independently and the latter capturing temporal relationships between successive notes in a hospital stay. This approach is compared to three severity-of-disease scoring systems in a case study on the MIMIC-III dataset. Experiments assess the CNN-LSTM for predicting mortality using only the notes from the first 24, 12 and 48 hours of a patient stay. The model is trained using K-fold cross-validation with k=5, and the mortality probability calculated by the three severity scores on the held-out set is used as the baseline. The CNN-LSTM outperforms the baseline in all experiments, which serves as a proof of concept that notes and deep learning can improve outcome prediction.
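    The architecture described above can be sketched as a hierarchical model: a convolutional encoder applied to each note, followed by an LSTM over the sequence of note vectors. The Keras sketch below illustrates that idea only; the layer widths, sequence lengths and vocabulary size are illustrative assumptions, not the thesis's actual hyperparameters.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    MAX_NOTES = 30      # notes per hospital stay (assumed)
    MAX_TOKENS = 500    # tokens per note (assumed)
    VOCAB_SIZE = 20000  # vocabulary size (assumed)

    # Per-note encoder: embedding + 1D convolution + max pooling over tokens,
    # capturing semantic relationships within a single note.
    note_input = layers.Input(shape=(MAX_TOKENS,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, 128)(note_input)
    x = layers.Conv1D(filters=64, kernel_size=5, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    note_encoder = models.Model(note_input, x, name="note_encoder")

    # Stay-level model: encode each note, then run an LSTM over the
    # chronological sequence of note vectors to capture temporal relationships,
    # ending in a single mortality probability.
    stay_input = layers.Input(shape=(MAX_NOTES, MAX_TOKENS), dtype="int32")
    h = layers.TimeDistributed(note_encoder)(stay_input)
    h = layers.LSTM(64)(h)
    mortality = layers.Dense(1, activation="sigmoid")(h)

    model = models.Model(stay_input, mortality)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])

    In the experiments described above, such a model would be fitted within each of the five cross-validation folds and its predictions compared against the severity-score baselines on the held-out fold.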

    Machine Learning Techniques for Lung Cancer Risk Prediction using Text Dataset

    The early symptoms of lung cancer, a serious threat to human health, are comparable to those of the common cold and bronchitis. Clinical professionals can use machine learning techniques to tailor screening and prevention strategies to the unique needs of each patient, potentially saving lives and enhancing patient care. To properly predict the development of lung cancer, researchers must identify related clinical and demographic variables from patient records and then pre-process and prepare the dataset for training a machine learning model. The goal of the study is to develop a precise and understandable machine learning (ML) model for early lung cancer prediction utilizing demographic and clinical variables, and to contribute to the growing field of ML applications in medical research that may improve healthcare outcomes. To create the most effective and precise predictive model, machine learning techniques such as Logistic Regression, Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbor (KNN), and Naive Bayes were utilized in this article.
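    A comparison across the six classifiers named above is typically run with a common cross-validation protocol. The sketch below shows one way to set that up with scikit-learn; the synthetic feature matrix stands in for the study's pre-processed clinical and demographic variables and is purely illustrative.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data standing in for the prepared patient records.
    X, y = make_classification(n_samples=500, n_features=15, random_state=0)

    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "Random Forest": RandomForestClassifier(random_state=0),
        "Support Vector Machine": SVC(random_state=0),
        "K-Nearest Neighbor": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
    }

    # Evaluate each model with the same 5-fold cross-validation and metric.
    for name, clf in models.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy = {scores.mean():.3f}")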

    Drug-drug interaction extraction-based system: a natural language processing approach

    The number of poly-medicated patients, especially those over 65, has increased. Multiple drug use and inappropriate prescribing increase drug-drug interactions, adverse drug reactions, morbidity, and mortality. Recommendation systems have been proposed to address this issue, but health professionals have not adopted them because of their poor alert quality and incomplete databases. Recent research shows a growing interest in using text mining via NLP to extract drug-drug interactions from unstructured data sources to support clinical prescribing decisions. This work combines NLP-based text mining with the training of machine learning classifiers for drug relation extraction. In this context, the proposed solution develops an extraction system for drug-drug interactions from unstructured data sources. The system produces structured information that can be inserted into a database containing information acquired from three different data sources. The architecture outlined for the drug-drug interaction extraction system is capable of receiving unstructured text, identifying drug entities sentence by sentence, and determining whether or not there are interactions between them. Funding: Fundação para a Ciência e a Tecnologia.
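    The pipeline outlined above (sentence splitting, drug entity identification, and interaction classification per drug pair) can be sketched roughly as follows. The drug lexicon, the cue-word rule standing in for the trained relation classifier, and the example sentence are all illustrative placeholders, not the system's actual components.

    import itertools
    import re

    DRUG_LEXICON = {"warfarin", "aspirin", "ibuprofen"}            # assumed example entries
    INTERACTION_CUES = {"increases", "potentiates", "inhibits"}    # placeholder cue words

    def split_sentences(text):
        """Naive sentence splitter; a real system would use an NLP library."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def find_drugs(sentence):
        """Identify drug entities by lexicon lookup (placeholder for trained NER)."""
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sorted({t for t in tokens if t in DRUG_LEXICON})

    def interacts(sentence, drug_a, drug_b):
        """Placeholder relation classifier: flag pairs co-occurring with a cue word."""
        return any(cue in sentence.lower() for cue in INTERACTION_CUES)

    def extract_interactions(text):
        """Return structured records ready to be inserted into a database."""
        records = []
        for sentence in split_sentences(text):
            drugs = find_drugs(sentence)
            for a, b in itertools.combinations(drugs, 2):
                if interacts(sentence, a, b):
                    records.append({"drug_a": a, "drug_b": b, "evidence": sentence})
        return records

    print(extract_interactions("Aspirin potentiates the effect of warfarin."))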

    Toxicological profile for thorium

    cdc:6208. CAS#: 7440-29-1. Version history: September 2019, update of data in Chapters 2, 3, and 7; October 2014, addendum to the toxicological profile; October 1990, final toxicological profile released. Reference: Agency for Toxic Substances and Disease Registry (ATSDR). 2019. Toxicological Profile for Thorium. Atlanta, GA: U.S. Department of Health and Human Services, Public Health Service.

    Machine Learning for Benchmarking Critical Care Outcomes

    Objectives: Enhancing critical care efficacy involves evaluating and improving system functioning. Benchmarking, a retrospective comparison of results against standards, aids risk-adjusted assessment and helps healthcare providers identify areas for improvement based on observed and predicted outcomes. The last two decades have seen the development of several models using machine learning (ML) for clinical outcome prediction. ML is a field of artificial intelligence focused on creating algorithms that enable computers to learn from and make predictions or decisions based on data. This narrative review centers on key discoveries and outcomes to aid clinicians and researchers in selecting the optimal methodology for critical care benchmarking using ML.
    Methods: We used PubMed to search the literature from 2003 to 2023 regarding predictive models utilizing ML for mortality (592 articles), length of stay (143 articles), or mechanical ventilation (195 articles). We supplemented the PubMed search with Google Scholar, making sure relevant articles were included. Given the narrative style, papers in the cohort were manually curated for a comprehensive reader perspective.
    Results: Our report presents comparative results for benchmarked outcomes and emphasizes advancements in feature types, preprocessing, model selection, and validation. It showcases instances where ML effectively tackled critical care outcome-prediction challenges, including nonlinear relationships, class imbalances, missing data, and documentation variability, leading to enhanced results.
    Conclusions: Although ML has provided novel tools to improve the benchmarking of critical care outcomes, areas that require further research include class imbalance, fairness, improved calibration, generalizability, and long-term validation of published models.
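    Benchmarking against observed and predicted outcomes, as described in this review, is commonly summarized by an observed-to-expected (O/E) ratio together with the risk model's discrimination. The sketch below illustrates that calculation; the outcome labels and predicted probabilities are invented placeholders for a unit's data and a fitted risk model.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Placeholder data: actual outcomes and model-predicted risks for one unit.
    y_observed = np.array([0, 1, 0, 0, 1, 0, 1, 0])
    p_predicted = np.array([0.1, 0.7, 0.2, 0.1, 0.6, 0.3, 0.4, 0.2])

    # Risk-adjusted benchmark: observed deaths vs deaths expected by the model.
    oe_ratio = y_observed.sum() / p_predicted.sum()

    # Discrimination of the underlying risk model.
    auc = roc_auc_score(y_observed, p_predicted)

    print(f"O/E ratio: {oe_ratio:.2f} (values above 1 suggest worse-than-expected outcomes)")
    print(f"AUC: {auc:.2f}")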

    HOLMeS: eHealth in the Big Data and Deep Learning Era

    Data collection and analysis are becoming more and more important in a variety of application domains as novel technologies advance. At the same time, we are experiencing a growing need for human–machine interaction with expert systems, pushing research toward new knowledge representation models and interaction paradigms. In particular, in the last few years, eHealth, which usually denotes all healthcare practices supported by electronic processing and remote communications, has called for the availability of a smart environment and large computational resources able to offer increasingly advanced analytics and new human–computer interaction paradigms. The aim of this paper is to introduce the HOLMeS (health online medical suggestions) system: a big data platform aimed at supporting several eHealth applications. As its main novelty, HOLMeS exploits a machine learning algorithm, deployed on a cluster-computing environment, to provide medical suggestions via both chat-bot and web-app modules, especially for prevention purposes. The chat-bot, trained with a deep learning approach, helps overcome the limitations of a cold interaction between users and software by exhibiting more human-like behavior. The obtained results demonstrate the effectiveness of the machine learning algorithms, showing an area under the ROC (receiver operating characteristic) curve (AUC) of 74.65% when only first-level features are used to assess the occurrence of different chronic diseases within specific prevention pathways. When disease-specific features are added, HOLMeS reaches an AUC of 86.78%, achieving greater effectiveness in supporting clinical decisions.
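    The reported gain from adding disease-specific features to first-level features can be illustrated with a simple experiment: evaluate the same classifier on both feature sets and compare cross-validated AUC. The data, the feature split, and the random-forest choice below are illustrative assumptions, not HOLMeS internals.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the prevention-pathway dataset.
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=12,
                               random_state=0)
    first_level = X[:, :8]   # assumed split: generic, first-level features only
    full_set = X             # first-level plus disease-specific features

    for label, features in [("first-level only", first_level),
                            ("with disease-specific", full_set)]:
        auc = cross_val_score(RandomForestClassifier(random_state=0),
                              features, y, cv=5, scoring="roc_auc").mean()
        print(f"{label}: mean AUC = {auc:.3f}")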

    Development of the minimally invasive paediatric & perinatal autopsy

    Introduction: Perinatal autopsy contributes useful clinical information to patient management in approximately 40% of cases but remains poorly accepted due to parental concerns regarding disfigurement. Post-mortem imaging is an alternative, but 1.5 T MRI lacks resolution below 18 gestational weeks. Additionally, the Royal College of Pathologists autopsy guidelines recommend extensive tissue sampling as part of the investigation of fetal loss, which imaging alone cannot provide. Possible mitigating strategies include micro-CT for phenotyping small fetuses and laparoscopic techniques to obtain tissue samples. Interrogation of the evidence base for tissue sampling in different clinical scenarios is necessary to develop evidence-based practice and recommendations.
    Methods: Minimally Invasive Autopsy with Laparoscopy (MinImAL) was performed in 103 cases. Micro-CT was optimised in extracted organs and its diagnostic accuracy evaluated in 20 fetuses. The Great Ormond Street Autopsy Database was retrospectively interrogated to investigate the yield of internal examination and visceral histology for the cause of death in 5,311 cases.
    Results: MinImAL examination is reliable (97.8% successfully completed, 91/93) with good tissue sampling success rates (100% in lung, kidney, heart). Micro-CT offers an accurate method of scanning small fetuses (97.5% agreement with autopsy, 95% CI 96.6-98.4) with fewer non-diagnostic indices than standard autopsy below 14 weeks gestation (22/440 vs 48/348 respectively; p<0.001). Histology of macroscopically normal viscera is valuable in the investigation of infant and childhood deaths; however, it provides almost no useful information relevant to the cause of death or main diagnosis (<1%) in fetal cases.
    Conclusions: MinImAL examination offers a reliable method of internal examination and tissue sampling, which may be acceptable when standard autopsy is declined. Micro-CT provides an accurate, non-invasive method for phenotyping early gestation fetal anatomy. Histological sampling of macroscopically normal visceral organs is valuable when investigating infant or child deaths but of limited value in fetal loss and hence should not be routinely performed.
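    The non-diagnostic-index comparison quoted above (22/440 for micro-CT vs 48/348 for standard autopsy) can be reproduced with a standard test of proportions. The chi-squared test below is an assumed choice for illustration; the thesis does not state which test produced its p-value.

    from scipy.stats import chi2_contingency

    # Rows: micro-CT, standard autopsy; columns: non-diagnostic, diagnostic indices.
    table = [[22, 440 - 22],
             [48, 348 - 48]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.1f}, p = {p:.1e}")  # consistent with p < 0.001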

    When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning

    Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and in the long run for the quality of medical diagnostics itself? This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will come to require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician's duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone, because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, future decisions may not be easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often less effective when deployed in clinical practice than in preliminary evaluation, the lack of transparency introduced by ML algorithms could lead to a decrease in the quality of care. This Article describes the salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful physician participation in the diagnostic loop.
    • …