
    Discovering Hidden Signs and Symptoms of Heart Failure in the Electronic Health Record Using the Omaha System

    Purpose/Background/Significance: For the past 30 years, heart failure has been among the top three readmission diagnoses for patients discharged to community care. This is costly to the healthcare system and negatively impacts patients' quality of life. The purpose of this study is to evaluate a community care database to determine whether previously under-considered latent variables exist that could provide early detection of heart failure signs and symptoms. Theoretical/Conceptual Framework: The theoretical and conceptual frameworks surrounding this work are the Omaha System and Donabedian's structure, process, and outcomes theory for healthcare quality improvement, supported by Neuman's Systems Model. The Omaha System was constructed on the combined basis of these theoretical underpinnings from three components: the Problem Classification Scheme, the Intervention Scheme, and the Problem Rating Scale for Outcomes. Methods: This was a retrospective, descriptive, observational, comparative study using secondary data. Major HF-associated signs and symptoms related to problems of circulation and respiration were queried. Latent class analysis (LCA) was used to identify whether other significant groupings of signs and symptoms were associated with heart failure signs and symptoms. Findings: Evaluation of the sample for signs and symptoms of HF related to the Omaha System problems of Respiration and Circulation revealed 4,215 individuals. LCA revealed four significant groupings of signs and symptoms related to the problems of Mental health, Cognition, Heart failure, and General/Other. Further analysis determined that the HF group had the most interventions and visits yet the lowest change in Knowledge, Behavior, and Status scores, indicating that HF patients required intensive outpatient care to maintain their status in the community care environment without significant final status improvement. Analysis revealed that patients in the Cognition group benefited the most from increased visits and interventions. Conclusion: Patients exhibiting signs and symptoms of heart failure may also experience signs and symptoms of Mental health and Cognition changes, which may either contribute to heart failure exacerbation or result from the heart failure disease process. Further research is needed to examine possible mechanisms that may help defer HF exacerbations.
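    The latent class analysis described above can be illustrated with a minimal sketch: an expectation-maximization loop that fits class priors and per-symptom item probabilities to binary sign/symptom indicators. The symptom profiles, class count, and data below are hypothetical, invented for illustration rather than taken from the study's Omaha System data.

```python
import random

random.seed(0)

def simulate(n, probs):
    """Draw n binary symptom vectors from independent Bernoulli probabilities."""
    return [[1 if random.random() < p else 0 for p in probs] for _ in range(n)]

# Hypothetical two-class population: a "circulation" profile (frequent edema,
# dyspnea) and a "cognition" profile (frequent confusion, memory complaints).
data = simulate(60, [0.9, 0.85, 0.1, 0.1]) + simulate(60, [0.1, 0.1, 0.9, 0.85])

def lca_em(data, k=2, iters=200, seed=1):
    """Fit a k-class latent class model to binary indicators via EM."""
    rng = random.Random(seed)
    n, m = len(data), len(data[0])
    pi = [1.0 / k] * k                                  # class priors
    theta = [[rng.uniform(0.25, 0.75) for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each patient
        resp = []
        for x in data:
            lik = []
            for c in range(k):
                p = pi[c]
                for v, t in zip(x, theta[c]):
                    p *= t if v else (1 - t)
                lik.append(p)
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: re-estimate priors and item probabilities
        for c in range(k):
            w = sum(r[c] for r in resp)
            pi[c] = w / n
            for j in range(m):
                num = sum(r[c] * x[j] for r, x in zip(resp, data))
                theta[c][j] = min(max(num / w, 1e-6), 1 - 1e-6)
    labels = [max(range(k), key=lambda c: r[c]) for r in resp]
    return labels, theta

labels, theta = lca_em(data)
```

    In practice, LCA studies fit models with an increasing number of classes and compare fit statistics (e.g., BIC) to choose among them, which is how a four-class solution like the one reported would be selected.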

    Diagnostic accuracy of clinical outcome prediction using nursing data in intensive care patients: A systematic review

    Background: Nursing data consist of observations of patients' conditions and information on nurses' clinical judgment based on critically ill patients' behavior and physiological signs. Nursing data in electronic health records have recently been emphasized as important predictors of patient deterioration but have not been systematically reviewed. Objective: We conducted a systematic review of prediction models using nursing data for clinical outcomes, such as prolonged hospital stay, readmission, and mortality in intensive care patients, compared to models using physiological data only. In addition, the types of nursing data used in prediction model development were investigated. Design: A systematic review. Methods: PubMed, CINAHL, Cochrane CENTRAL, EMBASE, IEEE Xplore Digital Library, Web of Science, and Scopus were searched. Clinical outcome prediction models using nursing data for intensive care patients were included. Clinical outcomes were prolonged hospital stay, readmission, and mortality. Data extracted from the selected studies included study design, data source, outcome definition, sample size, predictors, reference test, model development, model performance, and evaluation. Risk of bias and applicability were assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST) checklist. Descriptive summaries were produced based on paired forest plots and summary receiver operating characteristic curves. Results: Sixteen studies were included in the systematic review. The types of predictors used in the prediction models were categorized as physiological data, nursing data, and clinical notes. The nursing data consisted of nursing notes, assessments, documentation frequency, and flowsheet comments. In studies using physiological data as the reference test, models using combined data or nursing data showed higher predictive performance than models using physiological data alone. The overall risk-of-bias assessment indicated that most of the included studies had a high risk of bias. Conclusions: This study was conducted to identify and review the diagnostic accuracy of clinical outcome prediction using nursing data in intensive care patients. Most of the included studies developed models using nursing notes; others used nursing assessments, documentation frequency, and flowsheet comments. Although the findings need careful interpretation due to the high risk of bias, the area-under-the-curve scores of nursing data and combined data were higher than those of physiological data alone. A prediction modeling strategy is needed that utilizes nursing data, clinical notes, and physiological data as predictors, considering the clinical context, rather than physiological data alone. Registration: The protocol for this study is registered with PROSPERO (registration number: CRD42021273319).
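    The review's headline comparison rests on area-under-the-ROC-curve scores. As a reminder of what that metric measures, here is a minimal sketch computing AUROC via its rank-statistic (Mann-Whitney) interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The labels and risk scores below are invented for illustration, not drawn from the reviewed studies.

```python
def auroc(scores, labels):
    """AUROC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs the score ranks correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical deterioration labels and risk scores from two models
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
physio_only = [0.8, 0.4, 0.6, 0.5, 0.3, 0.7, 0.2, 0.45]
combined    = [0.9, 0.6, 0.7, 0.4, 0.3, 0.5, 0.2, 0.65]

print(auroc(physio_only, labels))  # 0.6875
print(auroc(combined, labels))     # 1.0
```

    An AUROC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why the reviewed comparisons of nursing-data and combined-data models against physiological-data models are expressed on this scale.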

    Potential uses of AI for perioperative nursing handoffs: A qualitative study

    OBJECTIVE: Situational awareness and anticipatory guidance for nurses receiving a patient after surgery are key to patient safety. Little work has defined the role of artificial intelligence (AI) in supporting these functions during nursing handoff communication or patient assessment. We used interviews to better understand how AI could work in this context. MATERIALS AND METHODS: Eleven nurses participated in semistructured interviews. Mixed inductive-deductive thematic analysis was used to extract major themes and subthemes around roles for AI in supporting postoperative nursing. RESULTS: Five themes were generated from the interviews: (1) nurse understanding of patient condition guides care decisions; (2) handoffs are important to nurse situational awareness, but multiple barriers reduce their effectiveness; (3) AI may address barriers to handoff effectiveness; (4) AI may augment nurse care decision making and team communication outside of handoff; and (5) user experience in the electronic health record and information overload are likely barriers to using AI. Important subthemes included that AI-identified problems would be discussed at handoff and in team communications, that AI-estimated elevated risks would trigger patient re-evaluation, and that AI-identified important data may be a valuable addition to nursing assessment. DISCUSSION AND CONCLUSION: Most research on postoperative handoff communication relies on structured checklists. Our results suggest that properly designed AI tools might facilitate postoperative handoff communication for nurses by identifying specific elevated risks faced by a patient, triggering discussion on those topics. Limitations include the single-center setting, many participants' lack of applied experience with AI, and a limited participation rate.

    Extractive Summarization: Experimental work on nursing notes in Finnish

    Natural language processing (NLP) is a subfield of artificial intelligence and linguistics concerned with how computers interact with human language. With increasing computational power and advances in technology, researchers have successfully proposed various NLP tasks that have already been implemented in real-world applications today. Automated text summarization is one of the many tasks that has not yet fully matured, particularly in the health sector. Success in this task would enable healthcare professionals to grasp a patient's history in minimal time, resulting in the faster decisions required for better care. Automatic text summarization is a process that helps shorten a large text without sacrificing important information. This can be achieved by paraphrasing the content, known as the abstractive method, or by concatenating relevant extracted sentences, namely the extractive method. In general, this process requires the conversion of text into numerical form, after which a method is executed to identify and extract relevant text. This thesis explores NLP techniques used in extractive text summarization, particularly in the health domain. The work includes a comparison of basic summarizing models implemented on a corpus of patient notes written by nurses in Finnish. Concepts and research studies required to understand the implementation have been documented, along with a description of the code. A Python-based project is structured to build a corpus and execute multiple summarizing models. For this thesis, we observe the performance of two textual embeddings: Term Frequency-Inverse Document Frequency (TF-IDF), which is based on a simple statistical measure, and Word2Vec, which is based on neural networks. For both models, LexRank, an unsupervised stochastic graph-based sentence scoring algorithm, is used for sentence extraction, and a random selection method is used as a baseline for evaluation. To evaluate and compare the performance of the models, summaries of 15 patient care episodes from each model were provided to two human evaluators for manual evaluation. According to the results on this small sample, both evaluators agree in preferring summaries produced by Word2Vec LexRank over those generated by TF-IDF LexRank. Both models were also observed, by both evaluators, to perform better than the random selection baseline.
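    The extraction pipeline described can be sketched end to end: sentences are embedded as TF-IDF vectors, a cosine-similarity graph is built over them, and LexRank (PageRank-style power iteration over the row-normalised similarity matrix) scores each sentence for inclusion. This is a stripped-down illustration of the general technique, not the thesis's actual code, and the sample note is invented English rather than Finnish.

```python
import math
import re
from collections import Counter

def split_sentences(text):
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def tfidf_vectors(sents):
    """One sparse TF-IDF vector (dict) per sentence; crude whitespace
    tokenisation, punctuation left attached."""
    docs = [Counter(s.lower().split()) for s in sents]
    df = Counter(w for d in docs for w in d)
    idf = {w: math.log(len(docs) / df[w]) + 1 for w in df}   # smoothed IDF
    return [{w: c * idf[w] for w, c in d.items()} for d in docs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def lexrank(sents, damping=0.85, iters=50):
    """Score sentences by power iteration over the similarity graph."""
    vecs = tfidf_vectors(sents)
    n = len(vecs)
    sim = [[0.0 if i == j else cosine(vecs[i], vecs[j]) for j in range(n)]
           for i in range(n)]
    rows = [sum(r) or 1.0 for r in sim]          # guard empty rows
    p = [[sim[i][j] / rows[i] for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n +
                  damping * sum(scores[j] * p[j][i] for j in range(n))
                  for i in range(n)]
    return scores

def summarize(text, k=2):
    sents = split_sentences(text)
    scores = lexrank(sents)
    top = sorted(sorted(range(len(sents)), key=lambda i: -scores[i])[:k])
    return ' '.join(sents[i] for i in top)

note = ("Patient reports shortness of breath. "
        "Shortness of breath worsened overnight. "
        "Oxygen was given for shortness of breath. "
        "Family visited in the afternoon.")
```

    On this toy note, `summarize(note, k=2)` keeps two of the three mutually similar sentences about shortness of breath and drops the unrelated one; swapping `tfidf_vectors` for averaged Word2Vec embeddings changes only the embedding step, which is the comparison the thesis performs.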

    Challenges and opportunities beyond structured data in analysis of electronic health records

    Electronic health records (EHRs) contain a wealth of valuable information about individual patients and the whole population. Beyond structured data, unstructured data in EHRs can provide additional valuable information, but the analytics processes are complex, time-consuming, and often require excessive manual effort. Among unstructured data, clinical text and images are the two most common and important sources of information. Advanced statistical algorithms in natural language processing, machine learning, deep learning, and radiomics have increasingly been used to analyze clinical text and images. Although many challenges remain that can hinder the use of unstructured data, there are clear opportunities for well-designed diagnosis and decision support tools that efficiently incorporate both structured and unstructured data to extract useful information and provide better outcomes. However, access to clinical data is still very restricted due to data sensitivity and ethical issues. Data quality is another important challenge: methods for improving data completeness, conformity, and plausibility are needed. Further, generalizing and explaining the results of machine learning models remain open challenges for healthcare. A possible solution for improving the quality and accessibility of unstructured data is developing machine learning methods that can generate clinically relevant synthetic data, and accelerating further research on privacy-preserving techniques such as de-identification and pseudonymization of clinical text.
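    The de-identification direction mentioned above can be illustrated with a minimal rule-based sketch that masks a few identifier patterns in free text. The patterns and surrogate tags are illustrative only; production systems (e.g., those targeting the HIPAA Safe Harbor identifier list) cover far more categories and usually combine rules with machine learning.

```python
import re

# Illustrative patterns only; real systems handle many more identifier types.
PATTERNS = [
    (re.compile(r'\b\d{3}-\d{2}-\d{4}\b'), '[SSN]'),
    (re.compile(r'\b\d{1,2}/\d{1,2}/\d{2,4}\b'), '[DATE]'),
    (re.compile(r'\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b'), '[PHONE]'),
    (re.compile(r'[\w.+-]+@[\w.-]+\.\w{2,}'), '[EMAIL]'),
    (re.compile(r'\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b'), '[NAME]'),
]

def deidentify(text):
    """Replace matches of each pattern with its surrogate tag."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Dr. Smith saw the patient on 3/14/2021; callback 555-123-4567, jane.doe@example.org."
```

    Replacing identifiers with consistent surrogate tags (pseudonymization) rather than deleting them keeps the text usable for downstream NLP while reducing re-identification risk.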

    A novel and reliable framework of patient deterioration prediction in Intensive Care Unit based on long short-term memory-recurrent neural network

    Clinical investigation has shown that early recognition and intervention are crucial for preventing clinical deterioration in patients in intensive care units (ICUs). Patient deterioration is predictable and can be prevented if early risk factors are recognized and addressed in the clinical setting. Timely detection of deterioration in ICU patients may also lead to better health management. In this paper, a new model is proposed based on a Long Short-Term Memory-Recurrent Neural Network (LSTM-RNN) to predict deterioration of ICU patients. An optimization model based on a modified genetic algorithm (GA) is also proposed to optimize the observation window, prediction window, and number of neurons in the hidden layers, in order to increase accuracy and AUROC and minimize test loss. The experimental results demonstrate that the proposed prediction model achieved significantly better classification performance than many other studies that used deep learning models. The proposed model was evaluated on two tasks: mortality and sudden transfer of patients to the ICU. Our results show that the proposed model could predict deterioration one hour before onset and outperforms other models. The proposed predictive model is implemented using the state-of-the-art graphical processing unit (GPU) virtual machines provided by Google Colaboratory. Moreover, the study uses a novel minute-by-minute time-series approach, which enables the proposed model to obtain highly accurate results (an AUROC of 0.933 and an accuracy of 0.921). The study utilizes the individual and combined effectiveness of different types of variables (vital signs, laboratory measurements, GCS, and demographic data). Data were extracted from the MIMIC-III database. The ad hoc frameworks proposed by previous studies can be improved by the novel and reliable prediction framework proposed in this research, resulting in more accurate predictions. The proposed predictive model reduces the required observation window for the prediction task by 83% while improving performance. In fact, the significantly smaller observation window achieved higher results, outperforming all previous works that used larger observation windows (i.e., 48 hours and 24 hours). Moreover, this research demonstrates the ability of the proposed predictive model to achieve accurate results (>80%) on raw data in experimental work, showing that rule-based pre-processing of clinical features is unnecessary for deep learning predictive models.
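    The GA-based tuning loop described above can be sketched in a few lines. Since training an LSTM per candidate is out of scope here, the `fitness` function below is a made-up surrogate for validation AUROC; the candidate value pools and GA settings are likewise illustrative, not the paper's.

```python
import random

random.seed(42)

# Search space mirroring the paper's tuned quantities (values hypothetical)
OBS_WINDOWS  = [1, 2, 4, 8, 12, 24, 48]   # hours of history observed
PRED_WINDOWS = [1, 2, 4, 6]               # hours ahead to predict
NEURONS      = [16, 32, 64, 128, 256]     # hidden-layer sizes

def fitness(obs, pred, units):
    """Surrogate for validation AUROC: in the real setting this would train
    and evaluate the LSTM-RNN. Here it is an invented smooth function that
    rewards short observation windows and moderate capacity."""
    return 1.0 / (1 + 0.01 * obs) + 0.2 / (1 + abs(units - 64) / 64) - 0.05 * pred / 6

def random_individual():
    return (random.choice(OBS_WINDOWS), random.choice(PRED_WINDOWS),
            random.choice(NEURONS))

def mutate(ind):
    pools = (OBS_WINDOWS, PRED_WINDOWS, NEURONS)
    i = random.randrange(3)
    ind = list(ind)
    ind[i] = random.choice(pools[i])       # resample one gene
    return tuple(ind)

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def genetic_search(pop_size=20, generations=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -fitness(*ind))
        elite = pop[: pop_size // 4]       # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            child = crossover(random.choice(elite), random.choice(elite))
            if random.random() < 0.3:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: fitness(*ind))

best = genetic_search()
```

    With a real fitness function, each evaluation is a full model training run, which is why GA-style searches keep populations and generation counts small.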

    Text mining patient experiences from online health communities

    Social media has had an impact on how patients experience healthcare. Through online channels, patients are sharing information and their experiences with potentially large audiences all over the world. While sharing in this way may offer immediate benefits to themselves and their readership (e.g., other patients), these unprompted, self-authored accounts of illness are also an important resource for healthcare researchers, offering unprecedented insight into patients' experience of illness. Qualitative analysis has been undertaken to explore this source of data and to utilise the information expressed through these media. However, the manual nature of such analysis means that its scope is limited to a small proportion of the hundreds of thousands of authors who are creating content. In our research, we aim to explore how text mining can support traditional qualitative analysis of this data. Text mining uses a number of processes to extract useful facts from text and analyse patterns within it; the ultimate aim is to generate new knowledge by analysing textual data en masse. We developed QuTiP, a text mining framework that can enable large-scale qualitative analyses of patient narratives shared over social media. In this thesis, we describe QuTiP and our application of the framework to analyse the accounts of patients living with chronic lung disease. As well as a qualitative analysis, we describe our approaches to automated information extraction, term recognition, and text classification to automatically extract relevant information from blog post data. Within the QuTiP framework, these individual automated approaches can be brought together to support further analyses of large social media datasets.
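    One of the automated components mentioned, text classification, can be illustrated with a minimal multinomial Naive Bayes classifier over bag-of-words counts with Laplace smoothing. The snippets and label scheme below are invented for illustration; they are not QuTiP's actual categories or training data.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        def log_prob(c):
            # log prior plus smoothed log likelihood of each token
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            total = sum(self.word_counts[c].values())
            for w in tokenize(doc):
                lp += math.log((self.word_counts[c][w] + 1) /
                               (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_prob)

# Hypothetical labelled snippets of the kind such pipelines might classify
docs = [
    "struggling to breathe when climbing stairs",
    "my inhaler helps with the breathlessness",
    "feeling low and anxious about the future",
    "the support group lifted my mood today",
]
labels = ["symptom", "symptom", "emotion", "emotion"]
clf = NaiveBayes().fit(docs, labels)
```

    Even this tiny model routes unseen posts mentioning exertion and breathing toward the "symptom" label and mood-related posts toward "emotion", which is the kind of triage that lets qualitative analysis scale beyond hand-coded samples.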

    Learning Clinical Data Representations for Machine Learning


    Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities

    Recent advancements in AI applications to healthcare have shown incredible promise in surpassing human performance in diagnosis and disease prognosis. With the increasing complexity of AI models, however, concerns have grown regarding their opacity and potential biases, and about the need for interpretability. To ensure trust and reliability in AI systems, especially in clinical risk prediction models, explainability becomes crucial. Explainability usually refers to an AI system's ability to provide a robust interpretation of its decision-making logic, or of the decisions themselves, to human stakeholders. In clinical risk prediction, other aspects of explainability, such as fairness, bias, trust, and transparency, also represent important concepts beyond interpretability alone. In this review, we address the relationships between these concepts, as they are often used together or interchangeably. The review also discusses recent progress in developing explainable models for clinical risk prediction, highlighting the importance of quantitative and clinical evaluation and validation across the multiple modalities common in clinical practice. It emphasizes the need for external validation and for combining diverse interpretability methods to enhance trust and fairness. Adopting rigorous testing, such as using synthetic datasets with known generative factors, can further improve the reliability of explainability methods. Open access and code-sharing resources are essential for transparency and reproducibility, enabling the growth and trustworthiness of explainability research. While challenges exist, an end-to-end approach to explainability in clinical risk prediction, incorporating stakeholders from clinicians to developers, is essential for success.
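    The "synthetic datasets with known generative factors" idea can be made concrete: generate data where only one feature drives the outcome, then check that an explanation method (here, simple permutation importance) attributes the model's performance to that feature. The data-generating rule and the stand-in model below are invented for illustration.

```python
import random

random.seed(7)

# Synthetic "risk" data with a known generative factor: only feature 0
# drives the label; feature 1 is pure noise. A faithful explanation
# method should rank feature 0 highest.
def make_data(n=400):
    xs, ys = [], []
    for _ in range(n):
        x0, x1 = random.random(), random.random()
        xs.append((x0, x1))
        ys.append(1 if x0 > 0.5 else 0)
    return xs, ys

def model(x):
    """Stand-in for a trained risk model that has learned the true rule."""
    return 1 if x[0] > 0.5 else 0

def accuracy(xs, ys):
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, feature):
    """Drop in accuracy when one feature's column is shuffled."""
    base = accuracy(xs, ys)
    col = [x[feature] for x in xs]
    random.shuffle(col)
    permuted = [tuple(col[k] if i == feature else v for i, v in enumerate(x))
                for k, x in enumerate(xs)]
    return base - accuracy(permuted, ys)

xs, ys = make_data()
imp0 = permutation_importance(xs, ys, 0)
imp1 = permutation_importance(xs, ys, 1)
```

    Because the generative factor is known by construction, a large `imp0` and a near-zero `imp1` validate the explanation method itself, which is the kind of rigorous testing the review advocates before trusting explanations on real clinical models.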

    Personalized data analytics for internet-of-things-based health monitoring

    The Internet-of-Things (IoT) has great potential to fundamentally alter the delivery of modern healthcare, enabling healthcare solutions outside the limits of conventional clinical settings. It can offer ubiquitous monitoring to at-risk population groups and allow diagnostic care, preventive care, and early intervention in everyday life. These services can have profound impacts on many aspects of health and well-being. However, this field is still in its infancy, and the use of IoT-based systems in real-world healthcare applications introduces new challenges. Healthcare applications necessitate satisfactory quality attributes such as reliability and accuracy due to their mission-critical nature, while at the same time IoT-based systems mostly operate over constrained, shared sensing, communication, and computing resources. There is a need to investigate this synergy between IoT technologies and healthcare applications from a user-centered perspective. Such a study should examine the role and requirements of IoT-based systems in real-world health monitoring applications. Moreover, conventional computing architectures and data analytics approaches introduced for IoT systems are insufficient when targeting health and well-being purposes, as they are unable to overcome the limitations of IoT systems while fulfilling the needs of healthcare applications. This thesis aims to address these issues by proposing an intelligent use of data and computing resources in IoT-based systems, which can lead to high-level performance and satisfy stringent requirements. For this purpose, this thesis first delves into the state-of-the-art IoT-enabled healthcare systems proposed for in-home and in-hospital monitoring. The findings are analyzed and categorized into different domains from a user-centered perspective.
    The selection of home-based applications focuses on the monitoring of the elderly, who require more remote care and support than other groups. In contrast, the hospital-based applications cover the role of existing IoT in patient monitoring and hospital management systems. The objectives and requirements of each domain are then investigated and discussed. This thesis proposes personalized data analytic approaches to fulfill the requirements and meet the objectives of IoT-based healthcare systems. In this regard, a new computing architecture is introduced, using computing resources in different layers of IoT to provide a high level of availability and accuracy for healthcare services. This architecture allows the hierarchical partitioning of machine learning algorithms in these systems and enables adaptive system behavior with respect to the user's condition. In addition, personalized data fusion and modeling techniques are presented, exploiting multivariate and longitudinal data in IoT systems to improve the quality attributes of healthcare applications. First, a real-time missing-data-resilient decision-making technique is proposed for health monitoring systems. The technique tailors various data resources in IoT systems to accurately estimate health decisions despite missing data in the monitoring. Second, a personalized model is presented, enabling the detection of variations and events in long-term monitoring systems. The model evaluates the sleep quality of users according to their own historical data. Finally, the performance of the computing architecture and the techniques is evaluated using two case studies. The first case study consists of real-time arrhythmia detection in electrocardiography signals collected from patients suffering from cardiovascular diseases. The second case study is continuous maternal health monitoring during pregnancy and postpartum, including a real human-subject trial carried out with twenty pregnant women over seven months.
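    The missing-data-resilient decision-making idea can be illustrated with a minimal weighted-voting sketch: each available sensor contributes a risk vote, and missing readings are skipped with the remaining weights renormalised. The sensor weights and thresholds are illustrative placeholders, not clinical values or the thesis's actual method.

```python
# Each sensor: (reliability weight, threshold above which the reading is
# considered risky). Values are illustrative only.
SENSORS = {
    "heart_rate":  (0.5, 110),
    "resp_rate":   (0.3, 24),
    "temperature": (0.2, 38.0),
}

def risk_score(readings):
    """Weighted fraction of available sensors voting 'risky'.
    Missing readings (absent or None) are skipped and the remaining
    weights renormalised; returns None if no sensor is available."""
    total_w, score = 0.0, 0.0
    for name, (weight, threshold) in SENSORS.items():
        value = readings.get(name)
        if value is None:              # sensor dropped out: exclude it
            continue
        total_w += weight
        if value > threshold:
            score += weight
    return score / total_w if total_w else None

complete = {"heart_rate": 120, "resp_rate": 26, "temperature": 37.0}
missing  = {"heart_rate": None, "resp_rate": 26, "temperature": 37.0}
```

    With all sensors present the example scores 0.8; when the heart-rate channel drops out, the decision degrades gracefully to 0.6 from the remaining sensors instead of failing, which is the resilience property the thesis targets.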