
    Leveraging digital for a research environment

    Clinical research is fundamental to acquiring evidence that improves healthcare. Digitalisation has opened new opportunities for research: the ability to collect, store, process, and analyse vast amounts of data in structured and unstructured formats supports both care processes and the secondary use of collected data to generate research evidence. However, issues with data quality, the limitations of available technologies and infrastructure, and a lack of competence regarding context, substance, and data processing hinder the efficient and safe use of data for research and may lead to misinterpretations and unfounded conclusions. It is therefore important that all actors involved in collecting and using data understand their role in these processes and are competent to critically analyse and systematically improve their part. Collaboration and co-creation between practitioners, researchers, and service users, and across disciplines and professions, are needed to understand the perspectives, needs, risks, possibilities, and contributions of all involved. This chapter discusses (1) the role of nurses and midwives in generating data that enables research, (2) technologies available to nurse and midwifery scientists, and (3) how data are transformed to support evidence-based practice for better outcomes.

    A qualitative analysis of stigmatizing language in birth admission clinical notes

    Funding Information: This project was supported by funding from the Columbia University Data Science Institute Seeds Funds Program and a grant (GBMF9048) from the Gordon and Betty Moore Foundation. Publisher Copyright: © 2023 The Authors. Nursing Inquiry published by John Wiley & Sons Ltd.
    The presence of stigmatizing language in the electronic health record (EHR) has been used to measure implicit biases that underlie health inequities. The purpose of this study was to identify the presence of stigmatizing language in the clinical notes of pregnant people during the birth admission. We conducted a qualitative analysis of N = 1117 birth admission EHR notes from two urban hospitals in 2017. Stigmatizing language was present in 61 notes (5.4%) and fell into categories such as Disapproval (39.3%), Questioning patient credibility (37.7%), Difficult patient (21.3%), Stereotyping (1.6%), and Unilateral decisions (1.6%). We also defined a new stigmatizing language category, Power/privilege, which was present in 37 notes (3.3%) and signaled approval of social status, upholding a hierarchy of bias. Stigmatizing language was identified most frequently in birth admission triage notes (16%) and least frequently in social work initial assessments (13.7%). We found that clinicians from various disciplines recorded stigmatizing language in the medical records of birthing people. This language was used to question birthing people's credibility and to convey disapproval of their decision-making abilities for themselves or their newborns. We reported a Power/privilege language bias in the inconsistent documentation of traits considered favorable for patient outcomes (e.g., employment status). Future work on stigmatizing language may inform tailored interventions to improve perinatal outcomes for all birthing people and their families. Peer reviewed.
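
    The category prevalences reported above are simple proportions over annotated notes. Below is a minimal sketch of how such figures could be tallied from per-note annotations; the note structure, note types, and labels are hypothetical placeholders for illustration and do not represent the authors' actual analysis pipeline.

        from collections import Counter

        # Hypothetical coder output: each note lists the stigmatizing-language
        # categories assigned to it (empty list = no stigmatizing language found).
        annotated_notes = [
            {"note_id": 1, "note_type": "triage", "categories": ["Disapproval"]},
            {"note_id": 2, "note_type": "social work", "categories": []},
            {"note_id": 3, "note_type": "triage",
             "categories": ["Questioning patient credibility", "Disapproval"]},
            # ... remaining notes in the corpus
        ]

        total_notes = len(annotated_notes)
        flagged = [note for note in annotated_notes if note["categories"]]

        # Share of notes containing any stigmatizing language (5.4% in the study).
        print(f"Notes with stigmatizing language: {len(flagged)}/{total_notes} "
              f"({100 * len(flagged) / total_notes:.1f}%)")

        # Category frequencies among flagged notes (e.g., Disapproval 39.3%).
        category_counts = Counter(c for note in flagged for c in note["categories"])
        for category, count in category_counts.most_common():
            print(f"{category}: {100 * count / len(flagged):.1f}%")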

    Evaluating large language models on medical evidence summarization

    Recent advances in large language models (LLMs) have demonstrated remarkable zero- and few-shot performance on various downstream tasks, paving the way for applications in high-stakes domains. In this study, we systematically examine the capabilities and limitations of LLMs, specifically GPT-3.5 and ChatGPT, in performing zero-shot medical evidence summarization across six clinical domains. We conduct both automatic and human evaluations covering several dimensions of summary quality. Our study demonstrates that automatic metrics often do not correlate strongly with the quality of summaries. Furthermore, informed by our human evaluations, we define a terminology of error types for medical evidence summarization. Our findings reveal that LLMs can be susceptible to generating factually inconsistent summaries and making overly convincing or uncertain statements, leading to potential harm from misinformation. Moreover, we find that models struggle to identify salient information and are more error-prone when summarizing longer textual contexts.
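
    To make the automatic-evaluation side concrete, the sketch below scores a model-generated summary against a reference summary with ROUGE, a standard automatic metric for summarization; the study's point is that such lexical-overlap scores may not track the factual consistency or salience that human reviewers judge. The example texts and the choice of the rouge_score package are illustrative assumptions, not the study's exact evaluation code.

        from rouge_score import rouge_scorer  # pip install rouge-score

        # Placeholder reference summary (e.g., a published review abstract) and a
        # hypothetical zero-shot model summary -- illustration only.
        reference = ("The review found moderate-certainty evidence that the "
                     "intervention reduced hospital readmissions.")
        model_summary = ("The intervention clearly prevents hospital readmissions "
                         "in all patient groups.")

        scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
        scores = scorer.score(reference, model_summary)

        for metric, result in scores.items():
            print(f"{metric}: precision={result.precision:.2f}, "
                  f"recall={result.recall:.2f}, f1={result.fmeasure:.2f}")

        # A summary can overlap well with the reference while overstating certainty
        # ("clearly prevents ... in all patient groups"), the kind of error that
        # automatic metrics tend to miss and human evaluation catches.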