Improving Eligibility Prescreening for Alzheimer’s Disease and Related Dementias Clinical Trials with Natural Language Processing
Alzheimer’s disease and related dementias (ADRD) are among the leading causes of disability and mortality in the older population worldwide and a costly public health issue, yet no treatment exists to prevent or cure them. Clinical trials are available, but successful recruitment has been a longstanding challenge. One strategy to improve recruitment is eligibility prescreening, a resource-intensive process in which clinical research staff manually review electronic health records to identify potentially eligible patients. Natural language processing (NLP), an informatics approach for extracting relevant information from structured and unstructured data, may improve eligibility prescreening for ADRD clinical trials.
Guided by the Fit between Individuals, Task, and Technology framework, this dissertation research aims to optimize eligibility prescreening for ADRD clinical research by evaluating the sociotechnical factors influencing the adoption of NLP-driven tools. A systematic review of the literature was conducted to identify NLP systems that have been used for eligibility prescreening in clinical research. Three NLP-driven tools were then evaluated for ADRD clinical research eligibility prescreening: Criteria2Query, i2b2, and Leaf. We conducted an iterative mixed-methods usability evaluation with twenty clinical research staff using a cognitive walkthrough with a think-aloud protocol, the Post-Study System Usability Questionnaire, and a directed deductive content analysis. We also conducted a cognitive task analysis with sixty clinical research staff to assess the impact of cognitive complexity on system usability and to identify the sociotechnical gaps and cognitive support needed when using NLP systems for ADRD clinical research eligibility prescreening.
The results show that understanding the role of NLP systems in improving eligibility prescreening is critical to advancing clinical research recruitment. All three systems were generally usable and accepted by the clinical research staff. The cognitive walkthrough with a think-aloud protocol informed iterative system refinement, resulting in high system usability. Cognitive complexity had no significant effect on system usability; however, the system, order of evaluation, job position, and computer literacy were associated with system usability. Key recommendations for system development and implementation include improving system intuitiveness and overall user experience through comprehensive consideration of user needs and task-completion requirements, and implementing focused training on database querying to improve clinical research staff’s aptitude in eligibility prescreening and advance workforce competency.
Finally, this study contributes to our understanding of how clinical research staff conduct electronic eligibility prescreening for ADRD clinical research. The findings highlight the importance of leveraging human-computer collaboration in eligibility prescreening with NLP-driven tools, which provides an opportunity to identify and enroll eligible participants of diverse backgrounds in ADRD clinical research and to accelerate treatment development.
Leveraging digital for a research environment
Clinical research is fundamental to acquiring evidence to improve healthcare. Digitalisation has enabled new opportunities for research. The ability to collect, store, process, and analyse vast amounts of data in structured and unstructured formats supports both care processes and the secondary use of collected data for generating research evidence. However, issues with data quality, the limitations of available technologies and infrastructure, and a lack of competence regarding context, substance, and data processing hinder the efficient and safe use of data for research and may lead to misinterpretations and unfounded conclusions. It is therefore important for all actors involved in collecting and using data to understand their role in these processes and to have the competence to critically analyse and systematically improve their part. Collaboration and co-creation between practitioners, researchers, and service users, and across disciplines and professions, are needed to understand the perspectives, needs, risks, possibilities, and contributions of all involved. This chapter discusses: 1) the role of nurses and midwives in generating data that enables research, 2) the technologies available to nurse and midwifery scientists, and 3) how data are transformed to support evidence-based practice for better outcomes.
A qualitative analysis of stigmatizing language in birth admission clinical notes
Funding Information: This project was supported by funding from the Columbia University Data Science Institute Seeds Funds Program and a grant (GBMF9048) from the Gordon and Betty Moore Foundation.
The presence of stigmatizing language in the electronic health record (EHR) has been used to measure implicit biases that underlie health inequities. The purpose of this study was to identify the presence of stigmatizing language in the clinical notes of pregnant people during the birth admission. We conducted a qualitative analysis of N = 1117 birth admission EHR notes from two urban hospitals in 2017. We identified stigmatizing language categories, such as Disapproval (39.3%), Questioning patient credibility (37.7%), Difficult patient (21.3%), Stereotyping (1.6%), and Unilateral decisions (1.6%), in 61 notes (5.4%). We also defined a new stigmatizing language category indicating Power/privilege; it was present in 37 notes (3.3%) and signaled approval of social status, upholding a hierarchy of bias. Stigmatizing language was identified most frequently in birth admission triage notes (16%) and least frequently in social work initial assessments (13.7%). We found that clinicians from various disciplines recorded stigmatizing language in the medical records of birthing people. This language was used to question birthing people's credibility and to convey disapproval of their decision-making abilities for themselves or their newborns. We reported a Power/privilege language bias in the inconsistent documentation of traits considered favorable for patient outcomes (e.g., employment status). Future work on stigmatizing language may inform tailored interventions to improve perinatal outcomes for all birthing people and their families.
Evaluating large language models on medical evidence summarization
Recent advances in large language models (LLMs) have demonstrated remarkable successes in zero- and few-shot performance on various downstream tasks, paving the way for applications in high-stakes domains. In this study, we systematically examine the capabilities and limitations of LLMs, specifically GPT-3.5 and ChatGPT, in performing zero-shot medical evidence summarization across six clinical domains. We conduct both automatic and human evaluations, covering several dimensions of summary quality. Our study demonstrates that automatic metrics often do not strongly correlate with the quality of summaries. Furthermore, informed by our human evaluations, we define a terminology of error types for medical evidence summarization. Our findings reveal that LLMs could be susceptible to generating factually inconsistent summaries and making overly convincing or uncertain statements, leading to potential harm due to misinformation. Moreover, we find that models struggle to identify the salient information and are more error-prone when summarizing over longer textual contexts.
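The reported weak correlation between automatic metrics and human judgments can be illustrated with a simple analysis. The following is a minimal sketch, not the study's actual pipeline: it assumes hypothetical paired data (model summaries, reference summaries, and human quality ratings), scores each summary with ROUGE-L via the rouge-score package, and computes the Spearman rank correlation between the metric and the human ratings with scipy.

```python
# Minimal sketch: correlate an automatic metric (ROUGE-L) with human quality ratings.
# The example data below are hypothetical placeholders, not from the study.
from rouge_score import rouge_scorer
from scipy.stats import spearmanr

# (model summary, reference summary, human quality rating on a 1-5 scale)
examples = [
    ("Drug A reduced mortality in adults.",
     "Drug A lowered mortality among adult patients.", 4),
    ("Drug B showed no clear benefit.",
     "Evidence for Drug B was inconclusive.", 3),
    ("Drug C cures the disease.",
     "Drug C modestly improved symptoms in some trials.", 1),
]

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

rouge_scores = []
human_ratings = []
for summary, reference, rating in examples:
    # score(target, prediction) returns precision/recall/F1 for each requested metric
    result = scorer.score(reference, summary)
    rouge_scores.append(result["rougeL"].fmeasure)
    human_ratings.append(rating)

# Spearman rank correlation between the automatic metric and human judgments
rho, p_value = spearmanr(rouge_scores, human_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

A low or non-significant rho in such an analysis would mirror the abstract's observation that automatic metrics do not reliably track human-perceived summary quality.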