20 research outputs found
Improving Electronic Health Record Note Comprehension With NoteAid: Randomized Trial of Electronic Health Record Note Comprehension Interventions With Crowdsourced Workers
BACKGROUND: Patient portals are becoming more common, and with them, the ability of patients to access their personal electronic health records (EHRs). EHRs, in particular the free-text EHR notes, often contain medical jargon and terms that are difficult for laypersons to understand. There are many Web-based resources for learning more about particular diseases or conditions, including systems that directly link to lay definitions or educational materials for medical concepts.
OBJECTIVE: Our goal is to determine whether use of one such tool, NoteAid, leads to higher EHR note comprehension ability. We use a new EHR note comprehension assessment tool instead of patient self-reported scores.
METHODS: In this work, we compare a passive, self-service educational resource (MedlinePlus) with an active resource (NoteAid) where definitions are provided to the user for medical concepts that the system identifies. We use Amazon Mechanical Turk (AMT) to recruit individuals to complete ComprehENotes, a new test of EHR note comprehension.
RESULTS: Mean scores for individuals with access to NoteAid are significantly higher than the mean baseline scores, both for raw scores (P=.008) and estimated ability (P=.02).
CONCLUSIONS: In our experiments, we show that the active intervention leads to significantly higher scores on the comprehension test as compared with a baseline group with no resources provided. In contrast, there is no significant difference between the group that was provided with the passive intervention and the baseline group. Finally, we analyze the demographics of the individuals who participated in our AMT task and show differences between groups that align with the current understanding of health literacy across populations. This is the first work to show improvements in comprehension using tools such as NoteAid as measured by an EHR note comprehension assessment tool rather than patient self-reported scores.
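The between-group comparison described above can be sketched as a simple two-sample significance test on raw comprehension scores. The data below are randomly generated stand-ins, not the study's scores; the group sizes, means, and choice of an independent-samples t-test are illustrative assumptions.

```python
# Minimal sketch of comparing comprehension scores between a NoteAid
# group and a baseline group. All numbers here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=10.0, scale=3.0, size=50)  # hypothetical raw scores
noteaid = rng.normal(loc=12.0, scale=3.0, size=50)   # hypothetical raw scores

# Independent two-sample t-test on the group means.
t_stat, p_value = stats.ttest_ind(noteaid, baseline)
print(f"mean baseline={baseline.mean():.2f}, "
      f"mean NoteAid={noteaid.mean():.2f}, P={p_value:.4f}")
```

In the study itself, estimated ability (an IRT-based score) was compared in addition to raw scores; the same two-group testing pattern applies.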
Improving Patients' Understanding of their Electronic Medical Record Data in Order to Improve Self-Management - A Quality Improvement Project
Background: Patients are increasingly given access to their electronic medical records (EMRs) to help them keep track of their care, but many may have a difficult time understanding what is in them. Programs such as NoteAid assist in translating medical records and may increase the number of patients who actively use their EMRs, a development which may improve the management of chronic diseases.
Purpose: To use and test a machine-learning-based translation system (NoteAid), developed by the University of Massachusetts Informatics group, to make outpatient records more understandable for adult patients with chronic disease. As patients increase their understanding of medical terminology, their self-management of chronic disease may improve.
Methods: A test version of NoteAid was used with volunteer adult patients during face-to-face sessions in an outpatient office at a health system in Southeastern Pennsylvania. These sessions were used to test NoteAid’s effectiveness as a tool to improve patients’ understanding of their EMRs. Patients read their own office note from a recent visit without the use of NoteAid, and then interpreted the same note using it.
Results: Thirteen participants took part over a two-month period, with 85% reporting they would use the system from a patient portal and 100% answering strongly agree or agree when asked if the NoteAid system helped them comprehend their clinical EMR notes.
Conclusions: Machine-learning databases like NoteAid have the potential to improve the management of chronic diseases. By integrating these systems into an informative and user-friendly portal, patients are afforded the opportunity to improve understanding of their EMRs.
Keywords: medical terms, patient understanding, health literacy, chronic disease, electronic health record usability
A Natural Language Processing System That Links Medical Terms in Electronic Health Record Notes to Lay Definitions: System Development Using Physician Reviews
BACKGROUND: Many health care systems now allow patients to access their electronic health record (EHR) notes online through patient portals. Medical jargon in EHR notes can confuse patients, which may interfere with potential benefits of patient access to EHR notes.
OBJECTIVE: The aim of this study was to develop and evaluate the usability and content quality of NoteAid, a Web-based natural language processing system that links medical terms in EHR notes to lay definitions, that is, definitions easily understood by lay people.
METHODS: NoteAid incorporates two core components: CoDeMed, a lexical resource of lay definitions for medical terms, and MedLink, a computational unit that links medical terms to lay definitions. We developed innovative computational methods, including an adapted distant supervision algorithm to prioritize medical terms important for EHR comprehension to facilitate the effort of building CoDeMed. Ten physician domain experts evaluated the user interface and content quality of NoteAid. The evaluation protocol included a cognitive walkthrough session and a postsession questionnaire. Physician feedback sessions were audio-recorded. We used standard content analysis methods to analyze qualitative data from these sessions.
RESULTS: Physician feedback was mixed. Positive feedback on NoteAid included (1) easy to use, (2) good visual display, (3) satisfactory system speed, and (4) adequate lay definitions. Opportunities for improvement arising from evaluation sessions and feedback included (1) improving the display of definitions for partially matched terms, (2) including more medical terms in CoDeMed, (3) improving the handling of terms whose definitions vary depending on context, and (4) standardizing the scope of definitions for medicines. On the basis of these results, we have improved NoteAid's user interface and a number of definitions, and added 4502 more definitions in CoDeMed.
CONCLUSIONS: Physician evaluation yielded useful feedback for content validation and refinement of this innovative tool, which has the potential to improve patient EHR comprehension and experience using patient portals. Future work will develop algorithms to handle ambiguous medical terms and will test and evaluate NoteAid with patients.
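The core linking step that MedLink performs can be illustrated with a minimal dictionary-based term matcher: scan a note for known medical terms and pair each with its lay definition. The lexicon entries and the `link_terms` function below are hypothetical stand-ins; the actual CoDeMed lexicon and MedLink component are substantially more sophisticated.

```python
# Sketch of linking medical terms in an EHR note to lay definitions.
# LAY_DEFINITIONS is a tiny hypothetical stand-in for CoDeMed.
import re

LAY_DEFINITIONS = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "dyspnea": "shortness of breath",
}

def link_terms(note: str) -> list[tuple[str, str]]:
    """Find lexicon terms in a note and pair them with lay definitions."""
    found = []
    lowered = note.lower()
    # Check longer (multi-word) terms first so they win over substrings.
    for term in sorted(LAY_DEFINITIONS, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append((term, LAY_DEFINITIONS[term]))
    return found

note = "Patient with hypertension presents with dyspnea on exertion."
print(link_terms(note))
# [('hypertension', 'high blood pressure'), ('dyspnea', 'shortness of breath')]
```

A production system must additionally handle abbreviations, partial matches, and context-dependent senses, which are exactly the improvement areas the physician reviewers identified.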
Noteaid: A Comprehension Tool to Improve Patient Understanding
Background: Meaningful use mandates allow patients access to provider notes; however, many barriers remain, including patients' inability to understand the notes.
This project surveyed clinicians on the informatics committee of a large tertiary care facility about their views of the Noteaid translation system after they were presented with examples of translated patient notes and education about the meaningful use mandate.
Methods: An online PowerPoint presentation and a preintervention survey were distributed, followed by a live educational intervention. Members were then emailed a postintervention survey about the effectiveness, likeability, usability, and practicality of the Noteaid software tool for translating medical jargon.
Results: Of the 20 participants, 45% stated they spent more than 40% of their time on patient education and teaching. Most were unaware of the meaningful use mandate, and 68% believed that the release of provider notes alone could not improve the quality of care or affect patient outcomes. After the presentation, 100% liked the Noteaid system and 75% believed the system could improve outcomes by improving patient understanding. The majority (80%) rated both translated note examples as a 4 on a 5-point rating scale.
Conclusion: Solutions to patient understanding of medical notes are needed. Noteaid is a systematic solution that was positively reviewed by this group of clinicians as a helpful tool for helping patients understand their own medical notes. The meaningful use mandate has the potential to improve patient care and better educate patients.
Increasing Health Literacy through NoteAid Translational Tool in Nursing
Background: Many adults in the United States lack the health literacy necessary to understand patient education materials given to them, such as discharge summaries. Rehospitalization rates are higher when transition-of-care planning is poor. Older adults may be provided with only written instructions for complex chronic conditions involving multiple changes in their medical or treatment plans, or for uncommon surgical procedures. Nurses are instrumental in bridging this gap, as they often educate, advocate, and use health technology.
Purpose: To educate nurses on the availability of NoteAid, a natural language translation system that can help increase comprehension of electronic health records, and to gather participant evaluations of how it can improve quality of care during the transition-of-care process.
Methods: NoteAid was introduced to ten nurses via a PowerPoint presentation. A post-evaluation followed, including feedback and a knowledge question on part of a sample Assessment and Plan from a discharge summary with a NoteAid overlay. The discharge summary was evaluated using the Patient Education Materials Assessment Tool (PEMAT).
Results: Although a larger sample size is needed, the ten nurse participants were positive that NoteAid can be useful in improving their older patients' understanding of health teaching from discharge or visit summaries. However, older adults may find it challenging, depending on their ability to use computers or the technology available.
Implications: NoteAid may help decrease rehospitalization rates, reduce common medical errors, and promote patient and family advocacy, and can help hospital systems meet most of the "five pillars" of health outcomes for EHR meaningful use.
Adverse Drug Event Detection, Causality Inference, Patient Communication and Translational Research
Adverse drug events (ADEs) are injuries resulting from a medical intervention related to a drug. ADEs are responsible for nearly 20% of all adverse events that occur in hospitalized patients and have been shown to increase the cost of health care and the length of hospital stays. Therefore, detecting and preventing ADEs for pharmacovigilance is an important task that can improve the quality of health care and reduce costs in a hospital setting. In this dissertation, we focus on the development of ADEtector, a system that identifies ADEs and medication information from electronic medical records and FDA Adverse Event Reporting System reports. The ADEtector system employs novel natural language processing approaches for ADE detection and provides a user interface to display ADE information. ADEtector uses machine learning techniques to automatically process narrative text and identify the adverse event (AE) and medication entities that appear in it. The system analyzes the recognized entities to infer the causal relation between AEs and medications by automating the elements of the Naranjo score using knowledge- and rule-based approaches. The Naranjo Adverse Drug Reaction Probability Scale is a validated tool for assessing the causality of a drug-induced adverse event or ADE; it calculates the likelihood that an adverse event is drug-related based on a list of weighted questions. ADEtector also presents the user with evidence for ADEs by extracting figures containing ADE-related information from the biomedical literature, and generates a brief summary for each extracted figure to help users comprehend it.
The ADEtector also helps patients better understand narrative text by recognizing complex medical jargon and abbreviations that appear in the text and providing definitions and explanations for them from external knowledge resources. This system could help clinicians and researchers discover novel ADEs and drug relations and hypothesize new research questions within the ADE domain.
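The Naranjo scoring that ADEtector automates can be sketched directly, since the scale is a published list of weighted yes/no/unknown questions. The item weights and score categories below follow the published Naranjo scale; the sample answers are illustrative, and ADEtector's rule-based automation of how each question is answered from the record is not shown here.

```python
# Sketch of Naranjo Adverse Drug Reaction Probability Scale scoring.
# Each item: (question, points for "yes", points for "no");
# "unknown" always contributes 0 points.
NARANJO_ITEMS = [
    ("Previous conclusive reports of this reaction?", 1, 0),
    ("Did the adverse event appear after the drug was given?", 2, -1),
    ("Did the reaction improve when the drug was stopped?", 1, 0),
    ("Did the reaction reappear on re-administration?", 2, -1),
    ("Are there alternative causes for the reaction?", -1, 2),
    ("Did the reaction reappear with placebo?", -1, 1),
    ("Was the drug detected in blood at toxic concentrations?", 1, 0),
    ("Was the reaction worse with higher dose, milder with lower?", 1, 0),
    ("Similar reaction to the same or similar drugs in the past?", 1, 0),
    ("Was the event confirmed by objective evidence?", 1, 0),
]

def naranjo_score(answers):
    """answers: list of 'yes' / 'no' / 'unknown', one per item."""
    total = 0
    for (_, yes_pts, no_pts), ans in zip(NARANJO_ITEMS, answers):
        if ans == "yes":
            total += yes_pts
        elif ans == "no":
            total += no_pts
    return total

def causality_category(score):
    """Published interpretation: >=9 definite, 5-8 probable,
    1-4 possible, <=0 doubtful."""
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

# Illustrative (hypothetical) answer pattern for one suspected ADE.
answers = ["no", "yes", "yes", "unknown", "no", "unknown",
           "unknown", "yes", "no", "yes"]
score = naranjo_score(answers)
print(score, causality_category(score))  # 7 probable
```

Automating this scale means answering each question from the record, e.g. mapping a documented dechallenge (symptom resolution after the drug was stopped) to a "yes" on item 3, which is where the knowledge- and rule-based components come in.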
Learning Latent Characteristics of Data and Models using Item Response Theory
A supervised machine learning model is trained with a large set of labeled training data and evaluated on a smaller, but still large, set of test data. Especially with deep neural networks (DNNs), the complexity of the model requires that an extremely large data set be collected to prevent overfitting. These models often do not take into account specific attributes of individual training examples, but instead treat each equally during training, because it is difficult to model latent traits of individual examples at the scale of hundreds of thousands or millions of data points. However, there exists a set of psychometric methods that can model attributes of specific examples and can greatly improve model training and evaluation in the supervised learning process.
Item Response Theory (IRT) is a well-studied psychometric methodology for scale construction and evaluation. IRT jointly models human ability and example characteristics, such as difficulty, based on human response data. We introduce new evaluation metrics, built using IRT, for both humans and machine learning models, and propose new methods for applying IRT to machine learning-scale data.
We use IRT to make contributions to the machine learning community in the following areas: (i) new test sets for evaluating machine learning models with respect to a human population, (ii) new insights about how deep-learning models learn by tracking example difficulty and training conditions, (iii) new methods for data selection and curriculum building to improve model training efficiency, and (iv) a new test of electronic health literacy built with questions extracted from de-identified patient electronic health records (EHRs).
We first introduce two new evaluation sets built and validated using IRT. These tests are the first IRT test sets to be applied to natural language processing tasks. Using IRT test sets allows for more comprehensive comparison of NLP models. Second, by modeling the difficulty of test set examples, we identify patterns that emerge when training deep neural network models that are consistent with human learning patterns. Specifically, as models are trained with larger training sets, they learn easy test set examples more quickly than hard examples. Third, we present a method for using soft labels on a subset of training data to improve deep learning model generalization. We show that fine-tuning a trained deep neural network with as little as 0.1% of the training data can improve model generalization in terms of test set accuracy. Fourth, we propose a new method for estimating IRT example and model parameters that allows for learning parameters at a much larger scale than previously available, to accommodate the large data sets required for deep learning. This allows for learning IRT models at machine learning scale, with hundreds of thousands of examples and large ensembles of machine learning models. The response patterns of machine learning models can be used to learn IRT example characteristics instead of human response patterns. Fifth, we introduce a dynamic curriculum learning process that estimates model competency during training to adaptively select training data that is appropriate for learning at the given epoch. Finally, we introduce the ComprehENotes test, the first test of EHR comprehension for humans. The test is an accurate measure for identifying individuals with low EHR note comprehension ability, and validates the effectiveness of previously self-reported patient comprehension evaluations.
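The model at the center of this line of work can be illustrated with the standard two-parameter logistic (2PL) item response function, which gives the probability that a respondent of ability theta answers correctly an item with discrimination a and difficulty b. This is a generic 2PL sketch, not the dissertation's estimation code.

```python
# Two-parameter logistic (2PL) IRT item response function.
import numpy as np

def irt_2pl(theta, a, b):
    """P(correct) = 1 / (1 + exp(-a * (theta - b)))
    theta: respondent ability; a: item discrimination; b: item difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# An "easy" item (b = -1) vs a "hard" item (b = +1) for an average
# respondent (theta = 0), both with discrimination a = 1:
p_easy = irt_2pl(theta=0.0, a=1.0, b=-1.0)
p_hard = irt_2pl(theta=0.0, a=1.0, b=1.0)
print(f"easy item: {p_easy:.3f}, hard item: {p_hard:.3f}")
# easy item: 0.731, hard item: 0.269
```

Fitting means estimating theta for every respondent and (a, b) for every item jointly from a matrix of correct/incorrect responses; the key observation above is that the same machinery works whether the "respondents" are humans or an ensemble of machine learning models.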
The use of Natural Language Processing techniques to support Health Literacy: an evidence-based review
Background and objectives: To conduct a literature search and analysis of existing research that uses natural language processing to improve or support health literacy, and to discuss the importance and potential of addressing both fields jointly. This review targets researchers who are unfamiliar with natural language processing in the field of health literacy and, more generally, any researcher, regardless of background, interested in multidisciplinary research involving technology and health care. Methods: We introduce the concepts of health literacy and natural language processing. A thorough search is then performed using relevant databases and well-defined criteria. We review the existing literature addressing these topics, both independently and jointly, and provide an overview of the state of the art in using natural language processing for health literacy. We additionally discuss how the various issues in health literacy related to the comprehension of specialised health texts can be addressed using natural language processing techniques, and the challenges involved. Results: The search process yielded 235 potentially relevant references, 49 of which fully met the established search criteria and were therefore analysed in more detail.
These articles were clustered into groups with respect to their purpose; most focused on the development of specific natural language processing modules, such as question answering, information retrieval, text simplification, or natural language generation, in order to facilitate the understanding of health information. This research work has been partially funded by the University of Alicante, Generalitat Valenciana, the Spanish Government, and the European Commission through the projects "Tratamiento inteligente de la informacion para la ayuda a la toma de decisiones" (GRE12-44), "Explotacion y tratamiento de la informacion disponible en Internet para la anotacion y generacion de textos adaptados al usuario" (GRE13-15), DIIM2.0 (PROMETEOII/2014/001), ATTOS (TIN2012-38536-C03-03), LEGOLANG-UAGE (TIN2012-31224), SAM (FP7-611312), and FIRST (FP7-287607).
Developing patient-friendly genetic and genomic test reports: formats to promote patient engagement and understanding
Genome Medicine. DOI: 10.1186/s13073-014-0058-6