20 research outputs found
Artificial intelligence in clinical and translational science: Successes, challenges and opportunities
Artificial intelligence (AI) is transforming many domains, including finance, agriculture, defense, and biomedicine. In this paper, we focus on the role of AI in clinical and translational research (CTR), including preclinical research (T1), clinical research (T2), clinical implementation (T3), and public (or population) health (T4). Given the rapid evolution of AI in CTR, we present three complementary perspectives: (1) scoping literature review, (2) survey, and (3) analysis of federally funded projects. For each CTR phase, we addressed challenges, successes, failures, and opportunities for AI. We surveyed Clinical and Translational Science Award (CTSA) hubs regarding AI projects at their institutions. Nineteen of 63 CTSA hubs (30%) responded to the survey. The most common funding source (48.5%) was the federal government. The most common translational phase was T2 (clinical research, 40.2%). Clinicians were the intended users in 44.6% of projects and researchers in 32.3% of projects. The most common computational approaches were supervised machine learning (38.6%) and deep learning (34.2%). The number of projects steadily increased from 2012 to 2020. Finally, we analyzed 2604 AI projects at CTSA hubs using the National Institutes of Health Research Portfolio Online Reporting Tools (RePORTER) database for 2011-2019. We mapped available abstracts to medical subject headings and found that nervous system (16.3%) and mental disorders (16.2%) were the most common topics addressed. From a computational perspective, big data (32.3%) and deep learning (30.0%) were most common. This work represents a snapshot in time of the role of AI in the CTSA program.
Toward Standardization, Harmonization, and Integration of Social Determinants of Health Data: A Texas Clinical and Translational Science Award Institutions Collaboration
INTRODUCTION: The focus on social determinants of health (SDOH) and their impact on health outcomes is evident in U.S. federal actions by Centers for Medicare & Medicaid Services and Office of National Coordinator for Health Information Technology. The disproportionate impact of COVID-19 on minorities and communities of color heightened awareness of health inequities and the need for more robust SDOH data collection. Four Clinical and Translational Science Award (CTSA) hubs comprising the Texas Regional CTSA Consortium (TRCC) undertook an inventory to understand what contextual-level SDOH datasets are offered centrally and which individual-level SDOH are collected in structured fields in each electronic health record (EHR) system potentially for all patients.
METHODS: Hub teams identified American Community Survey (ACS) datasets available via their enterprise data warehouses for research. Each hub's EHR analyst team identified structured fields available in their EHR for SDOH using a collection instrument based on a 2021 PCORnet survey and conducted an SDOH field completion rate analysis.
RESULTS: One hub offered ACS datasets centrally. All hubs collected eleven SDOH elements in structured EHR fields. Two collected Homeless and Veteran statuses. Completeness at four hubs was 80%-98% for Ethnicity and Race, and < 10% for Education, Financial Strain, Food Insecurity, Housing Security/Stability, Interpersonal Violence, Social Isolation, Stress, and Transportation.
CONCLUSION: Completeness levels for SDOH data in EHR at TRCC hubs varied and were low for most measures. Multiple system-level discussions may be necessary to increase standardized SDOH EHR-based data collection and harmonization to drive effective value-based care, health disparities research, translational interventions, and evidence-based policy.
Erratum: Toward Standardization, Harmonization, and Integration of Social Determinants of Health Data: A Texas Clinical and Translational Science Award Institutions Collaboration - Corrigendum
This corrects the article "Toward standardization, harmonization, and integration of social determinants of health data: A Texas Clinical and Translational Science Award institutions collaboration" in volume 8, e17.
Measuring and controlling medical record abstraction (MRA) error rates in an observational study.
BACKGROUND: Studies have shown that data collection by medical record abstraction (MRA) is a significant source of error in clinical research studies relying on secondary use data. Yet, the quality of data collected using MRA is seldom assessed. We employed a novel, theory-based framework for data quality assurance and quality control of MRA. The objective of this work is to determine the potential impact of formalized MRA training and continuous quality control (QC) processes on data quality over time.
METHODS: We conducted a retrospective analysis of QC data collected during a cross-sectional medical record review of mother-infant dyads with Neonatal Opioid Withdrawal Syndrome. A confidence interval approach was used to calculate crude (Wald's method) and adjusted (generalized estimating equation) error rates over time. We calculated error rates using the number of errors divided by total fields (all-field error rate) and populated fields (populated-field error rate) as the denominators, to provide both an optimistic and a conservative measurement, respectively.
RESULTS: On average, the ACT NOW CE Study maintained an error rate between 1% (optimistic) and 3% (conservative). Additionally, we observed a decrease of 0.51 percentage points with each additional QC Event conducted.
CONCLUSIONS: Formalized MRA training and continuous QC resulted in lower error rates than have been found in previous literature and a decrease in error rates over time. This study newly demonstrates the importance of continuous process controls for MRA within the context of a multi-site clinical research study.
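The two error rates described in the methods above differ only in their denominator: the same error tally is divided by all abstracted fields (optimistic) or by populated fields only (conservative), with a crude Wald interval around each. A minimal sketch of that calculation follows; the counts are hypothetical, chosen only to reproduce the 1% and 3% rates reported in the results.

```python
from math import sqrt

def error_rate_ci(errors, fields, z=1.96):
    """Crude error rate with a Wald 95% CI: p +/- z*sqrt(p*(1-p)/n)."""
    p = errors / fields
    half_width = z * sqrt(p * (1 - p) / fields)
    return p, max(0.0, p - half_width), p + half_width

# Hypothetical counts: one error tally against two denominators.
errors = 120
all_fields = 12000        # every field on the abstraction form (optimistic)
populated_fields = 4000   # only fields with a value present (conservative)

print(error_rate_ci(errors, all_fields))        # ~1% all-field error rate
print(error_rate_ci(errors, populated_fields))  # ~3% populated-field rate
```

Note that the study's adjusted rates used generalized estimating equations to account for repeated QC events, which this crude sketch does not attempt.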
Factors Affecting Accuracy of Data Abstracted from Medical Records
OBJECTIVE: Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA.
METHODS: Candidate factors were identified from published reports of MRA. Content validity of the top candidate factors was assessed via a four-round, two-group Delphi process with expert abstractors experienced in clinical research, registries, and quality improvement. The resulting coded factors were categorized into a control theory-based framework of MRA. Coverage of the framework was evaluated using the recent published literature.
RESULTS: Analysis of the identified articles yielded 292 unique factors that affect the accuracy of abstracted data. The Delphi processes refuted three of the top factors identified from the literature based on importance and five based on reliability (six factors refuted in total). Four new factors were identified by the Delphi. The generated framework demonstrated comprehensive coverage. Significant underreporting of MRA methodology in recent studies was discovered.
CONCLUSION: The framework generated from this research provides a guide for planning data quality assurance and control for studies using MRA. The large number and variability of factors indicate that while prospective quality assurance likely increases the accuracy of abstracted data, monitoring accuracy during the abstraction process is also required. Recent studies reporting research results based on MRA rarely reported data quality assurance or control measures, and even less frequently reported data quality metrics with research results. Given the demonstrated variability, these methods and measures should be reported with research results.
Refuted and uncertain factors.
Marked mean values in the table are those rated lower than neutral. Marked standard deviation values are those above the standard deviation cut-off of 1.2.
* Comments mentioning a mitigating factor as well as a justification for the participant's response were split into two. The factor "Abstractors with different levels of experience" had two comments split; the remaining marked factors had one comment split.
Abbreviations: QI, quality improvement; RN, registered nurse.
Framework for increasing data accuracy in MRA.
* Opposite-valence factors were omitted from the framework: "Lack of abstractor training decreases accuracy of abstracted data," "An incomplete review of the medical record (e.g., not reading all pages from the required time period) decreases the accuracy of abstracted data," "Data element definitions that lack suggestions for where in the chart to find data values," "Data abstracted from a complete medical record are more accurate than those abstracted from medical records with omissions," "Abstractor (human) error is a factor in decreasing the accuracy of abstracted data," and "Data abstracted from a medical record that is free from error are more accurate than those abstracted from a medical record containing errors."
† The factors "Misuse of the coding system" and "Misunderstanding the coding system" were combined and moved to the training category.
‡ The original text "Abstractor human error" was restated to create an actionable item.
§ "Data elements requiring the abstractor to do calculations (e.g., convert units or score questionnaires) are less accurate than those that do not" and "Data elements that are abstracted directly from medical records are more accurate than those requiring mapping or interpretation" were combined.