
    Measuring and controlling medical record abstraction (MRA) error rates in an observational study.

    BACKGROUND: Studies have shown that data collection by medical record abstraction (MRA) is a significant source of error in clinical research studies relying on secondary-use data. Yet the quality of data collected using MRA is seldom assessed. We employed a novel, theory-based framework for data quality assurance and quality control of MRA. The objective of this work is to determine the potential impact of formalized MRA training and continuous quality control (QC) processes on data quality over time. METHODS: We conducted a retrospective analysis of QC data collected during a cross-sectional medical record review of mother-infant dyads with Neonatal Opioid Withdrawal Syndrome. A confidence interval approach was used to calculate crude (Wald's method) and adjusted (generalized estimating equation) error rates over time. We calculated error rates using the number of errors divided by total fields (all-field error rate) and by populated fields (populated-field error rate) as the denominators, to provide an optimistic and a conservative measurement, respectively. RESULTS: On average, the ACT NOW CE Study maintained an error rate between 1% (optimistic) and 3% (conservative). Additionally, we observed a decrease of 0.51 percentage points with each additional QC event conducted. CONCLUSIONS: Formalized MRA training and continuous QC resulted in lower error rates than reported in previous literature, and in a decrease in error rates over time. This study newly demonstrates the importance of continuous process controls for MRA within the context of a multi-site clinical research study.
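    The two denominators described in the Methods can be made concrete with a small sketch. The function below computes a crude error rate with a Wald (normal-approximation) 95% confidence interval, as named in the abstract; the QC counts are hypothetical, not figures from the study:

    ```python
    import math

    def wald_error_rate(n_errors, n_fields, z=1.96):
        """Crude error rate with a Wald (normal-approximation) 95% CI."""
        p = n_errors / n_fields
        half_width = z * math.sqrt(p * (1 - p) / n_fields)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    # Hypothetical QC counts: 120 errors found while re-abstracting
    # 10,000 total fields, of which 4,000 were populated.
    errors, total_fields, populated_fields = 120, 10_000, 4_000

    all_field_rate, *_ = wald_error_rate(errors, total_fields)      # optimistic denominator
    populated_rate, *_ = wald_error_rate(errors, populated_fields)  # conservative denominator

    print(f"all-field error rate:       {all_field_rate:.2%}")   # 1.20%
    print(f"populated-field error rate: {populated_rate:.2%}")   # 3.00%
    ```

    With the same error count, the larger all-field denominator always yields the lower (optimistic) rate, which is why the study reports the pair as a bracketing range rather than a single figure. The adjusted (GEE) rates in the paper additionally account for clustering of errors within records, which this sketch does not attempt.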

    Factors Affecting Accuracy of Data Abstracted from Medical Records

    Objective: Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA.
    Methods: Candidate factors were identified from published reports of MRA. Content validity of the top candidate factors was assessed via a four-round, two-group Delphi process with expert abstractors with experience in clinical research, registries, and quality improvement. The resulting coded factors were categorized into a control-theory-based framework of MRA. Coverage of the framework was evaluated using the recently published literature.
    Results: Analysis of the identified articles yielded 292 unique factors that affect the accuracy of abstracted data. The Delphi processes refuted three of the top factors identified from the literature on the basis of importance and five on the basis of reliability (six factors refuted in total). Four new factors were identified by the Delphi. The generated framework demonstrated comprehensive coverage. Significant underreporting of MRA methodology in recent studies was discovered.
    Conclusion: The framework generated from this research provides a guide for planning data quality assurance and control for studies using MRA. The large number and variability of factors indicate that while prospective quality assurance likely increases the accuracy of abstracted data, monitoring accuracy during the abstraction process is also required. Recent studies reporting research results based on MRA rarely reported data quality assurance or control measures, and even less frequently reported data quality metrics with research results. Given the demonstrated variability, these methods and measures should be reported with research results.

    Refuted and uncertain factors.

    Marked mean values in the table are those rated lower than neutral. Marked standard deviation values are those above the 1.2 standard-deviation cut-off.
    * Comments mentioning a mitigating factor as well as a justification for the participant's response were split into two. The factor “Abstractors with different levels of experience” had two comments split; the remaining marked factors each had one comment split.
    Abbreviations: QI, quality improvement; RN, registered nurse.

    Framework for increasing data accuracy in MRA.

    * The opposite-valence factors “Lack of abstractor training decreases accuracy of abstracted data,” “An incomplete review of the medical record (e.g., not reading all pages from the required time period) decreases the accuracy of abstracted data,” “Data element definitions that lack suggestions for where in the chart to find data values,” “Data abstracted from a complete medical record are more accurate than those abstracted from medical records with omissions,” “Abstractor (human) error is a factor in decreasing the accuracy of abstracted data,” and “Data abstracted from a medical record that is free from error are more accurate than those abstracted from a medical record containing errors” were omitted from the framework.
    † The factors “Misuse of the coding system” and “Misunderstanding the coding system” were combined and moved to the training category.
    ‡ The original text “Abstractor human error” was restated to create an actionable item.
    § “Data elements requiring the abstractor to do calculations (e.g., convert units or score questionnaires) are less accurate than those that do not” and “Data elements that are abstracted directly from medical records are more accurate than those requiring mapping or interpretation” were combined.

    Factors identified in Delphi Round 1 that were not in the literature.

    * Not complete semantic matches at the level of detail at which they were mentioned, but conceptually part of higher-level factors or related to factors mentioned in the literature.
    † Not mentioned at all in the articles included in the systematic review.
    ‡ Ultimately not upheld in Delphi Round 4.

    Factors identified in Delphi Round 1 that were not in the literature top 26%.

    * Found in the literature top 26% but with opposite valence.
    † Ultimately not upheld in Delphi Round 4.