24 research outputs found

    Effects of automated alerts on unnecessarily repeated serology tests in a cardiovascular surgery department: a time series analysis

    Background: Laboratory testing is frequently unnecessary, particularly repeat testing. Among the interventions proposed to reduce unnecessary testing, Computerized Decision Support Systems (CDSS) have been shown to be effective, but their impact depends on their technical characteristics. The objective of the study was to evaluate the impact of a serology CDSS providing point-of-care reminders of existing serology results, embedded in a Computerized Physician Order Entry system at a university teaching hospital in Paris, France.
    Methods: A CDSS was implemented in the cardiovascular surgery department of the hospital to reduce inappropriate repetition of viral serology tests (HBV). A time series analysis was performed to assess the impact of the alerts on physicians' practices. The study took place between January 2004 and December 2007. The primary outcome was the proportion of unnecessarily repeated HBs antigen tests over the study periods. A test was considered unnecessary when it was ordered within 90 days of a previous test for the same patient. A secondary outcome was the proportion of potentially unnecessary HBs antigen test orders cancelled after an alert was displayed.
    Results: In the pre-intervention period, 3,480 viral serology tests were ordered, of which 538 (15.5%) were unnecessarily repeated. During the intervention period, 2,095 HBs antigen tests were performed, of which 330 (15.8%) were unnecessary repetitions. Before the intervention, the mean proportion of unnecessarily repeated HBs antigen tests increased by 0.4% per month (absolute increase, 95% CI 0.2% to 0.6%, p < 0.001). After the intervention, a significant change in trend occurred, with a monthly difference estimated at -0.4% (95% CI -0.7% to -0.1%, p = 0.02), resulting in a stable proportion of unnecessarily repeated tests. A total of 380 unnecessary tests were ordered among the 500 alerts displayed (compliance rate 24%).
    Conclusions: The proportion of unnecessarily repeated tests dropped immediately after CDSS implementation and remained stable, in contrast to the significant continuous increase observed before. The compliance rate confirmed the effect of the alerts. Further experimentation with dedicated systems is needed to improve understanding of the diversity of CDSS and their impact on clinical practice.
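
    The repeat-test rule described in the Methods (an order within 90 days of a previous order for the same patient) lends itself to a simple computation. The Python/pandas sketch below is illustrative only; the table layout, column names, and the monthly aggregation feeding the interrupted time series analysis are assumptions, not the authors' code.

    # Minimal sketch: flag HBs antigen orders repeated within 90 days of a
    # previous order for the same patient. Column names are illustrative.
    import pandas as pd

    def flag_unnecessary_repeats(orders: pd.DataFrame, window_days: int = 90) -> pd.DataFrame:
        """Add a boolean 'unnecessary' column to a table of test orders.

        Expects columns 'patient_id' and 'order_date' (datetime).
        An order is flagged when the same patient had a previous order
        no more than `window_days` days earlier.
        """
        orders = orders.sort_values(["patient_id", "order_date"]).copy()
        previous = orders.groupby("patient_id")["order_date"].shift(1)
        delta = orders["order_date"] - previous
        orders["unnecessary"] = delta.notna() & (delta <= pd.Timedelta(days=window_days))
        return orders

    # The monthly proportion of flagged orders is the series a segmented
    # (interrupted time series) regression would then model, e.g.:
    # monthly = (flag_unnecessary_repeats(df)
    #            .set_index("order_date")["unnecessary"]
    #            .resample("M").mean())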

    Ability of online drug databases to assist in clinical decision-making with infectious disease therapies

    Background: Infectious disease (ID) is a dynamic field, with new guidelines being adopted at a rapid rate. Clinical decision support tools (CDSTs) have proven beneficial in selecting treatment options to improve outcomes, but there is little information on the abilities of CDSTs such as drug information databases. This study evaluated how well online drug information databases answer infectious disease-specific queries.
    Methods: Eight subscription drug information databases (American Hospital Formulary Service Drug Information (AHFS), Clinical Pharmacology (CP), Epocrates Online Premium (EOP), Facts & Comparisons 4.0 Online (FC), Lexi-Comp (LC), Lexi-Comp with AHFS (LC-AHFS), Micromedex (MM), and PEPID PDC (PPDC)) and six freely accessible databases (DailyMed (DM), DIOne (DIO), Epocrates Online Free (EOF), Internet Drug Index (IDI), Johns Hopkins ABX Guide (JHAG), and Medscape Drug Reference (MDR)) were evaluated for scope (presence of an answer) and completeness (on a 3-point scale) in answering 147 infectious disease-specific questions. Questions were divided among five classifications: antibacterial, antiviral, antifungal, antiparasitic, and vaccination/immunization. Classifications were further divided into categories (e.g., dosage, administration, emerging resistance, synergy, and spectrum of activity). Databases were ranked on their scope and completeness scores. ANOVA and chi-square tests were used to determine differences between individual databases and between subscription and free databases.
    Results: Scope scores revealed three discrete tiers of database performance: Tier 1 (82-77%), Tier 2 (73-65%), and Tier 3 (56-41%), which were significantly different from each other (p < 0.05). The top-tier performers, MM (82%), MDR (81%), LC-AHFS (81%), AHFS (78%), and CP (77%), answered significantly more questions than the other databases (p < 0.05). The top databases for completeness were MM (97%), DM (96%), IDI (95%), and MDR (95%). Subscription databases performed better than free databases in all categories (p = 0.03). Across databases there were 37 erroneous answers, an overall error rate of 1.8%.
    Conclusion: Drug information databases used in ID practice as CDSTs can be valuable resources. MM, MDR, LC-AHFS, AHFS, and CP were superior in their scope and completeness of information, and MM, AHFS, and MDR provided no erroneous answers. There is room for improvement in all evaluated databases.
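
    As a rough illustration of the scope and completeness metrics described above, the Python sketch below computes the percentage of questions answered and a completeness percentage from per-question ratings on the 3-point scale. The data layout and the choice of denominator for completeness are assumptions made for illustration, not the authors' scoring procedure.

    # Illustrative scoring sketch; data structures are assumptions.
    from typing import List, Optional

    def scope_score(answers: List[Optional[int]]) -> float:
        """Scope: percentage of questions for which the database had any answer.

        `answers` holds one entry per question: None when no answer was
        present, otherwise a rating on the 3-point completeness scale (1-3).
        """
        answered = [a for a in answers if a is not None]
        return 100.0 * len(answered) / len(answers)

    def completeness_score(answers: List[Optional[int]], max_points: int = 3) -> float:
        """Completeness: points earned as a percentage of the maximum,
        computed over the questions the database actually answered."""
        answered = [a for a in answers if a is not None]
        if not answered:
            return 0.0
        return 100.0 * sum(answered) / (max_points * len(answered))

    # Ranking databases by scope, e.g.:
    # ranking = sorted(scores, key=lambda db: scope_score(scores[db]), reverse=True)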

    Development of a validation algorithm for 'present on admission' flagging

    Background: The use of routine hospital data for understanding patterns of adverse outcomes has been limited in the past by the fact that pre-existing and post-admission conditions have been indistinguishable. The use of a 'Present on Admission' (POA) indicator to distinguish pre-existing or co-morbid conditions from those arising during the episode of care has been advocated in the US for many years as a tool to support quality assurance activities and improve the accuracy of risk adjustment methodologies. The USA, Australia and Canada now all assign a flag to indicate the timing of onset of diagnoses. For quality improvement purposes, it is the 'not-POA' diagnoses (that is, those acquired in hospital) that are of interest.
    Methods: Our objective was to develop an algorithm for assessing the validity of assignment of 'not-POA' flags. We undertook expert review of the International Classification of Diseases, 10th Revision, Australian Modification (ICD-10-AM) to identify conditions that could not plausibly be hospital-acquired. The resulting computer algorithm was tested against all diagnoses flagged as complications in the Victorian (Australia) Admitted Episodes Dataset, 2005/06. Measures reported include rates of appropriate assignment of the new Australian 'Condition Onset' flag by ICD chapter, and patterns of invalid flagging.
    Results: Of 18,418 diagnosis codes reviewed, 93.4% (n = 17,195) reflected agreement on flagging status by at least 2 of 3 reviewers (including 64.4% unanimous agreement; Fleiss' kappa: 0.61). In tests of the new algorithm, 96.14% of all hospital-acquired diagnosis codes flagged were found to be valid in the Victorian records analysed. A lower proportion of individual codes was judged to be acceptably flagged (76.2%), but this reflected a high proportion of codes use
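
    The validation algorithm reduces, in essence, to screening each diagnosis flagged as hospital-acquired against an expert-reviewed list of ICD-10-AM codes that could not plausibly arise during an admission. The Python sketch below shows that core check; the code set and example codes are placeholders for illustration, not the reviewed list from the study.

    # Minimal sketch of the core validity check; the code set below is a
    # placeholder for the expert-reviewed list, not the actual list.
    from typing import Dict, Iterable, List, Set

    # Placeholder codes (e.g. congenital conditions) standing in for the
    # expert-reviewed set of conditions implausible as hospital-acquired.
    NOT_PLAUSIBLY_HOSPITAL_ACQUIRED: Set[str] = {"Q21.0", "Q90.0"}

    def validate_condition_onset_flags(flagged_codes: Iterable[str]) -> Dict[str, List[str]]:
        """Split diagnoses flagged as hospital-acquired ('not-POA') into
        those validly flagged and those failing the plausibility screen."""
        results: Dict[str, List[str]] = {"valid": [], "invalid": []}
        for code in flagged_codes:
            key = "invalid" if code in NOT_PLAUSIBLY_HOSPITAL_ACQUIRED else "valid"
            results[key].append(code)
        return results

    # Example:
    # validate_condition_onset_flags(["T81.4", "Q90.0"])
    # -> {'valid': ['T81.4'], 'invalid': ['Q90.0']}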