
    A fatal case of bupropion (Zyban) hepatotoxicity with autoimmune features: Case report

    Background: Bupropion is approved for the treatment of mood disorders and as an adjuvant medication for smoking cessation. It is generally well tolerated and considered safe. Two randomized controlled trials of bupropion therapy for smoking cessation did not report any hepatic adverse events; however, three cases of severe but non-fatal bupropion hepatotoxicity have been published in the literature. Case Presentation: We present the case of a 55-year-old man who presented with jaundice and severe hepatic injury approximately 6 months after starting bupropion for smoking cessation. Laboratory evaluation demonstrated a mixed picture of hepatocellular injury and cholestasis. Liver biopsy demonstrated findings consistent with severe drug-induced liver injury. Laboratory testing was also notable for positive autoimmune markers. The patient initially improved clinically with steroid therapy but eventually died of infectious complications. Conclusion: This report represents the first fatal case of bupropion-related hepatotoxicity and the second case of bupropion-related liver injury demonstrating autoimmune features. The common use of this medication for multiple indications makes it important for physicians to consider it as an etiologic agent in patients with otherwise unexplained hepatocellular jaundice.

    Crowdsourcing hypothesis tests: Making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
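    As an illustration of how effect sizes from independently designed materials for a single hypothesis might be aggregated, the sketch below applies a standard DerSimonian-Laird random-effects meta-analysis to a set of hypothetical per-design Cohen's d values spanning the d = -0.37 to +0.26 range quoted in the abstract. The specific d values, the per-arm sample size, and the choice of estimator are assumptions for illustration only; this is not the authors' data or analysis code.

```python
# Illustrative sketch (not the authors' analysis): pooling effect sizes from
# different team-designed materials for one hypothesis with a DerSimonian-Laird
# random-effects meta-analysis. All numbers below are hypothetical.
import numpy as np

def cohens_d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d for two independent groups."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def random_effects_meta(d, v):
    """DerSimonian-Laird pooled estimate, standard error, and tau^2."""
    w = 1.0 / v                               # inverse-variance (fixed-effect) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)        # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-design variance
    w_star = 1.0 / (v + tau2)                 # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes from different teams' materials for one hypothesis,
# chosen only to span the d = -0.37 to +0.26 range mentioned in the abstract.
d = np.array([-0.37, -0.12, 0.03, 0.18, 0.26])
n_per_arm = 500                               # assumed per-condition sample size
v = cohens_d_variance(d, n_per_arm, n_per_arm)

pooled, se, tau2 = random_effects_meta(d, v)
print(f"pooled d = {pooled:.3f} +/- {1.96*se:.3f} (95% CI), tau^2 = {tau2:.4f}")
```

    A nonzero tau^2 in such an analysis would indicate heterogeneity attributable to the study materials themselves rather than to sampling error alone, which is the kind of design-driven variability the paper examines.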

    Seven Q-Tracks Monitors of Laboratory Quality Drive General Performance Improvement: Experience from the College of American Pathologists Q-Tracks Program 1999-2011

    CONTEXT: Many production systems employ standardized statistical monitors that measure defect rates and cycle times as indices of performance quality. Clinical laboratory testing, a system that produces test results, is amenable to such monitoring. OBJECTIVE: To demonstrate patterns in clinical laboratory testing defect rates and cycle time using 7 College of American Pathologists Q-Tracks program monitors. DESIGN: Subscribers measured monthly rates of outpatient order-entry errors, identification band defects, and specimen rejections; median troponin order-to-report cycle times and rates of STAT test receipt-to-report turnaround time outliers; and rates of critical values reporting event defects and corrected reports. From these submissions Q-Tracks program staff produced quarterly and annual reports. These charted each subscriber's performance relative to other participating laboratories and aggregate and subgroup performance over time, dividing participants into best and median performers and performers with the most room to improve. Each monitor's patterns of change present percentile distributions of subscribers' performance in relation to monitoring durations and numbers of participating subscribers. Changes over time in defect frequencies and cycle duration quantify the effects of monitor participation on performance. RESULTS: All monitors showed significant decreases in defect rates as the 7 monitors ran variously for 6, 6, 7, 11, 12, 13, and 13 years. The most striking decreases occurred among performers who initially had the most room to improve and among subscribers who participated the longest. All 7 monitors registered significant improvement. Participation effects ranged from 0.85% to 5.1% improvement per quarter of participation. CONCLUSIONS: Using statistical quality measures, collecting data monthly, and receiving reports quarterly and yearly, subscribers to a comparative monitoring program documented significant decreases in defect rates and shortening of a cycle time over 6 to 13 years in all 7 ongoing clinical laboratory quality monitors.
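    The abstract quantifies participation effects as a percent improvement per quarter of participation. The sketch below shows one plausible way such a figure could be estimated, by fitting a log-linear trend to a subscriber's quarterly defect rates; the simulated data, the assumed 2% quarterly decline, and the regression approach are illustrative assumptions, not the Q-Tracks program's actual methodology.

```python
# Illustrative sketch only (the Q-Tracks program's actual models are not given
# in the abstract): estimating a "participation effect" as the percent change
# in a monitor's defect rate per quarter of participation via a log-linear fit.
import numpy as np

rng = np.random.default_rng(0)

# Simulated subscriber data: 24 quarters of participation with a defect rate
# declining roughly 2% per quarter from a 1.5% baseline, plus noise.
quarters = np.arange(24)
true_quarterly_change = -0.02                       # assumed, for illustration
defect_rate = 0.015 * np.exp(true_quarterly_change * quarters)
defect_rate *= np.exp(rng.normal(0.0, 0.05, size=quarters.size))

# Fit log(defect rate) = a + b * quarter; b approximates the per-quarter
# proportional change in the defect rate.
b, a = np.polyfit(quarters, np.log(defect_rate), 1)
pct_per_quarter = (np.exp(b) - 1.0) * 100

print(f"estimated change in defect rate: {pct_per_quarter:.2f}% per quarter of participation")
```

    Repeating such a fit per subscriber and per monitor, then summarizing across subscribers, would yield per-quarter improvement figures comparable in form to the 0.85% to 5.1% range reported above.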
