
    Harms in Systematic Reviews Paper 2: Methods used to assess harms are neglected in systematic reviews of gabapentin

    Objective: We compared methods used with current recommendations for synthesizing harms in systematic reviews and meta-analyses (SRMAs) of gabapentin. Study Design & Setting: We followed recommended systematic review practices. We selected reliable SRMAs of gabapentin (i.e., those meeting a pre-defined list of methodological criteria) that assessed at least one harm. We extracted and compared methods in four areas: pre-specification, searching, analysis, and reporting. Whereas our focus in this paper is on the methods used, Part 2 examines the results for harms across reviews. Results: We screened 4320 records and identified 157 SRMAs of gabapentin, 70 of which were reliable. Most reliable reviews (51/70; 73%) reported following a general guideline for SRMA conduct or reporting, but none reported following recommendations specifically for synthesizing harms. Across all domains assessed, review methods were designed to address questions of benefit and rarely included the additional methods that are recommended for evaluating harms. Conclusion: Approaches to assessing harms in the SRMAs we examined are tokenistic and unlikely to produce valid summaries of harms to guide decisions. A paradigm shift is needed. At a minimum, reviewers should describe any limitations to their assessment of harms and provide clearer descriptions of their methods for synthesizing harms.

    Harms in Systematic Reviews Paper 3: Given the same data sources, systematic reviews of gabapentin have different results for harms

    Objective: In this methodologic study (Part 2 of 2), we examined the overlap in sources of evidence and the corresponding results for harms in systematic reviews of gabapentin. Study Design & Setting: We extracted all citations referenced as sources of evidence for harms of gabapentin from 70 systematic reviews, along with the harms assessed and the numerical results. We assessed the consistency of harms between pairs of reviews with a high degree of overlap in sources of evidence (>50%), as determined by the corrected covered area (CCA). Results: We found 514 reports cited across the 70 included reviews. Most reports (244/514, 48%) were cited in only one review. Among 18 pairs of reviews, we found differences in which harms were assessed and in whether estimates were meta-analyzed or presented as descriptive summaries. When a specific harm was meta-analyzed in a pair of reviews, we found similar effect estimates. Conclusion: Differences in harms results across reviews can occur because the choice of harms is driven by reviewer preferences rather than by standardized approaches to selecting harms for assessment. A paradigm shift is needed in the current approach to synthesizing harms.
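    The overlap metric named in the abstract, the corrected covered area (CCA), can be sketched in a few lines. This is a minimal illustration based on the standard definition (Pieper et al., 2014), not code from the study itself; the citation matrix and its values below are hypothetical.

    ```python
    def corrected_covered_area(citation_matrix):
        """Corrected covered area (CCA) for a set of reviews.

        citation_matrix: one row per unique publication, one 0/1 column
        per review (1 = that review cites the publication).

        CCA = (N - r) / (r * c - r), where N is the total number of
        citations, r the number of unique publications (rows), and
        c the number of reviews (columns).
        """
        r = len(citation_matrix)           # unique publications
        c = len(citation_matrix[0])        # reviews
        n = sum(sum(row) for row in citation_matrix)
        return (n - r) / (r * c - r)

    # Hypothetical pair of reviews: 10 unique reports in total,
    # 6 cited by both, 3 only by the first, 1 only by the second.
    matrix = [[1, 1]] * 6 + [[1, 0]] * 3 + [[0, 1]] * 1
    print(corrected_covered_area(matrix))  # 0.6, i.e. >50% overlap
    ```

    For a pair of reviews (c = 2), the CCA reduces to the share of unique reports cited by both reviews, which is how the >50% threshold above would be applied.
    
    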

    Tracking health system performance in times of crisis using routine health data: lessons learned from a multicountry consortium

    COVID-19 has prompted the use of readily available administrative data to track health system performance in times of crisis and to monitor disruptions in essential healthcare services. In this commentary we describe our experience working with these data and lessons learned across countries. Since April 2020, the Quality Evidence for Health System Transformation (QuEST) network has used administrative data and routine health information systems (RHIS) to assess health system performance during COVID-19 in Chile, Ethiopia, Ghana, Haiti, Lao People's Democratic Republic, Mexico, Nepal, South Africa, Republic of Korea and Thailand. We compiled a large set of indicators related to common health conditions for the purpose of multicountry comparisons, 73 indicators in total. Of these, 43% pertained to reproductive, maternal, newborn and child health (RMNCH), while only 12% were related to hypertension, diabetes or cancer care. We also found few indicators related to mental health services and outcomes within these data systems. Moreover, 72% of the indicators compiled were related to the volume of services delivered, 18% to health outcomes and only 10% to the quality of processes of care. While several datasets were complete or near-complete censuses of all health facilities in the country, others excluded some facility types or population groups. In some countries, RHIS did not capture services delivered through non-visit or nonconventional care during COVID-19, such as telemedicine.
We propose the following recommendations to improve the analysis of administrative and RHIS data to track health system performance in times of crisis: ensure the scope of health conditions covered is aligned with the burden of disease; increase the number of indicators related to quality of care and health outcomes; incorporate data on nonconventional care such as telehealth; continue improving data quality and expand reporting from private sector facilities; move towards collecting patient-level data through electronic health records to facilitate quality-of-care assessment and equity analyses; implement more resilient and standardized health information technologies; reduce delays and loosen restrictions for researchers to access the data; complement routine data with patient-reported data; and employ mixed methods to better understand the underlying causes of service disruptions.