
    Cost-Effectiveness of Alternative Blood-Screening Strategies for West Nile Virus in the United States

    BACKGROUND: West Nile virus (WNV) is endemic in the US, varying seasonally and by geographic region. WNV can be transmitted by blood transfusion, and mandatory screening of blood for WNV was recently introduced throughout the US. Guidelines for selecting cost-effective strategies for screening blood for WNV do not exist. METHODS AND FINDINGS: We conducted a cost-effectiveness analysis for screening blood for WNV using a computer-based mathematical model, and using data from prospective studies, retrospective studies, and published literature. For three geographic areas with varying WNV-transmission intensity and length of transmission season, the model was used to estimate lifetime costs, quality-adjusted life expectancy, and incremental cost-effectiveness ratios associated with alternative screening strategies in a target population of blood-transfusion recipients. We compared the status quo (baseline screening using a donor questionnaire) to several strategies which differed by nucleic acid testing of either pooled or individual samples, universal versus targeted screening of donations designated for immunocompromised patients, and seasonal versus year-long screening. In low-transmission areas with short WNV seasons, screening by questionnaire alone was the most cost-effective strategy. In areas with high levels of WNV transmission, seasonal screening of individual samples and restricting screening to blood donations designated for immunocompromised recipients was the most cost-effective strategy. Seasonal screening of the entire recipient pool added minimal clinical benefit, with incremental cost-effectiveness ratios exceeding US$1.7 million per quality-adjusted life-year gained. Year-round screening offered no additional benefit compared to seasonal screening in any of the transmission settings. 
CONCLUSIONS: In areas with high levels of WNV transmission, seasonal screening of individual samples and restricting screening to blood donations designated for immunocompromised recipients is cost saving. In areas with low levels of infection, a status-quo strategy using a standard questionnaire is cost-effective.
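The comparison above rests on the incremental cost-effectiveness ratio (ICER): the extra lifetime cost of a strategy divided by the extra quality-adjusted life expectancy it buys, relative to the next-best strategy. A minimal sketch with hypothetical numbers (the function name and inputs are illustrative, not the study's model):

```python
def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained,
    comparing a new strategy against a baseline strategy."""
    delta_cost = cost_new - cost_base
    delta_qaly = qaly_new - qaly_base
    if delta_qaly == 0:
        raise ValueError("equal effectiveness; ICER is undefined")
    return delta_cost / delta_qaly

# Hypothetical numbers (not from the study): a strategy costing $20,000 more
# per recipient and gaining 0.1 QALYs yields an ICER of $200,000 per QALY.
example = icer(120_000, 100_000, 10.1, 10.0)
```

A negative numerator with a positive denominator (cheaper and more effective) is what the abstract calls "cost saving": the new strategy dominates the baseline and no ratio threshold is needed.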

    Excess mortality in US Veterans during the COVID-19 pandemic: an individual-level cohort study.

BACKGROUND: Most analyses of excess mortality during the COVID-19 pandemic have employed aggregate data. Individual-level data from the largest integrated healthcare system in the US may enhance understanding of excess mortality. METHODS: We performed an observational cohort study following patients receiving care from the Department of Veterans Affairs (VA) between 1 March 2018 and 28 February 2022. We estimated excess mortality on an absolute scale (i.e. excess mortality rates, number of excess deaths) and a relative scale by measuring the hazard ratio (HR) for mortality comparing pandemic and pre-pandemic periods, overall and within demographic and clinical subgroups. Comorbidity burden and frailty were measured using the Charlson Comorbidity Index and Veterans Aging Cohort Study Index, respectively. RESULTS: Of 5 905 747 patients, the median age was 65.8 years and 91% were men. Overall, the excess mortality rate was 10.0 deaths/1000 person-years (PY), with a total of 103 164 excess deaths and a pandemic HR of 1.25 (95% CI 1.25-1.26). Excess mortality rates were highest among the most frail patients (52.0/1000 PY) and those with the highest comorbidity burden (16.3/1000 PY). However, the largest relative mortality increases were observed among the least frail (HR 1.31, 95% CI 1.30-1.32) and those with the lowest comorbidity burden (HR 1.44, 95% CI 1.43-1.46). CONCLUSIONS: Individual-level data offered crucial clinical and operational insights into US excess mortality patterns during the COVID-19 pandemic. Notable differences emerged among clinical risk groups, emphasizing the need for reporting excess mortality in both absolute and relative terms to inform resource allocation in future outbreaks.
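The absolute-scale measure described in this abstract (excess deaths per 1000 person-years) can be sketched as follows. The function and the observed/expected split are illustrative; only the 103 164 excess deaths and the 10.0/1000 PY rate come from the abstract, from which the person-year total is back-derived:

```python
def excess_mortality_rate(observed_deaths, expected_deaths, person_years, per=1000):
    """Excess deaths per `per` person-years (absolute scale):
    observed minus expected deaths, scaled to the follow-up time."""
    return per * (observed_deaths - expected_deaths) / person_years

# Illustrative split of observed vs. expected deaths; the difference (103,164
# excess deaths) over ~10.32 million person-years reproduces the abstract's
# 10.0 deaths/1000 PY.
rate = excess_mortality_rate(300_000, 196_836, 10_316_400)
```

The abstract's contrast between absolute and relative scales follows directly: frail patients have a high baseline death rate, so a modest relative increase (HR) still produces many excess deaths per 1000 PY, while the least frail show the reverse pattern.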

    Angiogenesis inhibitor therapies for advanced renal cell carcinoma: Toxicity and treatment patterns in clinical practice from a global medical chart review

The aim of this study was to assess the treatment patterns and safety of sunitinib, sorafenib and bevacizumab in real-world clinical settings in the US, Europe and Asia. Medical records were abstracted at 18 community oncology clinics in the US and at 21 tertiary oncology centers in the US, Europe and Asia for 883 patients ≥18 years who had a histologically/cytologically confirmed diagnosis of advanced RCC and received sunitinib (n=631), sorafenib (n=207) or bevacizumab (n=45) as first-line treatment. No prior treatment was permitted. Data were collected on all adverse events (AEs) and treatment modifications, including discontinuation, interruption and dose reduction. Treatment duration was estimated using Kaplan-Meier analysis. Demographics were similar across treatment groups and regions. Median treatment duration ranged from 6.1 to 10.7 months for sunitinib, 5.1 to 8.5 months for sorafenib and 7.5 to 9.8 months for bevacizumab patients. Grade 3/4 AEs were experienced by 26.0%, 28.0% and 15.6% of sunitinib, sorafenib and bevacizumab patients, respectively. Treatment discontinuations occurred in 62.4% (Asia) to 63.1% (US) of sunitinib, 68.8% (Asia) to 90.0% (Europe) of sorafenib, and 66.7% (Asia) to 81.8% (US) of bevacizumab patients. Globally, treatment modifications due to AEs occurred in 55.1%, 54.2% and 50.0% of sunitinib, sorafenib and bevacizumab patients, respectively. This study in a large, global cohort of advanced RCC patients found that angiogenesis inhibitors are associated with high rates of AEs and treatment modifications. Findings suggest an unmet need for more tolerable agents for RCC treatment.

    Public Health Impact of Complete and Incomplete Rotavirus Vaccination among Commercially and Medicaid Insured Children in the United States.

BACKGROUND:This study (NCT01682005) aims to assess clinical and cost impacts of complete and incomplete rotavirus (RV) vaccination. METHODS:Beneficiaries who continuously received medical and pharmacy benefits since birth were identified separately in Truven Commercial Claims and Encounters (2000-2011) and Truven Medicaid Claims (2002-2010) and observed until the end of insurance eligibility or five years of age, whichever came first. Infants with ≥1 RV vaccine within the vaccination window (6 weeks-8 months) were divided into completely and incompletely vaccinated cohorts. Historically unvaccinated (before 2007) and contemporarily unvaccinated (2007 and after) cohorts included children without RV vaccine. Claims with International Classification of Disease 9th edition (ICD-9) codes for diarrhea and RV were identified. First RV episode incidence, RV-related and diarrhea-related healthcare resource utilization after 8 months old were calculated and compared across groups. Poisson regressions were used to generate incidence rates with 95% confidence intervals (CIs). Mean total, inpatient, outpatient and emergency room costs for first RV and diarrhea episodes were calculated; bootstrapping was used to construct 95% CIs to evaluate cost differences. RESULTS:1,069,485 Commercial and 515,557 Medicaid patients met inclusion criteria. Among commercially insured, RV incidence per 10,000 person-years was 3.3 (95% CI 2.8-3.9) for completely, 4.0 (95% CI 3.3-5.0) for incompletely vaccinated, and 20.9 (95% CI 19.5-22.4) for contemporarily and 40.3 (95% CI 38.6-42.1) for historically unvaccinated. Rates in Medicaid were 7.5 (95% CI 4.8-11.8) for completely, 9.0 (95% CI 6.5-12.3) for incompletely vaccinated, and 14.6 (95% CI 12.8-16.7) for contemporarily and 52.0 (95% CI 50.2-53.8) for historically unvaccinated. Mean cost for first RV episode per cohort member was $15.33 (95% CI $12.99-$18.03) and $4.26 (95% CI $2.34-$6.35) lower for completely vaccinated versus contemporarily unvaccinated in Commercial and Medicaid, respectively. CONCLUSIONS:RV vaccination results in a significant reduction in RV infection. There is evidence of indirect benefit to unvaccinated individuals.
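The incidence figures quoted above are rates per 10,000 person-years with 95% CIs. A crude version of that calculation can be sketched as below; this is a simplification (a Wald interval on the log scale), not the Poisson regression the study used, and the example counts are hypothetical:

```python
import math

def poisson_rate_ci(events, person_years, per=10_000, z=1.96):
    """Crude incidence rate per `per` person-years with a
    normal-approximation 95% CI on the log scale (requires events > 0)."""
    rate = events / person_years
    se_log = 1.0 / math.sqrt(events)  # SE of log(rate) for a Poisson count
    lower = rate * math.exp(-z * se_log)
    upper = rate * math.exp(z * se_log)
    return per * rate, per * lower, per * upper

# Hypothetical example: 100 first RV episodes over 1,000,000 person-years.
rate, lo, hi = poisson_rate_ci(100, 1_000_000)
```

Because the CI is computed on the log scale and back-transformed, its bounds are asymmetric around the rate and never fall below zero, matching the shape of the intervals reported in the abstract.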

    Potential Donor Time for an Infected Individual

<p>A potential blood donor who develops WNV symptoms is viremic and eligible to donate for a longer period of time as the latent period decreases and the incubation period increases. A potential blood donor who recovers from symptoms in a short amount of time may be eligible to donate before viremia ends.</p>

    Post-Transfusion Health States

<p>Five post-transfusion health states were identified. Following transfusion, individuals entered the uninfected or asymptomatic infection state, febrile-illness state, or NI state and progressed to other health states in the direction of the arrows. Individuals were followed until death.</p>

    Hypertension Control During the Coronavirus Disease 2019 Pandemic: A Cohort Study Among U.S. Veterans

DESIGN: Retrospective cohort study. OBJECTIVE: We sought to examine whether disruptions in follow-up intervals contributed to hypertension control. BACKGROUND: Disruptions in health care were widespread during the coronavirus disease 2019 pandemic. PATIENTS AND METHODS: We identified a cohort of individuals with hypertension in both prepandemic (March 2019-February 2020) and pandemic periods (March 2020-February 2022) in the Veterans Health Administration. First, we calculated follow-up intervals between the last prepandemic and first pandemic blood pressure measurement during a primary care clinic visit, and between measurements in the prepandemic period. Next, we estimated the association between the maintenance of (or achieving) hypertension control and the period using generalized estimating equations. We assessed associations between follow-up interval and control separately for the two periods. Finally, we evaluated the interaction between period and follow-up length. RESULTS: A total of 1,648,424 individuals met the study inclusion criteria. Among individuals with controlled hypertension, the likelihood of maintaining control was lower during the pandemic versus the prepandemic period (relative risk: 0.93; 95% CI: 0.93, 0.93). Longer follow-up intervals were associated with a decreasing likelihood of maintaining controlled hypertension in both periods. Accounting for follow-up intervals, the likelihood of maintaining control was 2% lower during the pandemic versus the prepandemic period. For uncontrolled hypertension, the likelihood of gaining control was modestly higher during the pandemic versus the prepandemic period (relative risk: 1.01; 95% CI: 1.01, 1.01). The likelihood of gaining control decreased with follow-up length during the prepandemic but not the pandemic period. CONCLUSIONS: During the pandemic, longer follow-up between measurements contributed to the lower likelihood of maintaining control. Those with uncontrolled hypertension were modestly more likely to gain control during the pandemic.