
    Trends in the Management of Headache Disorders in US Emergency Departments: Analysis of 2007–2018 National Hospital Ambulatory Medical Care Survey Data

    We examined trends in the management of headache disorders in United States (US) emergency department (ED) visits. We conducted a cross-sectional study using 2007–2018 National Hospital Ambulatory Medical Care Survey data. We included adult patient visits (≥18 years) with a primary ED discharge diagnosis of headache. We classified headache medications by pharmacological group: opioids, butalbital, ergot alkaloids/triptans, acetaminophen/nonsteroidal anti-inflammatory drugs (NSAIDs), antiemetics, diphenhydramine, corticosteroids, and intravenous fluids. To obtain reliable estimates, we aggregated data into three time periods: 2007–2010, 2011–2014, and 2015–2018. Using multivariable logistic regression, we examined trends in medication use, neuroimaging, and outpatient referral separately. Among headache-related ED visits, opioid use decreased from 54.1% in 2007–2010 to 28.3% in 2015–2018 (Ptrend < 0.001). There were statistically significant increasing trends in acetaminophen/NSAID, diphenhydramine, and corticosteroid use (all Ptrend < 0.001). Changes over time in the use of butalbital (6.4%), ergot alkaloids/triptans (4.7%), antiemetics (59.2% in 2015–2018), and neuroimaging (37.3%) were not statistically significant. Headache-related ED visits with an outpatient referral for follow-up increased slightly, from 73.3% in 2007–2010 to 79.7% in 2015–2018 (Ptrend = 0.02). Reflecting evidence-based guideline recommendations for headache management, opioid use decreased substantially from 2007 to 2018 among US headache-related ED visits. Future studies are warranted to identify strategies that promote evidence-based treatment for headaches (e.g., sumatriptan, dexamethasone) and appropriate outpatient referral, and that reduce unnecessary neuroimaging orders in EDs.
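    As a rough illustration of the trend analysis described above, the sketch below fits a logistic regression with an ordinal time-period term to hypothetical visit-level data; Python with pandas/statsmodels is assumed, and all column names (`opioid`, `period`, `svy_weight`) and values are invented, not the study's variables. A full NHAMCS analysis would additionally use the survey's strata and PSU design variables.

```python
# Hedged sketch: logistic-regression trend test across aggregated time periods,
# approximating the survey design with frequency weights.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per headache-related ED visit.
visits = pd.DataFrame({
    "opioid":     [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
    "period":     [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],  # 0: 2007-10, 1: 2011-14, 2: 2015-18
    "age":        [34, 51, 28, 45, 39, 62, 30, 55, 41, 36, 48, 29],
    "svy_weight": [1200, 950, 1100, 870, 990, 1300, 1010, 920, 1150, 980, 890, 1040],
})

# Treating `period` as an ordinal term yields a Wald test for linear trend (P-trend).
model = smf.glm(
    "opioid ~ period + age",
    data=visits,
    family=sm.families.Binomial(),
    freq_weights=visits["svy_weight"],
).fit()
print(model.summary())  # the coefficient on `period` carries the trend test
```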

    Risk of heart failure hospitalization among users of dipeptidyl peptidase-4 inhibitors compared to glucagon-like peptide-1 receptor agonists

    Abstract Background Incretin-based therapies, including dipeptidyl peptidase-4 (DPP-4) inhibitors and glucagon-like peptide-1 (GLP-1) receptor agonists, are novel medications for type 2 diabetes management. Several studies have found cardioprotective effects of incretin-based therapies; however, it remains unclear whether there is any difference in heart failure (HF) risk between the two classes. We aimed to assess the risk of hospitalization due to HF with the use of DPP-4 inhibitors compared to GLP-1 receptor agonists. Methods Using Truven Health MarketScan data, we conducted a retrospective cohort study of patients with type 2 diabetes who were newly initiated on DPP-4 inhibitors or GLP-1 agonists. Follow-up continued from drug initiation until the first occurrence of: HF hospitalization (primary outcome), discontinuation of therapy (i.e., no fill for 7 days), switch to the comparator, end of enrollment, or end of study (December 2013). Cox proportional hazards models with propensity-score matching were used to compare the risk of HF hospitalization between DPP-4 inhibitors and GLP-1 agonists. Results A total of 321,606 propensity-score-matched patients were included in the analysis (n = 160,803 for DPP-4 inhibitors; n = 160,803 for GLP-1 agonists). After adjusting for baseline characteristics and disease risk factors, the use of DPP-4 inhibitors was associated with a 14% decreased risk of HF hospitalization compared to GLP-1 agonist use [hazard ratio (HR), 0.86; 95% confidence interval (CI) 0.83, 0.90]. The results were consistent in patients without baseline HF (HR, 0.85; 95% CI 0.82, 0.89), but the association was not statistically significant in patients with baseline HF (HR, 0.90; 95% CI 0.74, 1.07). Conclusion In this retrospective matched cohort of patients with type 2 diabetes, the use of DPP-4 inhibitors was associated with a reduced risk of HF hospitalization compared to GLP-1 agonists. However, the association was not statistically significant in patients who had HF prior to DPP-4 inhibitor use.
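    A minimal sketch of the general design described above, assuming Python with scikit-learn and lifelines: a propensity score is estimated by logistic regression, treated patients are greedily matched 1:1 (with replacement, for simplicity) to comparators, and a Cox model is fit on the matched cohort. The toy data and column names are invented, not the study's MarketScan variables.

```python
# Hedged sketch: 1:1 propensity-score matching followed by a Cox PH model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dpp4": rng.integers(0, 2, n),          # 1 = DPP-4 inhibitor, 0 = GLP-1 agonist
    "age": rng.normal(58, 10, n),
    "baseline_hf": rng.integers(0, 2, n),
    "followup_days": rng.exponential(400, n),
    "hf_hosp": rng.integers(0, 2, n),       # HF hospitalization event indicator
})

# 1) Propensity score: modeled probability of receiving a DPP-4 inhibitor.
covars = ["age", "baseline_hf"]
df["ps"] = LogisticRegression().fit(df[covars], df["dpp4"]).predict_proba(df[covars])[:, 1]

# 2) Greedy 1:1 nearest-neighbor match on the propensity score (with replacement).
treated, control = df[df["dpp4"] == 1], df[df["dpp4"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# 3) Cox proportional hazards model on the matched cohort.
cph = CoxPHFitter()
cph.fit(matched[["followup_days", "hf_hosp", "dpp4"] + covars],
        duration_col="followup_days", event_col="hf_hosp")
cph.print_summary()  # the HR on `dpp4` is the quantity of interest
```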

    Risk of Obstructive Sleep Apnea in Adults with Resistant Hypertension

    The risk of obstructive sleep apnea (OSA) in patients with resistant hypertension (RH) has not been well quantified. We sought to evaluate the risk of OSA in patients with RH compared to those with treated but non-resistant hypertension (non-RH) using a time-dependent exposure analysis. We conducted a retrospective cohort study of patients with treated hypertension (hypertension diagnosis + ≥2 antihypertensive drug claims within 1 year) using the IBM MarketScan® commercial claims database from January 2008 to December 2019. We excluded patients without 12 months of continuous enrollment before the second antihypertensive fill date (the index date of cohort entry) and those with the outcome (OSA) in the 12-month pre-index period. We employed Cox proportional hazards regression with OSA as the dependent variable, and time-dependent exposure (non-RH vs. RH) and baseline covariates as independent variables. Of the 1,375,055 patients with treated hypertension, 13,584 were categorized as exposed to RH. In the multivariable Cox proportional hazards model, RH exposure was associated with a 60% increased risk of OSA (adjusted hazard ratio (aHR): 1.60; 95% CI, 1.52–1.68) compared to non-RH exposure. These findings suggest that RH exposure, compared to non-RH exposure, is associated with a higher risk of incident OSA.
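    The sketch below illustrates a Cox model with a time-dependent exposure flag, in the spirit of the analysis above, using lifelines' CoxTimeVaryingFitter; the long-format rows, column names, and values are invented for illustration, and a real analysis would add the baseline covariates.

```python
# Hedged sketch: time-dependent exposure Cox regression in long (start-stop) format.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per patient-interval; `rh` switches from 0 to 1 when a patient
# first meets the resistant-hypertension definition.
long_df = pd.DataFrame({
    "id":    [1, 1, 2, 3, 3, 4],
    "start": [0, 180, 0, 0, 365, 0],
    "stop":  [180, 400, 500, 365, 700, 600],
    "rh":    [0, 1, 0, 0, 1, 0],   # time-dependent exposure
    "osa":   [0, 1, 1, 0, 0, 0],   # OSA diagnosed at the end of the interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="osa",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) on `rh` plays the role of the reported aHR
```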

    Trends in anemia care in non-dialysis-dependent chronic kidney disease (CKD) patients in the United States (2006–2015)

    Abstract Background The objective of this study was to examine overall anemia management trends in non-dialysis patients with chronic kidney disease (CKD) from 2006 to 2015, and to evaluate the impact of the Trial to Reduce Cardiovascular Events with Aranesp Therapy (TREAT) results (October 2009) and the US Food and Drug Administration (FDA) safety warnings and guidelines (June 2011) on the use of erythropoiesis-stimulating agent (ESA) therapy in the treatment of anemia. Methods A retrospective cohort analysis of anemia management in CKD patients was conducted using the Truven MarketScan Commercial and Medicare Supplemental databases. Monthly rates and types of anemia treatment in the post-TREAT and post-FDA-warning periods were compared to the pre-TREAT period. Anemia management included ESAs, intravenous iron, and blood transfusion. A time-series analysis using an autoregressive integrated moving average (ARIMA) model and a generalized estimating equation (GEE) model were used. Results Between 2006 and 2015, CKD patients became less likely to be treated with ESAs and more likely to receive intravenous iron supplementation and blood transfusions. The adjusted probabilities of prescribing ESAs were 31% (odds ratio (OR) = 0.69, 95% confidence interval (CI): 0.67–0.71) and 59% (OR = 0.41, 95% CI: 0.40–0.42) lower in the post-TREAT and post-FDA-warning periods, respectively, compared to the pre-TREAT period. The probability of prescribing intravenous iron increased in the post-FDA-warning period (OR = 1.11, 95% CI: 1.03–1.19), although the increase was not statistically significant in the post-TREAT period (OR = 1.03, 95% CI: 0.94–1.12). The probabilities of prescribing blood transfusion during the post-TREAT and post-FDA-warning periods increased by 14% (OR = 1.14, 95% CI: 1.06–1.23) and 31% (OR = 1.31, 95% CI: 1.22–1.39), respectively. Similar trends in ESA and iron supplementation prescribing were observed in commercially insured CKD patients, but the use of blood transfusions did not increase. Conclusions After the 2011 FDA safety warnings, the use of ESAs continued to decrease while the use of iron supplementation continued to increase. The use of blood transfusions increased significantly in Medicare patients while remaining stable in commercially insured patients. The results suggest that the TREAT publication had affected anemia treatment before the FDA warning, and that the FDA warning solidified TREAT's recommendations for anemia treatment in non-dialysis-dependent CKD patients.
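    As an illustration of the interrupted time-series approach described above, the sketch below fits an ARIMA model with step indicators for the post-TREAT (October 2009) and post-FDA-warning (June 2011) periods; the monthly rates are simulated, not study data, and the ARIMA order is an arbitrary choice for the example.

```python
# Hedged sketch: interrupted time series via ARIMA with intervention dummies.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

months = pd.date_range("2006-01", "2015-12", freq="MS")
rng = np.random.default_rng(2)
# Simulated monthly ESA treatment rate with a gentle downward drift plus noise.
esa_rate = 30 - 0.05 * np.arange(len(months)) + rng.normal(0, 1, len(months))

# Step indicators turn on permanently at each intervention date.
exog = pd.DataFrame({
    "post_treat": (months >= "2009-10").astype(int),
    "post_fda":   (months >= "2011-06").astype(int),
}, index=months)

series = pd.Series(esa_rate, index=months)
fit = ARIMA(series, exog=exog, order=(1, 0, 0)).fit()
print(fit.summary())  # coefficients on the step terms estimate level shifts
```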

    The value of cure associated with treating treatment-naïve chronic hepatitis C genotype 1: Are the new all-oral regimens good value to society?

    BACKGROUND & AIMS: All-oral regimens are associated with high cure rates in hepatitis C virus-genotype 1 (HCV-GT1) patients. Our aim was to assess the value of cure to the society for treating HCV infection. METHODS: Markov model for HCV-GT1 projected long-term health outcomes, life years, and quality-adjusted life years (QALYs) gained. The model compared second-generation triple (sofosbuvir+pegylated interferon+ribavirin [PR] and simeprevir+PR) and all-oral (ledipasvir/sofosbuvir and ombitasvir+paritaprevir/ritonavir+dasabuvir±ribavirin) therapies with no treatment. Sustained virological response rates were based on Phase III RCTs. We assumed that 80% and 95% of HCV-GT1 patients were eligible for second-generation triple and all-oral regimens. Transition probabilities, utility and mortality were based on literature review. The value of cure was calculated by the difference in the savings from the economic gains associated with additional QALYs. RESULTS: Model estimated 1.52 million treatment-naïve HCV-GT1 patients in the US. Treating all eligible HCV-GT1 patients with second-generation triple and all-oral therapies resulted in 3.2 million and 4.8 million additional QALYs gained compared to no treatment respectively. Using 50,000asvalueofQALY,theseregimensleadtosavingsof50,000 as value of QALY, these regimens lead to savings of 185 billion and 299billion;costsoftheseregimenswere299 billion; costs of these regimens were 109 billion and 128billion.Thevalueofcurewithsecond−generationtripleandall−oralregimenswas128 billion. The value of cure with second-generation triple and all-oral regimens was 55 billion and $111 billion, when we conservatively assumed only drug costs. Cost savings were greater for HCV-GT1 patient cured with cirrhosis compared to patients without cirrhosis. CONCLUSIONS: The recent evolution of regimens for HCV GT1 has increased efficacy and value of cure

    Trends in Recommended Screening and Monitoring Tests for Users of HIV Pre-Exposure Prophylaxis Before and During the COVID-19 Pandemic

    Introduction: To ensure the health and safety of persons taking pre-exposure prophylaxis to prevent HIV infection, the 2017 Centers for Disease Control and Prevention guidelines recommended initial and follow-up laboratory testing. We assessed trends in adherence to the recommended laboratory testing among pre-exposure prophylaxis users from 2016 to 2020, identified factors associated with HIV testing in this population, and examined rate changes during the COVID-19 pandemic in 2020. Methods: We conducted a retrospective cohort study assessing the rates and trends of recommended laboratory testing among commercially insured pre-exposure prophylaxis users from 2016 to 2020, using the MarketScan database. We examined the proportion of pre-exposure prophylaxis users adhering to the following initial and follow-up laboratory testing: (1) HIV, creatinine clearance, hepatitis B virus, hepatitis C virus, and sexually transmitted infection (chlamydia/gonorrhea and syphilis) tests within 7 days before pre-exposure prophylaxis initiation; (2) HIV testing 90 days after initiation; and (3) HIV, creatinine clearance, and sexually transmitted infection testing 180 days after initiation. We used general linear models to examine trends and multivariable logistic regression to identify predictors of ≥1 HIV test within 180 days after index pre-exposure prophylaxis. Results: We identified 19,581 new pre-exposure prophylaxis users. Most were male (96%) and aged 18–34 years (55%). Adherence rates to recommended testing increased from 2016 through 2019 (e.g., 9.0%–13.6% for all initial screening tests within 7 days before initiation, 42.1%–44.6% for HIV testing 90 days after initiation, and 33.8%–40.6% for all follow-up tests within 180 days after initiation), but all rates decreased during the COVID-19 pandemic (to 12.4%, 33.6%, and 31.6%, respectively). Younger age was associated with a significantly lower likelihood of receiving an HIV test within 180 days after initiation than ages 35–44 years (ages 13–17 years: AOR=0.44, 95% CI=0.28, 0.71; ages 18–34 years: AOR=0.80, 95% CI=0.74, 0.86), and female sex was associated with a significantly lower likelihood than male sex (AOR=0.64, 95% CI=0.55, 0.74). Pre-exposure prophylaxis users with a history of sexually transmitted infections had a higher likelihood of being tested than those without (AOR=1.27, 95% CI=1.16, 1.40). Conclusions: Initial screening and follow-up testing rates were lower than those recommended by the Centers for Disease Control and Prevention. Public health efforts are needed to ensure that patients have access to needed laboratory testing during pandemics or natural disasters and to educate patients and clinicians about the importance of screening and monitoring tests to ensure the safety and health of pre-exposure prophylaxis users.
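    The sketch below illustrates how adjusted odds ratios (AORs) of the kind reported above can be estimated with multivariable logistic regression, assuming Python with statsmodels; the variable names and simulated data are invented, not the MarketScan claims variables.

```python
# Hedged sketch: adjusted odds ratios from a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "hiv_test_180d": rng.integers(0, 2, n),  # >=1 HIV test within 180 days
    "age_group": rng.choice(["13-17", "18-34", "35-44", "45+"], n),
    "female": rng.integers(0, 2, n),
    "sti_history": rng.integers(0, 2, n),
})

# Ages 35-44 as the reference category, mirroring the comparison above.
model = smf.logit(
    "hiv_test_180d ~ C(age_group, Treatment(reference='35-44')) "
    "+ female + sti_history",
    data=df,
).fit()
aor = np.exp(model.params)        # adjusted odds ratios
ci = np.exp(model.conf_int())     # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```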