
    Changes in Real-world Dispensing of ADHD Stimulants in Youth from 2019 to 2021 in California

    Introduction: Attention-deficit/hyperactivity disorder (ADHD) is one of the most common pediatric neurobehavioral disorders in the U.S. Stimulants, classified as controlled substances, are commonly used for ADHD management. We analyzed real-world stimulant dispensing data to evaluate the pandemic’s impact on young patients (≤26 years) in California. Methods: Annual per-capita prevalence of patients on stimulants across California counties from 2019 to 2021 was analyzed and compared across years, sexes, and age groups. New patients initiating stimulant therapy were also examined. A case study was conducted to determine the impact of socioeconomic status on patient prevalence across income quintiles in Los Angeles County using patient zip codes. Logistic regression analysis in R was employed to determine demographic factors associated with concurrent use of stimulants and other controlled substances. Results: There was a notable reduction in the prevalence of patients ≤26 years old on stimulants per 100,000 people during and after the pandemic (777 in 2019; 743 in 2020; 751 in 2021). These decreases were most evident among the elementary-age and adolescent groups. Adolescents (12–17 years) were the most prevalent age group on stimulants irrespective of the pandemic. A significant rise in the number of female patients using stimulants was observed, increasing from 107,957 (35.2%) in 2019 to 121,241 (41.1%) in 2021. New patients initiating stimulants rose from 102,754 in 2020 to 106,660 in 2021, with 33.2% being young adults. In Los Angeles County, patient prevalence increased from the Q1 to the Q5 income quintile among patients ≥6 years; each year, the highest-income quintile consistently exhibited the highest per-capita prevalence.
Age was associated with higher risk of concurrent use of benzodiazepines (OR, 1.198 [95% CI, 1.195–1.201], p < 0.0001) and opioids (OR, 1.132 [95% CI, 1.130–1.134], p < 0.0001) with stimulants. Discussion: Our study provides real-world information on dispensing of ADHD stimulants to California youth from 2019 to 2021. The results underscore the importance of optimizing evidence-based ADHD management in pediatric patients and young adults to mitigate disparities in the use of stimulants.
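The logistic regression step described in this abstract can be sketched as follows. This is an illustrative reconstruction with simulated data and an assumed age coefficient (chosen so the odds ratio lands near the reported 1.198); the study itself ran its analysis in R on actual dispensing records.

```python
# Illustrative sketch of a logistic regression of concurrent benzodiazepine
# use on demographic factors (age, sex). Data and the "true" coefficients
# below are simulated assumptions, not the study's records.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
age = rng.integers(6, 27, n).astype(float)      # ages 6-26, as in the cohort
female = rng.integers(0, 2, n).astype(float)    # 1 = female
X = np.column_stack([np.ones(n), age, female])  # design matrix with intercept

true_beta = np.array([-6.0, 0.18, 0.10])        # assumed: OR_age = e^0.18
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)                          # concurrent benzodiazepine use

# Fit by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
    grad = X.T @ (y - mu)                       # score vector
    H = X.T @ (X * (mu * (1 - mu))[:, None])    # observed information
    beta += np.linalg.solve(H, grad)

odds_ratios = np.exp(beta)                      # per-unit odds ratios
print({"age_OR": round(odds_ratios[1], 3), "female_OR": round(odds_ratios[2], 3)})
```

With ~20,000 simulated patients the recovered age odds ratio sits near the assumed e^0.18 ≈ 1.20, the same order as the benzodiazepine result reported above.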

    A causal inference study: The impact of the combined administration of Donepezil and Memantine on decreasing hospital and emergency department visits of Alzheimer’s disease patients

    Alzheimer’s disease (AD) is the most common type of dementia, currently affecting over 6.5 million people in the U.S. There is no cure, and existing drug therapies attempt to delay mental decline and improve cognitive abilities. Two of the most commonly prescribed such drugs are Donepezil and Memantine. We formally tested and confirmed the presence of a beneficial drug-drug interaction of Donepezil and Memantine using a causal inference analysis. We applied doubly robust estimators to one of the largest high-quality medical databases to estimate the effect of these two commonly prescribed AD medications on the average number of hospital or emergency department visits per year among patients diagnosed with AD. Our results show that, compared with no medication, Memantine monotherapy, and Donepezil monotherapy, combined Donepezil and Memantine treatment significantly reduces the average number of hospital or emergency department visits per year by 0.078 (13.8%), 0.144 (25.5%), and 0.132 visits (23.4%), respectively. This decline in visits is consequently associated with a substantial reduction in medical costs. As of 2022, according to the Alzheimer’s Association, there were over 6.5 million individuals aged 65 and older living with AD in the U.S. alone. If patients who are currently on no drug treatment or using either Donepezil or Memantine alone were switched to combined Donepezil and Memantine therapy, the average number of hospital or emergency department visits could decrease by over 613 thousand visits per year. This, in turn, would lead to a remarkable reduction in medical expenses associated with hospitalization of AD patients in the U.S., totaling over 940 million dollars per year.
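The doubly robust estimation can be illustrated with a minimal augmented-inverse-probability-weighting (AIPW) sketch on simulated data. The single confounder, the linear outcome model, and the plugged-in true propensity score are all simplifying assumptions for illustration, not the study's pipeline; the assumed treatment effect is set to the -0.078 visits/year reported above.

```python
# Minimal AIPW (doubly robust) sketch: effect of combined Donepezil+Memantine
# (t = 1) vs. no medication (t = 0) on annual visit counts. All data are
# simulated; the assumed true effect is -0.078 visits/year.
import numpy as np

rng = np.random.default_rng(1)
n = 50000
x = rng.normal(size=n)                       # one confounder (e.g., severity)
ps = 1.0 / (1.0 + np.exp(-0.5 * x))          # true propensity of combo therapy
t = rng.binomial(1, ps)
visits = 0.6 + 0.3 * x - 0.078 * t + rng.normal(0, 0.2, n)

def arm_prediction(mask):
    # OLS outcome model fitted within one treatment arm, predicted for all
    A = np.column_stack([np.ones(mask.sum()), x[mask]])
    b = np.linalg.lstsq(A, visits[mask], rcond=None)[0]
    return b[0] + b[1] * x

m1, m0 = arm_prediction(t == 1), arm_prediction(t == 0)

# AIPW pseudo-outcomes; the true propensity is plugged in for brevity. In
# practice both models are estimated, and the estimator stays consistent if
# either the outcome model or the propensity model is correct.
aipw1 = m1 + t * (visits - m1) / ps
aipw0 = m0 + (1 - t) * (visits - m0) / (1 - ps)
ate = (aipw1 - aipw0).mean()
print(round(ate, 3))
```

Scaling a per-patient reduction of this size across millions of untreated or monotherapy patients is what produces the aggregate visit and cost figures quoted above.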

    The Potential for Enhancing the Power of Genetic Association Studies in African Americans through the Reuse of Existing Genotype Data

    We consider the feasibility of reusing existing control data obtained in genetic association studies in order to reduce costs for new studies. We discuss controlling for the population differences between cases and controls that are implicit in studies utilizing external control data. We give theoretical calculations of the statistical power of a test due to Bourgain et al. (Am J Hum Genet, 2003), applied to the problem of dealing with case-control differences in genetic ancestry related to population isolation or population admixture. Theoretical results show that there may exist bounds for the non-centrality parameter for a test of association that place limits on study power even if sample sizes can grow arbitrarily large. We apply this method to data from a multi-center, geographically diverse, genome-wide association study of breast cancer in African-American women. Our analysis of these data shows that admixture proportions differ by center, with the average fraction of European admixture ranging from approximately 20% for participants from study sites in the Eastern United States to 25% for participants from West Coast sites. However, these differences in average admixture fraction between sites are largely counterbalanced by considerable diversity in individual admixture proportion within each study site. Our results suggest that statistical correction for admixture differences is feasible for future studies of African-Americans, utilizing the existing controls from the African-American Breast Cancer study, even if case ascertainment for the future studies is not balanced over the same centers or regions that supplied the controls for the current study.
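The power ceiling implied by a bounded non-centrality parameter can be seen directly from the non-central chi-square distribution. The significance threshold below is the conventional genome-wide level and the NCP values are arbitrary; both are assumptions for illustration, not values from the study.

```python
# Power of a 1-df chi-square association test as a function of the
# non-centrality parameter (NCP). If the NCP is bounded, power is capped no
# matter how large the sample grows. Alpha is the conventional genome-wide
# threshold, assumed here for illustration.
import numpy as np
from scipy.stats import chi2, ncx2

alpha = 5e-8
crit = chi2.ppf(1 - alpha, df=1)             # critical value, roughly 29.7

def power(ncp):
    # probability the non-central statistic exceeds the critical value
    return 1.0 - ncx2.cdf(crit, df=1, nc=ncp)

for lam in (10, 30, 50):
    print(f"NCP = {lam:2d}: power = {power(lam):.3f}")
```

If case-control ancestry differences bound the NCP near the critical value, power plateaus well below 1 regardless of sample size, which is the phenomenon the theoretical results describe.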

    Applying a Risk-Adjustment Framework to Primary Care: Can We Improve on Existing Measures?

    Outcome-based performance measurement and prospective payment are common features of the current managed care environment. Increasingly, primary care clinicians and health care organizations are being asked to assume financial risk for enrolled patients based on negotiated capitation rates. Therefore, the need for methods to account for differences in risk among patients enrolled in primary care organizations has become critical. Although current risk-adjustment measures represent significant advances in the measurement of morbidity in primary care populations, they may not adequately capture all the dimensions of patient risk relevant to primary care. We propose a risk-adjustment framework for primary care that incorporates clinical features related to patients’ health status and nonclinical factors related to patients’ health behaviors, psychosocial factors, and social environment. Without this broad perspective, clinicians with more unhealthy and more challenging populations are at risk of being inadequately compensated and inequitably compared with peers. The risk-adjustment framework should also be of use to health care organizations that have been mandated to deliver high-quality primary care but lack the necessary tools.

    Promoting Academic Integrity in an Online RN-BSN Program


    Identifying Future High-Healthcare Users: Exploring the Value of Diagnostic and Prior Utilization Information

    Objective: Diagnosis-based risk-adjustment measures are increasingly being promoted as disease management tools. We compared the ability of several types of predictive models to identify future high-risk older people likely to benefit from disease management. Study design: Veterans Health Administration (VHA) data were used to identify veterans ≥65 years of age who used healthcare services during fiscal years (FY) 1997 and 1998 and who remained alive through FY 1997. This yielded a development sample of 412,679 individuals and a validation sample of 207,294. Methods: Prospective risk-adjustment models were fitted and tested using Adjusted Clinical Groups (ACGs), Diagnostic Cost Groups (DCGs), a prior-utilization model (prior), and combined models (prior + ACGs and prior + DCGs). Prespecified high use in FY 1998 was defined as ≥92 days of care (top 2.2%) for an individual (i.e., the number of days during the year in which an individual received inpatient or outpatient healthcare services). We developed a second outcome, defined as ≥164 days of care (top 1.0%), to explore whether changing the criterion for high risk would affect the number of misclassifications. Results: The diagnosis-based models performed better than the prior model in identifying a subgroup of future high-cost individuals with high disease burden and chronic diseases appropriate for disease management. The combined models performed best at correctly classifying those without high use in the prospective year. The utility for efficiently identifying high-risk cases appeared limited because of the high number of individuals misclassified as future high-risk cases by all the models. Changing the criterion for high risk generally decreased the number of patients misclassified. There was little agreement between the models regarding who was identified as high risk.
Conclusion: Health plans should be aware that different risk-adjustment measures may select dissimilar groups of individuals for disease management. Although diagnosis-based measures show potential as predictive modeling tools, combining a diagnosis-based measure with a prior-utilization model may yield the best results.
Keywords: Modelling, Resource-use, Statistics
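The misclassification problem this abstract describes can be made concrete with a small simulation: score a population, flag the top 2.2% of predicted risk as future high users, and compare against actual top-2.2% utilization. The latent-variable setup and noise levels below are assumptions chosen only to show why positive predictive value stays low even for a reasonable score.

```python
# Sketch of the high-user classification exercise: flag the top 2.2% of a
# predicted risk score and count hits against actual top-2.2% utilization.
# The latent-variable setup and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
latent = rng.normal(size=n)                       # underlying morbidity burden
days_of_care = latent + rng.normal(0, 1.5, n)     # future utilization (noisy)
score = latent + rng.normal(0, 1.0, n)            # diagnosis-based risk score

actual_high = days_of_care >= np.quantile(days_of_care, 0.978)  # top 2.2%
flagged = score >= np.quantile(score, 0.978)

tp = (flagged & actual_high).sum()
ppv = tp / flagged.sum()                          # share of flags that pan out
sensitivity = tp / actual_high.sum()              # share of high users caught
print(f"PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```

Even with a score moderately correlated with future use, most flagged patients turn out not to be high users, echoing the high misclassification counts reported above.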

    Classification Performance of Answer-Copying Indices Under Different Types of IRT Models

    Test fraud has recently received increased attention in the field of educational testing, and the use of comprehensive integrity analysis after test administration is recommended for investigating different types of potential test fraud. One type of test fraud involves answer copying between two examinees, and numerous statistical methods have been proposed in the literature to screen for and identify unusual response similarity or irregular response patterns on multiple-choice tests. The current study examined the classification performance of answer-copying indices, measured by the area under the receiver operating characteristic (ROC) curve, under different item response theory (IRT) models (one- [1PL], two- [2PL], and three-parameter [3PL] models, and the nominal response model [NRM]) using both simulated and real response vectors. The results indicated that although the indices performed slightly better under nominal response outcomes for the low-copying condition (20%), they performed similarly under dichotomous response outcomes for the 40% and 60% copying conditions. The results also indicated that the performance observed with simulated response vectors was almost identically reproducible with real response vectors.
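Evaluating a copying index by ROC AUC can be sketched as follows. The index distributions are simulated with a simple location shift assumed for copying pairs; they are not drawn from any of the IRT models studied, and the AUC is computed with the rank (Mann-Whitney) formula rather than any specific index from the literature.

```python
# Toy ROC-AUC evaluation of an answer-copying index: simulate index values for
# non-copying and copying examinee pairs, then compute the area under the ROC
# curve with the rank (Mann-Whitney) formula. The upward shift for copying
# pairs is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
n0, n1 = 2000, 200
honest = rng.normal(0.0, 1.0, n0)        # index values, no copying
copiers = rng.normal(1.5, 1.0, n1)       # shifted upward under copying

scores = np.concatenate([honest, copiers])
labels = np.concatenate([np.zeros(n0), np.ones(n1)])
ranks = scores.argsort().argsort() + 1.0          # 1-based ranks (no ties)
auc = (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
print(round(auc, 3))
```

An AUC near 1 means the index separates copying pairs cleanly; comparing such curves across 1PL/2PL/3PL/NRM calibrations is the study's evaluation design.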