
    Convener

    Silas House will convene this session featuring three of his best students, all Appalachian Studies minors at Berea College.

    The dietary intake and practices of adolescent girls in low- and middle-income countries: A systematic review

    In many low- and middle-income countries (LMICs) the double burden of malnutrition is high among adolescent girls, leading to poor health outcomes for the adolescent herself and sustained intergenerational effects. This underpins the importance of adequate dietary intake during this period of rapid biological development. The aim of this systematic review was to summarize the current dietary intake and practices among adolescent girls (10–19 years) in LMICs. We searched relevant databases and grey literature using MeSH terms and keywords. After applying specified inclusion and exclusion criteria, 227 articles were selected for data extraction, synthesis, and quality assessment. Of the included studies, 59% were conducted in urban populations, 78% in school settings, and dietary measures and indicators were inconsistent. Mean energy intake was lower in rural settings (1621 ± 312 kcal/day) compared to urban settings (1906 ± 507 kcal/day). Self-reported daily consumption of nutritious foods was low; on average, 16% of girls consumed dairy, 46% consumed meats, 44% consumed fruits, and 37% consumed vegetables. In contrast, energy-dense and nutrient-poor foods, like sweet snacks, salty snacks, fast foods, and sugar-sweetened beverages, were consumed four to six times per week by an average of 63%, 78%, 23%, and 49% of adolescent girls, respectively. Forty percent of adolescent girls reported skipping breakfast. Along with highlighting the poor dietary habits of adolescent girls in LMICs, this review emphasizes the need for consistently measured and standardized indicators, and dietary intake data that are nationally representative.

    Predictors of Chronic Opioid Therapy in Medicaid Beneficiaries with HIV Who Initiated Antiretroviral Therapy

    The factors associated with chronic opioid therapy (COT) in patients with HIV are understudied. Using Medicaid data (2002-2009), this retrospective cohort study examines COT in beneficiaries with HIV who initiated standard combination antiretroviral therapy (cART). We used generalized estimating equations on logistic regression models with backward selection to identify significant predictors of COT initiation. COT was initiated among 1014 out of 9615 beneficiaries with HIV (male: 10.4%; female: 10.7%). Older age, any malignancy, hepatitis C infection, back pain, arthritis, neuropathic pain, substance use disorder, polypharmacy, and use of benzodiazepines, gabapentinoids, antidepressants, and prior opioid therapies were positively associated with COT. In sex-stratified analyses, multiple predictors were shared between male and female beneficiaries; however, chronic obstructive pulmonary disease, liver disease, any malignancy, and antipsychotic therapy were unique to female beneficiaries. Comorbidities and polypharmacy were important predictors of COT in Medicaid beneficiaries with HIV who initiated cART.

    Fetal hyperglycemia and a high fat diet contribute to aberrant glucose tolerance and hematopoiesis in adulthood

    Background: Children exposed to gestational diabetes mellitus (GDM) during pregnancy are at increased risk of obesity, diabetes, and hypertension. Our goal was to identify metabolic and hematopoietic alterations after intrauterine exposure to maternal hyperglycemia that may contribute to the pathogenesis of chronic morbidities. Methods: Streptozotocin treatment induced maternal hyperglycemia during the last third of gestation in rat dams. Offspring of control mothers (OCM) and diabetic mothers (ODM) were evaluated for weight, glucose tolerance, insulin tolerance, and hematopoiesis defects. The effects of aging were examined in normal and high fat diet (HFD)-fed young (8-week-old) and aged (11-month-old) OCM and ODM rats. Results: Young adult ODM males on a normal diet, but not females, displayed improved glucose tolerance due to increased insulin levels. Aged ODM males and females gained more weight than OCM on a HFD and had worse glucose tolerance. Aged ODM males fed a HFD were also neutrophilic. Increases in bone marrow cellularity and myeloid progenitors preceded neutrophilia in ODM males fed a HFD. Conclusion: When combined with other risk factors like HFD and aging, changes in glucose metabolism and hematopoiesis may contribute to the increased risk of obesity, type 2 diabetes, and hypertension observed in children of GDM mothers.

    Characterizing Patients using Abuse-deterrent Formulations of Extended-release Opioid Analgesics

    Background: Abuse-deterrent formulations (ADFs) of extended-release (ER) opioids are manufactured to address opioid abuse. However, little is known about characteristics of patients who initiate ADF opioids, which is important to identify appropriate comparators to address confounding by indication. Objectives: To describe demographics and medical characteristics of patients prescribed ADF and non-ADF ER opioids in two sources of commercial claims. Methods: Using IBM MarketScan commercial claims (Data A) and a large private insurance provider in North Carolina [USA] (Data B) (both 2009-2018), we conducted a retrospective cohort study to examine patterns of ADF opioid use compared to non-ADF ER opioid use. Patients who initiated ADF and non-ADF ER opioids (18-64 years old) were selected using both a traditional new user design (no opioid claims during the washout period, defined as six months prior to ER opioid initiation) and a prevalent new user design (allowed non-ER opioid claims during the washout period and excluded patients with no six-month eligibility prior to the first immediate-release (IR) opioid claim). Patient characteristics including demographics, medications (gabapentin, benzodiazepines, antidepressants, IR opioids), pain-related symptoms, and cancer were measured during the washout period for patients with ADF and non-ADF ER opioids. Results: Among eligible ER opioid initiators in Data A (N=330,728) and B (N=20,992), 31% and 34% initiated with ADF opioids, respectively. Among these patients, demographics were as follows (Data A and B): age [mean (SD)] = 49.4 (11.8) and 48.4 (11.8); male sex = 51.2% and 55.4%. Among patients with non-ADF ER opioids, demographics were as follows (Data A and Data B): age [mean (SD)] = 49.2 (11.4) and 47.8 (11.3); male sex = 45.8% and 50.4%. About 50% and 62% of patients with ADF opioids initiated with IR opioids, whereas 29% and 34% of patients with non-ADF ER opioids initiated with IR opioids in Data A and B, respectively. In both data sources, the prevalence of several types of pain was higher among patients with ADF opioids than in the non-ADF ER group, including acute pain (Data A: 54.5% vs. 40.3%; Data B: 56.7% vs. 41.5%), arthritis pain (35.7% vs. 20.1%; 36.4% vs. 22.7%), and chronic pain (84.8% vs. 76.3%; 89.5% vs. 85.3%). The prevalence of use of medications and cancer was higher in patients with non-ADF ER opioids than in patients with ADF opioids in both data sources. Conclusions: Both data sources revealed differences in characteristics between patients with ADF and non-ADF ER opioids. The implications for research design include identifying appropriate comparator groups when examining ADF opioid use-related outcomes.

    Matching Study Design to Prescribing Intention: The Prevalent New User Design in Opioid Research

    Background: In drug studies, research designs requiring no prior exposure to certain drug classes may restrict research on important populations. For example, currently marketed abuse-deterrent formulation (ADF) opioids are routinely used in patients with prior prescription opioid exposure. The traditional new user design excludes patients with prior exposure to prescription opioids, hence incident ADF users are not representative of the overall ADF user population. A prevalent new user design, wherein patients are prescribed similar treatments (or potential comparators) before starting the new treatment, likely better represents the intended ADF patient population. Objectives: To evaluate the appropriateness of new user versus prevalent new user design for estimating post-market effectiveness of ADFs and examine patterns of ADF initiation. Methods: We used pharmaceutical claims data from a large private insurer in North Carolina [USA] from 2009-2018. Included patients were new ADF users age 18-64 with 6 months of continuous enrollment prior to their first ADF claim. Incident users were identified as those with no prescription opioid claims in a 6-month washout period prior to ADF initiation. Prevalent new users were identified as those with non-ADF opioid claims during the 6 months before ADF initiation, so long as they also had a 6-month washout period of no opioid claims prior to first non-ADF opioid claim. We compared sample sizes by study design and described ADF utilization patterns. Results: We identified 8,841 eligible patients who initiated an ADF. Of these, 2,332 (26%) were classified as incident users, whereas 6,509 (74%) were prevalent new users and would be excluded in a traditional new user design. Most incident ADF users started with both an ADF and an immediate-release (IR) opioid concurrently (85%). 
    Among prevalent new users, common ADF initiation patterns were: adding an ADF to an IR opioid regimen (43%), an immediate switch from IR opioids to an ADF (15%), and a delayed switch from IR opioids to an ADF (14%). Conclusions: Three-quarters of patients initiating ADFs had prior prescription opioid use and would be excluded in a traditional new user study design. A prevalent new user design would increase sample size and better capture clinically meaningful patients. These findings may apply to studies of other medications where prior exposure is a labeled prerequisite, such as higher dose ER opioids and second-line therapies. Future work will explore prevalent new user designs and consider nuances in ADF initiation such as immediate versus delayed switching by incorporating time-matching to address opioid tolerance.
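    The incident vs. prevalent-new-user classification described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the 183-day washout approximation, the function name, and the input shapes are all assumptions for the sketch.

```python
from datetime import date, timedelta

WASHOUT = timedelta(days=183)  # ~6 months; the exact window is an assumption

def classify_initiator(adf_start, non_adf_claims, enrollment_start):
    """Classify an ADF initiator as 'incident', 'prevalent_new', or 'excluded'.

    adf_start: date of the first ADF claim.
    non_adf_claims: dates of non-ADF prescription opioid claims.
    enrollment_start: start of continuous insurance enrollment.
    """
    prior = sorted(d for d in non_adf_claims if d < adf_start)
    in_washout = [d for d in prior if d >= adf_start - WASHOUT]
    if not in_washout:
        # Traditional new user: opioid-free during the washout before ADF start.
        return "incident"
    first_claim = prior[0]
    if first_claim - enrollment_start >= WASHOUT:
        # Prevalent new user: prior non-ADF opioid use, with a washout-length
        # enrolled period before the first non-ADF opioid claim.
        return "prevalent_new"
    return "excluded"

# Example: a patient with a non-ADF opioid claim 3 months before ADF initiation
# counts as a prevalent new user, provided enrollment covers the earlier washout.
print(classify_initiator(date(2015, 6, 1), [date(2015, 3, 1)], date(2014, 1, 1)))
```

    Under a traditional new user design only the "incident" branch is kept; the prevalent new user design also retains the second branch, which in this study recovered 74% of ADF initiators.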

    Integrating forest structural diversity measurement into ecological research

    The measurement of forest structure has evolved steadily due to advances in technology, methodology, and theory. Such advances have greatly increased our capacity to describe key forest structural elements and resulted in a range of measurement approaches, from traditional analog tools such as measurement tapes to highly derived and computationally intensive methods such as advanced remote sensing tools (e.g., lidar, radar). This assortment of measurement approaches results in structural metrics unique to each method, with the caveat that metrics may be biased or constrained by the measurement approach taken. While forest structural diversity (FSD) metrics foster novel research opportunities, understanding how they are measured or derived, the limitations of the measurement approach taken, and their biological interpretation is crucial for proper application. We review the measurement of forest structure and structural diversity, an umbrella term that includes quantification of the distribution of functional and biotic components of forests. We consider how and where these approaches can be used, the role of technology in measuring structure, how measurement impacts extend beyond research, and current limitations and potential opportunities for future research.

    Residential PM2.5 exposure and the nasal methylome in children

    Rationale: PM2.5-induced adverse effects on respiratory health may be driven by epigenetic modifications in airway cells. The potential impact of exposure duration on epigenetic alterations in the airways is not yet known. Objectives: We aimed to study associations of fine particulate matter (PM2.5) exposure with DNA methylation in nasal cells. Methods: We conducted nasal epigenome-wide association analyses within 503 children from Project Viva (mean age 12.9 years), and examined various exposure durations (1-day, 1-week, 1-month, 3-month and 1-year) prior to nasal sampling. We used residential addresses to estimate average daily PM2.5 at 1 km resolution. We collected nasal swabs from the anterior nares and measured DNA methylation (DNAm) using the Illumina MethylationEPIC BeadChip. We tested 719,075 high-quality autosomal CpGs using CpG-by-CpG and regional DNAm analyses controlling for multiple comparisons, and adjusted for maternal education, household smokers, child sex, race/ethnicity, BMI z-score, age, season at sample collection and cell-type heterogeneity. We further corrected for bias and genomic inflation. We tested for replication in a cohort from the Netherlands (PIAMA). Results: In adjusted analyses, we found 362 CpGs associated with 1-year PM2.5 (FDR < 0.05), 20 CpGs passing Bonferroni correction (P < 7.0 × 10⁻⁸) and 10 Differentially Methylated Regions (DMRs). In 445 PIAMA participants (mean age 16.3 years), 11 of 203 available CpGs replicated at P < 0.05. We observed differential DNAm at/near genes implicated in cell cycle, immune and inflammatory responses. There were no CpGs or regions associated with PM2.5 levels at 1-day, 1-week, or 1-month prior to sample collection, although 2 CpGs were associated with past 3-month PM2.5. Conclusion: We observed widespread DNAm variability associated with average past-year PM2.5 exposure, but we did not detect associations with shorter-term exposure. Our results suggest that nasal DNAm marks reflect chronic air pollution exposure.
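    The FDR threshold reported above (362 CpGs at FDR < 0.05) is conventionally obtained with the Benjamini-Hochberg step-up procedure; the sketch below is a generic illustration of that procedure, not the study's actual pipeline. Note that the Bonferroni threshold the authors cite, P < 7.0 × 10⁻⁸, is consistent with 0.05 divided by the 719,075 CpGs tested.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean list marking which p-values are significant
    under Benjamini-Hochberg FDR control at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Step-up: find the largest rank k with p_(k) <= (k / m) * alpha.
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            threshold_rank = rank
    # Reject all hypotheses with rank <= threshold_rank.
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[i] = True
    return significant

# Only the smallest p-value clears its rank-adjusted threshold here.
print(benjamini_hochberg([0.001, 0.04, 0.2, 0.8]))  # → [True, False, False, False]
```

    Unlike a flat Bonferroni cut-off, the step-up rule lets borderline p-values through when many small p-values precede them in rank, which is why 362 CpGs passed FDR but only 20 survived Bonferroni.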

    Systematic review and meta-analysis of intravascular temperature management vs. surface cooling in comatose patients resuscitated from cardiac arrest

    Objective: To systematically review the effectiveness and safety of intravascular temperature management (IVTM) vs. surface cooling methods (SCM) for induced hypothermia (IH). Methods: Systematic review and meta-analysis. English-language PubMed, Embase and the Cochrane Database of Systematic Reviews were searched on May 27, 2019. The quality of included observational studies was graded using the Newcastle-Ottawa Quality Assessment tool. The quality of included randomized trials was evaluated using the Cochrane Collaboration's risk of bias tool. Random effects modeling was used to calculate risk differences for each outcome. Statistical heterogeneity and publication bias were assessed using standard methods. Eligibility: Observational or randomized studies comparing survival and/or neurologic outcomes in adults aged 18 years or greater resuscitated from out-of-hospital cardiac arrest receiving IH via IVTM vs. SCM were eligible for inclusion. Results: In total, 12 studies met inclusion criteria. These enrolled 1573 patients who received IVTM and 4008 who received SCM. Survival was 55.0% in the IVTM group and 51.2% in the SCM group [pooled risk difference 2% (95% CI −1%, 5%)]. Good neurological outcome was achieved in 40.9% in the IVTM group and 29.5% in the surface group [pooled risk difference 5% (95% CI 2%, 8%)]. There was a 6% (95% CI 2%, 11%) lower risk of arrhythmia with use of IVTM and a 15% (95% CI 7%, 22%) decreased risk of overcooling with use of IVTM vs. SCM. There was no significant difference in other evaluated adverse events between groups. Conclusions: IVTM was associated with improved neurological outcomes vs. SCM among survivors resuscitated following cardiac arrest. These results may have implications for care of patients in the emergency department and intensive care settings after resuscitation from cardiac arrest.
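    The random-effects pooling of risk differences described in the Methods is commonly done with the DerSimonian-Laird estimator. The sketch below is a generic illustration of that estimator under assumed 2×2 inputs, not the review's actual analysis code.

```python
import math

def pooled_risk_difference(studies):
    """DerSimonian-Laird random-effects pooling of per-study risk differences.

    studies: list of (events_a, n_a, events_b, n_b) tuples (needs >= 2 studies).
    Returns the pooled risk difference and its approximate 95% CI.
    """
    rds, variances = [], []
    for ea, na, eb, nb in studies:
        pa, pb = ea / na, eb / nb
        rds.append(pa - pb)
        # Binomial variance of a risk difference.
        variances.append(pa * (1 - pa) / na + pb * (1 - pb) / nb)
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    rd_fe = sum(wi * r for wi, r in zip(w, rds)) / sum(w)
    q = sum(wi * (r - rd_fe) ** 2 for wi, r in zip(w, rds))  # Cochran's Q
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, truncated at 0
    w_re = [1 / (v + tau2) for v in variances]  # random-effects weights
    rd = sum(wi * r for wi, r in zip(w_re, rds)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return rd, (rd - 1.96 * se, rd + 1.96 * se)
```

    When the studies are homogeneous (Q ≤ df), tau² truncates to zero and the estimate collapses to the fixed-effect inverse-variance average; heterogeneity widens the CI by inflating every study's variance by tau².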

    Advancing specificity in delirium: The delirium subtyping initiative

    BACKGROUND: Delirium, a common syndrome with heterogeneous etiologies and clinical presentations, is associated with poor long-term outcomes. Recording and analyzing all delirium equally could be hindering the field's understanding of pathophysiology and identification of targeted treatments. Current delirium subtyping methods reflect clinically evident features but likely do not account for underlying biology. METHODS: The Delirium Subtyping Initiative (DSI) held three sessions with an international panel of 25 experts. RESULTS: Meeting participants suggest further characterization of delirium features to complement the existing Diagnostic and Statistical Manual of Mental Disorders Fifth Edition Text Revision diagnostic criteria. These should span the range of delirium-spectrum syndromes and be measured consistently across studies. Clinical features should be recorded in conjunction with biospecimen collection, where feasible, in a standardized way, to determine temporal associations of biology coincident with clinical fluctuations. DISCUSSION: The DSI made recommendations spanning the breadth of delirium research, including clinical features, study planning, data collection, and data analysis for characterization of candidate delirium subtypes. HIGHLIGHTS: Delirium features must be clearly defined, standardized, and operationalized. Large datasets incorporating both clinical and biomarker variables should be analyzed together. Delirium screening should incorporate communication and reasoning.