
    The Effects of Extended Fructose Access on Relative Value and Demand for Fructose, Saccharin, and Ventral Tegmental Stimulation

    Globally, food addiction (FA) is a growing area of research and is largely attributed to the availability of foods that are both energy dense and high in fats and sugars. Further, it has been suggested that sugar and fat, when consumed frequently, have properties similar to drugs of abuse. While the validity of FA is questioned, researchers have drawn parallels between substance use disorder (SUD) and FA. For example, sugar binge models emphasize craving, withdrawal, and binging as primary components of FA, which are also hallmarks of SUD. Additionally, both natural rewards, like sugars, and drug rewards act on the dopamine (DA) system, which is implicated in SUD. To date, research on FA has largely focused on demonstrating the similarities between FA and SUD, but few studies have assessed preclinical decision-making processes when animals are exposed to extended sugar access. Substance abuse research has highlighted the importance of including non-drug alternatives to mimic real-world scenarios in which many competing alternatives are available, but similar experiments have not been implemented for FA. The current experiment implemented a controlled reinforcement ratio (CRR) task in which rats were presented with a choice between fructose and another non-drug alternative, intracranial self-stimulation (ICSS), to assess choice behavior following a fructose self-administration paradigm. Additionally, the use of ICSS in this manner challenges the rate-dependent threshold procedure that currently dominates the literature. Baseline measures of exchange rate for both fructose and saccharin, as well as threshold measures for fructose and ICSS, were compared with measures taken after fructose self-administration. Rats were assigned to a short-access (1-hr) or long-access (6-hr) fructose condition. While 6-hr rats did not show escalation of intake, both groups exhibited a decrease in demand intensity for fructose and an increase for ICSS following fructose self-administration. Additionally, the 6-hr group exhibited an increase in ICSS demand elasticity following self-administration, but the 1-hr group did not. Finally, a global parameter for both fructose and saccharin exchange rate provided the best fit to these data, indicating no difference between pre- and post-self-administration measures or between access groups. These results provide support for relative value theory and highlight the importance of using concurrent choice models, rather than single-schedule models, when conducting SUD and FA studies.
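    The abstract does not state which demand equation was fit, but in behavioral economics "demand intensity" and "demand elasticity" conventionally correspond to the Q0 and alpha parameters of the Hursh-Silberberg exponential demand model. A minimal sketch of that kind of fit, with invented consumption data, might look like this:

```python
# Hypothetical sketch: fitting the Hursh-Silberberg exponential demand model
# (Q0 = demand intensity, alpha = demand elasticity). The abstract does not
# specify its model or data; everything below is illustrative only.
import numpy as np
from scipy.optimize import curve_fit

K = 2.5  # range constant (log10 units), often held fixed across conditions

def exponential_demand(price, log_q0, alpha):
    """log10 consumption as a function of unit price."""
    q0 = 10 ** log_q0
    return log_q0 + K * (np.exp(-alpha * q0 * price) - 1)

# Made-up data: unit price (responses per reinforcer) vs. consumption.
price = np.array([1, 3, 10, 30, 100], dtype=float)
consumption = np.array([120, 110, 85, 40, 8], dtype=float)

params, _ = curve_fit(exponential_demand, price, np.log10(consumption),
                      p0=[np.log10(consumption[0]), 0.001])
log_q0, alpha = params
print(f"Q0 (intensity) = {10**log_q0:.1f}, alpha (elasticity) = {alpha:.5f}")
```

    Under this model, the reported drop in fructose demand intensity without a change in elasticity would correspond to a lower fitted Q0 with alpha held in common across conditions.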

    The Midsession Reversal Task with Pigeons: Effects of a Brief Delay between Choice and Reinforcement

    During a midsession reversal task, the session begins with a simple simultaneous discrimination in which one stimulus (S1) is correct and the alternate stimulus (S2) is incorrect (S1+/S2-). At the halfway point, the discrimination reverses and S2 becomes the correct choice (S2+/S1-). When choosing optimally, a pigeon should choose S1 until the first trial on which it is not reinforced and then shift to S2 (win-stay/lose-shift). With this task, pigeons have been shown to respond suboptimally by anticipating the reversal (anticipatory errors) and by continuing to choose S1 after the reversal (perseverative errors). This suboptimal behavior may result from a pigeon's relative impulsivity due to the immediacy of reinforcement following choice. In other choice tasks, there is evidence that introducing a short delay between choice and reinforcement may decrease pigeons' impulsivity. In the present experiment, a delay was introduced between stimulus selection and reinforcement in the midsession reversal task to assess whether anticipatory and perseverative errors would decrease. The results showed a significant difference in overall accuracy between the no-delay and delay groups only during Sessions 11-20, with the no-delay group performing better than the delay group. There was no significant difference in overall accuracy during any other block of ten sessions. These results imply that the insertion of a delay may result in slower learning of this task.
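    To make the two error types concrete, here is an illustrative simulation (not the authors' procedure): an ideal win-stay/lose-shift strategy commits exactly one error per session, while a strategy that estimates the reversal point by timing, with assumed Gaussian error, produces the anticipatory and perseverative errors described above. Session length and noise parameters are assumptions.

```python
# Illustrative simulation only: ideal win-stay/lose-shift vs. a noisy
# timing-based strategy on an 80-trial midsession reversal task.
import random

TRIALS, REVERSAL = 80, 40

def correct_choice(trial):
    return "S1" if trial < REVERSAL else "S2"

def run_win_stay_lose_shift():
    choice, errors = "S1", 0
    for t in range(TRIALS):
        if choice != correct_choice(t):
            errors += 1      # single perseverative error at the reversal
            choice = "S2"    # lose-shift after the nonreinforced choice
    return errors

def run_timing_based(sd=8.0):
    # Assumed model: the bird estimates the reversal point with Gaussian
    # error, switching too early (anticipatory) or too late (perseverative).
    est = random.gauss(REVERSAL, sd)
    return sum(1 for t in range(TRIALS)
               if ("S1" if t < est else "S2") != correct_choice(t))

n = 10_000
print("win-stay/lose-shift errors per session:", run_win_stay_lose_shift())
print("timing-based mean errors per session:",
      sum(run_timing_based() for _ in range(n)) / n)
```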

    Nutrition Knowledge, Attitudes, and Fruit and Vegetable Intake as Predictors of Head Start Teachers' Classroom Mealtime Behaviors

    OBJECTIVE: To examine the association between nutrition knowledge, attitudes, and fruit/vegetable intake among Head Start teachers and their classroom mealtime behaviors (self-reported and observed). DESIGN: Cross-sectional design using observation and survey. SETTING: Sixteen Head Start centers across Rhode Island between September 2014 and May 2015. PARTICIPANTS: Teachers were e-mailed about the study by their directors and were recruited during on-site visits. A total of 85 participants enrolled, through phone/e-mail (19%) or in person (81%). MAIN OUTCOME MEASURES: Independent variables were nutrition knowledge, attitudes, and fruit/vegetable intake. The dependent variable was classroom mealtime behavior (self-reported and observed). ANALYSIS: Regression analyses of teacher mealtime behavior were conducted separately for observed and self-reported outcomes, with knowledge, attitudes, and fruit/vegetable intake entered as independent variables and covariates controlled. RESULTS: Nutrition attitudes were positively associated with the teacher self-reported classroom mealtime behavior total score. Neither teacher nutrition knowledge nor fruit/vegetable intake was associated with observed or self-reported classroom mealtime behavior total scores. CONCLUSION AND IMPLICATIONS: There was limited support for associations among teacher knowledge, attitudes, and fruit/vegetable intake and teacher classroom mealtime behavior. Findings showed that teacher mealtime behavior was significantly associated with teacher experience.
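    As an illustration of the analysis described, the sketch below sets up one regression per outcome using statsmodels. The file and column names are hypothetical, and teaching experience stands in for the unspecified covariates:

```python
# Hedged sketch of the described regression setup (statsmodels).
# Variable and file names are assumptions, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teacher_survey.csv")  # hypothetical file

# One model per outcome (observed vs. self-reported mealtime behavior),
# each predicted by knowledge, attitudes, and fruit/vegetable intake,
# controlling for teaching experience as a covariate.
for outcome in ["mealtime_observed", "mealtime_selfreport"]:
    model = smf.ols(
        f"{outcome} ~ knowledge + attitudes + fv_intake + experience",
        data=df,
    ).fit()
    print(model.summary())
```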

    Salvadorans, Guatemalans, Hondurans, and Colombians: A Scan of Needs of Recent Latin American Immigrants to the Boston Area

    The 2000 U.S. Census brought confirmation of the increase of the Latino population and of the growing diversity of Latino national groups that now make this region their home. Latinos in Massachusetts now number 428,729, a 55% increase over their numbers in 1990. In 30 years, the Latino population has increased six-fold, and from its initial concentrations in Springfield, Holyoke, and Boston, its presence is now a fact across the Commonwealth. Massachusetts Latinos are also showing increasing diversity, matching that of the Northeast region and exceeding that of the nation. At the national level, Mexicans have a dominance that dwarfs all other groups: 59% of all Latinos in the US counted by the Census are Mexican. Puerto Ricans and Cubans, the next two largest groups, are many numerical steps behind. In the Northeast region, Puerto Ricans dominate, but not in such an overwhelming way; they account for 40% of the region's Latinos, and there is also a salient representation of Dominicans, Salvadorans, and Colombians. In Massachusetts, Puerto Ricans compose the largest group, accounting for 46% of the Latino population, followed by Dominicans, Mexicans, Salvadorans, and Colombians. The diversity of the Latino population in Massachusetts began to be visible during the 1980s and took firm hold in the 1990s. Puerto Ricans arrived in the region in large numbers after World War II and settled in Springfield, Boston, Holyoke, and Lawrence. Until 2000, Puerto Ricans made up the majority of the Latino population of the state, and they continue to exhibit a healthy rate of growth: 36.4% over the last 10 years. But in this period, groups of other Latin American origin have experienced even greater growth: Dominicans, Mexicans, and Central and South Americans have seen growth rates in the range of 60 to 70% over the last 10 years. Dominicans are the second largest group in the region, accounting for 11.6% of the Latino population. The growth of the Mexican population has also been significant, making this group the third largest in the region today.

    CRISPR-UnLOCK: Multipurpose Cas9-Based Strategies for Conversion of Yeast Libraries and Strains

    Citation: Roggenkamp E, Giersch RM, Wedeman E, Eaton M, Turnquist E, Schrock MN, Alkotami L, Jirakittisonthon T, Schluter-Pascua SE, Bayne GH, Wasko C, Halloran M and Finnigan GC (2017) CRISPR-UnLOCK: Multipurpose Cas9-Based Strategies for Conversion of Yeast Libraries and Strains. Front. Microbiol. 8:1773. doi: 10.3389/fmicb.2017.01773. Saccharomyces cerevisiae continues to serve as a powerful model system for both basic biological research and industrial application. The development of genome-wide collections of individually manipulated strains (libraries) has allowed for high-throughput genetic screens and an emerging global view of this single-celled eukaryote. The success of strain construction has relied on the innate ability of budding yeast to accept foreign DNA and perform homologous recombination, allowing for efficient plasmid construction (in vivo) and integration of desired sequences into the genome. The development of molecular toolkits and “integration cassettes” has provided fungal systems with a collection of strategies for tagging, deleting, or over-expressing target genes; typically, these consist of a C-terminal tag (epitope or fluorescent protein), a universal terminator sequence, and a selectable marker cassette to allow for convenient screening. However, there are logistical and technical obstacles to using these traditional genetic modules for complex strain construction (manipulation of many genomic targets in a single cell) or for the generation of entire genome-wide libraries. The recent introduction of CRISPR/Cas gene editing technology has provided a powerful methodology for multiplexed editing in many biological systems, including yeast. We have developed four distinct uses of CRISPR biotechnology to generate yeast strains through the conversion of existing, commonly used yeast libraries or strains. We present Cas9-based, marker-less methodologies for (i) N-terminal tagging, (ii) C-terminal tagging of yeast genes with 18 unique fusions, (iii) conversion of fluorescently tagged strains into newly engineered (or codon-optimized) variants, and, finally, (iv) use of a Cas9 “gene drive” system to rapidly achieve a homozygous state for a hypomorphic query allele in a diploid strain. These CRISPR-based methods demonstrate the utility of targeting universal sequences previously introduced into a genome.
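    For readers unfamiliar with Cas9 targeting, the sketch below shows the generic site-selection step that such methods rely on: locating 20-nt protospacers adjacent to an NGG PAM in a target (e.g., universal) sequence. It is an illustration only, not code from the paper, and the example sequence is invented:

```python
# Illustrative helper (not from the paper): scan a DNA sequence for SpCas9
# target sites, i.e., 20-nt protospacers followed by an NGG PAM, on the
# given strand. The example sequence is made up.
import re

def find_cas9_sites(seq):
    """Return (position, protospacer, PAM) for every 20N-NGG site."""
    seq = seq.upper()
    sites = []
    # lookahead so overlapping sites are all reported
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites

example = "ATGCGTACGTTAGCATCGATCGGATCCAGCTTAAGGCTAGCTAGG"
for pos, protospacer, pam in find_cas9_sites(example):
    print(f"{pos:3d}  {protospacer}  PAM={pam}")
```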

    Nutrient scavenging-fueled growth in pancreatic cancer depends on caveolae-mediated endocytosis under nutrient-deprived conditions.

    Pancreatic ductal adenocarcinoma (PDAC) is characterized by its nutrient-scavenging ability, which is crucial for tumor progression. Here, we investigated the roles of caveolae-mediated endocytosis (CME) in PDAC progression. Analysis of patient data across diverse datasets revealed a strong association of high caveolin-1 (Cav-1) expression with higher histologic grade, the most aggressive PDAC molecular subtypes, and worse clinical outcomes. Cav-1 loss markedly prolonged overall and tumor-free survival in a genetically engineered mouse model. Cav-1-deficient tumor cell lines exhibited significantly reduced proliferation, particularly under low-nutrient conditions. Supplementing cells with albumin rescued the growth of Cav-1-proficient PDAC cells, but not of Cav-1-deficient PDAC cells, under low-glutamine conditions. In addition, Cav-1 depletion led to significant metabolic defects, including decreased glycolytic and mitochondrial metabolism and reduced downstream protein translation signaling. These findings highlight the crucial role of Cav-1 and CME in fueling pancreatic tumorigenesis, sustaining tumor growth, and promoting survival through nutrient scavenging.
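    The patient-data association described (high vs. low Cav-1 expression and survival) is typically assessed with Kaplan-Meier curves and a log-rank test. A hedged sketch using the lifelines package, with hypothetical file and column names, follows; it is not the authors' pipeline:

```python
# Hedged illustration of the survival comparison described (high vs. low
# Cav-1 expression); uses lifelines with made-up column names.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pdac_cohort.csv")  # hypothetical: time_months, event, cav1_high

high, low = df[df.cav1_high == 1], df[df.cav1_high == 0]
for label, grp in [("Cav-1 high", high), ("Cav-1 low", low)]:
    km = KaplanMeierFitter()
    km.fit(grp.time_months, grp.event, label=label)
    print(label, "median survival:", km.median_survival_time_)

res = logrank_test(high.time_months, low.time_months,
                   event_observed_A=high.event, event_observed_B=low.event)
print("log-rank p =", res.p_value)
```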

    Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost? A computational simulation of Pennsylvania

    Background: During the 2009 H1N1 influenza epidemic, policy makers debated over whether, when, and how long to close schools. While closing schools could have reduced influenza transmission, thereby preventing cases, deaths, and health care costs, it may also have incurred substantial costs from increased childcare needs and lost productivity by teachers and other school employees. Methods: A combination of agent-based and Monte Carlo economic simulation modeling was used to determine the cost-benefit of closing schools (vs. not closing schools) for different durations (range: 1 to 8 weeks) and symptomatic case incidence triggers (range: 1 to 30) for the state of Pennsylvania during the 2009 H1N1 epidemic. Different scenarios varied the basic reproductive rate (R0) among 1.2, 1.6, and 2.0 and used case-hospitalization and case-fatality rates from the 2009 epidemic. Additional analyses determined the cost per influenza case averted of implementing school closure. Results: For all scenarios explored, closing schools resulted in substantially higher net costs than not closing schools. For R0 = 1.2, 1.6, and 2.0 epidemics, closing schools for 8 weeks would have resulted in median net costs of $21.0 billion (95% range: $8.0 - $45.3 billion). The median cost per influenza case averted would have been $14,185 ($5,423 - $30,565) for R0 = 1.2, $25,253 ($9,501 - $53,461) for R0 = 1.6, and $23,483 ($8,870 - $50,926) for R0 = 2.0. Conclusions: Our study suggests that closing schools during the 2009 H1N1 epidemic could have resulted in substantial costs to society, as the potential costs of lost productivity and childcare could have far outweighed the cost savings in preventing influenza cases.
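    A stripped-down version of the Monte Carlo economic layer can illustrate how a net-cost distribution like the one reported is produced. All distributions and parameter values below are placeholders, not the study's inputs:

```python
# Minimal Monte Carlo sketch of the cost-benefit comparison. Every
# parameter value here is an illustrative placeholder.
import random

N = 100_000

def net_cost_of_closure():
    # Assumed triangular distributions for state-scale inputs:
    cases_averted = random.triangular(0.5e6, 2.0e6, 1.0e6)
    cost_per_case = random.triangular(100, 1000, 400)        # $ health cost saved
    productivity_loss = random.triangular(10e9, 40e9, 20e9)  # $ childcare/lost work
    savings = cases_averted * cost_per_case
    return productivity_loss - savings                       # >0: closure costs more

results = sorted(net_cost_of_closure() for _ in range(N))
median = results[N // 2]
lo, hi = results[int(0.025 * N)], results[int(0.975 * N)]
print(f"median net cost: ${median/1e9:.1f}B "
      f"(95% range ${lo/1e9:.1f}B - ${hi/1e9:.1f}B)")
```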

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non-critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), an ARB (n = 248), an ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support-free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis used a bayesian cumulative logistic model; odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support-free days among critically ill patients were 10 (-1 to 16) in the ACE inhibitor group (n = 231), 8 (-1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios for improvement of 0.77 [95% bayesian credible interval, 0.58-1.06] for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB, compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support-free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
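    As a sanity check on the reported posterior probabilities, one can approximate them from the published odds ratios and 95% credible intervals by assuming an approximately normal posterior on the log-odds scale (a simplification, not the trial's full bayesian model). The sketch below recovers values close to the reported 94.9% and 95.4%:

```python
# Back-of-the-envelope check (not the trial's model): approximate the
# posterior probability of harm from the reported OR and 95% credible
# interval, assuming a roughly normal posterior for log(OR).
from math import erf, log, sqrt

def prob_harm(or_med, ci_lo, ci_hi):
    mu = log(or_med)
    sd = (log(ci_hi) - log(ci_lo)) / (2 * 1.96)
    z = (0 - mu) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # P(OR < 1): worse outcomes

print(f"ACE inhibitor: P(harm) ~ {prob_harm(0.77, 0.58, 1.06):.1%}")  # reported 94.9%
print(f"ARB:           P(harm) ~ {prob_harm(0.76, 0.56, 1.05):.1%}")  # reported 95.4%
```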

    Sitter Type and Its Impact on Fall Rates

    Problem: Falls occur in 3% of hospitalized patients. Injurious falls increase the cost of the patient's stay for both the individual and the hospital and also lengthen the stay. Patients who are confused are at increased risk of falling, and one intervention used to reduce their fall risk is constant observation. Purpose: The purpose of the project is to determine whether live sitters are more effective than tele sitters in decreasing falls and length of stay for hospital patients. Methods: A quasi-experimental design will be used. Two medical floors of the hospital will be included, one utilizing tele sitters and the other live sitters. The length of stay for each patient who requires a sitter and the number of falls on each unit will be compared. Participants include any patient in need of a sitter, excluding those under observation for suicide risk. Chart reviews will be conducted every six months for two years to track fall trends on each floor. Conclusion: Constant observation may be used to prevent patient falls and decrease length of stay. This impacts clinical practice by providing safer care to those who require constant observation for confusion. Through these findings, hospitals may also save money by decreasing falls and make better use of staff.
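    If the proposed chart review proceeds, fall rates between the two units are commonly compared as events per 1,000 patient-days. A hypothetical analysis sketch follows (placeholder counts; requires SciPy >= 1.9 for poisson_means_test):

```python
# Hypothetical analysis sketch for the proposed chart review: compare
# falls per 1,000 patient-days between the live-sitter and tele-sitter
# units. All counts below are placeholders, not study data.
from scipy.stats import poisson_means_test

falls_live, days_live = 12, 9_500    # unit using live sitters
falls_tele, days_tele = 21, 10_200   # unit using tele sitters

rate_live = 1000 * falls_live / days_live
rate_tele = 1000 * falls_tele / days_tele
result = poisson_means_test(falls_live, days_live, falls_tele, days_tele)
print(f"live: {rate_live:.2f} vs tele: {rate_tele:.2f} falls/1,000 pt-days, "
      f"p = {result.pvalue:.3f}")
```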