Ethanol-HIV Stimulates Macrophage-derived Extracellular Vesicles to Promote a Profibrotic Phenotype in Hepatic Stellate Cells
Liver fibrosis is a scarring process in which excessive extracellular matrix proteins accumulate; it can be caused by exposure to certain toxins or compounds such as alcohol. Alcohol can lead to increased fibrosis and cirrhosis in people living with HIV because of its ability to influence the liver's microenvironment. Extracellular vesicles (EVs) mediate communication between cells by transferring their cargo. Under stress, macrophages can communicate with hepatic cells by releasing EVs, potentially driving the progression of liver disease. The current study examines how ethanol affects EV production from HIV-infected macrophages and how macrophage-derived EVs modulate the profibrotic phenotype in hepatic stellate cells. Monocyte-derived macrophages (MDM) were infected with HIV and then exposed to 50 mM EtOH during incubation. THP-1 monocytes were differentiated to macrophages with PMA (5 ng/mL) before alcohol and HIV treatment. The medium from the macrophages was collected and ultracentrifuged to isolate EVs, which were quantified using nanoparticle tracking analysis (NTA). Transcriptional expression of genes was measured with qPCR. LX-2 hepatic stellate cells were exposed to macrophage-derived EVs from the different treatment groups to assess profibrotic activation. Ethanol treatment of HIV-infected macrophages increased the production of EVs compared with the respective controls. The majority of the EVs from the MDM cells were in the range of small EVs (50-200 nm). Exposure of LX-2 cells to EtOH-HIV-induced macrophage EVs significantly increased the transcriptional expression of the profibrotic genes Col1A1, ACTA2, and CTGF. Combined treatment of macrophages with EtOH and HIV downregulated hsa-miR92a-3p expression in macrophage-derived EVs; this miRNA binds its putative target Col1A1, so its loss increases fibrotic changes in recipient LX-2 cells. The findings lead to the conclusion that the combination of ethanol and HIV stimulates macrophage-derived EVs with downregulated miR92a, which activates the profibrotic phenotype in hepatic stellate cells and contributes to the progression of liver disease.
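Since the abstract reports qPCR-based transcriptional expression, a minimal sketch of the standard 2^-ΔΔCt (Livak) fold-change calculation may help; note the abstract does not state which quantification method was used, and the gene names and Ct values below are invented placeholders.

```python
# Hedged illustration of the standard 2^-ddCt (Livak) fold-change calculation
# commonly used for qPCR data; Ct values and genes here are invented examples.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # treated relative to control
    return 2.0 ** (-dd_ct)

# Invented Ct values for a profibrotic target (e.g. Col1A1) vs. GAPDH:
print(fold_change(22.1, 18.0, 24.3, 18.1))  # >1 indicates upregulation
```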
Neurofeedback of visual food cue reactivity: a potential avenue to alter incentive sensitization and craving
fMRI-based neurofeedback transforms functional brain activation in real time into sensory stimuli that participants can use to self-regulate brain responses, which can aid the modification of mental states and behavior. Emerging evidence supports the clinical utility of neurofeedback-guided up-regulation of hypoactive networks. In contrast, down-regulation of hyperactive neural circuits appears more difficult to achieve. There are conditions, though, in which down-regulation would be clinically useful, including dysfunctional motivational states elicited by salient reward cues, such as food or drug craving. In this proof-of-concept study, 10 healthy females (mean age = 21.40 years, mean BMI = 23.53) who had fasted for 4 h underwent a novel "motivational neurofeedback" training in which they learned to down-regulate brain activation during exposure to appetitive food pictures. fMRI feedback was given from individually determined target areas and through decreases/increases in food picture size, thus providing salient motivational consequences in terms of cue approach/avoidance. Our preliminary findings suggest that motivational neurofeedback is associated with functionally specific activation decreases in diverse cortical/subcortical regions, including key motivational areas. There was also preliminary evidence for a reduction of hunger after neurofeedback and an association between down-regulation success and the degree of hunger reduction. Decreasing neural cue responses by motivational neurofeedback may provide a useful extension of existing behavioral methods that aim to modulate cue reactivity. Our pilot findings indicate that reduction of neural cue reactivity is not achieved by top-down regulation but arises in a bottom-up manner, possibly through implicit operant shaping of target area activity.
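As a toy sketch (not the authors' software) of the feedback mapping described above, a target-ROI signal could drive the displayed food-picture size as follows; the gain and clamping values are invented assumptions.

```python
# Minimal sketch of mapping percent signal change in a target ROI to a
# display scale factor: down-regulation shrinks the food picture (cue
# avoidance), activation increases enlarge it. Parameters are illustrative.
def picture_scale(roi_signal, baseline, gain=0.5, lo=0.5, hi=1.5):
    """Return a picture scale factor from the current ROI signal."""
    pct_change = (roi_signal - baseline) / baseline
    scale = 1.0 + gain * pct_change        # activation above baseline enlarges the cue
    return max(lo, min(hi, scale))         # clamp to a sensible display range

print(picture_scale(roi_signal=0.96, baseline=1.00))  # down-regulation -> ~0.98
```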
Stopping to food can reduce intake. Effects of stimulus-specificity and individual differences in dietary restraint
Overeating in our food-rich environment is a key contributor to obesity. Computerised response-inhibition training could improve self-control in individuals who overeat. Evidence suggests that training people to inhibit motor responses to specific food pictures can reduce the subsequent choice and consumption of those foods. Here we undertook three experiments using the stop-signal task to examine the effects of food and non-food related stop-training on immediate snack food consumption. The experiments examined whether training effects were stimulus-specific, whether they were influenced by the comparator (control) group, and whether they were moderated by individual differences in dietary restraint. Experiment 1 revealed lower intake of one food following stop- vs. double- (two key-presses) response training to food pictures. Experiment 2 offered two foods, one of which was not associated with stopping, to enable within- and between-subjects comparisons of intake. A second control condition required participants to ignore signals and respond with one key-press to all pictures. There was no overall effect of training on intake in Experiment 2, but there was a marginally significant moderation by dietary restraint: restrained eaters ate significantly less signal-food following stop- relative to double-response training. Experiment 3 revealed that stop- vs. double-response training to non-food pictures had no effect on food intake. Taken together with previous findings, these results suggest some stimulus-specific effects of stop-training on food intake that may be moderated by individual differences in dietary restraint. Funding: Wales Institute of Cognitive Neuroscience, BBSRC, ESRC, ERC, and The UK Experimental Psychology Society.
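A toy sketch of the training contingencies described above may clarify the design: signal-foods are consistently paired with stopping, while the double-response control pairs them with two key-presses. This is not the authors' code; the picture names are invented.

```python
# Illustrative contingency logic for stop- vs. double-response food training.
# Signal-foods get the trained response; all other pictures get a plain go.
SIGNAL_FOODS = ["crisps"]          # food associated with the training signal (invented)
OTHER_PICS = ["fruit", "stapler"]  # other food/non-food pictures (invented)

def required_response(picture, condition):
    """Return the required response for one trial in a given training condition."""
    if picture in SIGNAL_FOODS:
        return "inhibit" if condition == "stop" else "double_keypress"
    return "single_keypress"

for pic in SIGNAL_FOODS + OTHER_PICS:
    print(pic, required_response(pic, condition="stop"))
```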
When do children learn how to select a portion size?
The reduction of portion sizes supports weight loss. This study examines whether children have a conceptual understanding of portion size by studying their ability to manually serve a portion that corresponds to what they eat. In a clinical setting, discussion around portion size is subjective, so a computerised portion-size tool was also trialled, with the portion sizes chosen on screen being compared to amounts served manually. Children (n = 76) aged 5-6, 7-8, and 10-11 were asked to rate their hunger (VAS scale), liking (VAS scale), and "ideal portion size for lunch" of eight interactive meal images using a computerised portion-size tool. Children then manually self-served and consumed a portion of pasta. Plates were weighed to allow for the calculation of calories served and eaten. A positive correlation was found between manually served food portions and the amount eaten (r = .53, 95% CI [.34, .82], p < .001), indicating that many children were able to anticipate their likely food intake prior to meal onset. A regression model demonstrated that age contributes 9.4% of the variance in portion-size accuracy (t(68) = -2.3, p = .02). There was no relationship between portion size and either hunger or liking. The portion sizes chosen on the computer at lunchtime correlated with the amount manually served overall (r = .34, 95% CI [.07, .55], p < .01), but not in 5-6-year-old children. Manual portion-size selection can be observed in five-year-olds, and from age seven, children's "virtual" responses correlate with their manual portion selections. The computerised portion-size tool requires further development but offers considerable potential.
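For readers re-deriving statistics like the r = .53, 95% CI [.34, .82] reported above, a hedged sketch of the usual Fisher z-transform approach to a correlation confidence interval follows; the data are simulated placeholders, not the study data.

```python
# Pearson correlation with a 95% CI via the Fisher z-transform (standard
# method; the abstract does not state how its CIs were computed).
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    r, p = stats.pearsonr(x, y)
    z = np.arctanh(r)                      # Fisher z-transform of r
    se = 1.0 / np.sqrt(len(x) - 3)         # standard error of z
    z_crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
    return r, (lo, hi), p

rng = np.random.default_rng(0)
served = rng.normal(300, 60, 76)               # grams served (invented)
eaten = 0.5 * served + rng.normal(0, 40, 76)   # grams eaten (invented)
print(pearson_with_ci(served, eaten))
```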
Do restrained eaters show increased BMI, food craving and disinhibited eating? A comparison of the Restraint Scale and the Restrained Eating scale of the Dutch Eating Behaviour Questionnaire
Despite being used interchangeably, different measures of restrained eating have been associated with different dietary behaviours. These differences have impeded replicability across the restraint literature and have made it difficult for researchers to interpret results and use the most appropriate measure for their research. Across a total sample of 1731 participants, this study compared the Restraint Scale (RS), and its subscales, to the Dutch Eating Behaviour Questionnaire (DEBQ) across several traits related to overeating. The aim was to explore potential differences between these two questionnaires so that we could help to identify the most suitable measure as a prescreening tool for eating-related interventions. Results revealed that although the two measures are highly correlated with one another (rs = 0.73–0.79), the RS was more strongly associated with external (rs = −0.07 to 0.11 versus −0.18 to −0.01) and disinhibited eating (rs = 0.46 versus 0.31), food craving (rs = 0.12–0.27 versus 0.02–0.13 and 0.22 versus −0.06), and body mass index (rs = 0.25–0.34 versus −0.13 to 0.15). The results suggest that, compared to the DEBQ, the RS is a more appropriate measure for identifying individuals who struggle the most to control their food intake.
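A brief sketch of the kind of comparison reported above, computing Spearman correlations of two restraint measures with BMI; all scores below are simulated placeholders, and a formal comparison of dependent correlations would additionally need a test such as Steiger's z.

```python
# Spearman correlations of two (simulated) restraint scales with BMI,
# mirroring the RS-vs-DEBQ comparison described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bmi = rng.normal(25, 4, 1731)
rs_score = 0.3 * bmi + rng.normal(0, 4, 1731)  # invented Restraint Scale scores
debq_score = rng.normal(0, 1, 1731)            # invented DEBQ restraint scores

for name, scale in [("RS", rs_score), ("DEBQ", debq_score)]:
    rho, p = stats.spearmanr(scale, bmi)
    print(f"{name} vs BMI: rho = {rho:.2f}, p = {p:.3g}")
```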
Heart Rate Response During Mission-Critical Tasks After Space Flight
Adaptation to microgravity could impair crewmembers' ability to perform required tasks upon entry into a gravity environment, such as return to Earth or during extraterrestrial exploration. Historically, data have been collected in a controlled testing environment, but it is unclear whether these physiologic measures translate into changes in functional performance. NASA's Functional Task Test (FTT) aims to investigate whether adaptation to microgravity increases physiologic stress and impairs performance during mission-critical tasks. PURPOSE: To determine whether the well-accepted postflight tachycardia observed during standard laboratory tests would also be observed during simulations of mission-critical tasks during and after recovery from short-duration spaceflight. METHODS: Five astronauts participated in the FTT 30 days before launch, on landing day, and 1, 6, and 30 days after landing. Mean heart rate (HR) was measured during 5 simulations of mission-critical tasks: rising from (1) a chair or (2) a recumbent seated position followed by walking through an obstacle course (egress from a space vehicle), (3) translating graduated masses from one location to another (geological sample collection), (4) walking on a treadmill at 6.4 km/h (ambulation on a planetary surface), and (5) climbing 40 steps on a passive treadmill ladder (ingress to lander). For tasks 1, 2, 3, and 5, astronauts were encouraged to complete the task as quickly as possible. Time to complete tasks and mean HR during each task were analyzed using repeated measures ANOVA and ANCOVA, respectively, with task duration as a covariate. RESULTS: Landing day HR was higher (P < 0.05) than preflight during the upright seat egress (7% ± 3%), treadmill walk (13% ± 3%), and ladder climb (10% ± 4%), and HR remained elevated during the treadmill walk 1 day after landing. During tasks in which HR was not elevated on landing day, task duration was significantly greater on landing day (recumbent seat egress: 25% ± 14%; mass translation: 26% ± 12%; P < 0.05). CONCLUSION: Elevated HR and increased task duration during postflight simulations of mission-critical tasks are suggestive of spaceflight-induced deconditioning. Following short-duration microgravity missions (< 16 d), work performance may be transiently impaired, but recovery is rapid.
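A sketch of the repeated-measures ANOVA named in the methods, using statsmodels' AnovaRM on a long-format table of HR by test session; the subject count mirrors the abstract, but every HR value below is invented, and the ANCOVA with task duration as a covariate is not reproduced here.

```python
# Repeated-measures ANOVA on heart rate across flight-test sessions,
# with one (invented) HR value per subject per session (balanced design).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
sessions = ["pre", "R+0", "R+1", "R+6", "R+30"]
rows = []
for subj in range(1, 6):  # five crewmembers, as in the abstract
    for sess in sessions:
        hr = 95 + (12 if sess == "R+0" else 0) + rng.normal(0, 3)  # invented HR
        rows.append({"subject": subj, "session": sess, "hr": hr})
df = pd.DataFrame(rows)

print(AnovaRM(df, depvar="hr", subject="subject", within=["session"]).fit())
```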
Grassroots Training for Reproducible Science: A Consortium-Based Approach to the Empirical Dissertation
The publisher's final version of this work can be found at https://dx.doi.org/10.1177/1475725719857659.
Attrition from Web-Based Cognitive Testing: A Repeated Measures Comparison of Gamification Techniques
This is the author accepted manuscript. The final version is available from JMIR Publications via the DOI in this record.

Background: The prospect of assessing cognition longitudinally and remotely is attractive to researchers, health practitioners, and pharmaceutical companies alike. However, such repeated-testing regimes place a considerable burden on participants, and with cognitive tasks typically being regarded as effortful and unengaging, these studies may experience high levels of participant attrition. One potential solution is to gamify these tasks to make them more engaging, increasing participant willingness to take part and reducing attrition. However, such an approach must balance task validity with the introduction of entertaining gamelike elements.

Objectives: We set out to investigate the effects of gamelike features on participant attrition using a between-subjects, longitudinal online testing study.

Methods: We used three variants of a common cognitive task, the stop signal task, with a single gamelike feature in each: one variant where points were rewarded for performing optimally, another where the task was given a graphical theme, and a third variant which was a standard stop signal task and served as a control condition. Participants completed four compulsory test sessions over four consecutive days before entering a six-day voluntary testing period where they faced a daily decision to either drop out or continue taking part. Participants were paid for each session they completed.

Results: 482 participants signed up to take part in the study, with 265 completing the requisite four consecutive test sessions. We saw no evidence for an effect of gamification on attrition. A log-rank test showed no evidence of a difference in dropout rates between task variants (χ²(2, N = 265) = 3.022, p = .22), and a one-way ANOVA of the mean number of sessions completed per participant in each variant also showed no evidence for a difference (F(2, 262) = 1.534, p = .21, partial η² = 0.012).

Conclusions: Our findings raise doubts about the ability of gamification to reduce attrition from longitudinal cognitive testing studies.

Funding from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged. This work was supported by the Medical Research Council (MC_UU_12013/6 and MC_UU_12013/7) and a PhD studentship to JL funded by the Economic and Social Research Council and Cambridge Cognition Limited.
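A hedged sketch of the two reported analyses, a log-rank test on days-to-dropout across the three task variants (via lifelines) and a one-way ANOVA on sessions completed (via scipy); all data below are simulated, not the study data.

```python
# Log-rank test across three groups plus a one-way ANOVA on sessions
# completed, mirroring the analyses named in the Results section.
import numpy as np
from scipy import stats
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(3)
n = 265
group = rng.integers(0, 3, n)                 # 0=points, 1=theme, 2=control
sessions = np.clip(rng.poisson(7, n), 4, 10)  # 4 compulsory + up to 6 voluntary days
dropped = (sessions < 10).astype(int)         # 1 = dropped out before the final day

res = multivariate_logrank_test(sessions, group, dropped)
print("log-rank:", res.test_statistic, res.p_value)

f, p = stats.f_oneway(*(sessions[group == g] for g in range(3)))
print("one-way ANOVA:", f, p)
```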
The TeleStroke Mimic (TM)-Score: A Prediction Rule for Identifying Stroke Mimics Evaluated in a Telestroke Network
Background: Up to 30% of acute stroke evaluations are deemed stroke mimics (SM). As telestroke consultation expands across the world, increasing numbers of SM patients are likely being evaluated via Telestroke. We developed a model to prospectively identify ischemic SMs during Telestroke evaluation. Methods and Results: We analyzed 829 consecutive patients from January 2004 to April 2013 in our internal New England-based Partners TeleStroke Network for a derivation cohort, and 332 cases for internal validation. External validation was performed on 226 cases from January 2008 to August 2012 in the Partners National TeleStroke Network. A predictive score was developed using stepwise logistic regression, and its performance was assessed using receiver-operating characteristic (ROC) curve analysis. There were 23% SM in the derivation, 24% in the internal, and 22% in the external validation cohorts based on final clinical diagnosis. Compared to those with ischemic cerebrovascular disease (iCVD), SM had lower mean age, fewer vascular risk factors, more frequent prior seizure, and a different profile of presenting symptoms. The TeleStroke Mimic Score (TM-Score) was based on factors independently associated with SM status, including age, medical history (atrial fibrillation, hypertension, seizures), facial weakness, and National Institutes of Health Stroke Scale >14. The TM-Score performed well on ROC curve analysis (derivation cohort AUC = 0.75, internal validation AUC = 0.71, external validation AUC = 0.77). Conclusions: SMs differ substantially from their iCVD counterparts in their vascular risk profiles and other characteristics. Decision-support tools based on predictive models, such as our TM-Score, may help clinicians consider alternate diagnoses and potentially detect SMs during complex, time-critical telestroke evaluations.
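A minimal sketch of this kind of derivation workflow, fitting a logistic regression on the abstract's predictors and evaluating it by ROC AUC with scikit-learn; stepwise variable selection is not shown, and all data below are simulated placeholders rather than the cohort data.

```python
# Logistic-regression score derivation and ROC AUC evaluation, mirroring
# the workflow described in the abstract; predictors follow the TM-Score
# factors, but the data and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 829  # size of the derivation cohort in the abstract
X = np.column_stack([
    rng.normal(68, 15, n),   # age
    rng.integers(0, 2, n),   # atrial fibrillation
    rng.integers(0, 2, n),   # hypertension
    rng.integers(0, 2, n),   # history of seizures
    rng.integers(0, 2, n),   # facial weakness
    rng.integers(0, 2, n),   # NIHSS > 14
])
logit = -1.2 - 0.02 * X[:, 0] + 1.0 * X[:, 3] - 0.8 * X[:, 4]  # invented relationship
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)     # 1 = stroke mimic

model = LogisticRegression().fit(X, y)
print("derivation AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```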
- …