36 research outputs found

    Cognitive Training and Transcranial Direct Current Stimulation in Mild Cognitive Impairment: A Randomized Pilot Trial

    Background: Transcranial direct current stimulation (tDCS), a non-invasive brain stimulation technique, represents a potential intervention to enhance cognition across clinical populations including Alzheimer’s disease and mild cognitive impairment (MCI). This randomized clinical trial in MCI investigated the effects of anodal tDCS (a-tDCS) delivered to the left inferior frontal gyrus (IFG) combined with gist-reasoning training (SMART) versus sham tDCS (s-tDCS) plus SMART on cognitive measures and on neural changes in resting cerebral blood flow (rCBF). We were also interested in SMART effects on cognitive performance regardless of tDCS group. Methods: Twenty-two MCI participants who completed the baseline cognitive assessment (T1) were randomized into one of two groups: a-tDCS + SMART or s-tDCS + SMART. Of these, 20 participants completed a resting pCASL MRI scan to measure rCBF. Eight SMART sessions were administered over 4 weeks, with a-tDCS or s-tDCS delivered for 20 min before each session. Participants were assessed immediately (T2) and 3 months after training (T3). Results: Significant group × time interactions showed cognitive gains at T2 on executive function (EF) measures of inhibition [D-KEFS Color-Word (p = 0.047)] and innovation [TOSL (p = 0.01)] and on episodic memory [TOSL (p = 0.048)] in the s-tDCS + SMART group but not in the a-tDCS + SMART group. Nonetheless, the gains did not persist 3 months after training (T3). A voxel-based analysis showed a significant increase in regional rCBF in the right middle frontal cortex (MFC) (cluster-wise p = 0.05, k = 1,168 mm³) in a-tDCS + SMART compared with s-tDCS + SMART. No significant relationship was observed between the increased CBF and cognition. Irrespective of group, the combined MCI sample showed gains at T2 in the EF domains of conceptual reasoning [D-KEFS Card Sort (p = 0.033)] and category fluency [COWAT (p = 0.055)], along with gains at T3 in verbal fluency [COWAT (p = 0.009)]. Conclusion: One intriguing finding is that a-tDCS to the left IFG plus SMART increased blood flow to the right MFC; however, the stimulation seemingly blocked the cognitive benefits of SMART on EF (inhibition and innovation) and episodic memory relative to the s-tDCS + SMART group. Although the sample size is small, this paper contributes to growing evidence that cognitive training provides a way to significantly enhance cognitive performance in adults showing memory loss, while the role of a-tDCS in augmenting these effects needs further study.
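
    Group × time effects of the kind reported above are commonly tested with a mixed-effects model on the long-format outcome data. The sketch below is not taken from the paper; it is a minimal illustration, assuming a hypothetical table with columns subject, group (a-tDCS vs. s-tDCS), time (T1/T2/T3), and score, fit with Python's statsmodels.

        # Minimal sketch (not the authors' code): test a group x time interaction
        # on a cognitive outcome with a linear mixed-effects model.
        # Assumed long-format columns (hypothetical names): subject, group, time, score.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("cognitive_scores_long.csv")  # hypothetical file

        # Random intercept per subject; fixed effects for group, time, and their interaction.
        model = smf.mixedlm("score ~ C(group) * C(time)", data=df, groups=df["subject"])
        result = model.fit()
        print(result.summary())  # the C(group)[...]:C(time)[...] rows test the interaction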

    Measuring self-regulation in everyday life: reliability and validity of smartphone-based experiments in alcohol use disorder

    Self-regulation, the ability to guide behavior according to one’s goals, plays an integral role in understanding loss of control over unwanted behaviors, for example in alcohol use disorder (AUD). Yet, experimental tasks that measure processes underlying self-regulation are not easy to deploy in contexts where such behaviors usually occur, namely outside the laboratory, and in clinical populations such as people with AUD. Moreover, lab-based tasks have been criticized for poor test–retest reliability and lack of construct validity. Smartphones can be used to deploy tasks in the field, but often require shorter versions of tasks, which may further decrease reliability. Here, we show that combining smartphone-based tasks with joint hierarchical modeling of longitudinal data can overcome at least some of these shortcomings. We test four short smartphone-based tasks outside the laboratory in a large sample (N = 488) of participants with AUD. Although task measures indeed have low reliability when data are analyzed traditionally by modeling each session separately, joint modeling of longitudinal data increases reliability to good and oftentimes excellent levels. We next test the measures’ construct validity and show that extracted latent factors are indeed in line with theoretical accounts of cognitive control and decision-making. Finally, we demonstrate that a resulting cognitive control factor relates to a real-life measure of drinking behavior and yields stronger correlations than single measures based on traditional analyses. Our findings demonstrate how short, smartphone-based task measures, when analyzed with joint hierarchical modeling and latent factor analysis, can overcome frequently reported shortcomings of experimental tasks.
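
    The reliability gain described here rests on partial pooling: rather than estimating each session in isolation, all sessions are modeled jointly so that stable between-person variance can be separated from session-level noise. The sketch below is only a simplified illustration of that idea with hypothetical column names (subject, session, score); the paper's joint hierarchical models of task parameters are more elaborate.

        # Minimal sketch (hypothetical column names, not the paper's model):
        # estimate test-retest reliability from jointly modeled sessions
        # using a random-intercept mixed model.
        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.read_csv("task_sessions_long.csv")  # hypothetical: one row per subject-session

        # A random intercept per subject pools information across sessions (partial pooling).
        model = smf.mixedlm("score ~ C(session)", data=data, groups=data["subject"])
        fit = model.fit()

        # Intraclass correlation: between-subject variance / (between-subject + residual).
        between_var = float(fit.cov_re.iloc[0, 0])
        residual_var = fit.scale
        icc = between_var / (between_var + residual_var)
        print(f"reliability (ICC) from the joint model: {icc:.2f}")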

    Patterns of Alcohol Consumption Among Individuals With Alcohol Use Disorder During the COVID-19 Pandemic and Lockdowns in Germany

    Importance: Alcohol consumption (AC) leads to death and disability worldwide. Ongoing discussions on potential negative effects of the COVID-19 pandemic on AC need to be informed by real-world evidence. Objective: To examine whether lockdown measures are associated with AC and with consumption-related temporal and psychological within-person mechanisms. Design, Setting, and Participants: This quantitative, intensive, longitudinal cohort study recruited 1743 participants from 3 sites from February 20, 2020, to February 28, 2021. Data were provided before and during the second lockdown of the COVID-19 pandemic in Germany: before lockdown (October 2 to November 1, 2020); light lockdown (November 2 to December 15, 2020); and hard lockdown (December 16, 2020, to February 28, 2021). Main Outcomes and Measures: Daily ratings of AC (main outcome) captured during the 3 lockdown phases (main variable), together with temporal (weekends and holidays) and psychological (social isolation and drinking intention) correlates. Results: Of the 1743 screened participants, 189 (119 [63.0%] male; median [IQR] age, 37 [27.5-52.0] years) with at least 2 alcohol use disorder (AUD) criteria according to the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition), yet without the need for medically supervised alcohol withdrawal, were included. These individuals provided 14 694 smartphone ratings from October 2020 through February 2021. Multilevel modeling revealed significantly higher AC (grams of alcohol per day) on weekend days vs weekdays (β = 11.39; 95% CI, 10.00-12.77; P < .001). Alcohol consumption was above the overall average on Christmas (β = 26.82; 95% CI, 21.87-31.77; P < .001) and New Year’s Eve (β = 66.88; 95% CI, 59.22-74.54; P < .001). During the hard lockdown, perceived social isolation was significantly higher (β = 0.12; 95% CI, 0.06-0.15; P < .001), but AC was significantly lower (β = −5.45; 95% CI, −8.00 to −2.90; P = .001). Independent of lockdown, intention to drink less alcohol was associated with lower AC (β = −11.10; 95% CI, −13.63 to −8.58; P < .001). Notably, differences in AC between weekend days and weekdays decreased both during the hard lockdown (β = −6.14; 95% CI, −9.96 to −2.31; P = .002) and in participants with severe AUD (β = −6.26; 95% CI, −10.18 to −2.34; P = .002). Conclusions and Relevance: This 5-month cohort study found no immediate negative associations of lockdown measures with overall AC. Rather, weekend-weekday and holiday AC patterns exceeded lockdown effects. The weekend-weekday difference in AC showed that weekend drinking cycles decreased as a function of AUD severity and lockdown measures, indicating a potential mechanism of losing and regaining control. This finding suggests that temporal patterns and drinking intention constitute promising targets for prevention and intervention, even in high-risk individuals.
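
    The multilevel analysis summarized above regresses daily consumption on temporal and lockdown-phase predictors while accounting for repeated measures within participants. The sketch below is not the study's analysis code; it is a minimal illustration with hypothetical column names (participant, grams_alcohol, weekend, phase, intention).

        # Minimal sketch (hypothetical column names, not the study's code):
        # multilevel model of daily alcohol consumption with temporal and
        # lockdown-phase predictors and a random intercept per participant.
        import pandas as pd
        import statsmodels.formula.api as smf

        daily = pd.read_csv("daily_ratings.csv")  # hypothetical: one row per participant-day

        model = smf.mixedlm(
            "grams_alcohol ~ weekend * C(phase, Treatment('pre')) + intention",
            data=daily,
            groups=daily["participant"],
        )
        fit = model.fit()
        print(fit.summary())  # weekend, phase, and weekend:phase terms mirror the reported effects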

    Enhancing Innovation and Underlying Neural Mechanisms Via Cognitive Training in Healthy Older Adults

    Non-invasive interventions, such as cognitive training (CT) and physical exercise, are gaining momentum as ways to augment both cognitive and brain function throughout life. One of the most fundamental yet little-studied aspects of human cognition is innovative thinking, especially in older adults. In this study, we utilize a measure of innovative cognition that examines both the quantity and quality of abstracted interpretations. This randomized pilot trial in cognitively normal adults (56–75 years) compared the effect of cognitive reasoning training (SMART) on innovative cognition as measured by the Multiple Interpretations Measure (MIM). We also examined brain changes in relation to MIM using two MRI-based measurements: arterial spin labeling (ASL) to measure cerebral blood flow (CBF) and functional connectivity MRI (fcMRI) to measure default mode and central executive network (CEN) synchrony at rest. Participants (N = 58) were randomized to the CT, physical exercise (physical training, PT), or control (CN) group, with the CT and PT groups receiving training for 3 h/week over 12 weeks. They were assessed at baseline, mid-training, and post-training using the innovative cognition and MRI measures. First, the CT group showed significant gains from pre- to post-training on the innovation measure, whereas the physical exercise and control groups failed to show significant gains. Next, the CT group showed increased CBF in the medial orbitofrontal cortex (mOFC) and bilateral posterior cingulate cortex (PCC), two nodes within the Default Mode Network (DMN), compared to the physical exercise and control groups. Last, significant correlations were found between innovation performance and connectivity of two major networks: the CEN (positive correlation) and the DMN (negative correlation). These results support the view that both the CEN and DMN are important for enhancement of innovative cognition. We propose that neural mechanisms in healthy older adults can be modified through reasoning training to better subserve enhanced innovative cognition.
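
    Resting-state network synchrony of the kind related to innovation here is typically quantified as the correlation between ROI time series, Fisher z-transformed and then correlated with behavior. The sketch below is only an illustration under assumed inputs (hypothetical files with ROI time series and MIM scores); it is not the authors' imaging pipeline.

        # Minimal sketch (assumed inputs, not the authors' pipeline):
        # quantify within-network synchrony and relate it to innovation scores.
        import numpy as np
        from scipy.stats import pearsonr

        roi_a = np.load("cen_roi_a_timeseries.npy")  # hypothetical: (n_subjects, n_timepoints)
        roi_b = np.load("cen_roi_b_timeseries.npy")  # hypothetical: (n_subjects, n_timepoints)
        mim = np.load("mim_scores.npy")              # hypothetical: (n_subjects,)

        # Per-subject functional connectivity: Pearson r between ROI time series,
        # Fisher z-transformed so values are comparable across subjects.
        fc_z = np.arctanh([pearsonr(a, b)[0] for a, b in zip(roi_a, roi_b)])

        # Brain-behavior association: does stronger CEN synchrony track better innovation?
        r, p = pearsonr(fc_z, mim)
        print(f"CEN connectivity vs. MIM: r = {r:.2f}, p = {p:.3f}")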

    Enhancing Executive Function and Neural Health in Bipolar Disorder Through Reasoning Training

    Cognitive deficits in executive function and memory among individuals with bipolar disorder (BD) are well documented; however, only recently have efforts begun to address whether such cognitive deficits can be ameliorated through cognitive training. This pilot study examined the effects of a top-down, cognitive reasoning training program in adults with BD on both brain and cognitive measures. Twenty-seven participants (11 male, 16 female), aged 21 to 70 years, completed the study. Participants completed neurocognitive testing and functional magnetic resonance imaging before and after training; training consisted of eight hours (2 hours/week) delivered in small groups. The training delivered information processing strategies that could be implemented in, and applied to, a variety of daily living contexts. Results indicated that participants showed significant gains in the primary outcome measure of complex abstraction, also referred to as gist reasoning, as well as in the untrained domains of executive function and memory. We found a significant increase in resting cerebral blood flow (CBF) in the left inferior frontal gyrus after cognitive training. We also found that resting CBF in the right middle frontal gyrus correlated positively with performance on the measure of complex abstraction. This feasibility study provides promising evidence that short-term reasoning training can enhance cognitive performance and brain health in adults with BD. These data motivate further efforts to explore adjuvant therapeutics to improve cognitive performance and underlying brain systems in bipolar disorder, as well as other psychiatric disorders. ClinicalTrials.gov identifier: NCT0284328

    Long-term effects of marijuana use on the brain.

    The existing literature on the long-term effects of marijuana on the brain provides an inconsistent picture (i.e., presence or absence of structural changes) due to methodological differences across studies. We overcame these methodological issues by collecting multimodal measures in a large group of chronic marijuana-using adults with a wide age range, which allows changes to be characterized across the lifespan without the developmental or maturational biases present in other studies. Our findings suggest that chronic marijuana use is associated with complex neuroadaptive processes and that onset and duration of use have unique effects on these processes.

    Modular Brain Network Organization Predicts Response to Cognitive Training in Older Adults.

    Cognitive training interventions are a promising approach to mitigate cognitive deficits common in aging and, ultimately, to improve functioning in older adults. Baseline neural factors, such as properties of brain networks, may predict training outcomes and can be used to improve the effectiveness of interventions. Here, we investigated the relationship between baseline brain network modularity, a measure of the segregation of brain sub-networks, and training-related gains in cognition in older adults. We found that older adults with more segregated brain sub-networks (i.e., more modular networks) at baseline exhibited greater training improvements in the ability to synthesize complex information. Further, the relationship between modularity and training-related gains was more pronounced in sub-networks mediating "associative" functions compared with those involved in sensory-motor processing. These results suggest that assessments of brain networks can be used as a biomarker to guide the implementation of cognitive interventions and improve outcomes across individuals. More broadly, these findings also suggest that properties of brain networks may capture individual differences in learning and neuroplasticity. Trial Registration: ClinicalTrials.gov, NCT#00977418
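
    Modularity of the kind used here as a baseline predictor is usually computed by partitioning a (thresholded) functional connectivity matrix into communities and evaluating Newman's modularity Q. The sketch below is a minimal illustration with assumed inputs (hypothetical files holding connectivity matrices and training-gain scores), not the study's actual graph-analysis pipeline.

        # Minimal sketch (assumed inputs, not the study's pipeline): compute modularity Q
        # per subject from a thresholded connectivity matrix and relate it to training gains.
        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity
        from scipy.stats import pearsonr

        conn = np.load("connectivity_matrices.npy")  # hypothetical: (n_subjects, n_rois, n_rois)
        gains = np.load("training_gains.npy")        # hypothetical: (n_subjects,)

        q_values = []
        for mat in conn:
            adj = np.where(mat > 0.3, mat, 0.0)      # keep strong positive weights (arbitrary threshold)
            np.fill_diagonal(adj, 0.0)
            graph = nx.from_numpy_array(adj)
            communities = greedy_modularity_communities(graph, weight="weight")
            q_values.append(modularity(graph, communities, weight="weight"))

        # Do more segregated (modular) baseline networks predict larger training gains?
        r, p = pearsonr(q_values, gains)
        print(f"baseline modularity vs. training gain: r = {r:.2f}, p = {p:.3f}")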