
    Cognitive Change as a Predictor of Session-to-Session Symptom Change in Cognitive Therapy for Depression

    The Ohio State University Denman Research Forum, Award Winner
    The Three Minute Undergraduate Thesis Competition, Awardee
    Studies of cognitive therapy for depression (CT) suggest that decreases in negative cognitions coincide with reductions in depressive symptoms over the course of treatment. Although these results are consistent with the theory that cognitive change is responsible for therapeutic gains, the timing of such assessments has precluded establishing cognitive change as a predictor of subsequent symptom reduction. To test cognitive change as a predictor of symptom change, we examined patient-reported cognitive change observed during (immediate cognitive change; CC-I) and between (delayed cognitive change; CC-D) therapy sessions as predictors of symptom reduction across sessions 1 through 5 in CT. Additionally, we explored whether these potential predictive relations vary according to patients’ pretreatment maladaptive personality traits and interpersonal problems and functioning. To further understand the function of cognitive change in CT, we also assessed CC-I as a predictor of session-to-session CC-D across these sessions of interest. A total of 126 adults with major depressive disorder participated in 16 weeks of CT. CC-I was evaluated immediately after each session, and CC-D and depressive symptoms were assessed before each session. To rule out stable patient characteristics as potential confounds, we disaggregated the within- and between-patient effects of cognitive change scores and focused on the within-patient effects as predictors. Within-patient CC-I significantly predicted subsequent CC-D, and within-patient CC-D significantly predicted subsequent symptom change. Within-patient CC-I did not significantly predict session-to-session symptom change. Interestingly, the relation of within-patient CC-I and symptom change was significantly moderated by patient maladaptive personality traits and interpersonal problems, whereas interpersonal functioning significantly moderated the relation of within-patient CC-D and symptom change. These results suggest that cognitive change observed during therapy sessions predicts additional cognitive change between sessions, which ultimately produces subsequent depressive symptom reduction.
    Undergraduate Summer Research Fellowship
    The Ohio State University Arts & Sciences Honors Research Scholarship
    Billingslea Research Award
    Piyu C. Ko Memorial Scholarship
    Research Scholar Award
    A five-year embargo was granted for this item.
    Academic Major: Psychology
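The within/between disaggregation described above is commonly done by person-mean centering, where each patient's mean score is the between-patient component and deviations from that mean are the within-patient component. A minimal sketch under that assumption (the function name and data are illustrative, not from the study):

```python
import statistics

def disaggregate(scores_by_patient):
    """Split repeated measures into a between-patient component (each
    patient's mean) and within-patient deviations from that mean."""
    between, within = {}, {}
    for patient, scores in scores_by_patient.items():
        mean = statistics.fmean(scores)
        between[patient] = mean
        within[patient] = [s - mean for s in scores]
    return between, within

# Hypothetical session-to-session cognitive-change scores for two patients
scores = {"p1": [2.0, 4.0, 6.0], "p2": [1.0, 1.0, 4.0]}
between, within = disaggregate(scores)
# Within-patient deviations sum to zero for each patient, so they carry
# only session-level variation, free of stable patient characteristics.
```

Because the within-patient deviations are orthogonal to each patient's mean, using them as predictors rules out stable patient characteristics as confounds.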

    Association of leukocyte telomere length with mortality among adult participants in 3 longitudinal studies

    Importance: Leukocyte telomere length (LTL) is a trait associated with risk of cardiovascular disease and cancer, the 2 major disease categories that largely define longevity in the United States. However, it remains unclear whether LTL is associated with the human life span. Objective: To examine whether LTL is associated with the life span of contemporary humans. Design, Setting, and Participants: This cohort study included 3259 adults of European ancestry from the Cardiovascular Health Study (CHS), Framingham Heart Study (FHS), and Women's Health Initiative (WHI). Leukocyte telomere length was measured in 1992 and 1997 in the CHS, from 1995 to 1998 in the FHS, and from 1993 to 1998 in the WHI. Data analysis was conducted from February 2017 to December 2019. Main Outcomes and Measures: Death and LTL, measured by Southern blots of the terminal restriction fragments, were the main outcomes. Cause of death was adjudicated by end point committees. Results: The analyzed sample included 3259 participants (2342 [71.9%] women), with a median (range) age of 69.0 (50.0-98.0) years at blood collection. The median (range) follow-up until death was 10.9 (0.2-23.0) years in CHS, 19.7 (3.4-23.0) years in FHS, and 16.6 (0.5-20.0) years in WHI. During follow-up, there were 1525 deaths (482 [31.6%] of cardiovascular disease; 373 [24.5%] of cancer, and 670 [43.9%] of other or unknown causes). Short LTL, expressed in residual LTL, was associated with increased mortality risk. Overall, the hazard ratio for all-cause mortality for a 1-kilobase decrease in LTL was 1.34 (95% CI, 1.21-1.47). This association was stronger for noncancer causes of death (cardiovascular death: hazard ratio, 1.28; 95% CI, 1.08-1.52; cancer: hazard ratio, 1.13; 95% CI, 0.93-1.36; and other causes: hazard ratio, 1.53; 95% CI, 1.32-1.77). Conclusions and Relevance: The results of this study indicate that LTL is associated with a natural life span limit in contemporary humans.
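A per-kilobase hazard ratio scales multiplicatively with the size of the decrease under the proportional-hazards model. A small illustration of that arithmetic, using the reported overall estimate (this is a generic property of such models, not a calculation from the study's data):

```python
import math

# The reported hazard ratio is 1.34 per 1-kilobase decrease in LTL.
hr_per_kb = 1.34
beta = math.log(hr_per_kb)   # log-hazard increase per 1-kb decrease
hr_2kb = math.exp(2 * beta)  # implied HR for a 2-kb decrease, i.e. 1.34**2
```

So a participant whose LTL is 2 kb shorter than another's would, under this model, carry roughly a 1.8-fold all-cause mortality hazard relative to that comparator.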

    Standardization of in vitro digestibility and DIAAS method based on the static INFOGEST protocol

    Background: The FAO recommends the digestible indispensable amino acid score (DIAAS) as the measure for protein quality, for which the true ileal digestibility needs to be assessed in humans or pigs. However, due to high costs and ethical concerns, the FAO also strongly encourages the development of validated in vitro methods that complement the in vivo experiments. Method: Recently, an in vitro workflow based on the validated static INFOGEST protocol was developed and compared with in vivo data. In parallel to the validation with in vivo data, the repeatability and reproducibility of the in vitro protocol were tested in an international ring trial (RT) with the aim of establishing an international ISO standard method within the International Dairy Federation (IDF). Five different dairy products (skim milk powder, whole milk powder, whey protein isolate, yoghurt, and cheese) were analyzed in 32 different laboratories from 18 different countries, across 4 continents. Results: In vitro protein digestibilities based on nitrogen, free R-NH2, and total amino acids, as well as DIAAS values, were calculated and compared with in vivo data, where available. Conclusion: The in vitro method is suited for quantification of digestibility and will be extended to other food matrices.
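For context, DIAAS is defined by the FAO as 100 times the lowest ratio of digestible indispensable amino acid (IAA) content in the food to the corresponding amount in a reference pattern. A minimal sketch of that calculation (the two-amino-acid data below are hypothetical; real scoring uses the full IAA set and age-group-specific reference patterns):

```python
def diaas(food_iaa, reference_iaa, ileal_digestibility):
    """DIAAS (%) = 100 * min over IAAs of
    (digestible IAA in food, mg/g protein) / (same IAA in reference pattern)."""
    ratios = {
        aa: 100.0 * food_iaa[aa] * ileal_digestibility[aa] / reference_iaa[aa]
        for aa in reference_iaa
    }
    limiting = min(ratios, key=ratios.get)  # first-limiting amino acid
    return ratios[limiting], limiting

# Hypothetical example values, mg IAA per g protein
food = {"lys": 80.0, "met_cys": 30.0}
ref = {"lys": 57.0, "met_cys": 27.0}   # illustrative reference pattern
dig = {"lys": 0.95, "met_cys": 0.90}   # true ileal digestibility per IAA
score, limiting_aa = diaas(food, ref, dig)
```

The in vitro workflow's contribution is supplying the per-amino-acid digestibility values without the human or pig ileal measurements the formula otherwise requires.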

    Plan of the Boundaries of the Burgfrieden of Regensburg

    Digital media availability has surged over the past decade. Because of a lack of comprehensive measurement tools, this rapid growth in access to digital media is accompanied by a scarcity of research examining the family media context and sociocognitive outcomes. There is also little cross-cultural research in families with young children. Modern media are mobile, interactive, and often short in duration, making them difficult to remember when caregivers respond to surveys about media use. The Comprehensive Assessment of Family Media Exposure (CAFE) Consortium has developed a novel tool to measure household media use through a web-based questionnaire, time-use diary, and passive-sensing app installed on family mobile devices. The goal of developing a comprehensive assessment of family media exposure was to take into account the contextual factors of media use and improve upon the limitations of existing self-report measures, while creating a consistent, scalable, and cost-effective tool. The CAFE tool captures the content and context of early media exposure and addresses the limitations of prior media measurement approaches. Preliminary data collected using this measure have been integrated into a shared visualization platform. In this perspective article, we take a tools-of-the-trade approach (Oakes, 2010) to describe four challenges associated with measuring household media exposure in families with young children: measuring attitudes and practices; capturing content and context; measuring short bursts of mobile device usage; and integrating data to capture the complexity of household media usage. We illustrate how each of these challenges can be addressed with preliminary data collected with the CAFE tool and visualized on our dashboard. We conclude with future directions, including plans to test the reliability, validity, and generalizability of these measures.

    A multimodal cell census and atlas of the mammalian primary motor cortex

    We report the generation of a multimodal cell census and atlas of the mammalian primary motor cortex (MOp or M1) as the initial product of the BRAIN Initiative Cell Census Network (BICCN). This was achieved by coordinated large-scale analyses of single-cell transcriptomes, chromatin accessibility, DNA methylomes, spatially resolved single-cell transcriptomes, morphological and electrophysiological properties, and cellular resolution input-output mapping, integrated through cross-modal computational analysis. Together, our results advance the collective knowledge and understanding of brain cell type organization. First, our study reveals a unified molecular genetic landscape of cortical cell types that congruently integrates their transcriptome, open chromatin, and DNA methylation maps. Second, cross-species analysis achieves a unified taxonomy of transcriptomic types and their hierarchical organization that is conserved from mouse to marmoset and human. Third, cross-modal analysis provides compelling evidence for the epigenomic, transcriptomic, and gene regulatory basis of neuronal phenotypes such as their physiological and anatomical properties, demonstrating the biological validity and genomic underpinning of neuron types and subtypes. Fourth, in situ single-cell transcriptomics provides a spatially resolved cell type atlas of the motor cortex. Fifth, integrated transcriptomic, epigenomic, and anatomical analyses reveal the correspondence between neural circuits and transcriptomic cell types. We further present an extensive genetic toolset for targeting and fate mapping glutamatergic projection neuron types toward linking their developmental trajectory to their circuit function. Together, our results establish a unified and mechanistic framework of neuronal cell type organization that integrates multi-layered molecular genetic and spatial information with multi-faceted phenotypic properties.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively.
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
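A posterior probability of harm like the 94.9% reported for the ACE inhibitor group is simply the share of posterior draws in which the odds ratio falls below 1. A simulation sketch of that idea (the lognormal posterior here is an assumption chosen to roughly match the reported median 0.77 and 95% CrI 0.58-1.06, not the trial's actual bayesian cumulative logistic model):

```python
import math
import random

random.seed(1)

# Assumed lognormal posterior: median 0.77, with sigma backed out of the
# reported 95% credible interval (0.58 to 1.06) on the log scale.
mu = math.log(0.77)
sigma = (math.log(1.06) - math.log(0.58)) / (2 * 1.96)

# Draw odds-ratio samples and count the fraction indicating harm (OR < 1).
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]
p_harm = sum(d < 1.0 for d in draws) / len(draws)
```

Under these assumptions the simulated probability of harm comes out in the same neighborhood as the reported 94.9%, which is why an OR whose credible interval narrowly includes 1 can still carry a high posterior probability of a worse outcome.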
