98 research outputs found

    Chronic Maternal Depression Is Associated with Reduced Weight Gain in Latino Infants from Birth to 2 Years of Age

    BACKGROUND: Latino children are at increased risk for micronutrient deficiencies and problems of overweight and obesity. Exposures in pregnancy and early postpartum may impact future growth trajectories. OBJECTIVES: To evaluate the relationship between prenatal and postnatal maternal depressive symptoms and infant growth from birth to 2 years of age in a cohort of Latino infants. METHODS: We recruited pregnant Latina mothers at two San Francisco hospitals and followed their healthy infants to 24 months of age. At 6, 12 and 24 months of age, infants were weighed and measured. Maternal depressive symptoms were assessed prenatally and at 4-6 weeks postpartum. Women who had high depressive symptoms at both time periods were defined as having chronic depression. Logistic mixed models were applied to compare growth curves and risk for overweight and underweight based on exposure to maternal depression. RESULTS: We followed 181 infants to 24 months. At 12 and 24 months, respectively, 27.4% and 40.5% were overweight, and 5.6% and 2.2% were underweight. Exposure to chronic maternal depression was associated with underweight (OR = 12.12, 95%CI 1.86-78.78) and with reduced weight gain in the first 2 years of life (Coef = -0.48, 95% CI -0.94-0.01) compared with unexposed infants or infants exposed to episodic depression (depression at one time point). Exposure to chronic depression was also associated with reduced risk for overweight in the first 2 years of life (OR 0.28, 95%CI 0.03-0.92). CONCLUSIONS: Exposure to chronic maternal depression in the pre- and postnatal period was associated with reduced weight gain in the first two years of life and greater risk for failure to thrive, in comparison with unexposed infants or those exposed episodically. The infants of mothers with chronic depression may need additional nutritional monitoring and intervention.

    Standardization of cytokine flow cytometry assays

    BACKGROUND: Cytokine flow cytometry (CFC) or intracellular cytokine staining (ICS) can quantitate antigen-specific T cell responses in settings such as experimental vaccination. Standardization of ICS among laboratories performing vaccine studies would provide a common platform by which to compare the immunogenicity of different vaccine candidates across multiple international organizations conducting clinical trials. As such, a study was carried out among several laboratories involved in HIV clinical trials to define the inter-lab precision of ICS using various sample types and a common protocol for each experiment (see additional files online). RESULTS: Three sample types (activated, fixed, and frozen whole blood; fresh whole blood; and cryopreserved PBMC) were shipped to various sites, where ICS assays using cytomegalovirus (CMV) pp65 peptide mix or control antigens were performed in parallel in 96-well plates. For one experiment, antigens and antibody cocktails were lyophilised into 96-well plates to simplify and standardize the assay setup. Results (CD4+cytokine+ cells and CD8+cytokine+ cells) were determined by each site. Raw data were also sent to a central site for batch analysis with a dynamic gating template. Mean inter-laboratory coefficient of variation (C.V.) ranged from 17–44%, depending upon the sample type and analysis method. Cryopreserved peripheral blood mononuclear cells (PBMC) yielded lower inter-lab C.V.s than whole blood. Centralized analysis (using a dynamic gating template) reduced the inter-lab C.V. by 5–20%, depending upon the experiment. The inter-lab C.V. was lowest (18–24%) for samples with a mean of >0.5% IFNγ+ T cells, and highest (57–82%) for samples with a mean of <0.1% IFNγ+ cells. CONCLUSION: ICS assays can be performed by multiple laboratories using a common protocol with good inter-laboratory precision, which improves as the frequency of responding cells increases. Cryopreserved PBMC may yield slightly more consistent results than shipped whole blood. Analysis, particularly gating, is a significant source of variability, and can be reduced by centralized analysis and/or use of a standardized dynamic gating template. Use of pre-aliquoted lyophilized reagents for stimulation and staining can provide further standardization to these assays.
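    The inter-laboratory C.V. summarized above is a simple statistic: the spread of one sample's readings across labs, relative to their mean. A minimal Python sketch of that computation follows; the five lab readings are hypothetical values invented for illustration, not data from the study.

```python
import statistics

def interlab_cv(values):
    """Inter-laboratory coefficient of variation, in percent:
    sample standard deviation across labs divided by the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return 100.0 * sd / mean

# Hypothetical %CD8+IFNg+ readings of one shipped sample from five labs.
readings = [0.52, 0.61, 0.48, 0.70, 0.55]
cv = interlab_cv(readings)  # roughly 15%, i.e. good inter-lab precision
```

    A centralized re-analysis with a shared gating template would, per the study, tend to shrink this spread further.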

    Mortality and pulmonary complications in patients undergoing surgery with perioperative SARS-CoV-2 infection: an international cohort study

    Background: The impact of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on postoperative recovery needs to be understood to inform clinical decision making during and after the COVID-19 pandemic. This study reports 30-day mortality and pulmonary complication rates in patients with perioperative SARS-CoV-2 infection. Methods: This international, multicentre, cohort study at 235 hospitals in 24 countries included all patients undergoing surgery who had SARS-CoV-2 infection confirmed within 7 days before or 30 days after surgery. The primary outcome measure was 30-day postoperative mortality and was assessed in all enrolled patients. The main secondary outcome measure was pulmonary complications, defined as pneumonia, acute respiratory distress syndrome, or unexpected postoperative ventilation. Findings: This analysis includes 1128 patients who had surgery between Jan 1 and March 31, 2020, of whom 835 (74·0%) had emergency surgery and 280 (24·8%) had elective surgery. SARS-CoV-2 infection was confirmed preoperatively in 294 (26·1%) patients. 30-day mortality was 23·8% (268 of 1128). Pulmonary complications occurred in 577 (51·2%) of 1128 patients; 30-day mortality in these patients was 38·0% (219 of 577), accounting for 81·7% (219 of 268) of all deaths. In adjusted analyses, 30-day mortality was associated with male sex (odds ratio 1·75 [95% CI 1·28–2·40], p<0·0001), age 70 years or older versus younger than 70 years (2·30 [1·65–3·22], p<0·0001), American Society of Anesthesiologists grades 3–5 versus grades 1–2 (2·35 [1·57–3·53], p<0·0001), malignant versus benign or obstetric diagnosis (1·55 [1·01–2·39], p=0·046), emergency versus elective surgery (1·67 [1·06–2·63], p=0·026), and major versus minor surgery (1·52 [1·01–2·31], p=0·047). Interpretation: Postoperative pulmonary complications occur in half of patients with perioperative SARS-CoV-2 infection and are associated with high mortality. Thresholds for surgery during the COVID-19 pandemic should be higher than during normal practice, particularly in men aged 70 years and older. Consideration should be given to postponing non-urgent procedures and promoting non-operative treatment to delay or avoid the need for surgery. Funding: National Institute for Health Research (NIHR), Association of Coloproctology of Great Britain and Ireland, Bowel and Cancer Research, Bowel Disease Research Foundation, Association of Upper Gastrointestinal Surgeons, British Association of Surgical Oncology, British Gynaecological Cancer Society, European Society of Coloproctology, NIHR Academy, Sarcoma UK, Vascular Society for Great Britain and Ireland, and Yorkshire Cancer Research.

    Retrospective evaluation of whole exome and genome mutation calls in 746 cancer samples

    Funder: NCI U24CA211006. Abstract: The Cancer Genome Atlas (TCGA) and International Cancer Genome Consortium (ICGC) curated consensus somatic mutation calls using whole exome sequencing (WES) and whole genome sequencing (WGS), respectively. Here, as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium, which aggregated whole genome sequencing data from 2,658 cancers across 38 tumour types, we compare WES and WGS side-by-side from 746 TCGA samples, finding that ~80% of mutations overlap in covered exonic regions. We estimate that low variant allele fraction (VAF < 15%) and clonal heterogeneity contribute up to 68% of private WGS mutations and 71% of private WES mutations. We observe that ~30% of private WGS mutations trace to mutations identified by a single variant caller in WES consensus efforts. WGS captures both ~50% more variation in exonic regions and unobserved mutations in loci with variable GC-content. Together, our analysis highlights technological divergences between two reproducible somatic variant detection efforts.
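    The ~80% overlap figure above is, at heart, a set comparison of variant calls from the two pipelines. A hedged sketch of one way such an overlap can be computed (the abstract does not specify the exact denominator, and the variant keys below are invented for illustration):

```python
def overlap_fraction(calls_a, calls_b):
    """Fraction of pipeline A's calls that pipeline B also made.
    Variants are keyed so identical calls compare equal."""
    a, b = set(calls_a), set(calls_b)
    return len(a & b) / len(a)

# Hypothetical variant keys: (chromosome, position, ref, alt).
wgs = {("chr1", 100, "A", "T"), ("chr1", 250, "G", "C"),
       ("chr2", 50, "C", "T"), ("chr3", 70, "T", "G"),
       ("chr7", 10, "G", "A")}
wes = {("chr1", 100, "A", "T"), ("chr1", 250, "G", "C"),
       ("chr2", 50, "C", "T"), ("chr3", 70, "T", "G")}

shared = overlap_fraction(wgs, wes)  # 4 of the 5 WGS calls are shared
```

    Calls private to one pipeline (here, the chr7 variant) are the ones the abstract attributes largely to low VAF and clonal heterogeneity.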

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive a RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570.
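    As a sanity check on the survival counts reported above, an unadjusted odds ratio can be computed directly from a 2×2 table. Note this crude value is illustrative only: the trial's primary analysis was an adjusted bayesian cumulative logistic model, not this calculation.

```python
def odds_ratio(a_events, a_total, b_events, b_total):
    """Unadjusted odds ratio of an event in group A relative to group B."""
    a_odds = a_events / (a_total - a_events)  # events vs non-events, group A
    b_odds = b_events / (b_total - b_events)  # events vs non-events, group B
    return a_odds / b_odds

# Hospital survival counts from the abstract: ACE inhibitor vs control.
or_ace = odds_ratio(166, 231, 182, 231)  # < 1: lower odds of survival
```

    The value falls below 1, consistent with the high posterior probability of harm the adjusted model assigned to the ACE inhibitor arm.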

    Using a patient-centered approach to benefit-harm assessment in treatment decision-making: a case study in uveitis

    BACKGROUND: Synthesizing evidence from comparative effectiveness trials can be difficult because multiple outcomes of differing importance must be considered. The goal of this study was to demonstrate an approach to conducting quantitative benefit-harm assessment that considers patient preferences. METHODS: We conducted a benefit-harm assessment using data from the Multicenter Uveitis Steroid Treatment Trial, which compared corticosteroid implant versus systemic corticosteroids and immunosuppression in non-infectious intermediate, posterior, and panuveitis. We focused on clinical outcomes considered important to patients, including visual acuity, development of cataracts/glaucoma, need for eye surgery, prescription-requiring hypertension, hyperlipidemia, and infections. Patient preferences elicited in a recent survey were then incorporated into our assessment of the benefit-harm balance. RESULTS: Benefit-harm metrics were calculated for each time point, summarizing the numbers of outcomes caused or prevented by implant therapy versus systemic therapy if 1000 patients were treated. The benefit-harm metric was -129 (95% confidence interval: -242 to -14), -317 (-436 to -196), -390 (-514 to -264), and -526 (-687 to -368) at 6, 12, 18, and 24 months of follow-up, respectively, suggesting that systemic therapy may have a better benefit-harm balance. However, measures of quality of life for patients treated with implant therapy were found to be better than those for patients treated with systemic therapy over the same time period. CONCLUSIONS: Results of the benefit-harm assessment differed from the prospectively collected quality of life data during trial follow-up. Future studies should explore the reasons for such discrepancies and the strengths and weaknesses of each method of assessing treatment benefits and harms.
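    A benefit-harm metric of this kind combines, for each outcome, the risk difference between treatments with a patient-elicited importance weight, scaled to 1000 treated patients. The sketch below is a minimal illustration under that assumption; it is not the authors' exact model, and the weights and risks are invented for illustration, not trial values.

```python
def net_benefit_per_1000(outcomes):
    """Preference-weighted net count of outcomes prevented (positive)
    or caused (negative) per 1000 patients given implant rather than
    systemic therapy.

    outcomes: iterable of (weight, risk_implant, risk_systemic), where
    weight is a patient-elicited importance and risks are proportions.
    """
    total = 0.0
    for weight, risk_implant, risk_systemic in outcomes:
        # Positive contribution when implant therapy lowers the risk.
        total += weight * (risk_systemic - risk_implant) * 1000
    return total

# Hypothetical weights and 24-month risks, for illustration only.
outcomes = [
    (1.0, 0.10, 0.12),   # loss of visual acuity
    (0.6, 0.60, 0.25),   # cataract requiring surgery
    (0.5, 0.20, 0.08),   # glaucoma
]
score = net_benefit_per_1000(outcomes)  # negative favours systemic therapy
```

    A negative score, as in the trial's reported metrics, means the weighted harms of implant therapy outnumber its weighted benefits per 1000 patients treated.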

    Dissociations of the Fluocinolone Acetonide Implant: The Multicenter Uveitis Steroid Treatment (MUST) Trial and Follow-up Study

    To describe fluocinolone acetonide implant dissociations in the Multicenter Uveitis Steroid Treatment (MUST) Trial. Randomized clinical trial with extended follow-up. Review of data collected on the first implant in the eye(s) of participants. Dissociation was defined as the drug pellet no longer being affixed to the strut and was categorized as spontaneous or surgically related. A total of 250 eyes (146 patients) had at least 1 implant placed. Median follow-up time after implant placement was 6 years (range 0.5-9.2 years). Thirty-four dissociations were reported in 30 participants. There were 22 spontaneous events in 22 participants; the 6-year cumulative risk of a spontaneous dissociation was 4.8% (95% confidence interval [CI]: 2.4%-9.1%). The earliest event occurred 4.8 years after placement. Nine of the 22 eyes with data had a decline in visual acuity of ≥5 letters temporally related to the dissociation. Thirty-nine implant removal surgeries were performed, 33 with replacement. Twelve dissociations were noted during implant removal surgeries in 10 participants (26%, 95% CI 15%-48%); 5 of these eyes had a decline in visual acuity of ≥5 letters after surgery. The time from implant placement to removal surgery was longer for surgeries at which dissociated implants were identified than for those at which none was found (5.7 vs 3.7 years, P < .001). Overall, visual acuity declined 15 or more letters from pre-implant values in 22% of affected eyes; declines were frequently associated with complications of uveitis or its treatment. The risk of dissociation of Retisert implants increases during follow-up; it is greater with removal/exchange surgeries, and the risk of both spontaneous and surgically related events rises with implant longevity. In 22% of affected eyes, visual acuity declined by 15 letters. In the context of eyes with moderate to severe uveitis for years, this rate is not unexpected.