
    Proinflammatory Diets during Pregnancy and Neonatal Adiposity in the Healthy Start Study

    Objective: To evaluate the association between dietary inflammatory index (DII) scores during pregnancy and neonatal adiposity. Study design: The analysis included 1078 mother–neonate pairs in Healthy Start, a prospective prebirth cohort. Diet was assessed using repeated 24-hour dietary recalls. DII scores were obtained by summing nutrient intakes, which were standardized to global means and multiplied by inflammatory effect scores. Air displacement plethysmography measured fat mass and fat-free mass within 72 hours of birth. Linear and logistic models evaluated the associations of DII scores with birth weight, fat mass, fat-free mass, and percent fat mass, and with the categorical outcomes of small- and large-for-gestational-age. We tested for interactions with prepregnancy BMI and gestational weight gain. Results: The interaction between prepregnancy BMI and DII was statistically significant for birth weight, neonatal fat mass, and neonatal percent fat mass. Among neonates born to obese women, each 1-unit increase in DII was associated with increased birth weight (53 g; 95% CI, 20-87), fat mass (20 g; 95% CI, 7-33), and percent fat mass (0.5%; 95% CI, 0.2-0.8). No interaction was detected for small- and large-for-gestational-age. Each 1-unit increase in DII score was associated with a 40% increase in the odds of a large-for-gestational-age neonate (OR 1.4; 95% CI, 1.0-2.0; P = .04), but not of a small-for-gestational-age neonate (OR 1.0; 95% CI, 0.8-1.2; P = .80). There was no evidence of an interaction with gestational weight gain. Conclusions: Our findings support the hypothesis that an increased inflammatory milieu during pregnancy may be a risk factor for neonatal adiposity. Trial registration: Clinicaltrials.gov: NCT02273297.
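
    The DII construction described in the abstract lends itself to a compact sketch. The snippet below follows that description (standardize each nutrient intake to a global mean, weight by its inflammatory effect score, sum); the reference means, SDs, and effect scores shown are placeholders for illustration, not the published global values.

```python
# Minimal sketch of the DII construction described above (illustrative only).
# Reference means/SDs and effect scores are placeholders, not the published
# global values from the DII literature.

GLOBAL_REFERENCE = {
    # nutrient: (global mean, global SD) -- placeholder numbers
    "fiber_g": (18.8, 4.9),
    "saturated_fat_g": (28.6, 8.0),
}

INFLAMMATORY_EFFECT = {
    # nutrient: inflammatory effect score (placeholders; negative values
    # are anti-inflammatory, positive values pro-inflammatory)
    "fiber_g": -0.663,
    "saturated_fat_g": 0.373,
}

def dii_score(intakes: dict) -> float:
    """Sum standardized nutrient intakes weighted by effect scores."""
    score = 0.0
    for nutrient, value in intakes.items():
        mean, sd = GLOBAL_REFERENCE[nutrient]
        z = (value - mean) / sd          # standardize to the global mean
        score += z * INFLAMMATORY_EFFECT[nutrient]
    return score

print(dii_score({"fiber_g": 12.0, "saturated_fat_g": 35.0}))
```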

    Maternal Diet Quality During Pregnancy and Offspring Hepatic Fat in Early Childhood: The Healthy Start Study

    Background: Overnutrition in utero may increase offspring risk of nonalcoholic fatty liver disease (NAFLD), but the specific contribution of maternal diet quality during pregnancy to this association remains understudied in humans. Objectives: This study aimed to examine the associations of maternal diet quality during pregnancy with offspring hepatic fat in early childhood (median: 5 y old, range: 4–8 y old). Methods: Data were from 278 mother–child pairs in the longitudinal, Colorado-based Healthy Start Study. Multiple 24-h recalls were collected from mothers during pregnancy on a monthly basis (median: 3 recalls, range: 1–8 recalls starting after enrollment), and used to estimate maternal usual nutrient intakes and dietary pattern scores [Healthy Eating Index-2010 (HEI-2010), Dietary Inflammatory Index (DII), and Relative Mediterranean Diet Score (rMED)]. Offspring hepatic fat was measured in early childhood by MRI. Associations of maternal dietary predictors during pregnancy with offspring log-transformed hepatic fat were assessed using linear regression models adjusted for offspring demographics, maternal/perinatal confounders, and maternal total energy intake. Results: Higher maternal fiber intake and rMED scores during pregnancy were associated with lower offspring hepatic fat in early childhood in fully adjusted models [back-transformed β (95% CI): 0.82 (0.72, 0.94) per 5 g/1000 kcal fiber; 0.93 (0.88, 0.99) per 1 SD for rMED]. In contrast, higher maternal total sugar and added sugar intakes, and DII scores were associated with higher offspring hepatic fat [back-transformed β (95% CI): 1.18 (1.05, 1.32) per 5% kcal/d added sugar; 1.08 (0.99, 1.18) per 1 SD for DII]. Analyses of dietary pattern subcomponents also revealed that lower maternal intakes of green vegetables and legumes and higher intake of “empty calories” were associated with higher offspring hepatic fat in early childhood. Conclusions: Poorer maternal diet quality during pregnancy was associated with greater offspring susceptibility to hepatic fat in early childhood. Our findings provide insights into potential perinatal targets for the primordial prevention of pediatric NAFLD.
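
    Because the outcome was modeled on the log scale, the reported back-transformed βs are multiplicative effects (ratios of geometric means). A minimal sketch of that back-transformation, using simulated data and hypothetical variable names rather than the Healthy Start variables:

```python
# Sketch of the back-transformation used for the log-scale outcome above:
# fit log(hepatic fat), then exponentiate beta to get a multiplicative
# effect per unit of exposure. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 278
df = pd.DataFrame({
    "fiber_per_1000kcal": rng.normal(8, 3, n),
    "total_energy": rng.normal(2100, 350, n),
})
# Simulated outcome: higher fiber -> lower hepatic fat (illustrative)
df["hepatic_fat"] = np.exp(1.0 - 0.04 * df["fiber_per_1000kcal"]
                           + rng.normal(0, 0.4, n))

model = smf.ols("np.log(hepatic_fat) ~ fiber_per_1000kcal + total_energy",
                data=df).fit()
beta = model.params["fiber_per_1000kcal"]
lo, hi = model.conf_int().loc["fiber_per_1000kcal"]
# Back-transform: ratio of geometric means per 5 g/1000 kcal of fiber
print(f"{np.exp(5 * beta):.2f} ({np.exp(5 * lo):.2f}, {np.exp(5 * hi):.2f})")
```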

    Processing GOTO data with the Rubin Observatory LSST Science Pipelines I: Production of coadded frames

    The past few decades have seen the burgeoning of wide-field, high-cadence surveys, the most formidable of which will be the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory. So new is the field of systematic time-domain survey astronomy, however, that major scientific insights will continue to be obtained using smaller, more flexible systems than the LSST. One such example is the Gravitational-wave Optical Transient Observer (GOTO), whose primary science objective is the optical follow-up of gravitational wave events. The amount and rate of data production by GOTO and other wide-area, high-cadence surveys present a significant challenge to data processing pipelines, which need to operate in near real-time to fully exploit the time domain. In this study, we adapt the Rubin Observatory LSST Science Pipelines to process GOTO data, thereby exploring the feasibility of using this "off-the-shelf" pipeline to process data from other wide-area, high-cadence surveys. In this paper, we describe how we use the LSST Science Pipelines to process raw GOTO frames to ultimately produce calibrated coadded images and photometric source catalogues. After comparing the measured astrometry and photometry to those of matched sources from Pan-STARRS DR1, we find that measured source positions are typically accurate to sub-pixel levels, and that measured L-band photometry is accurate to ∌50 mmag at mL ∌ 16 and ∌200 mmag at mL ∌ 18. These values compare favourably to those obtained using GOTO's primary, in-house pipeline, GOTOPHOTO, in spite of both pipelines having undergone further development and improvement beyond the implementations used in this study. Finally, we release a generic "obs package" that others can build upon should they wish to use the LSST Science Pipelines to process data from other facilities.
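
    The astrometric and photometric comparison described above amounts to a positional cross-match against a reference catalogue. A hedged sketch using astropy (not the LSST Science Pipelines or GOTOPHOTO themselves), with simulated catalogues standing in for the GOTO and Pan-STARRS DR1 sources:

```python
# Illustrative cross-match of a measured catalogue against a reference
# catalogue, as in the Pan-STARRS DR1 comparison above. Catalogues here
# are simulated; a real run would load pipeline and survey tables.
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

rng = np.random.default_rng(1)
n = 500
ref_ra, ref_dec = rng.uniform(10, 11, n), rng.uniform(-1, 0, n)
ref_mag = rng.uniform(14, 18, n)
# "Measured" positions/magnitudes: reference plus small errors
meas_ra = ref_ra + rng.normal(0, 0.1 / 3600, n)
meas_dec = ref_dec + rng.normal(0, 0.1 / 3600, n)
meas_mag = ref_mag + rng.normal(0, 0.05, n)

meas = SkyCoord(ra=meas_ra * u.deg, dec=meas_dec * u.deg)
ref = SkyCoord(ra=ref_ra * u.deg, dec=ref_dec * u.deg)
idx, sep2d, _ = meas.match_to_catalog_sky(ref)
good = sep2d < 1.0 * u.arcsec            # keep close matches only

astrom_rms = np.sqrt(np.mean(sep2d[good].arcsec ** 2))
dmag = meas_mag[good] - ref_mag[idx[good]]
print(f"astrometric RMS: {astrom_rms:.3f} arcsec, "
      f"photometric scatter: {1000 * np.std(dmag):.0f} mmag")
```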

    Light curve classification with recurrent neural networks for GOTO: dealing with imbalanced data

    The advent of wide-field sky surveys has led to the growth of transient and variable source discoveries. The data deluge produced by these surveys has necessitated the use of machine learning (ML) and deep learning (DL) algorithms to sift through the vast incoming data stream. A problem that arises in real-world applications of learning algorithms for classification is imbalanced data, where a class of objects within the data is underrepresented, leading to a bias for over-represented classes in the ML and DL classifiers. We present a recurrent neural network (RNN) classifier that takes in photometric time-series data and additional contextual information (such as distance to nearby galaxies and on-sky position) to produce real-time classification of objects observed by the Gravitational-wave Optical Transient Observer (GOTO), and we use an algorithm-level approach for handling imbalance with a focal loss function. The classifier is able to achieve an Area Under the Curve (AUC) score of 0.972 when using all available photometric observations to classify variable stars, supernovae, and active galactic nuclei. The RNN architecture allows us to classify incomplete light curves, and measure how performance improves as more observations are included. We also investigate the role that contextual information plays in producing reliable object classification.
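
    The focal loss mentioned above down-weights well-classified examples so that rare classes contribute more to the gradient. A minimal PyTorch sketch (the Îł value is illustrative, not GOTO's tuned setting):

```python
# Sketch of a focal loss as an algorithm-level fix for class imbalance
# (PyTorch). Gamma and the toy inputs are illustrative only.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy down-weighted for well-classified examples:
    FL(p_t) = -(1 - p_t)^gamma * log(p_t)."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

# Example: 3 classes (variable star, supernova, AGN), batch of 4
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 0])
print(focal_loss(logits, targets))
```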

    Measurement of the azimuthal anisotropy of ΄(1S) and ΄(2S) mesons in PbPb collisions at √s_NN = 5.02 TeV

    The second-order Fourier coefficients (v2) characterizing the azimuthal distributions of ΄(1S) and ΄(2S) mesons produced in PbPb collisions at √s_NN = 5.02 TeV are studied. The ΄ mesons are reconstructed in their dimuon decay channel, as measured by the CMS detector. The collected data set corresponds to an integrated luminosity of 1.7 nb⁻Âč. The scalar product method is used to extract the v2 coefficients of the azimuthal distributions. Results are reported for the rapidity range |y| < 2.4, in the transverse momentum interval 0 < pT < 50 GeV/c, and in three centrality ranges of 10–30%, 30–50%, and 50–90%. In contrast to the J/ψ mesons, the measured v2 values for the ΄ mesons are found to be consistent with zero.
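
    The scalar product method correlates per-particle flow vectors u = exp(2iφ) with reference Q-vectors and normalizes by a two-subevent Q correlation to remove the event-plane resolution. A toy sketch with simulated azimuthal angles (no weights or acceptance corrections, unlike the real analysis):

```python
# Toy sketch of the scalar-product extraction of v2 named above.
import numpy as np

rng = np.random.default_rng(2)
v2_true, n_events, n_part = 0.05, 4000, 900

num, den = [], []
for _ in range(n_events):
    psi = rng.uniform(0, 2 * np.pi)                 # event symmetry plane
    # Accept/reject sampling of dN/dphi ~ 1 + 2 v2 cos(2(phi - psi))
    phi = rng.uniform(0, 2 * np.pi, n_part)
    w = (1 + 2 * v2_true * np.cos(2 * (phi - psi))) / (1 + 2 * v2_true)
    phi = phi[rng.uniform(size=n_part) < w]
    u = np.exp(2j * phi)                            # unit flow vectors
    third = len(u) // 3
    poi = u[:third].mean()                          # "particles of interest"
    qa = u[third:2 * third].mean()                  # reference subevent A
    qb = u[2 * third:].mean()                       # reference subevent B
    num.append(np.real(poi * np.conj(qa)))
    den.append(np.real(qa * np.conj(qb)))

# v2{SP} = <u Q_A*> / sqrt(<Q_A Q_B*>)
v2_sp = np.mean(num) / np.sqrt(np.mean(den))
print(f"v2(SP) ~ {v2_sp:.3f} (input {v2_true})")
```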

    Performance of reconstruction and identification of τ leptons decaying to hadrons and Μτ in pp collisions at √s = 13 TeV

    The algorithm developed by the CMS Collaboration to reconstruct and identify τ leptons produced in proton-proton collisions at √s = 7 and 8 TeV, via their decays to hadrons and a neutrino, has been significantly improved. The changes include a revised reconstruction of π⁰ candidates, and improvements in multivariate discriminants to separate τ leptons from jets and electrons. The algorithm is extended to reconstruct τ leptons in highly Lorentz-boosted pair production, and in the high-level trigger. The performance of the algorithm is studied using proton-proton collisions recorded during 2016 at √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb⁻Âč. The performance is evaluated in terms of the efficiency for a genuine τ lepton to pass the identification criteria and of the probabilities for jets, electrons, and muons to be misidentified as τ leptons. The results are found to be very close to those expected from Monte Carlo simulation.
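
    The quoted efficiencies and misidentification probabilities are binomial pass fractions. A small sketch with Clopper-Pearson intervals; the counts below are invented for illustration, not CMS results:

```python
# Sketch of efficiency / mis-ID estimation as binomial pass fractions
# with Clopper-Pearson intervals. All counts are hypothetical.
from scipy.stats import beta

def clopper_pearson(k, n, cl=0.68):
    """Return (pass fraction, lower bound, upper bound)."""
    lo = beta.ppf((1 - cl) / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - (1 - cl) / 2, k + 1, n - k) if k < n else 1.0
    return k / n, lo, hi

eff, lo, hi = clopper_pearson(k=6200, n=10000)      # genuine taus passing
fake, flo, fhi = clopper_pearson(k=45, n=100000)    # jets passing (mis-ID)
print(f"efficiency = {eff:.3f} [{lo:.3f}, {hi:.3f}]")
print(f"jet mis-ID = {fake:.2e} [{flo:.2e}, {fhi:.2e}]")
```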

    Performance of the CMS Level-1 trigger in proton-proton collisions at √s = 13 TeV

    At the start of Run 2 in 2015, the LHC delivered proton-proton collisions at a center-of-mass energy of 13 TeV. During Run 2 (years 2015–2018) the LHC eventually reached a luminosity of 2.1 × 10³⁎ cm⁻ÂČ s⁻Âč, almost three times that reached during Run 1 (2009–2013) and a factor of two larger than the LHC design value, leading to events with up to a mean of about 50 simultaneous inelastic proton-proton collisions per bunch crossing (pileup). The CMS Level-1 trigger was upgraded prior to 2016 to improve the selection of physics events in the challenging conditions posed by the second run of the LHC. This paper describes the performance of the CMS Level-1 trigger upgrade during the data taking period of 2016–2018. The upgraded trigger implements pattern recognition and boosted decision tree regression techniques for muon reconstruction, includes pileup subtraction for jets and energy sums, and incorporates pileup-dependent isolation requirements for electrons and tau leptons. In addition, the new trigger calculates high-level quantities such as the invariant mass of pairs of reconstructed particles. The upgrade reduces the trigger rate from background processes and improves the trigger efficiency for a wide variety of physics signals.
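
    The pair invariant mass mentioned above follows the standard relation mÂČ = (E₁ + E₂)ÂČ − |p₁ + p₂|ÂČ. For candidates parameterized by (pT, η, φ) and treated as massless, as is common in online reconstruction, this reduces to the closed form below (a sketch, not the trigger firmware):

```python
# Two-body invariant mass from (pT, eta, phi), massless approximation:
# m^2 = 2 pT1 pT2 (cosh(eta1 - eta2) - cos(phi1 - phi2))
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Massless two-body invariant mass (GeV)."""
    return math.sqrt(2 * pt1 * pt2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# e.g. a roughly back-to-back dimuon pair near the Z mass
print(invariant_mass(45.0, 0.3, 0.1, 44.0, -0.2, 0.1 + math.pi))
```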

    Measurement of prompt D⁰ and D̄⁰ meson azimuthal anisotropy and search for strong electric fields in PbPb collisions at √s_NN = 5.02 TeV

    The strong Coulomb field created in ultrarelativistic heavy ion collisions is expected to produce a rapidity-dependent difference (Δv2) in the second Fourier coefficient of the azimuthal distribution (elliptic flow, v2) between D⁰ (cƫ) and D̄⁰ (c̄u) mesons. Motivated by the search for evidence of this field, the CMS detector at the LHC is used to perform the first measurement of Δv2. The rapidity-averaged value is found to be ⟹Δv2⟩ = 0.001 ± 0.001 (stat) ± 0.003 (syst) in PbPb collisions at √s_NN = 5.02 TeV. In addition, the influence of the collision geometry is explored by measuring the D⁰ and D̄⁰ meson v2 and triangular flow coefficient (v3) as functions of rapidity, transverse momentum (pT), and event centrality (a measure of the overlap of the two Pb nuclei). A clear centrality dependence of prompt D⁰ meson v2 values is observed, while v3 is largely independent of centrality. These trends are consistent with expectations of flow driven by the initial-state geometry.

    An embedding technique to determine ττ backgrounds in proton-proton collision data


    Pileup mitigation at CMS in 13 TeV data

    With increasing instantaneous luminosity at the LHC come additional reconstruction challenges. At high luminosity, many collisions occur simultaneously within one proton-proton bunch crossing. The isolation of an interesting collision from the additional "pileup" collisions is needed for effective physics performance. In the CMS Collaboration, several techniques capable of mitigating the impact of these pileup collisions have been developed. Such methods include charged-hadron subtraction, pileup jet identification, isospin-based neutral particle "ÎŽÎČ" correction, and, most recently, pileup per particle identification. This paper surveys the performance of these techniques for jet and missing transverse momentum reconstruction, as well as muon isolation. The analysis makes use of data corresponding to 35.9 fb⁻Âč collected with the CMS experiment in 2016 at a center-of-mass energy of 13 TeV. The performance of each algorithm is discussed for up to 70 simultaneous collisions per bunch crossing. Significant improvements are found in the identification of pileup jets, the jet energy, mass, and angular resolution, missing transverse momentum resolution, and muon isolation when using pileup per particle identification.
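
    The ÎŽÎČ correction referenced above estimates the neutral pileup contribution inside the isolation cone as a fixed fraction of the charged pileup pT. A minimal sketch with hypothetical input sums (the 0.5 factor reflects the typical charged-to-neutral production ratio):

```python
# Sketch of a delta-beta pileup correction to lepton isolation: neutral
# pileup in the cone is approximated as beta times the charged-hadron pT
# associated with pileup vertices. Input sums (GeV) are hypothetical.
def delta_beta_isolation(sum_charged_pv, sum_neutral_had, sum_photon,
                         sum_charged_pu, beta=0.5):
    """Pileup-corrected isolation sum (GeV)."""
    neutral = sum_neutral_had + sum_photon - beta * sum_charged_pu
    return sum_charged_pv + max(0.0, neutral)

# Relative isolation for a muon with pT = 40 GeV
iso = delta_beta_isolation(1.2, 2.0, 1.5, 3.0)
print(iso / 40.0)
```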
    • 

    corecore