
    Di-(2-ethylhexyl) phthalate metabolites in urine show age-related changes and associations with adiposity and parameters of insulin sensitivity in childhood

    Objectives: Phthalates might be implicated in obesity and insulin sensitivity. We evaluated the levels of primary and secondary metabolites of di-(2-ethylhexyl) phthalate (DEHP) in urine in obese and normal-weight subjects both before and during puberty, and investigated their relationships with auxological parameters and indices of insulin sensitivity. Design and Methods: DEHP metabolites (MEHP, 6-OH-MEHP, 5-oxo-MEHP, 5-OH-MEHP, and 5-CX-MEHP) were measured in urine by RP-HPLC-ESI-MS. Traditional statistical analysis and a data-mining approach based on Auto-CM offered insight into the complex biological connections between the studied variables. Results: The data showed changes in urinary DEHP metabolites related to obesity, puberty, and the presence of insulin resistance. Changes in urinary metabolites were related to age, height, weight, waist circumference, and waist-to-height ratio, and thus to fat distribution. In addition, clear relationships were detected in both obese and normal-weight subjects among MEHP, its oxidation products, and measures of insulin sensitivity. Conclusion: It remains to be elucidated whether exposure to phthalates per se is the actual risk factor or whether the body's ability to metabolize phthalates is the key point. Further studies spanning from conception to old age, together with a better understanding of DEHP metabolism, are warranted to clarify these aspects.

    Measurement of the kinematic variables of beauty particles produced in 350 GeV/c $\pi^-$-Cu interactions

    Using a sample of 26 $b\bar{b}$ events, produced in 350 GeV/c $\pi^-$ interactions in a copper target, which includes 13 events where the decays of both $B$ and $\overline{B}$ are well reconstructed, we measure the differential distributions with respect to $x_F$ and $p_T^2$ as well as some two-particle kinematic variables. We also compare our results with a previous experiment and with predictions based on perturbative QCD.
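
    For reference, the kinematic variables quoted above are sketched below using the usual conventions; the assumption that the analysis follows the standard Feynman-$x$ definition is mine, not stated in the abstract.

        % Standard definitions (sketch; the paper's exact conventions may differ)
        x_F   = \frac{2\,p_L^{*}}{\sqrt{s}}   % longitudinal momentum fraction of the B hadron in the centre-of-mass frame
        p_T^2 = p_x^2 + p_y^2                 % squared transverse momentum with respect to the beam axis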

    The ALICE trigger electronics

    The ALICE trigger system (TRG) consists of a Central Trigger Processor (CTP) and up to 24 Local Trigger Units (LTU), one for each sub-detector. The CTP receives and processes trigger signals from the trigger detectors, and its outputs are 3 levels of hardware triggers: L0, L1, and L2. The 24 sub-detectors are dynamically partitioned into up to 6 independent clusters. The trigger information is propagated through the LTUs to the front-end electronics (FEE) of each sub-detector via LVDS cables and optical fibres. The trigger information sent from the LTU to the FEE can be monitored online for possible errors using the newly developed TTCit board. After testing and commissioning of the trigger system itself on the surface, the ALICE trigger electronics has been installed and tested in the experimental cavern with the appropriate ALICE experimental software. Testing the ALICE trigger system with detectors on the surface and in the experimental cavern in parallel is progressing very well. Currently one setup is used for testing on the surface; another is installed in the experimental cavern. This paper describes the current status of the ALICE trigger electronics, online monitoring of trigger errors, and the corresponding software.
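
    The abstract above describes a three-level (L0/L1/L2) hardware trigger feeding sub-detectors grouped into clusters. The Python sketch below is purely illustrative of that decision flow; the condition names, cluster composition, and pile-up check are my assumptions for the example, and the real CTP is implemented in firmware rather than software.

    # Illustrative toy model of a three-level trigger sequence per detector cluster.
    # All inputs and names are hypothetical; this is not the actual CTP logic.
    from dataclasses import dataclass

    @dataclass
    class Cluster:
        name: str
        sub_detectors: list  # sub-detectors read out together for this cluster

    def run_trigger_sequence(trigger_inputs: dict, clusters: list) -> dict:
        """Return, per cluster, which trigger levels fired for one event."""
        results = {}
        for cluster in clusters:
            # L0: fast decision from prompt trigger-detector signals
            l0 = trigger_inputs.get("minimum_bias", False)
            # L1: confirms L0 once slower trigger inputs have arrived
            l1 = l0 and trigger_inputs.get("centrality_ok", False)
            # L2: final accept, vetoed here if pile-up was flagged
            l2 = l1 and not trigger_inputs.get("pileup_detected", False)
            results[cluster.name] = {"L0": l0, "L1": l1, "L2": l2}
        return results

    if __name__ == "__main__":
        clusters = [Cluster("central_barrel", ["TPC", "ITS", "TRD"]),
                    Cluster("muon_arm", ["MCH", "MTR"])]
        inputs = {"minimum_bias": True, "centrality_ok": True, "pileup_detected": False}
        print(run_trigger_sequence(inputs, clusters))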

    Azimuthal correlation between beauty particles produced in 350 GeV/c $\pi^{-}$-Cu interactions

    Using a sample of $10^{8}$ triggered events, produced in 350 GeV/c $\pi^-$ interactions in a copper target, we have identified 26 $b\bar{b}$ events. These include 13 events where the decays of both $B$ and $\overline{B}$ are well reconstructed. We measure the azimuthal correlation between beauty particles, and compare our result with predictions based on perturbative QCD.
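
    As an aside, the azimuthal correlation between the two beauty hadrons is conventionally defined as sketched below; the abstract does not spell out the convention, so this is the standard choice rather than the authors' stated one.

        % Conventional definition (sketch)
        \Delta\phi = \bigl|\phi_{B} - \phi_{\overline{B}}\bigr| \in [0, \pi]
        % \phi is each beauty hadron's azimuthal angle about the beam axis.
        % At leading order the b\bar{b} pair is produced back to back, so
        % \Delta\phi peaks near \pi; higher-order QCD radiation broadens the peak.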

    Complete removal of the lesion as a guidance in the management of patients with breast ductal carcinoma in situ

    Background: In highly selected patients with ductal carcinoma in situ (DCIS), active surveillance is a valid alternative to surgery. Our study aimed to show the reliability of post-biopsy complete lesion removal, documented by mammogram, as an additional criterion to select these patients. Methods: A total of 2173 vacuum-assisted breast biopsies (VABBs) documented as DCIS were reviewed. Surgery was performed in all cases. We retrospectively collected the reports of post-VABB complete lesion removal and the histological results of the biopsy and surgery. We calculated the rate of upgrade of DCIS identified on VABB upon excision for patients with post-biopsy complete lesion removal and for those showing residual lesion. Results: We observed 2173 cases of DCIS: 408 classified as low-grade, 1262 as intermediate-grade, and 503 as high-grade. The overall rate of upgrading to invasive carcinoma was 15.2% (330/2173). The upgrade rate was 8.2% in patients showing mammographically documented complete removal of the lesion and 19% in patients without complete removal. Conclusion: The absence of mammographically documented residual lesion following VABB was found to be associated with a lower rate of upgrading of DCIS to invasive carcinoma on surgical excision and should be considered when deciding the proper management of a DCIS diagnosis.

    The cost of large numbers of hypothesis tests on power, effect size and sample size

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands, which can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size, or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are smaller when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size, or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
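
    The sample-size figures quoted above can be reproduced with a standard power calculation under Bonferroni correction. The sketch below assumes a two-sided z-test; the paper's own Excel calculator may use different assumptions, and the function name here is mine.

    # Minimal sketch: relative sample size needed to keep power fixed as the
    # number of hypothesis tests grows, assuming a two-sided z-test with a
    # Bonferroni-corrected significance threshold. Illustrative only; this is
    # not the paper's Excel calculator.
    from scipy.stats import norm

    def relative_sample_size(n_tests, alpha=0.05, power=0.80):
        """Sample size for n_tests tests relative to a single test at level alpha.

        For a z-test, n is proportional to (z_{1 - alpha'/2} + z_{power})^2,
        where alpha' = alpha / n_tests is the Bonferroni-corrected level.
        """
        z_beta = norm.ppf(power)
        corrected = (norm.ppf(1 - alpha / (2 * n_tests)) + z_beta) ** 2
        baseline = (norm.ppf(1 - alpha / 2) + z_beta) ** 2
        return corrected / baseline

    if __name__ == "__main__":
        # ~70% larger sample for 10 tests vs. a single test, and ~13% larger for
        # ten million tests vs. one million, consistent with the abstract.
        print(round(relative_sample_size(10), 2))            # about 1.70
        print(round(relative_sample_size(10_000_000) /
                    relative_sample_size(1_000_000), 2))     # about 1.13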

    Measurement of the branching ratio of the decay $\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$

    From the 2002 data taking with a neutral kaon beam extracted from the CERN-SPS, the NA48/1 experiment observed 97 $\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$ candidates with a background contamination of $30.8 \pm 4.2$ events. From this sample, the branching ratio BR($\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$) is measured to be $(2.17 \pm 0.32_{\mathrm{stat}} \pm 0.17_{\mathrm{syst}}) \times 10^{-6}$.