
    2000 CKM-Triangle Analysis: A Critical Review with Updated Experimental Inputs and Theoretical Parameters

    Within the Standard Model, a review of the current determination of the sides and angles of the CKM unitarity triangle is presented, using experimental constraints from the measurements of |\epsilon_K|, |V_{ub}/V_{cb}|, and \Delta m_d, and from the limit on \Delta m_s, available in September 2000. Results from the experimental search for B^0_s-\bar{B}^0_s oscillations are introduced into the present analysis using the likelihood. Special attention is devoted to the determination of the theoretical uncertainties. The purpose of the analysis is to infer the regions where the parameters of interest lie with given probabilities. The BaBar "95% C.L. scanning" method is also commented on. Comment: 44 pages (revised version)

    Intervento


    Charring effects on stable carbon and nitrogen isotope values on C4 plants: Inferences for archaeological investigations

    Experimental studies have demonstrated that charring affects the stable isotope values of plant remains. It is therefore necessary to consider the impact of charring in order to reliably interpret δ13C and δ15N values in archaeobotanical remains before using this approach to reconstruct past water management and paleoclimatic changes, or to infer paleodietary patterns. Research so far has focused mostly on C3 plants, while the charring effect on C4 plants is less well understood. This study explored the effects of charring on δ13C, δ15N, %C, %N, and C:N in grains of two C4 species, Sorghum bicolor (L.) Moench (NADP-ME) and Cenchrus americanus (L.) Morrone (heterotypic synonym Pennisetum glaucum (L.) R.Br.) (NAD-ME), grown under the same controlled environmental conditions (watering, light, atmospheric humidity). Sorghum and pearl millet grains were charred for 1 to 3 h at 200–300 °C. Comparing first the uncharred grains, the results show that sorghum has lower δ15N and higher δ13C values than pearl millet; the same pattern is recorded in the charred grains. The charring experiments indicate that the temperature to which the grains are exposed has a greater impact than duration on preservation, mass loss, %C, %N, C:N, and δ13C and δ15N values. Every 50 °C increase resulted in significant increases in δ15N (+0.37‰) and δ13C (+0.06‰). Increasing the duration of charring to 3 h resulted in a significant change in δ15N (+0.17‰) and no significant change in δ13C (−0.04‰). The average charring effect estimated in our experiment is 0.27‰ (95% CI between −0.02 and 0.56‰) for δ15N and −0.18‰ (95% CI between −0.30 and −0.06‰) for δ13C. Considering the average values, our data show that pearl millet is more affected by charring than sorghum; however, according to the standard deviations, sorghum shows greater variability in the charring effect than pearl millet.
    This study provides new information for correctly assessing the isotopic values obtained from ancient C4 crops, providing a charring offset specific to C4 plants. Furthermore, it suggests that NAD-ME and NADP-ME species present isotopic differences under the same growing conditions, and this must be taken into account in analytical work on ancient C4 crops. This work was funded by the ERC Starting Grant RAINDROPS (G.A. n 759800) under the Horizon 2020 program of the European Commission. CASEs is a Quality Research Group funded by the Government of Catalonia (SGR00950-2021).
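The mean offsets reported above can be applied as a simple correction to measurements on charred C4 grains. A minimal sketch in Python, using the paper's average offsets (+0.27‰ for δ15N, −0.18‰ for δ13C); the helper function and the example measurement values are hypothetical:

```python
# Mean charring offsets for C4 grains, as reported in the abstract above
# (permil). These are averages; the quoted 95% CIs show sizeable spread.
CHARRING_OFFSET_D15N = 0.27   # charring shifts d15N up by ~0.27 permil
CHARRING_OFFSET_D13C = -0.18  # charring shifts d13C down by ~0.18 permil

def correct_for_charring(d13c_measured, d15n_measured):
    """Subtract the mean charring offsets from values measured on a
    charred grain to estimate its pre-charring isotope values."""
    return (d13c_measured - CHARRING_OFFSET_D13C,
            d15n_measured - CHARRING_OFFSET_D15N)

# Hypothetical charred grain measured at d13C = -11.5, d15N = 6.0
d13c, d15n = correct_for_charring(-11.5, 6.0)
print(round(d13c, 2), round(d15n, 2))  # -11.32 5.73
```

Because the confidence intervals are wide, a fuller treatment would propagate the offset uncertainty rather than apply a point correction.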

    Bayesian Inference in Processing Experimental Data: Principles and Basic Applications

    This report introduces general ideas and some basic methods of Bayesian probability theory applied to physics measurements. Our aim is to make the reader familiar, through examples rather than rigorous formalism, with concepts such as: model comparison (including the automatic Ockham's razor filter provided by the Bayesian approach); parametric inference; quantification of the uncertainty about the value of physical quantities, also taking into account systematic effects; the role of marginalization; posterior characterization; predictive distributions; hierarchical modelling and hyperparameters; Gaussian approximation of the posterior and recovery of conventional methods, especially maximum-likelihood and chi-square fits under well-defined conditions; conjugate priors, transformation invariance, and maximum-entropy-motivated priors; and Monte Carlo estimates of expectations, including a short introduction to Markov Chain Monte Carlo methods. Comment: 40 pages, 2 figures, invited paper for Reports on Progress in Physics
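As a minimal illustration of the conjugate priors and Gaussian posteriors mentioned in the abstract (a generic textbook example, not code from the report): a Gaussian prior combined with a Gaussian measurement yields a Gaussian posterior whose mean is the precision-weighted average.

```python
# Conjugate Gaussian example: prior N(mu0, sigma0^2) on a physical
# quantity, one measurement x with known Gaussian error sigma.
# Conjugacy means the posterior is again Gaussian.

def gaussian_posterior(mu0, sigma0, x, sigma):
    """Return posterior (mean, std) for a Gaussian prior and likelihood."""
    w0, w = 1.0 / sigma0**2, 1.0 / sigma**2   # precisions (inverse variances)
    var = 1.0 / (w0 + w)                      # posterior variance
    mean = var * (w0 * mu0 + w * x)           # precision-weighted average
    return mean, var**0.5

# A vague prior plus a precise measurement: the posterior follows the data.
mean, std = gaussian_posterior(mu0=0.0, sigma0=10.0, x=5.0, sigma=1.0)
print(round(mean, 3), round(std, 3))
```

Note how the posterior width is always narrower than either the prior or the likelihood alone, since precisions add.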

    Effects of age and gender on neural correlates of emotion imagery

    Mental imagery is part of people's internal processing and plays an important role in everyday life, cognition, and pathology. The neural network supporting mental imagery is modulated bottom-up by the imagery content. Here, we examined the complex associations of gender and age with the neural mechanisms underlying emotion imagery. Using fMRI in 91 men and women aged 14–65 years, we assessed the brain circuits involved in emotion mental imagery (vs. action imagery), controlled by a letter-detection task on the same stimuli, chosen to ensure attention to the stimuli and to discourage imagery. In women, compared with men, emotion imagery significantly increased activation within the right putamen, which is involved in emotional processing. Increasing age significantly decreased mental imagery-related activation in the left insula and cingulate cortex, areas involved in awareness of one's internal states, and it significantly decreased emotion verb-related activation in the left putamen, which is part of the limbic system. This finding suggests a top-down mechanism by which gender and age, either directly or in interaction with the bottom-up effect of stimulus type, can modulate the brain mechanisms underlying mental imagery.

    Can Old Galaxies at High Redshifts and Baryon Acoustic Oscillations Constrain H_0?

    A new age-redshift test is proposed in order to constrain $H_0$ based on the existence of old high-redshift galaxies (OHRG). As should be expected, estimates of $H_0$ based on the OHRG are heavily dependent on the cosmological description. In the flat concordance model ($\Lambda$CDM), for example, the value of $H_0$ depends on the mass density parameter $\Omega_M = 1 - \Omega_\Lambda$. Such a degeneracy can be broken through a joint analysis involving the OHRG and the baryon acoustic oscillation (BAO) signature. In the framework of the $\Lambda$CDM model our joint analysis yields $H_0 = 71^{+4}_{-4}$ km s$^{-1}$ Mpc$^{-1}$ ($1\sigma$) with the best-fit density parameter $\Omega_M = 0.27 \pm 0.03$. Such results are in good agreement with independent studies from the {\it Hubble Space Telescope} key project and the recent estimates of WMAP, thereby suggesting that the combination of these two independent phenomena provides an interesting method to constrain the Hubble constant. Comment: 16 pages, 6 figures, 1 table
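The age-redshift relation underlying such a test can be sketched numerically for a flat ΛCDM model with the quoted best-fit values. This is a generic cosmology calculation, not the authors' code; the function name and step count are choices made here:

```python
import math

def age_at_z(z, H0=71.0, Om=0.27, steps=100_000):
    """Age of a flat LambdaCDM universe at redshift z, in Gyr.
    Integrates t = (1/H0) * int_0^a da' / (a' * E(a')) with the
    midpoint rule, where a = 1/(1+z) and E(a) = sqrt(Om/a^3 + OL)."""
    OL = 1.0 - Om                      # flatness: Omega_Lambda = 1 - Omega_M
    a_max = 1.0 / (1.0 + z)
    hubble_time_gyr = 977.79 / H0      # 1/H0 in Gyr for H0 in km/s/Mpc
    total, da = 0.0, a_max / steps
    for i in range(steps):
        a = (i + 0.5) * da             # midpoint avoids the a = 0 endpoint
        total += da / (a * math.sqrt(Om / a**3 + OL))
    return hubble_time_gyr * total

print(round(age_at_z(0.0), 2))  # ~13.7 Gyr today for the best-fit values
print(round(age_at_z(3.0), 2))  # the universe is much younger at z = 3
```

An old galaxy observed at high z sets a lower bound on age_at_z there, which is what lets its mere existence constrain H0 once Omega_M is pinned down by BAO.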

    Neural Network Parametrization of Deep-Inelastic Structure Functions

    We construct a parametrization of deep-inelastic structure functions which retains information on experimental errors and correlations, and which does not introduce any theoretical bias while interpolating between existing data points. We generate a Monte Carlo sample of pseudo-data configurations and train an ensemble of neural networks on them. This effectively provides us with a probability measure in the space of structure functions, within the whole kinematic region where data are available. This measure can then be used to determine the value of the structure function, its error, point-to-point correlations, and generally the value and uncertainty of any function of the structure function itself. We apply this technique to the determination of the structure function F_2 of the proton and deuteron, and to a precision determination of the isotriplet combination F_2[p-d]. We discuss these results in detail, check their stability and accuracy, and make them available in various formats for applications. Comment: LaTeX, 43 pages, 22 figures. (v2) Final version, published in JHEP; Sect. 5.2 and Fig. 9 improved, a few typos corrected, and other minor improvements. (v3) Some inconsequential typos in Tab. 1 and Tab. 5 corrected. Neural parametrization available at http://sophia.ecm.ub.es/f2neura
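The Monte Carlo replica idea can be sketched in miniature: fluctuate each data point within its quoted error to build pseudo-data sets, fit one model per replica, and read central values and uncertainties from the ensemble of fits. Here a trivial weighted constant fit stands in for the neural networks, and the three data points are invented for illustration:

```python
import random

random.seed(0)

# Invented data: (x, y, error) triples standing in for measured F_2 points.
data = [(0.1, 1.02, 0.05), (0.2, 0.98, 0.04), (0.3, 1.01, 0.06)]

def fit_constant(points):
    """Weighted least-squares fit of a constant (inverse-variance weights).
    A stand-in for training one neural network on one replica."""
    num = sum(y / e**2 for _, y, e in points)
    den = sum(1.0 / e**2 for _, y, e in points)
    return num / den

# One fit per pseudo-data replica; the spread of fits estimates the error.
replicas = []
for _ in range(1000):
    pseudo = [(x, random.gauss(y, e), e) for x, y, e in data]
    replicas.append(fit_constant(pseudo))

mean = sum(replicas) / len(replicas)
std = (sum((r - mean) ** 2 for r in replicas) / len(replicas)) ** 0.5
print(round(mean, 3), round(std, 3))
```

The same ensemble also yields correlations and the uncertainty of any derived quantity, simply by evaluating it replica by replica, which is the key advantage the abstract points to.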

    What have we learned from antiproton proton scattering?

    From recent charge-exchange measurements in the extreme forward direction, an independent and precise determination of the pion-nucleon coupling constant is possible. This determination has reopened the debate on the value of this fundamental coupling constant of nuclear physics. Precise measurements of charge-exchange observables at forward angles below 900 MeV/c would also give a better understanding of the long-range part of the two-pion exchange potential. For example, confirmation of the coherence of the tensor forces from the pion exchange and the isovector two-pion exchange would be very valuable. With the present data, first attempts at an $\bar{N}N$ partial wave analysis have been made where, as in nucleon-nucleon scattering, the antinucleon-nucleon high-J partial waves are mainly given by one-pion exchange. Finally, a recent $\bar{p}p$ atomic cascade calculation and the fraction of P-state annihilation in gas targets are commented on. Comment: 10 pages, LaTeX, to be published in Nucl. Phys.

    Statistical coverage for supersymmetric parameter estimation: a case study with direct detection of dark matter

    Models of weak-scale supersymmetry offer viable dark matter (DM) candidates. Their parameter spaces are however rather large and complex, such that pinning down the actual parameter values from experimental data can depend strongly on the employed statistical framework and scanning algorithm. In frequentist parameter estimation, a central requirement for properly constructed confidence intervals is that they cover true parameter values, preferably at exactly the stated confidence level when experiments are repeated infinitely many times. Since most widely-used scanning techniques are optimised for Bayesian statistics, one needs to assess their abilities in providing correct confidence intervals in terms of the statistical coverage. Here we investigate this for the Constrained Minimal Supersymmetric Standard Model (CMSSM) when only constrained by data from direct searches for dark matter. We construct confidence intervals from one-dimensional profile likelihoods and study the coverage by generating several pseudo-experiments for a few benchmark sets of pseudo-true parameters. We use nested sampling to scan the parameter space and evaluate the coverage for the benchmarks when either flat or logarithmic priors are imposed on gaugino and scalar mass parameters. The sampling algorithm has been used in the configuration usually adopted for exploration of the Bayesian posterior. We observe both under- and over-coverage, which in some cases vary quite dramatically when benchmarks or priors are modified. We show how most of the variation can be explained as the impact of explicit priors as well as sampling effects, where the latter are indirectly imposed by physicality conditions. 
    For comparison, we also evaluate the coverage of Bayesian credible intervals, and observe significant under-coverage in those cases. Comment: 30 pages, 5 figures; v2 includes major updates in response to referee's comments; extra scans and tables added, discussion expanded, typos corrected; matches published version
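The coverage criterion described above can be illustrated with a toy pseudo-experiment loop: repeat a measurement many times, build the stated-level interval each time, and count how often it contains the true parameter. This one-parameter Gaussian toy is far simpler than the CMSSM scans in the paper, but it shows the definition:

```python
import random

random.seed(1)

# Coverage check in miniature: a 68.3% central interval [x - sigma, x + sigma]
# around a Gaussian measurement should contain the true value in ~68.3% of
# repeated pseudo-experiments. Under- or over-coverage means the interval
# construction is too tight or too conservative.
MU_TRUE, SIGMA, N_EXP = 3.0, 1.0, 20_000

hits = 0
for _ in range(N_EXP):
    x = random.gauss(MU_TRUE, SIGMA)        # one pseudo-experiment
    if x - SIGMA <= MU_TRUE <= x + SIGMA:   # central 1-sigma interval
        hits += 1
coverage = hits / N_EXP
print(round(coverage, 3))  # should come out close to 0.683
```

In the paper the intervals come from profile likelihoods over a multi-dimensional parameter space, where priors and sampling effects can push the empirical rate well away from the nominal level, which is exactly what the study quantifies.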