
    Self-Consistency Requirements of the Renormalization Group for Setting the Renormalization Scale

    In conventional treatments, predictions from fixed-order perturbative QCD calculations cannot be fixed with certainty due to ambiguities in the choice of the renormalization scale as well as the renormalization scheme. In this paper we present a general discussion of the constraints of renormalization group (RG) invariance on the choice of the renormalization scale. We adopt the RG-based equations, which incorporate the scheme parameters, for a general exposition of RG invariance, since they simultaneously express the invariance of physical observables under variations of both the renormalization scale and the renormalization-scheme parameters. We then discuss the self-consistency requirements of the RG, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. The Principle of Minimal Sensitivity (PMS) requires the slope of the approximant of an observable to vanish at the renormalization point. This criterion provides a scheme-independent estimate, but it violates the symmetry and transitivity properties of the RG and does not reproduce the Gell-Mann-Low scale for QED observables. The Principle of Maximum Conformality (PMC) satisfies all of these deductions of RG invariance: reflexivity, symmetry, and transitivity. Using the PMC, all non-conformal $\{\beta^{\cal R}_i\}$-terms (${\cal R}$ stands for an arbitrary renormalization scheme) in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC scales and the resulting finite-order PMC predictions are both, to high accuracy, independent of the choice of initial renormalization scale, consistent with RG invariance. [...More in the text...] Comment: 15 pages, 4 figures. References updated. To be published in Phys. Rev.
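    For orientation, the invariances and self-consistency conditions named above can be written schematically. A sketch in our own notation (not copied from the paper), for an observable $\rho$ in a scheme ${\cal R}$ labelled by parameters $\{c_i\}$:

```latex
% RG invariance: a physical observable must be independent of both the
% renormalization scale \mu_r and the scheme parameters \{c_i\}.
\mu_r^2 \,\frac{\partial \rho}{\partial \mu_r^2} \;=\; 0,
\qquad
\frac{\partial \rho}{\partial c_i} \;=\; 0 .

% Self-consistency conditions a scale-setting method should respect,
% stated for relations among effective couplings in schemes 1, 2, 3:
%   reflexivity:  \alpha_1(\mu) = \alpha_1(\mu);
%   symmetry:     \alpha_1(\mu_1) = \alpha_2(\mu_2)
%                 \iff \alpha_2(\mu_2) = \alpha_1(\mu_1);
%   transitivity: composing the relation 1 -> 2 with 2 -> 3
%                 must reproduce the direct relation 1 -> 3.
```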

    The Renormalization Scale-Setting Problem in QCD

    A key problem in making precise perturbative QCD predictions is how to set the renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this {\it ad hoc} procedure gives results which depend on the choice of renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical observables should be independent of the choice of scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both renormalization-scheme and scale-parameter transformations. The RG equation provides a convenient way to estimate the scheme and scale dependence of a physical process. We then discuss the self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale-setting methods suggested in the literature, {\it i.e.}, the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky-Lepage-Mackenzie (BLM) method, and the Principle of Maximum Conformality (PMC), are introduced, and their basic properties and applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance... [full abstract in the paper] Comment: 75 pages, 19 figures. Review article to be published in Prog. Part. Nucl. Phys.
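    For reference, the RG equation the review starts from has the familiar form; the extended version adds analogous derivatives with respect to the scheme parameters (our notation):

```latex
% Running of the coupling a = \alpha_s/(4\pi) with the scale \mu_r;
% \beta_0 and \beta_1 are the scheme-independent leading coefficients.
\mu_r^2 \frac{d a}{d \mu_r^2}
  \;=\; \beta(a)
  \;=\; -a^2 \left( \beta_0 + \beta_1\, a + \beta_2\, a^2 + \cdots \right),
\qquad
\beta_0 = 11 - \tfrac{2}{3}\, n_f .
```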

    Bayesian model comparison in cosmology with Population Monte Carlo

    We use Bayesian model selection techniques to test extensions of the standard flat $\Lambda$CDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation, without further computational effort, and it comes with an associated error estimate. Moreover, PMC provides an unbiased estimator of the evidence after any fixed number of iterations and is naturally parallelizable, in contrast with MCMC and nested-sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability of the evidence evaluation and its stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated to a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat $\Lambda$CDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0. [Abridged] Comment: 11 pages, 6 figures. Matches version accepted for publication by MNRAS.
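    The claim that the evidence is "immediately available" follows because the mean importance weight of a PMC sample is an unbiased estimator of the evidence at every iteration. A minimal sketch, assuming a generic log-prior/log-likelihood/log-proposal interface (the function and variable names are ours, not taken from the paper's code):

```python
import numpy as np

def log_evidence(samples, log_prior, log_like, log_proposal):
    """Evidence estimate from one PMC (importance) sample.

    samples      : (N, d) array of points drawn from the adapted proposal q
    log_proposal : callable returning log q(x)
    Returns log Z, using E_q[ prior(x) * likelihood(x) / q(x) ] = Z.
    """
    log_w = np.array([log_prior(x) + log_like(x) - log_proposal(x)
                      for x in samples])
    m = log_w.max()  # log-sum-exp shift for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))
```

    Since each PMC iteration yields such an estimate independently, the scatter across iterations provides the associated error bar mentioned above, and the per-iteration estimates are trivially parallelizable.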

    Reconsideration of the QCD corrections to the $\eta_c$ decays into light hadrons using the principle of maximum conformality

    In this paper, we analyze the next-to-leading order QCD corrections to $\eta_c$ decays into light hadrons by applying the principle of maximum conformality (PMC). The relativistic correction at the ${\cal O}(\alpha_s v^2)$ level is included, contributing about 10% to the ratio $R$. The PMC, which satisfies renormalization group invariance, is designed to yield a scale-fixed and scheme-independent prediction at any fixed order. To avoid ambiguity in treating the $n_f$-terms, we transform the usual $\overline{\rm MS}$ pQCD series into the one under the minimal momentum-space subtraction (mMOM) scheme. Under conventional scale setting we obtain $R_{\rm Conv,mMOM-r} = \left(4.12^{+0.30}_{-0.28}\right)\times 10^3$; after applying the PMC, this becomes $R_{\rm PMC,mMOM-r} = \left(6.09^{+0.62}_{-0.55}\right)\times 10^3$, where the errors are the quadrature (squared) averages of those from $m_c$ and $\Lambda_{\rm mMOM}$. The PMC prediction agrees within errors with the recent PDG value, $R^{\rm exp} = \left(6.3\pm0.5\right)\times 10^3$. We therefore attribute the mismatch between the conventional scale-setting prediction and the data to an improper choice of scale, which the PMC resolves. Comment: 5 pages, 2 figures
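    As a quick consistency check of the quoted agreement (our own arithmetic from the central values and errors above, combined in quadrature):

```latex
% PMC vs. PDG, in units of 10^3:
\left| R^{\rm exp} - R_{\rm PMC,mMOM-r} \right| = 6.3 - 6.09 = 0.21,
\qquad
\sqrt{0.62^2 + 0.5^2} \approx 0.80,
% i.e. agreement at roughly the 0.3 sigma level, whereas the conventional
% prediction differs by 6.3 - 4.12 = 2.18 against a combined error
% \sqrt{0.30^2 + 0.5^2} \approx 0.58, i.e. nearly 4 sigma.
```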

    Gaussianisation for fast and accurate inference from cosmological data

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalisations thereof. This permits an analytical reconstruction of the posterior from a point sample, such as a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently by compressing the information contained in MCMC samples. Furthermore, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. The method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianising transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates by this measure. Further, we select marginal posterior samples from Planck data with several distinct, strongly non-Gaussian features and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianise the joint distribution of data from weak lensing and baryon acoustic oscillations (BAO) for different cosmological models, and find a preference for flat $\Lambda$CDM. Comparing to values computed with the Savage-Dickey density ratio and with Population Monte Carlo, we find good agreement of our method within the spread of the other two. Comment: 14 pages, 9 figures
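    As a one-dimensional toy of the basic step (our own example, not the paper's code; the paper's multivariate generalisation is more involved), SciPy's maximum-likelihood Box-Cox fit Gaussianises a skewed sample and yields an analytic density via the change-of-variables Jacobian:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.chisquare(df=3, size=10_000)  # skewed stand-in for an MCMC sample

# Maximum-likelihood Box-Cox transform y = (x**lam - 1)/lam (requires x > 0);
# lam is chosen so that the transformed sample is as Gaussian as possible.
y, lam = stats.boxcox(sample)
mu, sigma = y.mean(), y.std()

def log_density(x):
    """Analytic posterior reconstruction: Gaussian in y times |dy/dx|."""
    y = (x**lam - 1.0) / lam
    return stats.norm.logpdf(y, mu, sigma) + (lam - 1.0) * np.log(x)
```

    With the density in this closed form, the evidence integral reduces to a Gaussian integral in the transformed variables, which is the property exploited above.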