26,966 research outputs found

    A classical measure of evidence for general null hypotheses

    In science, the most widespread statistical quantities are perhaps p-values. The typical advice is to reject the null hypothesis H_0 if the corresponding p-value is sufficiently small (usually smaller than 0.05). Many criticisms of p-values have arisen in the scientific literature. The main issue is that, in general, optimal p-values (based on likelihood ratio statistics) are not measures of evidence over the parameter space \Theta. Here we propose an \emph{objective} measure of evidence for very general null hypotheses that satisfies logical requirements (i.e., consistency under operations on the subsets of \Theta) that are not met by p-values (e.g., it is a possibility measure). We study the proposed measure in the light of the abstract belief calculus formalism and conclude that it can be used to establish objective states of belief on the subsets of \Theta. Based on its properties, we strongly recommend this measure as an additional summary of significance tests. At the end of the paper we give a short list of open problems. Comment: 26 pages, one figure and one table. Corrected version
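    For context (a standard fact about possibility measures, stated here in our own notation \mathrm{ev}(\cdot) rather than necessarily the authors'): the defining property such a measure satisfies, and p-values lack, is

    \[
    \mathrm{ev}(A \cup B) = \max\{\mathrm{ev}(A), \mathrm{ev}(B)\}, \qquad A, B \subseteq \Theta,
    \]

    with \mathrm{ev}(\Theta) = 1 and \mathrm{ev}(\emptyset) = 0. The evidence for a union of null hypotheses is thus determined by the most plausible one, whereas the p-value for H_0: \theta \in A \cup B is in general no simple function of the p-values for A and B.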

    Two-loop self-energy diagrams worked out with NDIM

    In this work we calculate two two-loop massless Feynman integrals pertaining to self-energy diagrams using NDIM (Negative Dimensional Integration Method). We show that the answer we obtain is 36-fold degenerate. We then consider special cases of the propagator exponents and compare the resulting expressions with known results obtained via traditional methods. Comment: LaTeX, 10 pages, 2 figures, styles included
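    As background, the starting point usually quoted in the NDIM literature (a standard derivation, not specific to this paper) is the Gaussian integral, whose series expansion assigns a value to integrals of positive powers of k^2, i.e. to integrals that only make sense in negative dimensions:

    \[
    \int d^{D}k \, e^{-\alpha k^{2}} = \left(\frac{\pi}{\alpha}\right)^{D/2}
    \quad\Longrightarrow\quad
    \int d^{D}k \, (k^{2})^{n} = (-1)^{n}\, n!\, \pi^{D/2}\, \delta_{n+D/2,\,0}.
    \]

    Loop integrals are then computed with polynomial (rather than inverse) propagator powers, and the result is analytically continued back to negative exponents and positive D.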

    Prescriptionless light-cone integrals

    Perturbative quantum gauge field theory, viewed from the perspective of physical gauge choices such as the light-cone gauge, entails the emergence of troublesome poles of the type (k\cdot n)^{-\alpha} in the Feynman integrals. These come from the boson field propagator, where \alpha = 1,2,... and n^{\mu} is the arbitrary external four-vector that defines the gauge. This becomes an additional hurdle in the computation of Feynman diagrams, since any graph containing internal boson lines will inevitably produce integrands whose denominators bear the characteristic gauge-fixing factor. How one deals with these poles has been the subject of research for decades, and several prescriptions have been suggested and tried over time, with failures and successes. However, a more recent development on this front, which applies the negative-dimensional technique to light-cone Feynman integrals, shows that we can dispense with prescriptions altogether. This new technique comes with an additional bonus: not only does it render the light-cone gauge prescriptionless, but by its very nature it also dispenses with the decomposition formulas or partial-fractioning tricks used in the standard approach to separate products of poles of the type (k\cdot n)^{-\alpha}[(k-p)\cdot n]^{-\beta}, (\beta = 1,2,...). In this work we demonstrate how all this can be done. Comment: 6 pages, no figures, Revtex style, reference [2] corrected
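    For concreteness, the partial-fractioning trick that NDIM renders unnecessary is, in its simplest case (\alpha = \beta = 1), the algebraic identity

    \[
    \frac{1}{(k\cdot n)\,[(k-p)\cdot n]} = \frac{1}{p\cdot n}\left[\frac{1}{(k-p)\cdot n} - \frac{1}{k\cdot n}\right],
    \]

    which separates the product of gauge poles at the price of an external factor (p\cdot n)^{-1}; higher powers \alpha, \beta require iterating such decompositions before any prescription can even be applied.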

    Negative Dimensional Integration: "Lab Testing" at Two Loops

    The negative dimensional integration method (NDIM) is a technique for dealing with D-dimensional Feynman loop integrals. Since most physical quantities in perturbative Quantum Field Theory (pQFT) require the ability to solve such integrals, the quicker and easier the method to evaluate them, the better. NDIM is a novel and promising technique, ipso facto requiring that we put it to the test in different contexts and situations and compare the results it yields with those we already know from other well-established methods. It is in this perspective that we consider here the calculation of an on-shell two-loop three-point function in a massless theory. Surprisingly, this approach provides twelve non-trivial results in terms of double power series. More astonishing still is the fact that we can show these twelve solutions to be different representations of the same well-known single result obtained via other methods. It really comes as a surprise that the solution for the particular integral we are dealing with is twelvefold degenerate. Comment: 10 pages, LaTeX2e, uses style jhep.cls (included)
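    The phenomenon of one quantity possessing several formally different series representations is already familiar for ordinary hypergeometric functions; as a simple illustration (a standard identity, quoted here by us and not taken from the paper), Pfaff's transformation

    \[
    {}_{2}F_{1}(a,b;c;z) = (1-z)^{-a}\, {}_{2}F_{1}\!\left(a,\, c-b;\, c;\, \frac{z}{z-1}\right)
    \]

    equates two distinct power series that define the same function. The twelve NDIM solutions are degenerate in an analogous sense: different double-series expansions of a single underlying result.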

    Maximum Entropy Principle and the Higgs Boson Mass

    A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information-theory inference approach, we determine the Higgs boson mass as M_H = 125.04 \pm 0.25 GeV, a value fully compatible with the LHC measurement. This is obtained straightforwardly by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Furthermore, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel. Our findings suggest that a system of Higgs bosons undergoing a collective decay to Standard Model particles is among the most fundamental ones where the Maximum Entropy Principle applies. Comment: Version published in Physica A
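    A minimal numerical sketch of this inference logic (our illustration only: the function branching_ratios below is a made-up toy standing in for the actual Standard Model predictions, which would come from tabulated output of a tool such as HDECAY):

    import numpy as np

    def shannon_entropy(brs):
        """Shannon entropy -sum_i BR_i ln BR_i of a branching-ratio distribution."""
        brs = np.asarray(brs, dtype=float)
        brs = brs[brs > 0]  # zero-probability channels contribute nothing
        return -np.sum(brs * np.log(brs))

    def branching_ratios(mh):
        """TOY stand-in for the SM Higgs branching ratios as a function of mh (GeV);
        the fake weights below merely give the scan something smooth to maximize."""
        w = np.array([np.exp(-((mh - 125.0) / 5.0) ** 2), 1.0, 0.5])
        return w / w.sum()

    # MEP inference: scan candidate masses and select the one whose decay
    # distribution has maximal Shannon entropy.
    masses = np.linspace(120.0, 130.0, 1001)
    entropies = [shannon_entropy(branching_ratios(m)) for m in masses]
    print(f"MEP-inferred mass (toy inputs): {masses[np.argmax(entropies)]:.2f} GeV")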

    Genus Two Partition Functions and Renyi Entropies of Large c CFTs

    We compute genus two partition functions in two-dimensional conformal field theories at large central charge, focusing on surfaces that give the third Renyi entropy of two intervals. We compute this for generalized free theories and for symmetric orbifolds, and compare it to the result in pure gravity. We find a new phase transition if the theory contains a light operator of dimension \Delta \leq 0.19. This means in particular that, unlike the second Renyi entropy, the third one is no longer universal. Comment: 28 pages + Appendices
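    For background (standard replica-trick facts rather than results of this paper): the nth Renyi entropy of a subsystem A is

    \[
    S_n = \frac{1}{1-n}\,\log \mathrm{Tr}\,\rho_A^{\,n},
    \]

    and for two disjoint intervals in a 2d CFT the replica manifold computing \mathrm{Tr}\,\rho_A^{\,n} is a Riemann surface of genus n-1. The third Renyi entropy (n = 3) is therefore governed by a genus two partition function, which is why those surfaces are the ones studied here.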

    Inferences on the Higgs Boson and Axion Masses through a Maximum Entropy Principle

    The Maximum Entropy Principle (MEP) is a method that can be used to infer the value of an unknown quantity in a set of probability functions. In this work we review two applications of the MEP: one yields a precise inference of the Higgs boson mass; the other allows us to infer the mass of the axion. In particular, for the axion we assume that it has a decay channel into pairs of neutrinos, in addition to the decay into two photons. The Shannon entropy associated with an initial ensemble of axions decaying into photons and neutrinos is then constructed and maximized. Comment: Contributed to the 13th Patras Workshop on Axions, WIMPs and WISPs, Thessaloniki, May 15 to 19, 2017
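    In the two-channel setup described here the entropy to be maximized takes a particularly simple form (our sketch of the stated procedure, with the axion-mass dependence entering through the branching ratios):

    \[
    S(m_a) = -\,\mathrm{BR}_{\gamma\gamma}(m_a)\,\ln\mathrm{BR}_{\gamma\gamma}(m_a)
             - \mathrm{BR}_{\nu\nu}(m_a)\,\ln\mathrm{BR}_{\nu\nu}(m_a),
    \qquad \mathrm{BR}_{\gamma\gamma} + \mathrm{BR}_{\nu\nu} = 1.
    \]

    Since a binary distribution has maximal entropy when its two outcomes are equally likely, the inferred axion mass is the one at which the photon and neutrino channels reach equal branching ratios.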