
    Form Factors of Baryons in a Confining and Covariant Diquark-Quark Model

    We treat baryons as bound states of scalar or axial-vector diquarks and a constituent quark which interact through quark exchange. This description arises as an approximation to the relativistic Faddeev equation for three quarks, which yields an effective Bethe-Salpeter equation. Octet and decuplet masses and fully four-dimensional wave functions have been computed for two cases: assuming an essentially pointlike diquark on the one hand, and a diquark with internal structure on the other. Whereas the differences in the mass spectrum are fairly small, the nucleon electromagnetic form factors are greatly improved by assuming a diquark with structure. First calculations of the pion-nucleon form factor also suggest improvements.
    Comment: 11 pages, uses 'aipproc.sty'. Talk given by M.O. at the Workshop "Effective Theories of Low Energy QCD", Coimbra, Portugal, Sep 10-15 199

    Sterile neutrino dark matter via GeV-scale leptogenesis?

    It has been proposed that in a part of the parameter space of the Standard Model completed by three generations of keV...GeV right-handed neutrinos, neutrino masses, dark matter, and baryon asymmetry can be accounted for simultaneously. Here we numerically solve the evolution equations describing the cosmology of this scenario in a 1+2 flavour situation at temperatures $T \le 5$ GeV, taking as initial conditions maximal lepton asymmetries produced dynamically at higher temperatures, and accounting for late entropy and lepton asymmetry production as the heavy flavours fall out of equilibrium and decay. For 7 keV dark matter mass and other parameters tuned favourably, $\sim 10\%$ of the observed abundance can be generated. Possibilities for increasing the abundance are enumerated.
    Comment: 20 pages

    Laterality and performance in combat sports

    The literature has demonstrated a relationship between laterality and an over-representation of left-handed athletes in certain sports, especially one-on-one sports such as judo, tennis, boxing, or fencing; the main explanation has been attributed to a higher probability of success. Some authors have explained it through a hypothesis of genetic or innate superiority, while others defend the strategic-advantage hypothesis. The aim of this study is to provide an overview of laterality, sporting success, the over-representation of left-dominant athletes when executing techniques, and the possibility of modulating that over-representation through training, based on the negative frequency-dependent selection hypothesis, given that in sports such as fencing, boxing, or judo, tactical designs and training actions have been developed around the opponent's dominant side during skill execution. It is hypothesized that if some kind of relationship exists between laterality and sporting success, and the laterality with which sporting skills are executed is acquired, then it can be modified through different learning and/or training methodologies; one of these is based on bilateral transfer processes of motor skills, but it lacks experimental research. We suggest that the notion of creating or making athletes from the perspective of lateral preference when executing sporting skills, and in laterality-based sporting behaviours, could modify the frequency-dependent selection hypothesis, especially in certain sports.

    N-body methods for relativistic cosmology

    We present a framework for general relativistic N-body simulations in the regime of weak gravitational fields. In this approach, Einstein's equations are expanded in terms of metric perturbations about a Friedmann-Lemaître background, which are assumed to remain small. The metric perturbations themselves are only kept to linear order, but we keep their first spatial derivatives to second order and treat their second spatial derivatives, as well as sources of stress-energy, fully non-perturbatively. The evolution of matter is modelled by an N-body ensemble which can consist of free-streaming nonrelativistic (e.g. cold dark matter) or relativistic particle species (e.g. cosmic neutrinos), but the framework is fully general and also allows for other sources of stress-energy, in particular additional relativistic sources like modified-gravity models or topological defects. We compare our method with the traditional Newtonian approach and argue that relativistic methods are conceptually more robust and flexible, at the cost of a moderate increase in numerical difficulty. However, for a LambdaCDM cosmology, where nonrelativistic matter is the only source of perturbations, the relativistic corrections are expected to be small. We quantify this statement by extracting post-Newtonian estimates from Newtonian N-body simulations.
    Comment: 30 pages, 3 figures. Invited contribution to a Classical and Quantum Gravity focus issue on "Relativistic Effects in Cosmology", edited by Kazuya Koyama

    Precision study of GeV-scale resonant leptogenesis

    Low-scale leptogenesis is most efficient in the limit of an extreme mass degeneracy of right-handed neutrino flavours. Two variants of this situation are of particular interest: large neutrino Yukawa couplings, which boost the prospects of experimental scrutiny, and small ones, which may lead to large lepton asymmetries surviving down to $T < 5$ GeV. We study benchmarks of these cases within a "complete" framework which tracks both helicity states of right-handed neutrinos as well as their kinetic non-equilibrium, and includes a number of effects not accounted for previously. For two right-handed flavours with GeV-scale masses, Yukawa couplings up to $|h| \sim 0.7 \times 10^{-5}$ are found to be viable for baryogenesis, with $\Delta M/M \sim 10^{-8}$ as the optimal degeneracy. Late-time lepton asymmetries are most favourably produced with $\Delta M/M \sim 10^{-11}$. We show that the system reaches a stationary state at $T < 15$ GeV, in which lepton asymmetries can be more than $10^3$ times larger than the baryon asymmetry, reach flavour equilibrium, and balance against helicity asymmetries.
    Comment: 43 pages. v2: improvements in presentation, published version

    Gauss’ Law and string-localized quantum field theory

    The quantum Gauss Law as an interacting field equation is a prominent feature of QED, with eminent impact on its algebraic and superselection structure. It forces charged particles to be accompanied by "photon clouds" that cannot be realized in the Fock space, and prevents them from having a sharp mass [7, 19]. Because it entails the possibility of "measurement of charges at a distance", it is well known to be in conflict with locality of charged fields in a Hilbert space [3, 17]. We show how a new approach to QED advocated in [25, 26, 30, 31], which avoids indefinite metric and ghosts, can secure causality and achieve Gauss' Law along with all its nontrivial consequences. We explain why this is not at variance with recent results in [8].

    Broad Histogram: An Overview

    The Broad Histogram is a method allowing the direct calculation of the energy degeneracy $g(E)$. This quantity is independent of thermodynamic concepts such as thermal equilibrium. It depends only on the distribution of allowed (micro)states along the energy axis, not on the energy exchanges between the system and its environment. Once one has obtained $g(E)$, no further effort is needed in order to consider different environment conditions, for instance different temperatures, for the same system. The method is based on the exact relation between $g(E)$ and the microcanonical averages of certain macroscopic quantities $N^{\rm up}$ and $N^{\rm dn}$. For an application to a particular problem, one needs to choose an adequate instrument in order to determine the averages $\langle N^{\rm up}\rangle$ and $\langle N^{\rm dn}\rangle$ as functions of energy. Replacing the usual fixed-temperature canonical ensemble by the fixed-energy microcanonical ensemble, new subtle concepts emerge. The temperature, for instance, is no longer an external parameter controlled by the user, with all canonical averages being functions of this parameter. Instead, the microcanonical temperature $T_{m}(E)$ is a function of energy defined from $g(E)$ itself, and is thus an {\bf internal} (environment-independent) characteristic of the system. Accordingly, all microcanonical averages are functions of $E$. The present text is an overview of the method. Some features of the microcanonical ensemble are also discussed, as well as some clues towards the definition of efficient Monte Carlo microcanonical sampling rules.
    Comment: 32 pages, tex, 3 PS figures
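    The exact relation underlying the method, $g(E)\,\langle N^{\rm up}(E)\rangle = g(E+\Delta E)\,\langle N^{\rm dn}(E+\Delta E)\rangle$, lets $\ln g(E)$ be reconstructed by telescoping once the two averages have been measured. A minimal sketch of that reconstruction, using synthetic averages built to satisfy the relation exactly for a made-up degeneracy (the data and variable names are illustrative, not taken from the paper):

    ```python
    import numpy as np

    # Known toy degeneracy g(E) on a grid with spacing dE; in practice
    # <N^up> and <N^dn> would come from a microcanonical sampler.
    dE = 1.0
    g_true = np.array([1., 4., 12., 24., 30., 24., 12., 4., 1.])
    E = dE * np.arange(len(g_true))

    # <N^dn>(E): arbitrary positive values standing in for measured averages.
    N_dn = np.full_like(g_true, 2.0)
    # <N^up>(E) fixed by the exact relation g(E) <N^up(E)> = g(E+dE) <N^dn(E+dE)>.
    N_up = np.empty_like(g_true)
    N_up[:-1] = g_true[1:] * N_dn[1:] / g_true[:-1]

    # Telescope the relation: ln g(E+dE) = ln g(E) + ln <N^up(E)> - ln <N^dn(E+dE)>,
    # anchoring the overall constant at the lowest energy level.
    ln_g = np.empty_like(g_true)
    ln_g[0] = np.log(g_true[0])
    for i in range(len(E) - 1):
        ln_g[i + 1] = ln_g[i] + np.log(N_up[i]) - np.log(N_dn[i + 1])

    # Microcanonical inverse temperature: 1/T_m(E) = d ln g(E) / dE.
    beta_m = np.gradient(ln_g, dE)
    ```

    Because the synthetic averages satisfy the relation exactly, `ln_g` recovers $\ln g_{\rm true}$ up to floating-point error; with simulation data the same telescoping accumulates statistical errors instead.
    
    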