
    Dilaton stabilization by massive fermion matter

    The study begun in a former work, on the stabilization of the Dilaton mean field by the effective potential generated by massive fermions, is extended here. Three-loop corrections are evaluated in addition to the previously calculated two-loop terms. The results indicate that the Dilaton vacuum field tends to be fixed at a high value close to the Planck scale, in accordance with the need to recover Einstein gravity from string theory. The Dilaton mass is also evaluated to be close to the Planck mass, which implies the absence of Dilaton scalar signals in modern cosmological observations. These properties arise when the fermion mass is chosen either at a lower bound corresponding to the top quark mass or, alternatively, at a much higher value assumed to lie in the grand-unification energy range. One of the three three-loop terms is evaluated exactly in terms of Master integrals. The other two graphs are evaluated only in their leading-logarithm approximation of the perturbative expansion. Calculating the non-leading logarithmic contributions and including higher-loop terms could make the numerical estimates of the vacuum field value and masses more precise, but is not expected to change the qualitative behavior obtained. The validity of the Yukawa model approximation employed here is argued for fermion masses that are small with respect to the Planck mass. A correction to the two-loop calculation done in the previous work is also pointed out. Comment: 18 pages, 5 figures; the study was extended and corrections to the former calculations and wording were made. The paper has been accepted for publication in "Astrophysics and Space Science".

    Review of Speculative "Disaster Scenarios" at RHIC

    We discuss speculative disaster scenarios inspired by hypothetical new fundamental processes that might occur in high energy relativistic heavy ion collisions. We estimate the parameters relevant to black hole production; we find that they are absurdly small. We show that other accelerator and (especially) cosmic ray environments have already provided far more auspicious opportunities for transition to a new vacuum state, so that existing observations provide stringent bounds. We discuss in most detail the possibility of producing a dangerous strangelet. We argue that four separate requirements are necessary for this to occur: existence of large stable strangelets, metastability of intermediate size strangelets, negative charge for strangelets along the stability line, and production of intermediate size strangelets in the heavy ion environment. We discuss both theoretical and experimental reasons why each of these appears unlikely; in particular, we know of no plausible suggestion for why the third or especially the fourth might be true. Given minimal physical assumptions the continued existence of the Moon, in the form we know it, despite billions of years of cosmic ray exposure, provides powerful empirical evidence against the possibility of dangerous strangelet production. Comment: 28 pages, REVTeX; minor revisions for publication (Reviews of Modern Physics, ca. Oct. 2000); email to [email protected]

    Magnetic structure of CeRhIn_5 as a function of pressure and temperature

    We report magnetic neutron-diffraction and electrical resistivity studies on single crystals of the heavy-fermion antiferromagnet CeRhIn_5 at pressures up to 2.3 GPa. These experiments show that the staggered moment of Ce and the incommensurate magnetic structure change weakly with applied pressure up to 1.63 GPa, where resistivity, specific heat and NQR measurements confirm the presence of bulk superconductivity. This work places new constraints on an interpretation of the relationship between antiferromagnetism and unconventional superconductivity in CeRhIn_5. Comment: 6 pages, 6 figures, submitted to Phys. Rev.

    Chiral bosonization for non-commutative fields

    A model of chiral bosons on a non-commutative field space is constructed and new generalized bosonization (fermionization) rules for these fields are given. The conformal structure of the theory is characterized by a level of the Kac-Moody algebra equal to $(1+\theta^2)$, where $\theta$ is the non-commutativity parameter, and chiral bosons living in a non-commutative field space are described by a rational conformal field theory with the central charge of the Virasoro algebra equal to 1. The non-commutative chiral bosons are shown to correspond to a free fermion moving with a speed equal to $c' = c\sqrt{1+\theta^2}$, where $c$ is the speed of light. Lorentz invariance remains intact if $c$ is rescaled by $c \to c'$. The dispersion relation for bosons and fermions, in this case, is given by $\omega = c'|k|$. Comment: 16 pages, JHEP style, version published in JHEP.
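    The rescaled speed and dispersion relation quoted in the abstract are simple enough to encode directly. The following minimal Python sketch (an illustration, not code from the paper) implements $c' = c\sqrt{1+\theta^2}$ and $\omega = c'|k|$:

    ```python
    import math

    def effective_speed(c: float, theta: float) -> float:
        """Rescaled propagation speed c' = c * sqrt(1 + theta^2),
        where theta is the non-commutativity parameter."""
        return c * math.sqrt(1.0 + theta**2)

    def dispersion(k: float, c: float, theta: float) -> float:
        """Linear dispersion omega = c' * |k|, common to the
        non-commutative bosons and fermions of the model."""
        return effective_speed(c, theta) * abs(k)
    ```

    At theta = 0 both functions reduce to the standard relativistic values, consistent with the model recovering ordinary chiral bosonization in the commutative limit.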

    The AdS/QCD Correspondence: Still Undelivered

    We consider the particle spectrum and event shapes in large N gauge theories in different regimes of the short-distance 't Hooft coupling, lambda. The mesons in the small lambda limit should have a Regge spectrum in order to agree with perturbation theory, while generically the large lambda theories with gravity duals produce spectra reminiscent of KK modes. We argue that these KK-like states are qualitatively different from QCD modes: they are deeply bound states which are sensitive to short distance interactions rather than the flux tube-like states expected in asymptotically free, confining gauge theories. In addition, we also find that the characteristic event shapes for the large lambda theories with gravity duals are close to spherical, very different from QCD-like (small lambda, small N) and Nambu-Goto-like (small lambda, large N) theories which have jets. This observation is in agreement with the conjecture of Strassler on event shapes in large 't Hooft coupling theories, which was recently proved by Hofman and Maldacena for the conformal case. This conclusion does not change even when considering soft-wall backgrounds in the gravity dual. The picture that emerges is the following: theories with small and large lambda are qualitatively different, while theories with small and large N are qualitatively similar. Thus it seems that it is the relative smallness of the 't Hooft coupling in QCD that prevents a reliable AdS/QCD correspondence from emerging, and that reproducing characteristic QCD-like behavior will require genuine stringy dynamics to be incorporated into any putative dual theory. Comment: 32 pages, 15 figures; references added, minor changes, history clarified.

    Quantum walks: a comprehensive review

    Quantum walks, the quantum-mechanical counterpart of classical random walks, are an advanced tool for building quantum algorithms and have recently been shown to constitute a universal model of quantum computation. Quantum walks now form a solid field of research in quantum computation, full of exciting open problems for physicists, computer scientists, mathematicians and engineers. In this paper we review theoretical advances on the foundations of both discrete- and continuous-time quantum walks, together with the role that randomness plays in quantum walks, the connections between the mathematical models of coined discrete quantum walks and continuous quantum walks, the quantumness of quantum walks, a summary of papers published on discrete quantum walks and entanglement, as well as a succinct review of experimental proposals and realizations of discrete-time quantum walks. Furthermore, we review several algorithms based on both discrete- and continuous-time quantum walks, as well as a most important result: the computational universality of both continuous- and discrete-time quantum walks. Comment: Paper accepted for publication in Quantum Information Processing Journal.
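    As a concrete illustration of the discrete-time coined walks discussed above, the following self-contained NumPy sketch (a standard textbook construction, not taken from the review) simulates a Hadamard walk on the integer line and returns the position distribution:

    ```python
    import numpy as np

    def hadamard_walk(steps: int) -> np.ndarray:
        """Simulate a discrete-time coined quantum walk on the line.

        State: amplitudes psi[x, c] for position x and coin state c in {0, 1}.
        Each step applies a Hadamard coin, then shifts coin-0 amplitude left
        and coin-1 amplitude right. Returns the position probabilities.
        """
        n = 2 * steps + 1                       # positions -steps .. +steps
        psi = np.zeros((n, 2), dtype=complex)
        psi[steps, 0] = 1.0                     # walker starts at the origin, coin |0>
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for _ in range(steps):
            psi = psi @ H                       # Hadamard coin flip on every site
            shifted = np.zeros_like(psi)
            shifted[:-1, 0] = psi[1:, 0]        # coin |0> component moves left
            shifted[1:, 1] = psi[:-1, 1]        # coin |1> component moves right
            psi = shifted
        return (np.abs(psi) ** 2).sum(axis=1)   # trace out the coin
    ```

    The resulting distribution spreads ballistically (standard deviation linear in the number of steps), in contrast with the diffusive square-root spread of the classical random walk; this quadratic speed-up underlies several of the walk-based algorithms the review covers.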

    Graph Neural Networks for low-energy event classification & reconstruction in IceCube

    IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low energy neutrinos in online searches for transient events. Peer Reviewed.
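    The point-cloud graph representation mentioned above can be sketched in a few lines. The snippet below is purely illustrative (it is not IceCube's actual pipeline): each sensor hit becomes a node, and edges connect every hit to its k nearest neighbours, a common way to build input graphs for a GNN:

    ```python
    import numpy as np

    def knn_graph(points: np.ndarray, k: int = 3) -> np.ndarray:
        """Build the directed edges of a k-nearest-neighbour graph from an
        (N, 3) array of hit positions, the kind of point-cloud representation
        a GNN consumes. Returns an (N*k, 2) array of (source, target) pairs."""
        # Pairwise squared distances between all hits.
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)            # exclude self-loops
        nbrs = np.argsort(d2, axis=1)[:, :k]    # k closest hits per node
        src = np.repeat(np.arange(len(points)), k)
        return np.stack([src, nbrs.ravel()], axis=1)
    ```

    In a real pipeline each node would also carry hit features (such as charge and time), but the geometric connectivity above is the part that copes with the irregular detector layout.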

    Experimental progress in positronium laser physics


    The species-area relationship: new challenges for an old pattern

    The species-area relationship (i.e., the relationship between area and the number of species found in that area) is one of the longest- and most frequently studied patterns in nature. Yet there remain important and interesting questions about the nature of this relationship, its causality, quantification and application, for both ecologists and conservation biologists.
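    One standard quantification of the relationship, though not named in the abstract itself, is the Arrhenius power law S = c * A**z, conventionally fit by least squares in log-log space. A minimal sketch:

    ```python
    import numpy as np

    def fit_power_law(areas: np.ndarray, species: np.ndarray) -> tuple[float, float]:
        """Fit the Arrhenius species-area model S = c * A**z by ordinary
        least squares on log-transformed data; returns (c, z)."""
        # log S = log c + z * log A, so a straight-line fit recovers z and c.
        z, log_c = np.polyfit(np.log(areas), np.log(species), 1)
        return float(np.exp(log_c)), float(z)
    ```

    Empirical exponents z are often reported in roughly the 0.2–0.3 range, though the appropriate model and its interpretation are among the open questions the abstract alludes to.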

    Uma visão sobre qualidade do solo (An overview of soil quality)
