
    Handedness asymmetry of spiral galaxies with z<0.3 shows cosmic parity violation and a dipole axis

    A dataset of 126,501 spiral galaxies taken from the Sloan Digital Sky Survey was used to analyze large-scale galaxy handedness in different regions of the local universe. The analysis was automated by transforming the galaxy images into their radial intensity plots, which allows the galaxy spin to be determined automatically and can therefore be applied to a large galaxy dataset. The results show that the local universe (z<0.3) is not isotropic in terms of galaxy spin; the probability of such an asymmetry occurring by chance is P<5.8*10^-6. The handedness asymmetries exhibit an approximate cosine dependence, and the most likely dipole axis was found at RA=132, DEC=32, with a 1 sigma error range of 107 to 179 degrees for the RA. The probability of such an axis occurring by chance is P<1.95*10^-5. The amplitude of the handedness asymmetry reported in this paper is generally in agreement with Longo, but the statistical significance is improved by a factor of 40, and the direction of the axis disagrees somewhat. Comment: 8 pages, 6 figures. Accepted for publication in Physics Letters
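    The cosine-dependence fit described in the abstract can be illustrated with a short sketch: for each candidate axis, the handedness asymmetry is modeled as an amplitude times the cosine of the angle between a galaxy's position and the axis, and the axis giving the largest fitted amplitude is reported. This is a minimal illustration under assumed conventions (spin encoded as +-1, a simple grid scan over candidate axes), not the authors' pipeline; it omits the radial-intensity-plot spin classification and the significance estimate.

```python
import numpy as np

def radec_to_unit(ra_deg, dec_deg):
    """Convert RA/DEC in degrees to 3D unit vectors."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def best_dipole_axis(ra_deg, dec_deg, spin, n_ra=180, n_dec=90):
    """Grid-scan candidate axes and fit spin ~ A * cos(theta) by least squares.

    ra_deg, dec_deg : galaxy positions in degrees
    spin            : +1 for one handedness, -1 for the other
    Returns (best_ra, best_dec, amplitude) for the axis with the largest |A|.
    """
    pos = radec_to_unit(np.asarray(ra_deg), np.asarray(dec_deg))
    spin = np.asarray(spin)
    best = (0.0, 0.0, 0.0)
    for ra0 in np.linspace(0.0, 360.0, n_ra, endpoint=False):
        for dec0 in np.linspace(-90.0, 90.0, n_dec):
            axis = radec_to_unit(ra0, dec0)
            cos_theta = pos @ axis
            # least-squares amplitude of spin ~ A * cos(theta)
            amp = (spin * cos_theta).sum() / (cos_theta ** 2).sum()
            if abs(amp) > abs(best[2]):
                best = (ra0, dec0, amp)
    return best
```

    A chance probability like the ones quoted above would then come from repeating the fit with randomly assigned spins; that step is not shown here.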

    A Gauge-Fixing Action for Lattice Gauge Theories

    We present a lattice gauge-fixing action $S_{gf}$ with the following properties: (a) $S_{gf}$ is proportional to the trace of $(\sum_\mu \partial_\mu A_\mu)^2$, plus irrelevant terms of dimension six and higher; (b) $S_{gf}$ has a unique absolute minimum at $U_{x,\mu}=I$. Noting that the gauge-fixed action is not BRST invariant on the lattice, we discuss some important aspects of the phase diagram. Comment: 13 pages, Latex, improved presentation, no change in result
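    For orientation, property (a) says that the lattice action reduces, in the classical continuum limit, to the standard covariant gauge-fixing term. The sketch below shows only that standard continuum term and the usual naive identification of the gauge potential from the link variables; it is background material, not the specific lattice action constructed in the paper, and the normalization and gauge parameter $\xi$ follow common conventions.

```latex
% Standard continuum covariant gauge-fixing term (up to normalization):
S_{gf}^{\mathrm{cont}} \;\propto\; \frac{1}{\xi} \int d^4x \,
    \mathrm{tr}\!\left[\Big(\sum_\mu \partial_\mu A_\mu(x)\Big)^{2}\right]

% Usual naive identification of the gauge potential from the links,
% U_{x,\mu} = \exp(i a g A_\mu(x)), with a the lattice spacing and g the coupling:
A_\mu(x) \;\simeq\; \frac{1}{2iag}\left(U_{x,\mu} - U_{x,\mu}^{\dagger}\right)
% Discretizations sharing this limit differ by terms of dimension six and higher.
```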

    On the Complexity of Bandit Linear Optimization

    We study the attainable regret for online linear optimization problems with bandit feedback, where, unlike the full-information setting, the player observes only its own loss rather than the full loss vector. We show that the price of bandit information in this setting can be as large as $d$, disproving the well-known conjecture that the regret for bandit linear optimization is at most $\sqrt{d}$ times the full-information regret. Surprisingly, this is shown using "trivial" modifications of standard domains, which have no effect in the full-information setting. This and other results we present highlight some interesting differences between full-information and bandit learning that were not considered in the previous literature.
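    The difference between the two feedback models can be made concrete with a small protocol sketch. This is a generic illustration of online linear optimization over the unit Euclidean ball with a standard one-point gradient estimator, not the constructions or domains analyzed in the paper; the loss distribution, step size eta, and exploration parameter delta are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, eta, delta = 5, 1000, 0.01, 0.1

def project(x):
    """Project onto the unit Euclidean ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x_full, x_bandit = np.zeros(d), np.zeros(d)

for t in range(T):
    loss = rng.uniform(-1.0, 1.0, size=d)        # adversary's loss vector

    # Full information: the entire vector `loss` is revealed, so online
    # gradient descent can use it directly.
    x_full = project(x_full - eta * loss)

    # Bandit feedback: only the scalar loss of the point actually played
    # is revealed; a one-point estimate stands in for the loss vector.
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                       # random unit direction
    played = (1.0 - delta) * x_bandit + delta * u
    observed = played @ loss                     # the only feedback observed
    grad_estimate = (d / delta) * observed * u   # unbiased for linear losses
    x_bandit = project(x_bandit - eta * grad_estimate)
```

    The ball used here is purely illustrative; the paper's point is that seemingly minor modifications of the decision set, invisible in the full-information setting, can change the attainable bandit regret.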