
    Constraints on small-scale cosmological perturbations from gamma-ray searches for dark matter

    Events like inflation or phase transitions can produce large density perturbations on very small scales in the early Universe. Probes of small scales are therefore useful for, e.g., discriminating between inflationary models. Until recently, the only such constraint came from the non-observation of primordial black holes (PBHs), associated with the largest perturbations. Moderate-amplitude perturbations can collapse shortly after matter-radiation equality to form ultracompact minihalos (UCMHs) of dark matter, in far greater abundance than PBHs. If dark matter self-annihilates, UCMHs become excellent targets for indirect detection. Here we discuss the gamma-ray fluxes expected from UCMHs, the prospects of observing them with gamma-ray telescopes, and limits on the primordial power spectrum derived from their non-observation by the Fermi Large Area Telescope. Comment: 4 pages, 3 figures. To appear in J Phys Conf Series (Proceedings of TAUP 2011, Munich).
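
    For orientation, the estimate underlying such limits can be sketched as follows (standard indirect-detection notation, not spelled out in the abstract): the gamma-ray flux from a single UCMH at distance d is

        \Phi_\gamma \simeq \frac{\langle\sigma v\rangle N_\gamma}{8\pi m_\chi^2}\,\frac{1}{d^2}\int \rho^2(r)\,\mathrm{d}V ,
        \qquad \rho_{\mathrm{UCMH}}(r) \propto r^{-9/4} ,

    where m_\chi is the WIMP mass, N_\gamma the photon yield per annihilation, and the steep profile is the radial-infall form commonly assumed for UCMHs. The \rho^2 weighting is what makes even low-mass UCMHs bright targets, so their non-observation constrains the abundance of large primordial perturbations.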

    Bringing Order to Special Cases of Klee's Measure Problem

    Klee's Measure Problem (KMP) asks for the volume of the union of n axis-aligned boxes in d-space. Omitting logarithmic factors, the best known algorithm has runtime O*(n^{d/2}) [Overmars, Yap'91]. Faster algorithms are known for several special cases: Cube-KMP (where all boxes are cubes), Unitcube-KMP (where all boxes are cubes of equal side length), Hypervolume (where all boxes share a vertex), and k-Grounded (where the projection onto the first k dimensions is a Hypervolume instance). In this paper we bring some order to these special cases by providing reductions among them. In addition to the trivial inclusions, we establish Hypervolume as the easiest of these special cases, and show that the runtimes of Unitcube-KMP and Cube-KMP are polynomially related. More importantly, we show that any algorithm for one of the special cases with runtime T(n,d) implies an algorithm for the general case with runtime T(n,2d), yielding the first non-trivial relation between KMP and its special cases. This allows us to transfer the W[1]-hardness of KMP to all special cases, proving that no n^{o(d)} algorithm exists for any of the special cases under reasonable complexity-theoretic assumptions. Furthermore, assuming that there is no improved algorithm for the general case of KMP (no algorithm with runtime O(n^{d/2 - eps})), this reduction shows that there is no algorithm with runtime O(n^{floor(d/2)/2 - eps}) for any of the special cases. Under the same assumption we show a tight lower bound for a recent algorithm for 2-Grounded [Yildiz, Suri'12]. Comment: 17 pages.
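
    To fix the problem statement, here is a minimal exact baseline for general KMP via coordinate compression. It runs in roughly O((2n)^d) time, nowhere near the O*(n^{d/2}) bound discussed above; the function name and box representation are illustrative only:

        from itertools import product

        def union_volume(boxes):
            """Exact volume of a union of axis-aligned boxes.

            Each box is a pair (lo, hi) of d-dimensional corner tuples.
            Naive coordinate-compression baseline, O((2n)^d) cells.
            """
            d = len(boxes[0][0])
            # Distinct coordinates per axis define a grid of elementary cells.
            grids = [sorted({box[k][i] for box in boxes for k in range(2)})
                     for i in range(d)]
            total = 0.0
            for idx in product(*(range(len(g) - 1) for g in grids)):
                # A cell lies in the union iff some box contains its lower corner.
                corner = tuple(grids[i][idx[i]] for i in range(d))
                if any(all(lo[i] <= corner[i] < hi[i] for i in range(d))
                       for lo, hi in boxes):
                    cell = 1.0
                    for i in range(d):
                        cell *= grids[i][idx[i] + 1] - grids[i][idx[i]]
                    total += cell
            return total

        # Two overlapping unit squares: area 1 + 1 - 0.25 = 1.75
        print(union_volume([((0, 0), (1, 1)), ((0.5, 0.5), (1.5, 1.5))]))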

    Disentangling Instrumental Features of the 130 GeV Fermi Line

    We study the instrumental features of photons from the peak observed at $E_\gamma = 130$ GeV in the spectrum of Fermi-LAT data. We use the sPlots algorithm to reconstruct -- separately for the photons in the peak and for background photons -- the distributions of incident angles, the recorded time, features of the spacecraft position, the zenith angles, the conversion type, and details of the energy and direction reconstruction. The presence of a striking feature or cluster in such a variable would suggest an instrumental cause for the peak. In the publicly available data, we find several suggestive features which may inform further studies by instrument experts, though the size of the signal sample is too small to draw statistically significant conclusions. Comment: 9 pages, 22 figures; this version includes additional variables, a study of statistical sensitivity, and a modification to the chi-square calculation.
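
    The sPlots technique used here assigns each photon a signal weight from a fit in one discriminating variable (the energy), so that weighted histograms of any other, uncorrelated variable reconstruct the signal component alone. In the standard formulation of Pivk and Le Diberder (quoted for orientation; the paper's exact configuration may differ), the weight for species s of an event with discriminating value y_e is

        w_s(y_e) = \frac{\sum_j V_{sj}\, f_j(y_e)}{\sum_k N_k\, f_k(y_e)} ,

    where f_j are the component densities in the discriminating variable, N_k the fitted yields, and V the covariance matrix of the yields. Clustering in the signal-weighted distributions of instrumental variables is then what would betray an artifact.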

    Cosmological constraints on dark matter models with velocity-dependent annihilation cross section

    We derive cosmological constraints on the annihilation cross section of dark matter with velocity-dependent structure, motivated by dark matter models whose annihilation is enhanced through the Sommerfeld or Breit-Wigner mechanisms. In models where the annihilation cross section increases with decreasing dark matter velocity, big-bang nucleosynthesis and the cosmic microwave background give stringent constraints. Comment: 23 pages, 9 figures; added references.
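
    For orientation (a standard result, not specific to this paper): for an attractive Coulomb-like potential with coupling \alpha, the Sommerfeld factor multiplying the tree-level cross section is

        S(v) = \frac{\pi\alpha/v}{1 - e^{-\pi\alpha/v}} \;\longrightarrow\; \frac{\pi\alpha}{v} \quad (v \ll \alpha) ,

    so \langle\sigma v\rangle grows as 1/v at late times, when dark matter is cold and slow. This is exactly why probes of low-velocity epochs, namely energy injection during big-bang nucleosynthesis and around recombination, constrain such models more strongly than the freeze-out value alone would suggest.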

    Seen and unseen tidal caustics in the Andromeda galaxy

    Indirect detection of high-energy particles from dark matter interactions is a promising avenue for learning more about dark matter, but is hampered by the frequent coincidence of high-energy astrophysical sources of such particles with putative high-density regions of dark matter. We calculate the boost factor and gamma-ray flux from dark matter associated with two shell-like caustics of luminous tidal debris recently discovered around the Andromeda galaxy, under the assumption that dark matter is its own supersymmetric antiparticle. These shell features could be a good candidate for indirect detection of dark matter via gamma rays because they are located far from the primary confusion sources at the galaxy's center, and because the shapes of the shells indicate that most of the mass has piled up near apocenter. Using a numerical estimator specifically calibrated to estimate densities in N-body representations with sharp features, and a previously determined N-body model of the shells, we find that the largest boost factors do occur in the shells but are only a few percent. We also find that the gamma-ray flux is an order of magnitude too low to be detected with Fermi for likely dark matter parameters, and about two orders of magnitude below the signal that would have come from the dwarf galaxy that produces the shells in the N-body model. We further show that the radial density profiles and relative radial spacing of the shells, in either dark or luminous matter, are relatively insensitive to the details of the potential of the host galaxy but depend in a predictable way on the velocity dispersion of the progenitor galaxy. Comment: accepted by ApJ.
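
    The boost factor quoted above is, in the usual convention (the abstract does not define it explicitly), the enhancement of the annihilation luminosity of a region over that of a smooth distribution with the same mean density:

        B \equiv \frac{\langle \rho^2 \rangle}{\langle \rho \rangle^2} ,
        \qquad \Phi_\gamma \propto \int \rho^2\, \mathrm{d}V = B\,\bar\rho^{\,2}\, V ,

    so the few-percent boosts found in the shells translate directly into few-percent flux enhancements, consistent with the pessimistic detectability estimate.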

    Thermal decoupling and the smallest subhalo mass in dark matter models with Sommerfeld-enhanced annihilation rates

    We consider dark matter consisting of weakly interacting massive particles (WIMPs) and revisit in detail its thermal evolution in the early universe, with a particular focus on models where the annihilation rate is enhanced by the Sommerfeld effect. After chemical decoupling, or freeze-out, dark matter no longer annihilates but is still kept in local thermal equilibrium by scattering off the much more abundant standard model particles. During kinetic decoupling, even these processes cease to be effective, which eventually sets the scale for a small-scale cutoff in the matter density fluctuations. Afterwards, the WIMP temperature decreases more quickly than the heat-bath temperature, which causes dark matter to reenter an era of annihilation if the cross section is enhanced by the Sommerfeld effect. Here, we give a detailed and self-consistent description of these effects. As an application, we consider the phenomenology of simple leptophilic models that have been discussed in the literature and find that the relic abundance can be affected by as much as two orders of magnitude or more. We also compute the mass of the smallest dark matter subhalos in these models and find it to be in the range of about 10^{-10} to 10 solar masses; even much larger cutoff values are possible if the WIMPs couple to force carriers lighter than about 100 MeV. We point out that a precise determination of the cutoff mass allows one to infer new limits on the model parameters, in particular from gamma-ray observations of galaxy clusters, that are highly complementary to existing constraints from g-2 or beam-dump experiments. Comment: minor changes to match the published version.
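
    The re-annihilation era described above follows from two standard scalings (sketched here; the paper treats them self-consistently): after kinetic decoupling the WIMPs redshift like a non-relativistic gas, faster than the heat bath,

        T_\chi \propto a^{-2} \quad (\text{heat bath: } T \propto a^{-1}),
        \qquad \langle\sigma v\rangle \propto \frac{1}{v} \propto T_\chi^{-1/2} \propto a ,

    so the annihilation rate scales as \Gamma = n\,\langle\sigma v\rangle \propto a^{-3}\cdot a = a^{-2}, the same as the Hubble rate during radiation domination. Instead of switching off, annihilation stays marginally efficient and keeps depleting the relic abundance, which is why the effect on the final abundance can be so large.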

    A Tentative Gamma-Ray Line from Dark Matter Annihilation at the Fermi Large Area Telescope

    The observation of a gamma-ray line in the cosmic-ray fluxes would be a smoking-gun signature for dark matter annihilation or decay in the Universe. We present an improved search for such signatures in the data of the Fermi Large Area Telescope (LAT), concentrating on energies between 20 and 300 GeV. Besides updating to 43 months of data, we use a new data-driven technique to select optimized target regions depending on the profile of the Galactic dark matter halo. In regions close to the Galactic center, we find a 4.6 sigma indication for a gamma-ray line at 130 GeV. When taking into account the look-elsewhere effect, the significance of the observed excess is 3.2 sigma. If interpreted in terms of dark matter particles annihilating into a photon pair, the observations imply a dark matter mass of $129.8\pm2.4^{+7}_{-13}$ GeV and a partial annihilation cross-section of $\langle\sigma v\rangle_{\chi\chi\to\gamma\gamma} = 1.27\pm0.32^{+0.18}_{-0.28} \times 10^{-27}\ \mathrm{cm}^3\,\mathrm{s}^{-1}$ when using the Einasto dark matter profile. The evidence for the signal is based on about 50 photons; it will take a few years of additional data to clarify its existence. Comment: 23 pages, 9 figures, 3 tables; extended discussion; matches the published version.
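
    A crude way to see how a 4.6 sigma local excess deflates to roughly 3.2 sigma globally is a simple trials-factor correction; the effective number of independent trials below is a hypothetical stand-in for the paper's actual look-elsewhere treatment:

        from scipy.stats import norm

        def global_significance(local_sigma, n_trials):
            """Deflate a local significance by a trials factor.

            p_glob = 1 - (1 - p_loc)^N, then converted back to sigma.
            Only a rough stand-in for a full look-elsewhere analysis.
            """
            p_loc = norm.sf(local_sigma)            # one-sided local p-value
            p_glob = 1.0 - (1.0 - p_loc) ** n_trials
            return norm.isf(p_glob)                 # back to sigma units

        # ~500 effective trials (mass points x sky regions, hypothetical)
        print(global_significance(4.6, 500))        # roughly 3.1 sigma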

    Hunting WIMPs with LISA: Correlating dark matter and gravitational wave signals

    The thermal freeze-out mechanism in its classical form is tightly connected to physics beyond the Standard Model around the electroweak scale, which has been the target of enormous experimental efforts. In this work we study a dark matter model in which freeze-out is triggered by a strong first-order phase transition in a dark sector, and show that this phase transition must also happen close to the electroweak scale, i.e. in the temperature range relevant for gravitational wave searches with the LISA mission. Specifically, we consider the spontaneous breaking of a $U(1)^\prime$ gauge symmetry through the vacuum expectation value of a scalar field, which generates the mass of a fermionic dark matter candidate that subsequently annihilates into dark Higgs and gauge bosons. In this set-up the peak frequency of the gravitational wave background is tightly correlated with the dark matter relic abundance, and imposing the observed value for the latter implies that the former must lie in the milli-Hertz range. A peculiar feature of our set-up is that the dark sector is not necessarily in thermal equilibrium with the Standard Model during the phase transition, and hence the temperatures of the two sectors evolve independently. Nevertheless, the requirement that the universe does not enter an extended period of matter domination after the phase transition, which would strongly dilute any gravitational wave signal, places a lower bound on the portal coupling that governs the entropy transfer between the two sectors. As a result, the predictions for the peak frequency of gravitational waves in the LISA band are robust, while the amplitude can change depending on the initial dark sector temperature. Comment: 29 pages, 12 figures + appendices.
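
    The link between an electroweak-scale transition and the milli-Hertz band follows from the standard redshift estimate for the peak frequency of a phase-transition gravitational wave background (a textbook relation, not the paper's exact expression):

        f_0 \sim 1.6\times 10^{-5}\,\mathrm{Hz}\,
        \left(\frac{f_*}{H_*}\right)
        \left(\frac{T_*}{100\,\mathrm{GeV}}\right)
        \left(\frac{g_*}{100}\right)^{1/6} ,

    where f_* is the peak frequency at emission, H_* the Hubble rate, and T_* the temperature of the transition. For T_* near 100 GeV and f_*/H_* of order 10-100, f_0 lands squarely in the LISA window, which is the correlation the abstract exploits.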

    Local Optimal Sets and Bounded Archiving on Multi-objective NK-Landscapes with Correlated Objectives

    The properties of local optimal solutions in multi-objective combinatorial optimization problems are crucial for the effectiveness of local search algorithms, particularly when these algorithms are based on Pareto dominance. Such local search algorithms typically return a set of mutually nondominated Pareto local optimal (PLO) solutions, that is, a PLO-set. This paper investigates two aspects of PLO-sets by means of experiments with Pareto local search (PLS). First, we examine the impact of several problem characteristics on the properties of PLO-sets for multi-objective NK-landscapes with correlated objectives. In particular, we report that either increasing the number of objectives or decreasing the correlation between objectives leads to an exponential increase in the size of PLO-sets, whereas the variable correlation has only a minor effect. Second, we study the running time and the quality reached when using bounded archiving methods to limit the size of the archive handled by PLS, and thus the maximum size of the PLO-set found. We argue that there is a clear relationship between the running time of PLS and the difficulty of a problem instance. Comment: appears in Parallel Problem Solving from Nature - PPSN XIII, Ljubljana, Slovenia (2014).
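
    To make the archiving question concrete, here is a minimal sketch of a bounded nondominated archive update (maximization). The crowding-based eviction rule is one plausible choice standing in for whichever bounding method PLS is configured with; all names are illustrative:

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (maximization)."""
            return (all(x >= y for x, y in zip(a, b))
                    and any(x > y for x, y in zip(a, b)))

        def archive_add(archive, sol, max_size):
            """Insert sol, keeping the archive mutually nondominated and bounded."""
            if any(dominates(a, sol) for a in archive):
                return archive                        # sol is dominated: reject
            archive = [a for a in archive if not dominates(sol, a)] + [sol]
            if len(archive) > max_size:
                # Evict the point whose nearest neighbour is closest,
                # i.e. the one in the most crowded region of objective space.
                def crowding(p):
                    return min(sum((x - y) ** 2 for x, y in zip(p, q))
                               for q in archive if q is not p)
                archive.remove(min(archive, key=crowding))
            return archive

    Bounding the archive caps the PLO-set size that PLS can return, trading completeness for running time, which is exactly the trade-off the experiments above measure.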