
    Spectral cutoffs in indirect dark matter searches

    Indirect searches for dark matter annihilation or decay products in the cosmic-ray spectrum are plagued by the question of how to disentangle a dark matter signal from the omnipresent astrophysical background. One of the practically background-free, smoking-gun signatures for dark matter would be the observation of a sharp cutoff or a pronounced bump in the gamma-ray energy spectrum. Such features are generically produced in many dark matter models by internal Bremsstrahlung, and they can be treated in a similar manner as the traditionally looked-for gamma-ray lines. Here, we discuss prospects for seeing such features with present and future Atmospheric Cherenkov Telescopes. Comment: 4 pages, 2 figures, 1 table; conference proceedings for TAUP 2011, Munich, 5-9 September 2011
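
    For orientation, the standard factorized expression for the differential gamma-ray flux from self-annihilating dark matter of mass $m_\chi$ (a textbook formula, not quoted from the proceedings):

        \frac{d\Phi_\gamma}{dE} = \frac{\langle\sigma v\rangle}{8\pi m_\chi^2}\,
            \frac{dN_\gamma}{dE} \int_{\Delta\Omega} d\Omega \int_{\mathrm{l.o.s.}} \rho_\chi^2(l)\, dl

    The spectral shape $dN_\gamma/dE$ is where the smoking gun lives: internal Bremsstrahlung generically makes it harden toward, and cut off sharply at, $E = m_\chi$, a feature no smooth astrophysical background mimics.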

    Intermediate Mass Black Holes and Nearby Dark Matter Point Sources: A Critical Reassessment

    The proposal of a galactic population of intermediate mass black holes (IMBHs) that form dark matter (DM) "mini-spikes" around them has received considerable attention in recent years. Since in some scenarios they lead to large annihilation fluxes in gamma rays, neutrinos and charged cosmic rays, these objects are sometimes quoted as among the most promising targets for indirect DM searches. In this letter, we apply a detailed statistical analysis to point out that the existing EGRET data already place very stringent limits on those scenarios, making it rather unlikely that any of these objects will be observed with, e.g., the Fermi/GLAST satellite or upcoming Air Cherenkov telescopes. We also demonstrate that prospects for observing signals in neutrinos or charged cosmic rays seem even worse. Finally, we address the question of whether the excess in the cosmic-ray positron/electron flux recently reported by PAMELA/ATIC could be due to a nearby DM point source like a DM clump or mini-spike; gamma-ray bounds, as well as the recently released Fermi cosmic-ray electron and positron data, again exclude such a possibility for conventional DM candidates, and strongly constrain it for DM annihilating purely into light leptons. Comment: 4 pages revtex4, 4 figures. Improved analysis and discussion, added constraints from Fermi data, corrected figures and updated references

    Constraints on small-scale cosmological perturbations from gamma-ray searches for dark matter

    Events like inflation or phase transitions can produce large density perturbations on very small scales in the early Universe. Probes of small scales are therefore useful for, e.g., discriminating between inflationary models. Until recently, the only such constraint came from the non-observation of primordial black holes (PBHs), associated with the largest perturbations. Moderate-amplitude perturbations can collapse shortly after matter-radiation equality to form ultracompact minihalos (UCMHs) of dark matter, in far greater abundance than PBHs. If dark matter self-annihilates, UCMHs become excellent targets for indirect detection. Here we discuss the gamma-ray fluxes expected from UCMHs, the prospects of observing them with gamma-ray telescopes, and limits upon the primordial power spectrum derived from their non-observation by the Fermi Large Area Telescope. Comment: 4 pages, 3 figures. To appear in J. Phys. Conf. Series (Proceedings of TAUP 2011, Munich)

    Inflation in Gauged 6D Supergravity

    In this note we demonstrate that chaotic inflation can naturally be realized in the context of an anomaly-free minimal gauged supergravity in $D=6$ which has recently been the focus of some attention. This particular model has a unique maximally symmetric ground state solution, $R^{3,1} \times S^2$, which leaves half of the six-dimensional supersymmetries unbroken. In this model, the inflaton field $\phi$ originates from the complex scalar fields in the $D=6$ scalar hypermultiplet. The mass and the self-couplings of the scalar field are dictated by the $D=6$ Lagrangian. The scalar potential has an absolute minimum at $\phi = 0$ with no undetermined moduli fields. Imposing a mild bound on the radius of $S^2$ enables us to obtain chaotic inflation. The low-energy equations of motion are shown to be consistent for the range of scalar field values relevant for inflation. Comment: one reference added
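
    For orientation, the standard $m^2\phi^2$ chaotic-inflation bookkeeping that such a model must reproduce (textbook slow-roll formulas, not results of this paper; $M_P$ denotes the reduced Planck mass):

        V(\phi) = \tfrac{1}{2} m^2 \phi^2 \,, \qquad
        \epsilon = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2 = \frac{2 M_P^2}{\phi^2} \,, \qquad
        N \simeq \frac{1}{M_P^2}\int_{\phi_e}^{\phi_i} \frac{V}{V'}\, d\phi = \frac{\phi_i^2 - \phi_e^2}{4 M_P^2} \,.

    Slow roll requires $\epsilon < 1$, i.e. $\phi \gtrsim \sqrt{2}\, M_P$, and $N \approx 60$ e-folds require $\phi_i \approx 15\, M_P$; accommodating such super-Planckian field values is what makes realizing chaotic inflation in a concrete supergravity nontrivial.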

    A Dichotomy for Regular Expression Membership Testing

    We study regular expression membership testing: Given a regular expression of size $m$ and a string of size $n$, decide whether the string is in the language described by the regular expression. Its classic $O(nm)$ algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: 1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. 2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. 3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$ algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing.
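
    For orientation, a minimal Python sketch (ours, not the paper's) of the classic $O(nm)$ membership test the abstract builds on: maintain a set of NFA states and update it once per input character. The regex-to-NFA compiler (e.g. Thompson's construction) is assumed and omitted; the hand-built NFA below is purely illustrative.

        # Classic set-of-states simulation of an epsilon-NFA: one state-set
        # update per input character, which yields the textbook O(nm) bound
        # when the NFA has O(m) states and transitions.

        def epsilon_closure(states, eps):
            """Expand a state set along epsilon edges (eps: state -> iterable of states)."""
            stack, closure = list(states), set(states)
            while stack:
                s = stack.pop()
                for t in eps.get(s, ()):
                    if t not in closure:
                        closure.add(t)
                        stack.append(t)
            return closure

        def nfa_accepts(text, delta, eps, start, accept):
            """delta: (state, char) -> set of states; True iff text is accepted."""
            current = epsilon_closure({start}, eps)
            for ch in text:
                nxt = set()
                for s in current:
                    nxt |= delta.get((s, ch), set())
                current = epsilon_closure(nxt, eps)
                if not current:   # no live state can consume the rest of the input
                    return False
            return bool(current & accept)

        # Hand-built NFA for the regex a(b|c)*: read 'a', then loop on 'b' or 'c'.
        delta = {(0, 'a'): {1}, (1, 'b'): {1}, (1, 'c'): {1}}
        print(nfa_accepts("abcb", delta, {}, 0, {1}))   # True
        print(nfa_accepts("abx", delta, {}, 0, {1}))    # False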

    Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve

    Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: - The $O(nN\sqrt{\log{N/n}})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) - Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. - We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
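
    To make the baseline concrete, a toy Python sketch (ours, not the paper's) of a grammar compression as a straight-line program (SLP): each rule is either a single terminal character or the concatenation of two earlier symbols. Decompressing costs $\Theta(N)$ time and space, while some quantities, such as the uncompressed length, fall out of the $n$ rules directly.

        # Straight-line program (SLP): symbol -> single char (terminal rule)
        # or a pair of symbols (concatenation rule). Names are illustrative.

        def slp_length(rules, symbol):
            """Uncompressed length |symbol|, computed from the n rules alone."""
            memo = {}
            def length(s):
                if s not in memo:
                    rhs = rules[s]
                    memo[s] = 1 if isinstance(rhs, str) else length(rhs[0]) + length(rhs[1])
                return memo[s]
            return length(symbol)

        def decompress(rules, symbol):
            """Expand the SLP to the full text: the Theta(N) decompress-and-solve baseline."""
            out, stack = [], [symbol]
            while stack:
                rhs = rules[stack.pop()]
                if isinstance(rhs, str):
                    out.append(rhs)
                else:
                    stack.append(rhs[1])   # push right child first so the left expands first
                    stack.append(rhs[0])
            return "".join(out)

        # SLP for "abab": X1 -> a, X2 -> b, X3 -> X1 X2, X4 -> X3 X3.
        rules = {"X1": "a", "X2": "b", "X3": ("X1", "X2"), "X4": ("X3", "X3")}
        print(slp_length(rules, "X4"), decompress(rules, "X4"))   # 4 abab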

    Discrete Fréchet Distance under Translation: Conditional Hardness and an Improved Algorithm
