5,588 research outputs found

    Taylor coefficients of non-holomorphic Jacobi forms and applications

    In this paper, we prove modularity results for Taylor coefficients of certain non-holomorphic Jacobi forms. It is well known that Taylor coefficients of holomorphic Jacobi forms are quasimodular forms. Recently, however, there has been wide interest in Taylor coefficients of non-holomorphic Jacobi forms, for example those arising in combinatorics. We show that such coefficients still inherit modular properties. We then work out the precise spaces in which these coefficients lie for two examples.
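
    For context, the classical phenomenon the abstract builds on can be sketched as follows (standard Eichler-Zagier background, not taken from the paper itself):

```latex
% A holomorphic Jacobi form \varphi of weight k and index m admits a
% Taylor expansion around z = 0,
\[
  \varphi(\tau, z) \;=\; \sum_{n \ge 0} \chi_n(\tau)\, z^n ,
\]
% and each coefficient \chi_n is quasimodular of weight k + n; suitable
% combinations of \chi_n with derivatives of the lower coefficients
% produce honest holomorphic modular forms. The paper's point is that a
% version of this survives when \varphi is no longer holomorphic.
```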

    Improved bounds for Fourier coefficients of Siegel modular forms

    The goal of this paper is to improve existing bounds for Fourier coefficients of higher genus Siegel modular forms of small weight.
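
    For orientation, the standard setup is the following (textbook background, not from the paper):

```latex
% A Siegel modular form F of weight k and genus g on the Siegel upper
% half-space H_g has a Fourier expansion over half-integral, positive
% semidefinite symmetric g x g matrices T:
\[
  F(Z) \;=\; \sum_{T \ge 0} a(T)\, e^{2\pi i\, \mathrm{tr}(TZ)},
  \qquad Z \in \mathbb{H}_g .
\]
% For cusp forms, the classical estimate obtained from the boundedness
% of (\det Y)^{k/2} |F(Z)| is
\[
  a(T) \;\ll_F\; (\det T)^{k/2},
\]
% and "improved bounds" means lowering this exponent by some \delta > 0.
```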

    On the explicit construction of higher deformations of partition statistics

    The modularity of the partition generating function has many important consequences, for example asymptotics and congruences for p(n). In a series of papers the author and Ono \cite{BO1,BO2} connected the rank, a partition statistic introduced by Dyson, to weak Maass forms, a new class of functions which are related to modular forms and which were first considered in \cite{BF}. Here we take a further step towards understanding how weak Maass forms arise from interesting partition statistics by placing certain 2-marked Durfee symbols introduced by Andrews \cite{An1} into the framework of weak Maass forms. To do this we construct a new class of functions which we call quasiweak Maass forms because they have quasimodular forms as components. As an application we prove two conjectures of Andrews. It seems that this new class of functions will play an important role in better understanding weak Maass forms of higher weight themselves, and also their derivatives. As a by-product we introduce a new method which enables us to prove transformation laws for generating functions over incomplete lattices.
    Comment: 29 pages, Duke J., accepted for publication
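
    For reference, Dyson's rank has a well-known two-variable generating function (classical background, not from the paper); at roots of unity w it produces exactly the kind of q-series whose modular completion requires weak Maass forms:

```latex
% With N(m,n) denoting the number of partitions of n with Dyson rank m
% (largest part minus number of parts), the classical generating
% function is
\[
  R(w;q) \;=\; \sum_{n \ge 0} \sum_{m \in \mathbb{Z}} N(m,n)\, w^m q^n
         \;=\; \sum_{n \ge 0} \frac{q^{n^2}}{(wq;q)_n\, (w^{-1}q;q)_n},
\]
% where (a;q)_n = (1-a)(1-aq)\cdots(1-aq^{n-1}). At w = 1 this reduces
% to the ordinary partition generating function 1/(q;q)_\infty.
```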

    Gamma rays from dark matter

    A leading hypothesis for the nature of the elusive dark matter is that it consists of thermally produced, weakly interacting massive particles that arise in many theories beyond the standard model of particle physics. Their self-annihilation in astrophysical regions of high density provides a potential means of indirectly detecting dark matter through the annihilation products, which nicely complements direct and collider searches. Here, I review the case of gamma rays, which are particularly promising in this respect: distinct and unambiguous spectral signatures would not only allow a clear discrimination from astrophysical backgrounds but also make it possible to extract important properties of the dark matter particles; powerful observational facilities like the Fermi Gamma-ray Space Telescope or upcoming large, ground-based Cherenkov telescope arrays will be able to probe a considerable part of the underlying, e.g. supersymmetric, parameter space. I conclude with a more detailed comparison of indirect and direct dark matter searches, showing that these two approaches are indeed complementary.
    Comment: 13 pages, 4 figures, World Scientific proceedings style. Based on an invited talk given at the ICATPP conference on cosmic rays for particle and astroparticle physics, Como, Italy, 7-8 Oct 2010
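
    For orientation, the standard expression for the expected annihilation flux (textbook background, not from this proceedings article) shows why high-density regions are the natural targets:

```latex
% Expected gamma-ray flux from self-annihilating (self-conjugate) dark
% matter of mass m_chi, per unit energy and solid angle:
\[
  \frac{d\Phi_\gamma}{dE\, d\Omega}
  \;=\; \frac{\langle \sigma v \rangle}{8\pi\, m_\chi^{2}}
        \sum_f B_f\, \frac{dN_\gamma^{f}}{dE}
        \int_{\text{l.o.s.}} \rho^{2}(\ell)\, d\ell .
\]
% The particle-physics factors (cross section, branching ratios B_f,
% photon spectra dN/dE) are weighted by a line-of-sight integral of the
% density squared, which is why dense regions are the prime targets.
```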

    Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms unless SETH fails

    The Frechet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Frechet distance of two polygonal curves with n vertices has a runtime of O(n^2 log n). More than 20 years later, the state-of-the-art algorithms for most variants still take time more than O(n^2 / log n), but no matching lower bounds are known, not even under reasonable complexity theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2-delta)^N) algorithm for CNF-SAT for any delta > 0. Under this assumption we show that the Frechet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2-delta}) for any delta > 0. This means that finding faster algorithms for the Frechet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Frechet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
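
    For concreteness, here is the quadratic baseline the lower bound addresses: a minimal Python sketch of the classic Eiter-Mannila dynamic program for the discrete Frechet distance, which runs in O(nm) time. Function and variable names are illustrative only, not taken from the paper.

```python
import math

def discrete_frechet(P, Q):
    """O(n*m) dynamic program for the discrete Frechet distance between
    polygonal curves P and Q, given as lists of (x, y) points. This is
    the quadratic baseline that, per the paper, cannot be beaten by a
    strongly subquadratic algorithm unless SETH fails."""
    n, m = len(P), len(Q)
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # ca[i][j] = discrete Frechet distance of prefixes P[:i+1], Q[:j+1]
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = d(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], cost)
            else:
                ca[i][j] = max(min(ca[i - 1][j],
                                   ca[i - 1][j - 1],
                                   ca[i][j - 1]), cost)
    return ca[n - 1][m - 1]

# Example: two parallel three-vertex curves at vertical distance 1
print(discrete_frechet([(0, 0), (1, 0), (2, 0)],
                       [(0, 1), (1, 1), (2, 1)]))  # -> 1.0
```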

    Spectral cutoffs in indirect dark matter searches

    Indirect searches for dark matter annihilation or decay products in the cosmic-ray spectrum are plagued by the question of how to disentangle a dark matter signal from the omnipresent astrophysical background. One of the practically background-free smoking-gun signatures for dark matter would be the observation of a sharp cutoff or a pronounced bump in the gamma-ray energy spectrum. Such features are generically produced in many dark matter models by internal Bremsstrahlung, and they can be treated in a similar manner as the traditionally looked-for gamma-ray lines. Here, we discuss prospects for seeing such features with present and future Atmospheric Cherenkov Telescopes.
    Comment: 4 pages, 2 figures, 1 table; conference proceedings for TAUP 2011, Munich, 5-9 Sep 2011

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    The hypervolume indicator is an increasingly popular set measure for comparing the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions relative to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1+ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for arbitrarily given ε, δ > 0 it calculates a solution with contribution at most (1+ε) times the minimal contribution with probability at least (1-δ). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds.
    Comment: 22 pages, to appear in Theoretical Computer Science
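
    To illustrate the sampling idea behind such an approximation (a hedged sketch only, not the authors' exact algorithm; the function name and example front are invented for illustration):

```python
import random

def mc_contribution(point, others, ref, samples=100_000):
    """Monte Carlo estimate of the hypervolume contribution of `point`
    relative to the rest of the Pareto set `others` (minimization: p
    dominates x iff p[i] <= x[i] in every coordinate), with reference
    point `ref`. Samples uniformly in the box [point, ref] -- the only
    region where `point` can contribute -- and counts the fraction not
    covered by any other set member."""
    dim = len(point)
    box_vol = 1.0
    for i in range(dim):
        box_vol *= ref[i] - point[i]
    dominates = lambda p, x: all(p[i] <= x[i] for i in range(dim))
    hits = 0
    for _ in range(samples):
        x = [random.uniform(point[i], ref[i]) for i in range(dim)]
        # every sample is dominated by `point` by construction
        if not any(dominates(q, x) for q in others):
            hits += 1
    return box_vol * hits / samples

# Example: 2-D front with reference point (1, 1); the middle point's
# exclusive region is [0.5, 0.8) x [0.5, 0.8), i.e. volume 0.09.
front = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
print(mc_contribution(front[1], [front[0], front[2]], (1.0, 1.0)))
```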

    The Bailey chain and mock theta functions

    Standard applications of the Bailey chain preserve mixed mock modularity but not mock modularity. After illustrating this with some examples, we show how to use a change of base in Bailey pairs due to Bressoud, Ismail and Stanton to explicitly construct families of q-hypergeometric multisums which are mock theta functions. We also prove identities involving some of these multisums and certain classical mock theta functions.
    Comment: 17 pages, to appear in Advances in Mathematics
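
    For reference, the underlying definitions (standard background following the usual formulation of Bailey's lemma, not taken from the paper):

```latex
% A pair of sequences (\alpha_n, \beta_n) is a Bailey pair relative
% to a if
\[
  \beta_n \;=\; \sum_{r=0}^{n}
      \frac{\alpha_r}{(q;q)_{n-r}\, (aq;q)_{n+r}} .
\]
% The simplest instance of the Bailey chain (Bailey's lemma with
% \rho, \sigma \to \infty) maps a Bailey pair to a new one,
\[
  \alpha_n' \;=\; a^{n} q^{n^2} \alpha_n ,
  \qquad
  \beta_n' \;=\; \sum_{r=0}^{n} \frac{a^{r} q^{r^2}}{(q;q)_{n-r}}\, \beta_r ,
\]
% and iterating this step generates the q-hypergeometric multisums the
% abstract refers to.
```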