
    Aspects of production and kinetic decoupling of non-thermal dark matter

    We reconsider non-thermal production of WIMP dark matter in a systematic way, using a numerical code for accurate computations of dark matter relic densities. Candidates with large pair annihilation rates are favored, suggesting a connection with the anomalies in the lepton cosmic-ray flux detected by PAMELA and Fermi. Focussing on supersymmetric models, we consider the impact of non-thermal production on the preferred mass scale for dark matter neutralinos. We have also developed a new formalism to solve the Boltzmann equation for a system of coannihilating species without assuming kinetic equilibrium and applied it to the case of pure Winos. Comment: Proceedings for the conference TAUP 201
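
    For orientation, the standard single-species form of the equation that such relic-density codes integrate, and which the formalism above generalizes to coannihilating species without assuming kinetic equilibrium, is the familiar textbook expression (a sketch, not the paper's full coupled system):

        \frac{dn}{dt} + 3Hn = -\langle \sigma v \rangle \left( n^2 - n_{\mathrm{eq}}^2 \right),

    where $n$ is the WIMP number density, $H$ the Hubble rate, $\langle \sigma v \rangle$ the thermally averaged annihilation cross section, and $n_{\mathrm{eq}}$ the equilibrium number density.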

    The galactic antiproton spectrum at high energies: background expectation vs. exotic contributions

    A new generation of upcoming space-based experiments will soon start to probe the spectrum of cosmic-ray antiparticles with unprecedented accuracy and, in particular, will open up a window to energies much higher than those accessible so far. It is thus timely to carefully investigate the expected antiparticle fluxes at high energies. Here, we perform such an analysis for the case of antiprotons. We consider both standard sources, such as the collision of other cosmic rays with interstellar matter, and exotic contributions from dark matter annihilations in the galactic halo. Up to energies well above 100 GeV, we find that the background flux in antiprotons is almost uniquely determined by the existing low-energy data on various cosmic-ray species; for even higher energies, however, the uncertainties in the parameters of the underlying propagation model eventually become significant. We also show that if the dark matter is composed of particles with masses at the TeV scale, which is naturally expected in extra-dimensional models as well as in certain parameter regions of supersymmetric models, the annihilation flux can become comparable to, or even dominate, the antiproton background at the high energies considered here. Comment: 17 pages revtex4, 7 figures; minor changes (to match the published version)

    Inflation in Gauged 6D Supergravity

    In this note we demonstrate that chaotic inflation can naturally be realized in the context of an anomaly-free minimal gauged supergravity in D=6 which has recently been the focus of some attention. This particular model has a unique maximally symmetric ground state solution, $R^{3,1} \times S^2$, which leaves half of the six-dimensional supersymmetries unbroken. In this model, the inflaton field $\phi$ originates from the complex scalar fields in the D=6 scalar hypermultiplet. The mass and the self-couplings of the scalar field are dictated by the D=6 Lagrangian. The scalar potential has an absolute minimum at $\phi = 0$ with no undetermined moduli fields. Imposing a mild bound on the radius of $S^2$ enables us to obtain chaotic inflation. The low-energy equations of motion are shown to be consistent for the range of scalar field values relevant for inflation. Comment: one reference added
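
    As a rough illustration of the structure described above, a generic chaotic-inflation potential with its absolute minimum at $\phi = 0$ takes the form (the actual mass $m$ and quartic coupling $\lambda$ would be dictated by the D=6 Lagrangian; this is a schematic sketch, not the potential derived in the paper):

        V(\phi) = \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4} \phi^4, \qquad V'(0) = 0, \quad V''(0) = m^2 > 0.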

    Multi-objective optimisation of machine tool error mapping using automated planning

    Error mapping of machine tools is a multi-measurement task that is planned based on expert knowledge. There are no intelligent tools aiding the production of optimal measurement plans. In previous work, a method of intelligently constructing measurement plans demonstrated that it is feasible to optimise the plans either to reduce machine tool downtime or the estimated uncertainty of measurement due to the plan schedule. However, production scheduling and a continuously changing environment can impose conflicting constraints on downtime and the uncertainty of measurement. In this paper, the use of the produced measurement model to minimise machine tool downtime, the uncertainty of measurement, and the arithmetic mean of both is investigated and discussed through the use of twelve different error mapping instances. The multi-objective search plans on average have a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan, and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Further experiments on a High Performance Computing (HPC) architecture demonstrated that there is on average a 3% improvement in optimality when compared with the experiments performed on the PC architecture. This demonstrates that even though a 4% improvement is beneficial, in most applications a standard PC architecture will result in valid error mapping plans.
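
    A minimal sketch of how the three objectives compared above could be scored for a candidate plan; the function, the normalisation against reference values, and the example numbers are illustrative assumptions, not the planner described in the paper:

        # Illustrative scoring of a candidate error-mapping plan (hypothetical helper,
        # not the automated planner from the paper).
        def plan_score(downtime_h, uncertainty_um, ref_downtime_h, ref_uncertainty_um,
                       objective="mean"):
            """Return a scalar score to minimise for one error-mapping plan.

            downtime_h     -- machine tool downtime of the plan, in hours
            uncertainty_um -- estimated uncertainty of measurement, in micrometres
            ref_*          -- reference values putting both metrics on a common scale
            objective      -- "downtime", "uncertainty", or "mean" (arithmetic mean)
            """
            t = downtime_h / ref_downtime_h          # normalised time metric
            u = uncertainty_um / ref_uncertainty_um  # normalised uncertainty metric
            if objective == "downtime":
                return t
            if objective == "uncertainty":
                return u
            return 0.5 * (t + u)                     # multi-objective: arithmetic mean

        # Example: pick the best of several candidate plans under the combined objective.
        plans = [(5.2, 3.1), (4.8, 3.6), (6.0, 2.9)]  # (downtime, uncertainty), made-up values
        best = min(plans, key=lambda p: plan_score(p[0], p[1], 6.0, 4.0))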

    Maximum Volume Subset Selection for Anchored Boxes

    Let $B$ be a set of $n$ axis-parallel boxes in $\mathbb{R}^d$ such that each box has a corner at the origin and the other corner in the positive quadrant of $\mathbb{R}^d$, and let $k$ be a positive integer. We study the problem of selecting $k$ boxes in $B$ that maximize the volume of the union of the selected boxes. This research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known running time in any dimension $d \ge 3$ is $\Omega\big(\binom{n}{k}\big)$. We show that: (i) the problem is NP-hard already in 3 dimensions; (ii) in 3 dimensions, we break the bound $\Omega\big(\binom{n}{k}\big)$ by providing an $n^{O(\sqrt{k})}$ algorithm; and (iii) for any constant dimension $d$, we present an efficient polynomial-time approximation scheme.
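
    To make the problem concrete, here is a brute-force baseline (the $\Omega\big(\binom{n}{k}\big)$ enumeration that the paper improves on, not its $n^{O(\sqrt{k})}$ algorithm). Because every box is anchored at the origin, the intersection of any subset of boxes is again an anchored box whose upper corner is the coordinate-wise minimum, so the union volume follows by inclusion-exclusion:

        # Brute-force hypervolume subset selection for origin-anchored boxes.
        from itertools import combinations
        from math import prod

        def union_volume(boxes):
            """Volume of the union of origin-anchored boxes, each given by its upper corner."""
            total = 0.0
            for r in range(1, len(boxes) + 1):
                for subset in combinations(boxes, r):
                    # Intersection of anchored boxes: coordinate-wise minimum corner.
                    corner = [min(b[i] for b in subset) for i in range(len(subset[0]))]
                    total += (-1) ** (r + 1) * prod(corner)
            return total

        def best_k_subset(boxes, k):
            """Return the k boxes maximizing the union volume (exponential-time baseline)."""
            return max(combinations(boxes, k), key=union_volume)

        # Small example in 3 dimensions.
        boxes = [(4, 1, 1), (1, 4, 1), (1, 1, 4), (2, 2, 2)]
        chosen = best_k_subset(boxes, 2)
        print(chosen, union_volume(chosen))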

    New Gamma-Ray Contributions to Supersymmetric Dark Matter Annihilation

    We compute the electromagnetic radiative corrections to all leading annihilation processes which may occur in the Galactic dark matter halo, for dark matter in the framework of supersymmetric extensions of the Standard Model (MSSM and mSUGRA), and present the results of scans over the parameter space that is consistent with present observational bounds on the dark matter density of the Universe. Although these processes have previously been considered in some special cases by various authors, our new general analysis shows novel interesting results with large corrections that may be of importance, e.g., for searches at the soon-to-be-launched GLAST gamma-ray space telescope. In particular, it is pointed out that in regions of parameter space where there is a near degeneracy between the dark matter neutralino and the tau sleptons, radiative corrections may boost the gamma-ray yield by up to three or four orders of magnitude, even for neutralino masses considerably below the TeV scale, and will enhance the very characteristic signature of dark matter annihilations, namely a sharp step at the mass of the dark matter particle. Since this is a particularly interesting region for more constrained mSUGRA models of supersymmetry, we use an extensive scan over this parameter space to verify the significance of our findings. We also revisit the direct annihilation of neutralinos into photons and point out that, for a considerable part of the parameter space, internal bremsstrahlung is more important for indirect dark matter searches than line signals. Comment: Replaced Fig. 2c which by mistake displayed the same spectrum as Fig. 2d; the radiative corrections reported here are now implemented in DarkSUSY which is available at http://www.physto.se/~edsjo/darksusy

    Hardness of Approximation in P via Short Cycle Removal: Cycle Detection, Distance Oracles, and Beyond


    Probabilistic Analysis of Optimization Problems on Generalized Random Shortest Path Metrics

    Simple heuristics often show a remarkable performance in practice for optimization problems. Worst-case analysis often falls short of explaining this performance. Because of this, "beyond worst-case analysis" of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean. Little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained by Bringmann et al. (Algorithmica, 2013), who have used random shortest path metrics on complete graphs to analyze heuristics. The goal of this paper is to generalize these findings to non-complete graphs, especially Erdős–Rényi random graphs. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances, we prove that the greedy heuristic for the minimum-distance maximum matching problem, the nearest neighbor and insertion heuristics for the traveling salesman problem, and a trivial heuristic for the $k$-median problem all achieve a constant expected approximation ratio. Additionally, we show a polynomial upper bound for the expected number of iterations of the 2-opt heuristic for the traveling salesman problem. Comment: An extended abstract appeared in the proceedings of WALCOM 201
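
    The instance model is easy to simulate. The sketch below builds an Erdős–Rényi graph with independent exponential edge weights, turns it into a random shortest path metric via all-pairs Dijkstra, and runs the nearest-neighbor TSP heuristic on it; the parameter choices and the Exp(1) weight distribution are illustrative assumptions, not the paper's exact setup:

        # Random shortest path metric on G(n, p) plus the nearest-neighbor TSP heuristic.
        import heapq, random

        def random_shortest_path_metric(n, p, rng):
            """Return an n x n distance matrix of a random shortest path metric on G(n, p)."""
            adj = {v: [] for v in range(n)}
            for u in range(n):
                for v in range(u + 1, n):
                    if rng.random() < p:
                        w = rng.expovariate(1.0)  # independent Exp(1) edge weight
                        adj[u].append((v, w))
                        adj[v].append((u, w))
            dist = [[float("inf")] * n for _ in range(n)]
            for s in range(n):  # Dijkstra from every source
                dist[s][s] = 0.0
                heap = [(0.0, s)]
                while heap:
                    d, u = heapq.heappop(heap)
                    if d > dist[s][u]:
                        continue
                    for v, w in adj[u]:
                        if d + w < dist[s][v]:
                            dist[s][v] = d + w
                            heapq.heappush(heap, (d + w, v))
            return dist

        def nearest_neighbor_tour(dist):
            """Nearest-neighbor heuristic on a finite metric given as a distance matrix."""
            n = len(dist)
            unvisited, tour = set(range(1, n)), [0]
            while unvisited:
                nxt = min(unvisited, key=lambda v: dist[tour[-1]][v])
                unvisited.remove(nxt)
                tour.append(nxt)
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][tour[0]]
            return tour, length

        rng = random.Random(0)
        d = random_shortest_path_metric(50, 0.2, rng)  # assumes the sampled graph is connected
        print(nearest_neighbor_tour(d)[1])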

    Neutrino signals from electroweak bremsstrahlung in solar WIMP annihilation

    Bremsstrahlung of $W$ and $Z$ gauge bosons, or photons, can be an important dark matter annihilation channel. In many popular models in which the annihilation to a pair of light fermions is helicity suppressed, these bremsstrahlung processes can lift the suppression and thus become the dominant annihilation channels. The resulting dark matter annihilation products contain a large, energetic, neutrino component. We consider solar WIMP annihilation in the case where electroweak bremsstrahlung dominates, and calculate the resulting neutrino spectra. The flux consists of primary neutrinos produced in processes such as $\chi\chi \rightarrow \bar{\nu}\nu Z$ and $\chi\chi \rightarrow \bar{\nu}\ell W$, and secondary neutrinos produced via the decays of gauge bosons and charged leptons. After dealing with the neutrino propagation and flavour evolution in the Sun, we consider the prospects for detection in neutrino experiments on Earth. By comparing our signal with that for annihilation to $W^+W^-$, we show that the detection prospects for the bremsstrahlung annihilation channel are favourable. Comment: 18 pages, 5 figures. Discussion expanded; matches published version

    Codon adaptation-based control of protein expression in C. elegans.

    We present a method to control protein levels under native genetic regulation in Caenorhabditis elegans by using synthetic genes with adapted codons. We found that the force acting on the spindle in C. elegans embryos was related to the amount of the G-protein regulator GPR-1/2. Codon-adapted versions of any C. elegans gene can be designed using our web tool, C. elegans codon adapter.
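
    A toy sketch of the codon-adaptation idea: back-translate a protein sequence by picking, for each amino acid, a codon from a preferred-codon table. The tiny table below is a made-up placeholder rather than real C. elegans codon-usage data, and this is not the paper's "C. elegans codon adapter" web tool:

        # Toy codon adaptation: protein sequence -> DNA using a preferred-codon table.
        PREFERRED_CODON = {  # hypothetical "optimal" codons for a few amino acids
            "M": "ATG", "K": "AAG", "L": "CTC", "G": "GGA", "S": "TCC", "*": "TAA",
        }

        def codon_adapt(protein):
            """Return a DNA coding sequence for `protein` using the preferred-codon table."""
            try:
                return "".join(PREFERRED_CODON[aa] for aa in protein)
            except KeyError as missing:
                raise ValueError(f"no codon listed for amino acid {missing}") from None

        print(codon_adapt("MKLGS*"))  # -> ATGAAGCTCGGATCCTAA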