
    Dark matter annihilation and decay in dwarf spheroidal galaxies: The classical and ultrafaint dSphs

    Dwarf spheroidal (dSph) galaxies are prime targets for present and future gamma-ray telescopes hunting for indirect signals of particle dark matter. The interpretation of the data requires a careful assessment of their dark matter content in order to derive robust constraints on candidate relic particles. Here, we use an optimised spherical Jeans analysis to reconstruct the `astrophysical factor' for both annihilating and decaying dark matter in 21 known dSphs. Improvements with respect to previous works are: (i) the use of more flexible luminosity and anisotropy profiles to minimise biases, (ii) the use of weak priors tailored on extensive sets of contamination-free mock data to improve the confidence intervals, (iii) systematic cross-checks of binned and unbinned analyses on mock and real data, and (iv) the use of mock data including stellar contamination to test the impact on reconstructed signals. Our analysis provides updated values for the dark matter content of 8 `classical' and 13 `ultrafaint' dSphs, with the quoted uncertainties directly linked to the sample size; the more flexible parametrisation we use results in changes compared to previous calculations. This translates into our ranking of the potentially brightest and most robust targets (Ursa Minor, Draco, Sculptor) and of the more promising but uncertain targets (Ursa Major 2, Coma) for annihilating dark matter. Our analysis of Segue 1 is extremely sensitive to whether we include or exclude a few marginal member stars, making this target one of the most uncertain. Our analysis illustrates challenges that will need to be addressed when inferring the dark matter content of new `ultrafaint' satellites that are beginning to be discovered in southern sky surveys. (19 pages, 14 figures; submitted to MNRAS; supplementary material available on request.)
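
    As a rough numerical illustration of the quantity being reconstructed here, the sketch below integrates the squared density of an assumed NFW halo along the line of sight over a small cone; the profile parameters, distance, integration angle and truncation are placeholder assumptions, not the paper's optimised Jeans-analysis results.

```python
# Rough numerical illustration (not the paper's optimised Jeans pipeline) of
# the annihilation "astrophysical factor" J = \int dOmega \int rho^2(r) dl for
# an assumed NFW halo. The profile parameters, distance, integration angle and
# truncation are placeholder values, not results from the paper.
import numpy as np
from scipy.integrate import dblquad

def nfw_density(r_kpc, rho_s=1.0e8, r_s=0.5):
    """Spherical NFW profile; rho_s in Msun/kpc^3, r_s in kpc."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(distance_kpc=76.0, alpha_int_deg=0.5, r_trunc_kpc=2.0):
    """J-factor over a cone of half-angle alpha_int, in Msun^2 kpc^-5."""
    alpha = np.radians(alpha_int_deg)
    # Limit the line-of-sight integration to within ~r_trunc of the centre.
    l_lo, l_hi = distance_kpc - r_trunc_kpc, distance_kpc + r_trunc_kpc

    def integrand(l, theta):
        # Distance from the dSph centre for a point at line-of-sight depth l
        # and angular offset theta from the centre (law of cosines).
        r = np.sqrt(l**2 + distance_kpc**2 - 2.0 * l * distance_kpc * np.cos(theta))
        return nfw_density(r) ** 2 * np.sin(theta)

    # Integrate over theta in [0, alpha] and l in [l_lo, l_hi]; azimuthal
    # symmetry contributes a factor of 2*pi.
    val, _ = dblquad(integrand, 0.0, alpha, l_lo, l_hi)
    return 2.0 * np.pi * val

# Roughly 4.45e6 converts Msun^2 kpc^-5 to the conventional GeV^2 cm^-5.
print(f"J ~ {j_factor():.3e} Msun^2 kpc^-5")
```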

    Integrating and Ranking Uncertain Scientific Data

    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions (though it does not improve prediction of well-known ones). This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
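
    To make ranking by probabilities concrete, the sketch below scores candidate annotations reached through uncertain join paths, multiplying edge probabilities along a path and combining alternative paths with a noisy-OR; this is a generic independence-based illustration, not the BioRank formalism itself, and the GO terms and probabilities are made up.

```python
# Generic independence-based illustration (not BioRank's actual formalism):
# score candidate annotations reached through probabilistic join paths by
# multiplying edge probabilities along each path and combining alternative
# paths to the same annotation with a noisy-OR. GO terms and probabilities
# below are made up.
from collections import defaultdict
from math import prod

def path_probability(edge_probs):
    """Probability that a single join path is correct (independence assumption)."""
    return prod(edge_probs)

def rank_annotations(paths):
    """paths: iterable of (annotation, [edge probabilities along the join path])."""
    per_annotation = defaultdict(list)
    for annotation, edge_probs in paths:
        per_annotation[annotation].append(path_probability(edge_probs))
    # Noisy-OR: the annotation holds if at least one supporting path is correct.
    scores = {a: 1.0 - prod(1.0 - p for p in ps) for a, ps in per_annotation.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

paths = [
    ("GO:0005524", [0.9, 0.8]),       # two-hop path through an uncertain cross-reference
    ("GO:0005524", [0.6]),            # an alternative direct link
    ("GO:0016301", [0.7, 0.5, 0.9]),  # a longer, noisier path
]
print(rank_annotations(paths))
```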

    The uncertain representation ranking framework for concept-based video retrieval

    Concept-based video retrieval often relies on imperfect and uncertain concept detectors. We propose a general ranking framework to define effective and robust ranking functions by explicitly addressing detector uncertainty. It can cope with multiple concept-based representations per video segment and allows the re-use of effective text retrieval functions which are defined on similar representations. The final ranking status value is a weighted combination of two components: the expected value of the possible scores, which represents the risk-neutral choice, and the scores’ standard deviation, which represents the risk or opportunity that the score for the actual representation is higher. The framework consistently improves the search performance in the shot retrieval task and the segment retrieval task over several baselines in five TRECVid collections and two collections which use simulated detectors of varying performance.
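
    A minimal sketch of the weighted combination described above: the ranking status value is the probability-weighted expected score plus a multiple of the scores' standard deviation. The toy scores, probabilities and weight b are assumptions, not values from the paper.

```python
# Minimal sketch of the ranking rule described above: the ranking status value
# is the expected score over the possible concept-based representations plus a
# weighted standard deviation. The toy scores, probabilities and weight `b`
# are assumptions, not values from the paper.
import numpy as np

def ranking_status_value(possible_scores, probabilities, b=0.5):
    """Expected score + b * standard deviation of the possible scores."""
    scores = np.asarray(possible_scores, dtype=float)
    probs = np.asarray(probabilities, dtype=float)
    probs = probs / probs.sum()                      # normalise detector-derived weights
    mean = np.sum(probs * scores)                    # risk-neutral component
    std = np.sqrt(np.sum(probs * (scores - mean) ** 2))
    return mean + b * std                            # risk/opportunity component added

# Hypothetical video segment with three possible representations, each scored
# by some text-retrieval function, weighted by detector confidence.
print(ranking_status_value(possible_scores=[2.1, 0.4, 1.3],
                           probabilities=[0.6, 0.3, 0.1]))
```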

    Data-driven satisficing measure and ranking

    We propose a computational framework for real-time risk assessment and prioritization of random outcomes without prior information on probability distributions. The basic model is built on the satisficing measure (SM), which yields a single index for risk comparison. Since the SM is a dual representation for a family of risk measures, we consider problems constrained by general convex risk measures and, specifically, by conditional value-at-risk (CVaR). Starting from offline optimization, we apply the sample average approximation technique and analyze the convergence rate and validity of the optimal solutions. In the online stochastic optimization case, we develop primal-dual stochastic approximation algorithms for general risk-constrained problems and derive their regret bounds. For both the offline and online cases, we illustrate the relationship between risk-ranking accuracy and sample size (or number of iterations). (26 pages, 6 figures.)
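
    As a hedged illustration of the sample-average-approximation step only, the sketch below estimates CVaR from samples via the Rockafellar-Uryasev representation and ranks two made-up outcome distributions by the resulting risk index; it is not the paper's satisficing-measure algorithm or its online primal-dual scheme.

```python
# Hedged sketch of the sample-average-approximation (SAA) step only, not the
# paper's satisficing-measure algorithm or its online primal-dual scheme:
# estimate CVaR from samples via the Rockafellar-Uryasev representation
#   CVaR_alpha(L) = min_t { t + E[(L - t)_+] / (1 - alpha) }
# and rank two made-up outcome distributions by the resulting risk index.
import numpy as np

def cvar_saa(losses, alpha=0.95):
    """Empirical CVaR_alpha of a loss sample (larger means riskier)."""
    losses = np.asarray(losses, dtype=float)
    t = np.quantile(losses, alpha)   # the alpha-quantile minimises the RU objective
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
candidates = {
    "outcome_A": rng.normal(0.0, 1.0, 10_000),          # light-tailed losses
    "outcome_B": rng.standard_t(df=3, size=10_000),     # heavier-tailed losses
}
ranking = sorted(candidates, key=lambda name: cvar_saa(candidates[name]))
print(ranking)  # least risky first under the empirical CVaR index
```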

    Probabilistic performance estimators for computational chemistry methods: Systematic Improvement Probability and Ranking Probability Matrix. I. Theory

    The comparison of benchmark error sets is an essential tool for the evaluation of theories in computational chemistry. The standard ranking of methods by their Mean Unsigned Error (MUE) is unsatisfactory for several reasons linked to the non-normality of the error distributions and the presence of underlying trends. Complementary statistics have recently been proposed to palliate such deficiencies, such as quantiles of the absolute error distribution or the mean prediction uncertainty. We introduce here a new score, the systematic improvement probability (SIP), based on the direct system-wise comparison of absolute errors. Independently of the chosen scoring rule, the uncertainty of the statistics due to the incompleteness of the benchmark data sets is also generally overlooked. However, this uncertainty is essential to appreciate the robustness of rankings. In the present article, we develop two indicators based on robust statistics to address this problem: P_{inv}, the inversion probability between two values of a statistic, and \mathbf{P}_{r}, the ranking probability matrix. We also demonstrate the essential contribution of the correlations between error sets to these score comparisons.
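
    Reading the definitions as described above, the sketch below computes a SIP-style system-wise improvement probability and a bootstrap estimate of the probability that the MUE ranking of two methods inverts; the error data are made up and the paper's exact estimators may differ.

```python
# Sketch based on the descriptions above (the paper's exact estimators may
# differ, and the error data here are made up): SIP as the system-wise
# probability that method 2 has a smaller absolute error than method 1, and a
# bootstrap estimate of the probability that the MUE ranking of the two
# methods inverts. Paired resampling preserves the correlation between the
# two error sets, which the abstract highlights as essential.
import numpy as np

def sip(err1, err2):
    """Systematic improvement probability of method 2 over method 1."""
    return np.mean(np.abs(err2) < np.abs(err1))

def inversion_probability(err1, err2, n_boot=10_000, seed=0):
    """Bootstrap probability that the sign of MUE(method 1) - MUE(method 2) flips."""
    rng = np.random.default_rng(seed)
    err1, err2 = np.asarray(err1), np.asarray(err2)
    nominal = np.sign(np.mean(np.abs(err1)) - np.mean(np.abs(err2)))
    n, flips = len(err1), 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # paired (system-wise) resampling
        boot = np.sign(np.mean(np.abs(err1[idx])) - np.mean(np.abs(err2[idx])))
        flips += boot != nominal
    return flips / n_boot

rng = np.random.default_rng(1)
e1 = rng.normal(0.5, 1.0, 200)                      # hypothetical benchmark errors, method 1
e2 = 0.6 * e1 + rng.normal(0.0, 0.5, 200)           # correlated errors, method 2
print(sip(e1, e2), inversion_probability(e1, e2))
```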