
    Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model, using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best-fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose p-value we then use to define a "plausibility region" (e.g. where p ≥ 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions. Comment: 31 pages, 6 figures. V2: Modified several paragraphs to improve clarity. Modified Fig. 5 and added Fig. 6 to further illustrate the methods of Section 5. Added proof of uniqueness of the best-fit halo function in the Appendix.
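    As a rough illustration of the kind of construction this abstract describes, the sketch below combines one Poisson and one Gaussian likelihood for a step-function halo parameterisation and converts a constrained-versus-global likelihood ratio into a p-value. The four-step parameterisation, the detector responses, the observed values, and the chi-square approximation for the p-value are all assumptions made for the example, not the paper's actual data or test statistic; the monotonicity of the halo function, which the paper enforces, is also omitted for brevity.

```python
# Illustrative sketch only (not the authors' code): a toy global negative
# log-likelihood built from one Poisson counting experiment plus one Gaussian
# measurement, with the halo function eta(v_min) represented by 4 piecewise
# constant step heights. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

RESP_POISSON = np.array([2.0, 1.5, 1.0, 0.5])  # toy detector response per eta step
RESP_GAUSS   = np.array([1.0, 1.0, 0.5, 0.2])
N_OBS, GAUSS_OBS, GAUSS_SIG = 3, 1.2, 0.4      # toy observations

def neg_log_like(eta_steps):
    mu = np.dot(RESP_POISSON, eta_steps)       # predicted Poisson mean
    g  = np.dot(RESP_GAUSS, eta_steps)         # predicted Gaussian observable
    nll_poisson = mu - N_OBS * np.log(mu + 1e-12)
    nll_gauss   = 0.5 * ((g - GAUSS_OBS) / GAUSS_SIG) ** 2
    return nll_poisson + nll_gauss

def fitted_nll(constraints=()):
    res = minimize(neg_log_like, x0=np.full(4, 0.5), method='SLSQP',
                   bounds=[(0.0, None)] * 4, constraints=list(constraints))
    return res.fun

nll_global = fitted_nll()
# "Constrained" fit: pin the first step of eta to a candidate value, then ask
# how much the joint fit degrades relative to the global best fit.
nll_constr = fitted_nll([{'type': 'eq', 'fun': lambda s: s[0] - 0.8}])
t_stat = 2.0 * (nll_constr - nll_global)
p_value = chi2.sf(t_stat, df=1)  # asymptotic chi-square p-value as a crude stand-in
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (inside plausibility region if p >= 0.10)")
```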

    Plausibility functions and exact frequentist inference

    In the frequentist program, inferential methods with exact control on error rates are a primary focus. The standard approach, however, is to rely on asymptotic approximations, which may not be suitable. This paper presents a general framework for the construction of exact frequentist procedures based on plausibility functions. It is shown that the plausibility function-based tests and confidence regions have the desired frequentist properties in finite samples, with no large-sample justification needed. An extension of the proposed method is also given for problems involving nuisance parameters. Examples demonstrate that the plausibility function-based method is both exact and efficient in a wide variety of problems. Comment: 21 pages, 5 figures, 3 tables.
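    The general recipe can be made concrete with a small Monte Carlo example. The sketch below is one reading of the idea, applied to a normal mean with known unit variance: a relative-likelihood test statistic defines the plausibility function, and the confidence region collects all parameter values whose plausibility exceeds alpha. The model choice, sample size, grid, and Monte Carlo evaluation of the sampling distribution are assumptions made for illustration, not the paper's own examples.

```python
# Minimal sketch: plausibility function from the relative likelihood for a
# N(theta, 1) mean, with its null distribution estimated by simulation so the
# resulting region has the intended coverage up to Monte Carlo error.
import numpy as np

rng = np.random.default_rng(0)

def rel_lik(x, theta):
    """Relative likelihood T_x(theta) = L(theta) / L(theta_hat) for N(theta, 1) data."""
    return np.exp(-0.5 * x.size * (x.mean() - theta) ** 2)

def plausibility(x, theta, n_mc=5000):
    """pl_x(theta) = P_theta{ T_X(theta) <= T_x(theta) }, estimated by simulation."""
    t_obs = rel_lik(x, theta)
    sims = rng.normal(theta, 1.0, size=(n_mc, x.size))
    t_sim = np.exp(-0.5 * x.size * (sims.mean(axis=1) - theta) ** 2)
    return np.mean(t_sim <= t_obs)

# 100(1 - alpha)% region: all theta whose plausibility exceeds alpha.
x = rng.normal(1.3, 1.0, size=20)
alpha = 0.05
grid = np.linspace(0.0, 3.0, 301)
region = [th for th in grid if plausibility(x, th) > alpha]
print(f"plausibility interval ~ [{min(region):.2f}, {max(region):.2f}]")
```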

    Large-Scale Structure in the NIR-Selected MUNICS Survey

    The Munich Near-IR Cluster Survey (MUNICS) is a wide-area, medium-deep, photometric survey selected in the K' band. The project's main scientific aims are the identification of galaxy clusters up to redshifts of unity and the selection of a large sample of field early-type galaxies up to z < 1.5 for evolutionary studies. We created a large-scale structure catalog using a new structure-finding technique specialized for photometric datasets, which we developed on the basis of a friends-of-friends algorithm; a bare-bones version of this step is sketched below. We tested the plausibility of the resulting galaxy group and cluster catalog with the help of Color-Magnitude Diagrams (CMD), as well as likelihood- and Voronoi-based approaches. Comment: 4 pages, to appear in "The Evolution of Galaxies III. From Simple Approaches to Self-Consistent Models", proceedings of the 3rd EuroConference on the evolution of galaxies, held in Kiel, Germany, July 16-20, 200
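    The sketch links galaxies within a fixed angular separation and reads groups off as connected components of the resulting friendship graph. The flat-sky approximation, the single linking length, and the mock catalogue are simplifications; the MUNICS finder additionally exploits photometric information, which this sketch omits.

```python
# Toy friends-of-friends grouper (illustrative only): galaxies closer than
# `link_length_deg` are "friends", and groups are the connected components
# of the friendship graph.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def friends_of_friends(ra_deg, dec_deg, link_length_deg):
    pos = np.column_stack([ra_deg, dec_deg])          # flat-sky approximation
    pairs = cKDTree(pos).query_pairs(r=link_length_deg, output_type='ndarray')
    n = len(pos)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_groups, labels = connected_components(adj, directed=False)
    return labels                                      # group id per galaxy

# Hypothetical usage: 0.01 deg linking length on a random mock catalogue.
rng = np.random.default_rng(1)
ra, dec = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
labels = friends_of_friends(ra, dec, link_length_deg=0.01)
print(f"{labels.max() + 1} groups found")
```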

    Random sets and exact confidence regions

    An important problem in statistics is the construction of confidence regions for unknown parameters. In most cases, asymptotic distribution theory is used to construct confidence regions, so any coverage probability claims only hold approximately, for large samples. This paper describes a new approach, using random sets, which allows users to construct exact confidence regions without appeal to asymptotic theory. In particular, if the user-specified random set satisfies a certain validity property, confidence regions obtained by thresholding the induced data-dependent plausibility function are shown to have the desired coverage probability. Comment: 14 pages, 2 figures.
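    For a single normal observation, the thresholding step can be illustrated with a symmetric random set on a uniform auxiliary variable. The sketch below is a generic example in this style, not the paper's own construction; the model, the choice of random set, and the numerical values are assumptions chosen for brevity.

```python
# Sketch of the idea: for X ~ N(theta, 1), write X = theta + Phi^{-1}(U) with
# U ~ Unif(0,1), take the symmetric random set S = {u : |u - 1/2| <= |U - 1/2|},
# and threshold the induced plausibility function to get a confidence region.
import numpy as np
from scipy.stats import norm

def plausibility(x, theta):
    u = norm.cdf(x - theta)              # auxiliary value pinned down by (x, theta)
    return 1.0 - np.abs(2.0 * u - 1.0)   # P{ S contains u } for the symmetric set

def confidence_region(x, alpha=0.05, grid=None):
    grid = np.linspace(x - 5, x + 5, 2001) if grid is None else grid
    return grid[plausibility(x, grid) > alpha]

region = confidence_region(x=1.7, alpha=0.05)
print(f"region ~ [{region.min():.2f}, {region.max():.2f}]")  # matches x +/- 1.96 here
```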

    Analysis of the Web Graph Aggregated by Host and Pay-Level Domain

    Full text link
    In this paper the web is analyzed as a graph aggregated by host and pay-level domain (PLD). The publicly available web graph datasets were released by the Common Crawl Foundation and are based on a web crawl performed during May-July 2017. The host graph has ∼1.3 billion nodes and ∼5.3 billion arcs. The PLD graph has ∼91 million nodes and ∼1.1 billion arcs. We study the distributions of degree and of the sizes of strongly/weakly connected components (SCC/WCC), focusing on the detection of power laws using statistical methods. The statistical plausibility of the power-law model is compared with that of several alternative distributions. While there is no evidence of power-law tails at the host level, they emerge at the PLD level for the indegree, SCC-size, and WCC-size distributions. Finally, we analyze distance-related features by studying the cumulative distributions of the shortest path lengths, and give an estimate of the diameters of the graphs.
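    A common way to carry out this kind of tail comparison is the Clauset-Shalizi-Newman procedure, for instance via the third-party `powerlaw` package; whether the authors used this exact tool is not stated in the abstract. The sketch below fits a power-law tail and compares it against a log-normal alternative on synthetic degree data standing in for the real graph.

```python
# Hedged sketch: power-law tail fit plus a log-normal comparison on a
# synthetic "indegree" sample (Pareto-like tail mixed with a light bulk).
import numpy as np
import powerlaw

rng = np.random.default_rng(2)
degrees = np.concatenate([
    rng.zipf(a=2.1, size=50_000),
    rng.poisson(lam=3.0, size=50_000) + 1,
])

fit = powerlaw.Fit(degrees, discrete=True)
print(f"xmin = {fit.power_law.xmin}, alpha = {fit.power_law.alpha:.2f}")

# Log-likelihood ratio test against a log-normal tail: R > 0 favours the
# power law, and p indicates whether the sign of R is statistically meaningful.
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"power law vs log-normal: R = {R:.2f}, p = {p:.3f}")
```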