
    The Prior Can Often Only Be Understood in the Context of the Likelihood

    A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults, including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
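    As an illustration of judging a prior in the context of the likelihood (a sketch of the general idea, not code from the paper), the prior predictive check below simulates datasets implied by a prior together with a Bernoulli/logit measurement model; the model, parameter names, and scales are assumptions made for this example.

```python
# Hypothetical sketch: a prior predictive check, one way to judge a prior
# in the context of the likelihood / measurement process.
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive(n_draws=5000, n_obs=20, prior_scale=1.0):
    """Simulate datasets implied by the prior and a Bernoulli/logit likelihood."""
    # Prior on a single log-odds coefficient (illustrative model, not the paper's).
    beta = rng.normal(0.0, prior_scale, size=n_draws)
    x = rng.normal(0.0, 1.0, size=n_obs)            # fixed covariate values
    p = 1.0 / (1.0 + np.exp(-np.outer(beta, x)))    # likelihood: Bernoulli(logit^-1(beta * x))
    y = rng.binomial(1, p)                          # one simulated dataset per prior draw
    return y.mean(axis=1)                           # event rate of each simulated dataset

# A nominally "uninformative" wide prior concentrates prior predictive mass on
# extreme event rates -- a defect visible only once the likelihood is brought in.
for scale in (1.0, 10.0):
    rates = prior_predictive(prior_scale=scale)
    frac_extreme = np.mean((rates < 0.05) | (rates > 0.95))
    print(f"prior scale {scale:>4}: P(event rate outside [0.05, 0.95]) = {frac_extreme:.2f}")
```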

    LHC and dark matter phenomenology of the NUGHM

    We present a Bayesian analysis of the NUGHM, a supersymmetric scenario with non-universal gaugino masses and Higgs masses, including all the relevant experimental observables and dark matter constraints. The main merit of the NUGHM is that it essentially includes all the possibilities for dark matter (DM) candidates within the MSSM, since the neutralino and chargino spectrum (and composition) are as free as they can be in the general MSSM. We identify the most probable regions in the NUGHM parameter space, and study the associated phenomenology at the LHC and the prospects for DM direct detection. Requiring that the neutralino makes up all of the DM in the Universe, we identify two preferred regions around $m_{\chi_1^0} = 1\ {\rm TeV}$ and $3\ {\rm TeV}$, which correspond to the (almost) pure Higgsino and wino cases. There exist other marginal regions (e.g. the Higgs funnel), but with much less statistical weight. The prospects for detection at the LHC in this case are quite pessimistic, but future direct detection experiments, like LUX and XENON1T, will be able to probe this scenario. In contrast, when allowing other DM components, the prospects for detection at the LHC become more encouraging -- the most promising signals being, besides the production of gluinos and squarks, the production of the heavier chargino and neutralino states, which lead to WZ and same-sign WW final states -- and direct detection remains a complementary, and even more powerful, way to probe the scenario. Comment: the Sommerfeld enhancement has been included in the computation of the relic density and in the discussion of indirect-detection limits; some references have been added.

    Entropy and inference, revisited

    We study properties of popular near-uniform (Dirichlet) priors for learning undersampled probability distributions on discrete nonmetric spaces and show that they lead to disastrous results. However, an Occam-style phase space argument expands the priors into their infinite mixture and resolves most of the observed problems. This leads to a surprisingly good estimator of entropies of discrete distributions.
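    To illustrate the sensitivity the abstract describes (my own sketch, not the authors' estimator), the snippet below computes the standard closed-form posterior-mean entropy under a symmetric Dirichlet prior and shows how strongly it depends on the pseudocount in the undersampled regime; the counts and pseudocount values are made up for the example.

```python
# Hedged sketch: posterior-mean entropy under a symmetric Dirichlet(beta) prior,
# using the standard closed form E[H | counts] = psi(A+1) - sum_i (a_i/A) psi(a_i+1)
# with a_i = n_i + beta and A = sum_i a_i.
import numpy as np
from scipy.special import psi  # digamma

def dirichlet_posterior_mean_entropy(counts, beta):
    """Posterior-mean entropy (in nats) for a symmetric Dirichlet(beta) prior."""
    alpha = np.asarray(counts, dtype=float) + beta   # posterior Dirichlet parameters
    A = alpha.sum()
    return psi(A + 1.0) - np.sum((alpha / A) * psi(alpha + 1.0))

# 10 samples spread over a 1000-state space: heavily undersampled.
counts = np.zeros(1000)
counts[:10] = 1
for beta in (0.001, 0.02, 0.5, 1.0):
    h = dirichlet_posterior_mean_entropy(counts, beta)
    print(f"beta = {beta:<5}: estimated entropy = {h:.2f} nats")
# The estimate is essentially set by beta rather than the data -- the problem
# that an Occam-style mixture over Dirichlet priors is designed to fix.
```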

    Bayesian games with a continuum of states

    We show that every Bayesian game with purely atomic types has a measurable Bayesian equilibrium when the common knowledge relation is smooth. Conversely, for any common knowledge relation that is not smooth, there exists a type space that yields this common knowledge relation and payoffs such that the resulting Bayesian game will not have any Bayesian equilibrium. We show that our smoothness condition also rules out two paradoxes involving Bayesian games with a continuum of types: the impossibility of having a common prior on components when a common prior over the entire state space exists, and the possibility of interim betting/trade even when no such trade can be supported ex ante.

    Learning without Recall: A Case for Log-Linear Learning

    We analyze a model of learning and belief formation in networks in which agents follow Bayes' rule yet do not recall their history of past observations and cannot reason about how other agents' beliefs are formed. They do so by making rational inferences about their observations, which include a sequence of independent and identically distributed private signals as well as the beliefs of their neighboring agents at each time. Fully rational agents would successively apply Bayes' rule to the entire history of observations; this leads to forebodingly complex inferences due to lack of knowledge about the global network structure that causes those observations. To address these complexities, we consider a Learning without Recall model, which, in addition to providing a tractable framework for analyzing the behavior of rational agents in social networks, can also provide a behavioral foundation for the variety of non-Bayesian update rules in the literature. We present the implications of various choices for the time-varying priors of such agents and how this choice affects learning and its rate. Comment: in the 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys 2015).
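    A minimal sketch of a log-linear, Learning-without-Recall-style update, in an assumed form rather than the authors' exact rule: an agent geometrically pools its own and its neighbors' current beliefs and then applies one Bayes step to its newest private signal only. The weights and numbers below are illustrative.

```python
# Assumed-form sketch of a log-linear belief update over a finite hypothesis set.
import numpy as np

def log_linear_update(own_belief, neighbor_beliefs, weights, likelihood):
    """One update for one agent.

    own_belief:       (K,) agent's current belief over K hypotheses
    neighbor_beliefs: (M, K) beliefs reported by M neighbors
    weights:          (M+1,) nonnegative weights on self + neighbors, summing to 1
    likelihood:       (K,) probability of the newly observed private signal under each hypothesis
    """
    beliefs = np.vstack([own_belief, neighbor_beliefs])   # (M+1, K)
    log_pooled = weights @ np.log(beliefs)                 # geometric (log-linear) pooling
    log_post = log_pooled + np.log(likelihood)             # single Bayes step on the new signal only
    post = np.exp(log_post - log_post.max())               # normalize stably
    return post / post.sum()

# Two hypotheses, one neighbor, equal weight on self and neighbor:
print(log_linear_update(np.array([0.5, 0.5]),
                        np.array([[0.8, 0.2]]),
                        np.array([0.5, 0.5]),
                        np.array([0.7, 0.3])))
```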

    Bayes and empirical Bayes: do they merge?

    Bayesian inference is attractive for its coherence and good frequentist properties. However, it is a common experience that eliciting an honest prior may be difficult and, in practice, people often take an empirical Bayes approach, plugging empirical estimates of the prior hyperparameters into the posterior distribution. Even if not rigorously justified, the underlying idea is that, when the sample size is large, empirical Bayes leads to "similar" inferential answers. Yet precise mathematical results seem to be missing. In this work, we give a more rigorous justification in terms of merging of Bayes and empirical Bayes posterior distributions. We consider two notions of merging: Bayesian weak merging and frequentist merging in total variation. Since weak merging is related to consistency, we provide sufficient conditions for consistency of empirical Bayes posteriors. Also, we show that, under regularity conditions, the empirical Bayes procedure asymptotically selects the value of the hyperparameter for which the prior most favors the "truth". Examples include empirical Bayes density estimation with Dirichlet process mixtures.
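    A minimal sketch of the empirical Bayes idea in a conjugate normal-normal model (my example, not one from the paper): the prior variance is chosen by maximizing the marginal likelihood of the data and then plugged into the posterior, which is exactly the kind of plug-in procedure whose merging behaviour the paper studies.

```python
# Illustrative empirical Bayes in a normal-normal model: theta_i ~ N(0, tau^2),
# y_i | theta_i ~ N(theta_i, sigma^2); tau^2 is estimated from the marginal likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma = 1.0                                  # known observation noise
theta_true = rng.normal(0.0, 2.0, size=50)   # simulated "true" group means
y = theta_true + rng.normal(0.0, sigma, size=50)

def neg_marginal_loglik(log_tau2):
    # Marginally, y_i ~ Normal(0, tau^2 + sigma^2) under the Normal(0, tau^2) prior.
    v = np.exp(log_tau2) + sigma**2
    return 0.5 * np.sum(np.log(2 * np.pi * v) + y**2 / v)

res = minimize_scalar(neg_marginal_loglik, bounds=(-10.0, 10.0), method="bounded")
tau2_hat = np.exp(res.x)

# Plug-in (empirical Bayes) posterior means, shrunk toward the prior mean of zero:
shrinkage = tau2_hat / (tau2_hat + sigma**2)
posterior_means = shrinkage * y
print(f"estimated tau^2 = {tau2_hat:.2f}, shrinkage factor = {shrinkage:.2f}")
```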

    Supersymmetry Without Prejudice

    We begin an exploration of the physics associated with the general CP-conserving MSSM with Minimal Flavor Violation, the pMSSM. The 19 soft SUSY breaking parameters in this scenario are chosen so as to satisfy all existing experimental and theoretical constraints, assuming that the WIMP is a conventional thermal relic, i.e., the lightest neutralino. We scan this parameter space twice, using both flat and log priors for the soft SUSY breaking mass parameters, and compare the results, which yield similar conclusions. Detailed constraints from both the LEP and Tevatron searches play a particularly important role in obtaining our final model samples. We find that the pMSSM leads to a much broader set of predictions for the properties of the SUSY partners, as well as for a number of experimental observables, than those found in any of the conventional SUSY breaking scenarios such as mSUGRA. This set of models can easily lead to atypical expectations for SUSY signals at the LHC.
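    The flat-versus-log-prior comparison can be made concrete with a toy sketch (illustrative only, not the paper's scan code): draw a single soft mass parameter from each prior over an assumed range and compare how much prior mass each places in the low-mass region.

```python
# Toy comparison of flat and log priors on one soft SUSY-breaking mass parameter.
import numpy as np

rng = np.random.default_rng(2)
m_low, m_high = 50.0, 4000.0   # assumed scan range in GeV (not the paper's exact range)

flat_draws = rng.uniform(m_low, m_high, size=100_000)
log_draws = np.exp(rng.uniform(np.log(m_low), np.log(m_high), size=100_000))

# Fraction of prior mass below 1 TeV under each prior:
for name, draws in [("flat", flat_draws), ("log", log_draws)]:
    print(f"{name:>4} prior: P(m < 1 TeV) = {np.mean(draws < 1000.0):.2f}")
# A log prior puts far more weight on the low-mass (collider-accessible) region,
# which is why running the scan under both priors is a useful robustness check.
```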

    MSSM Forecast for the LHC

    We perform a forecast of the MSSM with universal soft terms (CMSSM) for the LHC, based on an improved Bayesian analysis. We do not incorporate ad hoc measures of fine-tuning to penalize unnatural possibilities: such penalization arises from the Bayesian analysis itself when the experimental value of $M_Z$ is considered. This allows us to scan the whole parameter space, allowing arbitrarily large soft terms. Still, the low-energy region is statistically favoured (even before including dark matter or g-2 constraints). Contrary to other studies, the results are almost unaffected by changing the upper limits taken for the soft terms. The results are also remarkably stable when using flat or logarithmic priors, a fact that arises from the larger statistical weight of the low-energy region in both cases. We then incorporate all the important experimental constraints into the analysis, obtaining a map of the probability density of the MSSM parameter space, i.e. the forecast of the MSSM. Since not all the experimental information is equally robust, we perform separate analyses depending on the group of observables used. When only the most robust ones are used, the favoured region of the parameter space contains a significant portion outside the LHC reach. This effect gets reinforced if the Higgs mass is not close to its present experimental limit, and persists when dark matter constraints are included. Only when the g-2 constraint (based on $e^+e^-$ data) is considered is the preferred region (for $\mu>0$) well inside the LHC scope. We also perform a Bayesian comparison of the positive- and negative-$\mu$ possibilities.