
    Higher Criticism for Large-Scale Inference, Especially for Rare and Weak Effects

    In modern high-throughput data analysis, researchers perform a large number of statistical tests, expecting to find perhaps a small fraction of significant effects against a predominantly null background. Higher Criticism (HC) was introduced to determine whether there are any nonzero effects; more recently, it was applied to feature selection, where it provides a method for selecting useful predictive features from a large body of potentially useful features, among which only a rare few will prove truly useful. In this article, we review the basics of HC in both the testing and feature selection settings. HC is a flexible idea that adapts easily to new situations; we point out simple adaptations to clique detection and bivariate outlier detection. HC, although still early in its development, is seeing increasing interest from practitioners; we illustrate this with worked examples. HC is also computationally efficient, which gives it useful leverage in today's increasingly relevant "Big Data" settings. We also review the underlying theoretical "ideology" behind HC. The Rare/Weak (RW) model is a theoretical framework that simultaneously controls the size and prevalence of useful/significant items among the useless/null bulk. The RW model shows that HC has important advantages over better-known procedures such as False Discovery Rate (FDR) control and family-wise error rate (FWER) control, in particular certain optimality properties. We discuss the rare/weak phase diagram, a way to visualize clearly the class of RW settings where the true signals are so rare or so weak that detection and feature selection are simply impossible, and a way to understand the known optimality properties of HC.
    Published at http://dx.doi.org/10.1214/14-STS506 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
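
    The HC statistic itself is simple to compute. As a concrete illustration (not reproduced from the article), the sketch below implements one standard form of HC: sort the p-values, standardize the gap between each empirical quantile i/N and the sorted p-value p_(i) by its binomial null standard deviation, and take the maximum over the smallest alpha0*N p-values; the cutoff alpha0 = 0.5 is a common convention.

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvalues, alpha0=0.5):
    """One common form of the Higher Criticism statistic:
    max over i <= alpha0*N of sqrt(N) * (i/N - p_(i)) / sqrt(p_(i)(1 - p_(i)))."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    n = p.size
    k = max(1, int(alpha0 * n))             # restrict to the smallest alpha0*N p-values
    p = np.clip(p[:k], 1e-12, 1 - 1e-12)    # guard against division by zero
    i = np.arange(1, k + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    return hc.max()

# Example: mostly null z-scores plus a rare, moderately weak shifted fraction
rng = np.random.default_rng(0)
z = rng.normal(size=10_000)
z[:30] += 3.0                               # 0.3% of tests carry a real effect
pvals = 2 * norm.sf(np.abs(z))              # two-sided p-values
print(higher_criticism(pvals))              # a large HC value suggests non-null effects
```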

    The Objectivity of Subjective Bayesianism

    Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases, (ii) evidence judgments can vary substantially between scientists, (iii) it is not suited for informing policy decisions. My paper rebuts these concerns by bridging the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference. Second, I argue that the involved senses of objectivity are epistemically inert. Third, I show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity, most notably by increasing the transparency of scientific reasoning.

    Modelling of spatial effects in count data

    In this thesis, spatial structures in discrete-valued count observations are modelled. More precisely, a global spatial autocorrelation parameter is estimated within the framework of a nonlinear count data regression model. For this purpose, cross-sectional and panel count data models are developed which incorporate spatial autocorrelation and allow for additional explanatory variables. The proposed models include the so-called "spatial linear feedback model" for cross-sectional data as well as for panel data with fixed effects, which is estimated by maximum likelihood. Additionally, two approaches for distribution-free panel estimation using GMM are presented. The models are applied to a cross-sectional U.S. start-up firm births data set and a panel data set with crime counts from Pittsburgh.
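
    The abstract does not spell out the exact specification of the "spatial linear feedback model," so the following sketch is purely illustrative: it assumes a cross-sectional Poisson count regression whose mean satisfies mu = rho*W*mu + exp(X*beta), with rho the global spatial autocorrelation parameter and W a row-standardized spatial weight matrix, and estimates (rho, beta) by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def fit_spatial_poisson(y, X, W):
    """ML estimation for a Poisson count regression with a linear spatial
    feedback in the mean:  mu = rho * W @ mu + exp(X @ beta),
    equivalently  mu = solve(I - rho*W, exp(X @ beta)).
    Assumed illustrative specification, not the thesis's exact model."""
    n, k = X.shape
    I = np.eye(n)

    def negloglik(theta):
        rho, beta = theta[0], theta[1:]
        try:
            mu = np.linalg.solve(I - rho * W, np.exp(X @ beta))
        except np.linalg.LinAlgError:       # I - rho*W singular for this rho
            return np.inf
        if np.any(mu <= 0):                 # feedback pushed the mean negative
            return np.inf
        # Poisson log-likelihood, dropping the constant log(y!) term
        return -(y @ np.log(mu) - mu.sum())

    res = minimize(negloglik, np.zeros(1 + k), method="Nelder-Mead")
    return res.x[0], res.x[1:]              # (rho_hat, beta_hat)
```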

    Probability and Measurement Uncertainty in Physics - a Bayesian Primer

    Bayesian statistics is based on the subjective definition of probability as "degree of belief" and on Bayes' theorem, the basic tool for assigning probabilities to hypotheses by combining a priori judgements and experimental information. This was the original point of view of Bayes, Bernoulli, Gauss, Laplace, etc., and contrasts with later "conventional" (pseudo-)definitions of probability, which implicitly presuppose the concept of probability itself. These notes show that the Bayesian approach is the natural one for data analysis in the most general sense, and for assigning uncertainties to the results of physical measurements, while at the same time resolving philosophical aspects of the problems. The approach, although little known and usually misunderstood in the High Energy Physics community, has become the standard way of reasoning in several fields of research and has recently been adopted by the international metrology organizations in their recommendations for assessing measurement uncertainty. These notes describe a general model for treating uncertainties originating from random and systematic errors in a consistent way, and include examples of applications of the model in High Energy Physics, e.g. "confidence intervals" in different contexts, upper/lower limits, treatment of "systematic errors", hypothesis tests, and unfolding.
    100 pages, 13 figures; 3.4 MB PostScript also available at http://zow00.desy.de:8000/zeus_papers/ZEUS_PAPERS/DESY-95-242.p
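
    The core mechanism the notes describe, combining an a priori judgement with experimental information via Bayes' theorem, can be seen in the simplest conjugate case. The sketch below (an illustration, not taken from the notes) updates a Gaussian prior for a true value with a Gaussian-error measurement; the posterior weights prior and measurement by their precisions (inverse variances).

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_sd, meas, meas_sd):
    """Conjugate Bayes update: Gaussian prior x Gaussian likelihood.
    The posterior mean is the precision-weighted average of the prior
    mean and the measured value; the posterior variance is the inverse
    of the summed precisions."""
    w_prior = 1.0 / prior_sd**2
    w_meas = 1.0 / meas_sd**2
    post_mean = (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)
    post_sd = np.sqrt(1.0 / (w_prior + w_meas))
    return post_mean, post_sd

# A measurement of 10.3 +/- 0.4 combined with a broad prior 10.0 +/- 2.0:
# the data dominate, and the posterior is roughly 10.29 +/- 0.39
print(gaussian_posterior(10.0, 2.0, 10.3, 0.4))
```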