534 research outputs found

    Connecting model-based and model-free approaches to linear least squares regression

    In a regression setting with response vector $\mathbf{y} \in \mathbb{R}^n$ and given regressor vectors $\mathbf{x}_1,\ldots,\mathbf{x}_p \in \mathbb{R}^n$, a typical question is to what extent $\mathbf{y}$ is related to these regressor vectors, specifically, how well $\mathbf{y}$ can be approximated by a linear combination of them. Classical methods for this question are based on statistical models for the conditional distribution of $\mathbf{y}$, given the regressor vectors $\mathbf{x}_j$. Davies and Duembgen (2020) proposed a model-free approach in which all observation vectors $\mathbf{y}$ and $\mathbf{x}_j$ are viewed as fixed, and the quality of the least squares fit of $\mathbf{y}$ is quantified by comparing it with the least squares fit resulting from $p$ independent white noise regressor vectors. The purpose of the present note is to explain, in a general context, why the model-based and model-free approaches yield the same p-values, although the interpretation of these p-values differs under the two paradigms.
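    The comparison with white-noise regressors has an exact distributional form in the simplest case: for a single standard Gaussian regressor $\mathbf{g}$ and a fixed $\mathbf{y}$, the squared correlation (through the origin) between the two follows a Beta(1/2, (n-1)/2) law by spherical symmetry. A minimal numpy check of this fact (illustrative only; the note itself works in greater generality):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
y = rng.normal(size=n)                 # any fixed response vector will do

# Draw many white-noise regressors and record the squared correlation
# (squared cosine of the angle) between each one and the fixed y.
reps = 20_000
g = rng.normal(size=(reps, n))
r2 = (g @ y) ** 2 / (np.sum(g**2, axis=1) * (y @ y))

# By spherical symmetry of N(0, I_n), r2 ~ Beta(1/2, (n-1)/2) exactly,
# whose mean is (1/2) / (1/2 + (n-1)/2) = 1/n.
print(r2.mean(), 1 / n)
```

    Only the direction of the Gaussian regressor matters, which is why the distribution of the squared correlation does not depend on what the fixed vector $\mathbf{y}$ actually is.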

    Densities, spectral densities and modality

    This paper considers the problem of specifying a simple approximating density function for a given data set (x_1,...,x_n). Simplicity is measured by the number of modes, but several different definitions of approximation are introduced. The taut string method is used to control the number of modes and to produce candidate approximating densities. Refinements are introduced that improve the local adaptivity of the procedures, and the method is extended to spectral densities.
    Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000036
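    The taut string construction itself is beyond the scope of a short sketch, but the simplicity measure it controls is easy to make concrete: the number of modes of a discretised density is its number of local maxima, with flat stretches counted once. A minimal, hypothetical mode counter (not part of the paper's method):

```python
import numpy as np

def count_modes(f):
    """Count the local maxima (modes) of a discretised density f,
    treating a plateau of equal values as a single point."""
    # Collapse runs of exactly equal values so each plateau counts once.
    g = [f[0]]
    for v in f[1:]:
        if v != g[-1]:
            g.append(v)
    modes = 0
    for i in range(1, len(g) - 1):
        if g[i] > g[i - 1] and g[i] > g[i + 1]:
            modes += 1
    # An endpoint higher than its sole neighbour is also a mode.
    if len(g) >= 2 and g[0] > g[1]:
        modes += 1
    if len(g) >= 2 and g[-1] > g[-2]:
        modes += 1
    return modes

print(count_modes([0.1, 0.4, 0.2, 0.5, 0.1]))  # 2: a bimodal profile
print(count_modes([1.0, 2.0, 3.0, 2.0, 1.0]))  # 1: unimodal
```

    The taut string method then searches for the candidate density with the fewest modes, in this sense, that still approximates the data adequately.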

    Breakdown and Groups II

    The notion of breakdown point was introduced by Hampel (1968, 1971) and has since played an important role in the theory and practice of robust statistics. In Davies and Gather (2004) it was argued that the success of the concept is connected to the existence of a group of transformations on the sample space and the linking of breakdown and equivariance. For example, the highest breakdown point of any translation equivariant functional on the real line is 1/2, whereas without equivariance considerations the highest breakdown point is the trivial upper bound of 1.
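    The finite-sample version of the concept can be illustrated directly: replace observations one at a time by an arbitrarily large value and record the smallest replacement fraction that carries the estimate off to infinity. A hedged sketch (the helper and threshold are hypothetical, chosen only for illustration):

```python
import numpy as np

def breakdown_fraction(estimator, x, big=1e12):
    """Smallest fraction of observations that must be replaced by the
    huge value `big` before the estimate leaves the data range by far
    (threshold 1e6 is an arbitrary illustrative cut-off)."""
    n = len(x)
    for k in range(n + 1):
        z = np.array(x, dtype=float)
        z[:k] = big                   # contaminate k observations
        if abs(estimator(z)) > 1e6:
            return k / n
    return 1.0

x = list(range(1, 12))                # 11 clean observations
print(breakdown_fraction(np.mean, x))    # 1/11: one bad value ruins the mean
print(breakdown_fraction(np.median, x))  # 6/11: just over half are needed
```

    The median's replacement fraction of 6/11 approaches the 1/2 bound for translation equivariant functionals as the sample size grows, while the mean's 1/n tends to zero.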

    Robust Statistics

    The first example involves the real data given in Table 1, which are the results of an interlaboratory test. The boxplots are shown in Fig. 1, where the dotted line denotes the mean of the observations and the solid line the median. We note that only the results of Laboratories 1 and 3 lie below the mean, whereas all the remaining laboratories return larger values. In the case of the median, 7 of the readings coincide with the median, 24 readings are smaller and 24 are larger. A glance at Fig. 1 suggests that in the absence of further information Laboratories 1 and 3 should be treated as outliers. This is the course we recommend, although the issues involved require careful thought. For the moment we note simply that the median is a robust statistic whereas the mean is not.
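    The phenomenon is easy to reproduce on synthetic readings (the values below are NOT the Table 1 data, which are not reproduced here; they merely mimic the pattern of two low-reading laboratories). A common robust flagging rule, a Hampel-type identifier based on the median and the scaled MAD, is sketched below:

```python
import numpy as np

# Hypothetical interlaboratory-style readings: most labs agree near 2.2,
# two labs report clearly lower values.
x = np.array([2.2, 1.1, 2.1, 2.3, 2.2, 2.4, 2.3, 1.0, 2.2, 2.3])

med = np.median(x)
mad = 1.4826 * np.median(np.abs(x - med))  # scaled MAD, consistent at the Gaussian
outliers = np.abs(x - med) / mad > 3.5     # Hampel-type identifier

print(med, x.mean())        # median 2.2; mean 2.01, dragged below all clean labs
print(np.where(outliers)[0])  # indices of the two low-reading labs
```

    The two low readings pull the mean below every uncontaminated value, exactly the effect seen in Fig. 1, while the median and the resulting outlier flags are unaffected by them.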

    Confidence sets and non-parametric regression

    Keywords: nonparametric regression, shape regularization, confidence bounds

    Covariate Selection Based on a Model-free Approach to Linear Regression with Exact Probabilities

    In this paper we propose a completely new approach to the problem of covariate selection in linear regression. It is intuitive, very simple, fast and powerful, non-frequentist and non-Bayesian. It does not overfit, there is no shrinkage of the least squares coefficients, and it is model-free. A covariate or a set of covariates is included only if they are better in the sense of least squares than the same number of Gaussian covariates consisting of i.i.d. $N(0,1)$ random variables. The degree to which they are better is measured by the P-value, which is the probability that the Gaussian covariates are better. This probability is given in terms of the Beta distribution; it is exact and it holds for the data at hand whatever this may be. The idea extends to a stepwise procedure, the main contribution of the paper, where the best of the remaining covariates is only accepted if it is better than the best of the same number of random Gaussian covariates. Again this probability is given in terms of the Beta distribution, is exact, and holds for the data at hand. We use a version with default parameters which works for a large collection of known data sets with up to a few hundred thousand covariates. The computing time for the largest data sets was about four seconds, and it outperforms all other selection procedures of which we are aware. The paper gives the results of simulations, applications to real data sets and theorems on the asymptotic behaviour under the standard linear model. An R-package gausscov is available.
    Comment: 40 pages, 5 figures
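    A Monte-Carlo analogue of the single-covariate P-value makes the idea concrete: simulate i.i.d. $N(0,1)$ covariates and record how often one of them fits $\mathbf{y}$ at least as well as the candidate covariate. The helper below is hypothetical and for illustration only; the gausscov package uses the exact Beta-distribution formula rather than simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def r2(v, y):
    """Squared correlation (through the origin) between v and y,
    i.e. the relative least-squares improvement from regressing y on v."""
    return (v @ y) ** 2 / ((v @ v) * (y @ y))

def gaussian_pvalue(y, x, reps=50_000):
    """Relative frequency with which a single i.i.d. N(0,1) covariate
    fits y at least as well as x does (Monte-Carlo sketch)."""
    g = rng.normal(size=(reps, len(y)))
    r2_g = (g @ y) ** 2 / (np.sum(g**2, axis=1) * (y @ y))
    return float(np.mean(r2_g >= r2(x, y)))

n = 50
y = rng.normal(size=n)
x_good = y + 0.5 * rng.normal(size=n)   # informative covariate
x_junk = rng.normal(size=n)             # pure-noise covariate
p_good = gaussian_pvalue(y, x_good)
p_junk = gaussian_pvalue(y, x_junk)
print(p_good, p_junk)  # p_good is tiny; p_junk is roughly uniform on (0,1)
```

    For the stepwise version described above, the observed best of the remaining covariates would be compared against the best of the same number of simulated Gaussian covariates, i.e. the maximum of the simulated squared correlations rather than a single one.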