
    Quantile-Quantile Methodology -- Detailed Results

    The linear quantile-quantile relationship provides an easy-to-implement yet effective tool for transformation to, and testing for, normality. Its good performance is verified in this report.
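    As a rough illustration of the idea, the sorted sample can be plotted against standard normal quantiles and the straightness of that plot summarized by a correlation near 1 under normality. The plotting positions and comparison below are generic choices, not necessarily those of the report.

```python
import numpy as np
from scipy import stats

def qq_normality_stat(x):
    """Correlation between the sorted data and standard normal quantiles.

    Values near 1 indicate an approximately linear QQ relationship,
    i.e. approximate normality. Generic sketch, not the report's procedure.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Blom-style plotting positions for normal order statistics
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    q = stats.norm.ppf(p)
    return np.corrcoef(q, x)[0, 1]

rng = np.random.default_rng(0)
r_normal = qq_normality_stat(rng.normal(size=200))       # close to 1
r_skewed = qq_normality_stat(rng.exponential(size=200))  # visibly lower
print(round(r_normal, 3), round(r_skewed, 3))
```

    The normal sample tracks the reference line much more closely than the skewed one, which is the behavior a QQ-based test exploits.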

    Variable Selection for 1D Regression Models

    Variable selection, the search for j relevant predictor variables from a group of p candidates, is a standard problem in regression analysis. The class of 1D regression models is a broad class that includes generalized linear models. We show that existing variable selection algorithms, originally meant for multiple linear regression and based on ordinary least squares and Mallows’ Cp, can also be used for 1D models. Graphical aids for variable selection are also provided.
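    A sketch of the kind of OLS/Mallows' Cp subset search the abstract refers to, on toy data (the simulated model, seed, and all names are illustrative, not from the paper):

```python
import itertools
import numpy as np

def mallows_cp(X, y, subset, sigma2_full, n):
    """Cp = SSE(subset)/sigma2_full - n + 2*(k+1), with k = |subset| plus intercept."""
    Xs = np.column_stack([np.ones(n), X[:, list(subset)]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    sse = np.sum((y - Xs @ beta) ** 2)
    return sse / sigma2_full - n + 2 * (len(subset) + 1)

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
# only the first two of the p candidate predictors matter in this toy model
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)

# full-model error variance estimate used inside Cp
Xf = np.column_stack([np.ones(n), X])
beta_f, *_ = np.linalg.lstsq(Xf, y, rcond=None)
sigma2_full = np.sum((y - Xf @ beta_f) ** 2) / (n - p - 1)

# all-subsets search, keeping the subset with the smallest Cp
best = min(
    (s for k in range(1, p + 1) for s in itertools.combinations(range(p), k)),
    key=lambda s: mallows_cp(X, y, s, sigma2_full, n),
)
print(best)
```

    The same OLS-based search can be run on data from a 1D model; the abstract's point is that the ranking it produces remains useful there.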

    Robust Regression with High Coverage

    An important parameter for several high breakdown regression algorithm estimators is the number of cases given weight one, called the coverage of the estimator. Increasing the coverage is believed to result in a more stable estimator, but the price paid for this stability is greatly decreased resistance to outliers. A simple modification of the algorithm can greatly increase the coverage, and hence the estimator's statistical performance, while maintaining high outlier resistance.
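    The coverage trade-off can be illustrated with a simple iterated trimmed-OLS fit, a generic stand-in rather than the paper's modified algorithm: with roughly half coverage the fit resists 40% contamination, while 90% coverage forces contaminated cases into the fit.

```python
import numpy as np

def fit_with_coverage(X, y, coverage, steps=20):
    """Iterated OLS on the `coverage` cases with smallest absolute residuals.

    The kept cases get weight one, the rest weight zero; a generic
    coverage-based algorithm sketch, not the paper's modification.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(steps):
        keep = np.argsort(np.abs(y - X @ beta))[:coverage]
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)  # true slope 2
y[:120] = 30.0 + rng.normal(scale=1.0, size=120)   # 40% outliers

b_half = fit_with_coverage(X, y, coverage=n // 2)        # ~50% coverage
b_high = fit_with_coverage(X, y, coverage=int(0.9 * n))  # 90% coverage
print(b_half[1], b_high[1])
```

    The half-coverage slope stays near 2 while the 90%-coverage fit, which must retain many contaminated cases, is pulled away from it.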

    Inconsistency of Resampling Algorithms for High Breakdown Regression Estimators and a New Algorithm

    Since high breakdown estimators are impractical to compute exactly in large samples, approximate algorithms are used. The algorithm generally produces an estimator with a lower consistency rate and breakdown value than the exact theoretical estimator. This discrepancy grows with the sample size, with the implication that huge computations are needed for good approximations in large high-dimensional samples. The workhorse for high breakdown estimation (HBE) has been the “elemental set”, or “basic resampling”, algorithm. This turns out to be completely ineffective in high dimensions with high levels of contamination. However, enriching it with a “concentration” step turns it into a method that is able to handle even high levels of contamination, provided the regression outliers are located on random cases. It remains ineffective if the regression outliers are concentrated on high leverage cases. We focus on the multiple regression problem, but several of the broad conclusions – notably the inadequacy of fixed numbers of elemental starts – are relevant to multivariate location and dispersion estimation as well. We introduce a new algorithm – the “X-cluster” method – for large high-dimensional multiple regression data sets that are beyond the reach of standard resampling methods. This algorithm departs sharply from current HBE algorithms in that, even at a constant percentage of contamination, it is more effective the larger the sample, making a compelling case for using it in the large-sample situations that current methods serve poorly. A multi-pronged analysis, using both traditional OLS and L1 methods along with newer resistant techniques, will often detect departures from the multiple regression model that cannot be detected by any single estimator.
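    A minimal sketch of elemental starts enriched with concentration steps, scored by an LTS-style trimmed criterion (a generic illustration of the enrichment the abstract describes, not the X-cluster algorithm; all constants are illustrative):

```python
import numpy as np

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def concentrate(X, y, beta, h, steps=10):
    # concentration ("C") steps: refit OLS on the h smallest-residual cases
    for _ in range(steps):
        keep = np.argsort(np.abs(y - X @ beta))[:h]
        beta = ols(X[keep], y[keep])
    return beta

def trimmed_sse(X, y, beta, h):
    # LTS-style criterion: sum of the h smallest squared residuals
    r2 = np.sort((y - X @ beta) ** 2)
    return r2[:h].sum()

rng = np.random.default_rng(2)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
y[:60] += 50.0  # 30% contamination, located on random cases

h = n // 2
fits = []
for _ in range(20):  # elemental starts: exact fits to p random cases
    s = rng.choice(n, size=p, replace=False)
    fits.append(concentrate(X, y, ols(X[s], y[s]), h))
beta_hat = min(fits, key=lambda b: trimmed_sse(X, y, b, h))
print(np.round(beta_hat, 2))
```

    With the contamination on random cases, concentration pulls even rough elemental starts toward the clean fit; a fixed number of raw elemental starts alone would fail far more often as dimension and contamination grow.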

    Behavior of Elemental Sets in Regression

    Elemental sets are used to produce trial estimates b of the regression coefficients β. If b0 minimizes ||b − β|| among all elemental fits b, then ||b0 − β|| = O_P(n^{-1}), regardless of the criterion used. For any estimator b_A, ||b_A − β|| is at best O_P(n^{-1/2}). Hence restricting attention to elemental fits introduces asymptotically negligible error.
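    The rate claim can be checked empirically in simple linear regression: among all two-case elemental fits, the best one typically lands far closer to β than OLS does (a simulation sketch; the model and constants are illustrative):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 400
beta = np.array([1.0, 2.0])  # true (intercept, slope)
x = rng.normal(size=n)
y = beta[0] + beta[1] * x + rng.normal(scale=1.0, size=n)

# OLS: error of order n^{-1/2}
X = np.column_stack([np.ones(n), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# best elemental fit: exact line through each pair of cases,
# keeping the fit closest to the true beta (error of order n^{-1})
best = np.inf
for i, j in itertools.combinations(range(n), 2):
    dx = x[j] - x[i]
    if abs(dx) < 1e-8:
        continue  # skip near-degenerate elemental sets
    slope = (y[j] - y[i]) / dx
    intercept = y[i] - slope * x[i]
    best = min(best, np.hypot(intercept - beta[0], slope - beta[1]))

print(best, np.linalg.norm(b_ols - beta))
```

    Of course the best elemental fit is only identifiable here because β is known; the abstract's point is about what the elemental restriction costs in principle, not about a usable estimator.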

    Diagnosis of Some Model Deficiencies Using Recursive Residuals

    1 online resource (PDF, 48 pages)

    Formal Inference-based Recursive Modeling

    1 online resource (PDF, 46 pages)

    Macrecur - Diagnostics for Use with Multiple Regression Recursive Residuals

    1 online resource (PDF, 16 pages)