594 research outputs found

    A survey of robust statistics

    We argue that robust statistics has multiple goals, which are not always aligned. Robust thinking grew out of data analysis and the realisation that empirical evidence is at times supported by merely one or a few observations. The paper examines the outgrowth from this criticism of the statistical method over the last few decades.

    Least-absolute-deviations fits for generalized linear models

    The fitting by quasi-likelihoods is based on Euclidean distance and is thereby related to the least-squares norm. This paper examines the consequences of replacing the L2-norm by the L1-norm in the derivation of quasi-likelihoods. Since the least-absolute-deviations centre of a distribution is its median rather than its mean, the natural models for L1-fitting involve medians. However, even if we model the mean response rather than the median response, an L1-type criterion is applicable and leads to alternatives to the maximum likelihood fit.
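    The key contrast behind the abstract (the L2 criterion is minimized by the mean, the L1 criterion by the median) can be shown with a toy computation. This is a minimal sketch of that one fact, not the paper's quasi-likelihood machinery:

    ```python
    import numpy as np

    def l2_center(y):
        # argmin_c sum (y_i - c)^2 is the mean
        return np.mean(y)

    def l1_center(y):
        # argmin_c sum |y_i - c| is the median
        return np.median(y)

    y = np.array([1.0, 2.0, 3.0, 100.0])  # one outlying observation
    print(l2_center(y))  # 26.5 -- pulled toward the outlier
    print(l1_center(y))  # 2.5  -- resistant to the outlier
    ```

    The same resistance to outliers is what makes L1-type criteria attractive for robust fitting of regression-style models.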

    Optimality in multiple comparison procedures

    When many (m) null hypotheses are tested with a single dataset, controlling the number of false rejections is often the principal consideration. Two popular error rates are the probability of making at least one false discovery (the family-wise error rate, FWER) and the expected fraction of false discoveries among all rejections (the false discovery rate, FDR). Scaled multiple comparison error rates form a new family that bridges the gap between these two extremes. For example, the Scaled Expected Value (SEV) limits the number of false positives relative to an arbitrary increasing function of the number of rejections, that is, E(FP/s(R)). We discuss the practical problem of choosing which procedure to use, with elements of an optimality theory, by considering the number of false rejections FP separately from the number of correct rejections TP. Using this framework we show how to choose an element of the new family mentioned above.
    Comment: arXiv admin note: text overlap with arXiv:1112.451
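    The two classical extremes that the scaled family bridges correspond to standard procedures. The sketch below shows Bonferroni (FWER control) and Benjamini-Hochberg (FDR control) on made-up p-values; it illustrates the gap between the extremes, not the paper's scaled procedures:

    ```python
    import numpy as np

    def bonferroni(pvals, alpha=0.05):
        # FWER control: reject H_i whenever p_i <= alpha / m
        p = np.asarray(pvals)
        return p <= alpha / len(p)

    def benjamini_hochberg(pvals, alpha=0.05):
        # FDR control: find the largest k with p_(k) <= k * alpha / m
        # and reject the k smallest p-values
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        passed = np.nonzero(p[order] <= np.arange(1, m + 1) * alpha / m)[0]
        reject = np.zeros(m, dtype=bool)
        if passed.size:
            reject[order[: passed[-1] + 1]] = True
        return reject

    pvals = [0.001, 0.009, 0.024, 0.033, 0.20, 0.60]
    print(bonferroni(pvals).sum())          # 1 rejection (strict FWER control)
    print(benjamini_hochberg(pvals).sum())  # 4 rejections (more liberal FDR control)
    ```

    On the same data, FDR control rejects more hypotheses than FWER control; a scaled error rate E(FP/s(R)) sits between these two behaviours depending on the choice of s.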

    Conditionally Optimal Weights of Evidence

    Abstract: A weight of evidence is a calibrated statistic whose values in [0, 1] indicate the degree of agreement between the data and either of two hypotheses, one treated as the null (H0) and the other as the alternative (H1). A value of zero means perfect agreement with the null, whereas a value of one means perfect agreement with the alternative. The optimality we consider is minimal mean squared error (MSE) under the alternative while keeping the MSE under the null below a fixed bound. This paper studies such statistics from a conditional point of view, in particular for location and scale models.
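    One simple construction of a statistic with values in [0, 1] is the posterior probability of H1 under equal prior weight. The example below assumes a normal location model with hypothetical means mu0 and mu1; it illustrates the calibration idea only, and is not the paper's optimal weight of evidence:

    ```python
    import math

    def weight_of_evidence(x, mu0=0.0, mu1=1.0, sigma=1.0):
        # Illustrative construction: W(x) = f1(x) / (f0(x) + f1(x)),
        # the posterior probability of H1 under equal priors, which
        # lies in [0, 1]: 0 favours H0, 1 favours H1.
        def normal_pdf(x, mu):
            z = (x - mu) / sigma
            return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

        f0, f1 = normal_pdf(x, mu0), normal_pdf(x, mu1)
        return f1 / (f0 + f1)

    print(round(weight_of_evidence(0.5), 3))  # 0.5 -- equidistant from both hypotheses
    print(round(weight_of_evidence(3.0), 3))  # 0.924 -- strong agreement with H1
    ```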

    The Choice of Polar Stationary Phases for Gas-Liquid Chromatography by Statistical Analysis of Retention Data

    We have studied the retention indices of 127 volatile substances on a C78-paraffin and on its seven nearly isochoric and isomorphous polar derivatives over a temperature range of 90-210°C. The retention index of a substance on the C78-paraffin has been considered as the standard. The additional retention on a polar derivative is given by the difference of its retention index on the polar solvent and on the C78-paraffin. Statistical analyses of the additional retention have shown that, with respect to retention, the seven polar solvents can be classified into three groups: Type I: TTF (tetrakistrifluoromethyl) and MTF (monotrifluoromethyl); Type II: PCN (primary cyano) and PSH (primary thiol); and Type III: TMO (tetramethoxy), SOH (secondary alcohol) and POH (primary alcohol). It is shown that these three types are best represented by the solvents TTF, PCN and TMO. PSH (primary thiol) is aligned with PCN at temperatures up to about 150°C, but is similar to TMO at 210°C.
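    The kind of grouping the abstract describes can be mimicked by clustering solvents whose additional-retention profiles are highly correlated across probe substances. The ΔI values below are invented for illustration only, not measurements from the paper:

    ```python
    import numpy as np

    # Hypothetical additional-retention profiles (rows: solvents,
    # columns: probe substances); real values would come from the
    # measured retention indices.
    solvents = ["TTF", "MTF", "PCN", "PSH", "TMO", "SOH", "POH"]
    delta_I = np.array([
        [-30, -25, -10,  -5],   # TTF
        [-28, -24, -11,  -6],   # MTF
        [ 40,  80,  20,  10],   # PCN
        [ 42,  78,  22,  11],   # PSH
        [ 60,  30,  90,  70],   # TMO
        [ 58,  32,  88,  72],   # SOH
        [ 61,  29,  91,  69],   # POH
    ], dtype=float)

    # Greedy grouping: a solvent joins a group only if its profile
    # correlates above the threshold with every current member.
    corr = np.corrcoef(delta_I)
    groups = []
    for i in range(len(solvents)):
        for g in groups:
            if all(corr[i, j] > 0.99 for j in g):
                g.append(i)
                break
        else:
            groups.append([i])

    named = [[solvents[i] for i in g] for g in groups]
    print(named)  # [['TTF', 'MTF'], ['PCN', 'PSH'], ['TMO', 'SOH', 'POH']]
    ```

    On this synthetic data the procedure recovers the three solvent types reported in the abstract; the paper's actual analysis, and the temperature dependence of PSH, require the measured retention data.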
