
    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
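
    To make the distinction concrete, here is a minimal sketch (not from the paper; the data and analytic choices are invented for illustration): every "team" analyzes the same fixed sample but applies a different defensible preprocessing rule, and the non-standard error is the dispersion of their point estimates, while the standard error reflects sampling uncertainty alone.

        import numpy as np

        rng = np.random.default_rng(0)

        # One fixed dataset: every "team" sees exactly the same sample.
        data = rng.standard_t(df=3, size=1_000)

        # Hypothetical analytic choices: each team winsorizes at a different
        # level before estimating the mean (a stand-in for real EGP variation).
        estimates = []
        for q in [0.00, 0.01, 0.02, 0.05, 0.10]:
            lo, hi = np.quantile(data, [q, 1 - q])
            estimates.append(np.clip(data, lo, hi).mean())
        estimates = np.array(estimates)

        # Standard error: sampling uncertainty of a single team's estimator.
        se = data.std(ddof=1) / np.sqrt(len(data))
        # Non-standard error: dispersion of estimates across teams that
        # analyze the same data with different defensible choices.
        nse = estimates.std(ddof=1)
        print(f"SE = {se:.4f}, NSE = {nse:.4f}")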

    A characteristic function-based approach to approximate maximum likelihood estimation

    The choice of the summary statistics in approximate maximum likelihood is often a crucial issue. We develop a criterion for choosing the most effective summary statistic and then focus on the empirical characteristic function. In the iid setting, the approximating posterior distribution converges to the approximate distribution of the parameters conditional upon the empirical characteristic function. Simulation experiments suggest that the method is often preferable to numerical maximum likelihood. In a time-series framework, no optimality result can be proved, but the simulations indicate that the method is effective in small samples.
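
    The paper conditions an approximate posterior on the empirical characteristic function; as a toy illustration of the underlying idea only (the Gaussian model, frequency grid, and optimizer below are assumptions, not the paper's procedure), the sketch matches the empirical characteristic function to the known characteristic function of a normal model.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        x = rng.normal(loc=2.0, scale=1.5, size=500)   # observed sample

        t_grid = np.linspace(0.1, 2.0, 20)             # evaluation frequencies

        def ecf(t, sample):
            # Empirical characteristic function: mean of exp(i t X_j).
            return np.exp(1j * np.outer(t, sample)).mean(axis=1)

        def loss(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            # Theoretical CF of N(mu, sigma^2): exp(i mu t - sigma^2 t^2 / 2).
            cf = np.exp(1j * mu * t_grid - 0.5 * (sigma * t_grid) ** 2)
            return np.sum(np.abs(ecf(t_grid, x) - cf) ** 2)

        res = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
        print("mu_hat:", res.x[0], "sigma_hat:", np.exp(res.x[1]))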

    Ground-level ozone: Evidence of increasing serial dependence in the extremes

    Exposure to successive episodes of high ground-level ozone concentrations can cause larger changes in respiratory function than occasional exposure buffered by lengthy recovery periods. The analysis of extreme values in a series of ozone concentrations therefore requires careful consideration not only of the levels of the extremes but also of any dependence appearing in the extremes of the series. Increased dependence represents increased health risk, so it is important to detect any changes in the temporal dependence of extreme values. In this paper we establish the first test for a change point in the extremal dependence of a stationary time series. The test is flexible, easy to use and can be extended along several lines. The asymptotic distributions of our estimators and of our test are established, and a large simulation study verifies good finite-sample properties. The test allows us to show that there has been a significant increase in the serial dependence of the extreme levels of ground-level ozone concentrations in Bloomsbury (UK) in recent years.
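
    The paper's test statistic and its asymptotics are not reproduced here; purely as a rough illustration of what serial extremal dependence means, the sketch below compares a lag-1 sample extremogram, P(X_{t+1} > u | X_t > u), before and after a candidate break in a simulated series (the threshold level, the known split point, and the AR(1) toy model are all assumptions).

        import numpy as np

        def extremogram_lag1(x, q=0.95):
            # Sample extremogram at lag 1: P(X_{t+1} > u | X_t > u),
            # with u the empirical q-quantile of the segment.
            u = np.quantile(x, q)
            exceed = x > u
            joint = (exceed[:-1] & exceed[1:]).sum()
            return joint / max(exceed[:-1].sum(), 1)

        rng = np.random.default_rng(2)
        n = 2_000
        # Toy series: independent noise first, then an AR(1) segment whose
        # extremes cluster, mimicking increased serial extremal dependence.
        first = rng.standard_normal(n)
        second = np.empty(n)
        second[0] = 0.0
        for t in range(1, n):
            second[t] = 0.8 * second[t - 1] + rng.standard_normal()
        x = np.concatenate([first, second])

        k = len(x) // 2   # candidate change point (assumed known here)
        print("before:", extremogram_lag1(x[:k]))
        print("after: ", extremogram_lag1(x[k:]))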

    A simple approach to the estimation of Tukey's gh distribution

    Tukey's gh distribution is widely used in situations where skewness and elongation are important features of the data. Because the distribution is defined through a quantile transformation of the normal, the likelihood function cannot be written in closed form and exact maximum likelihood estimation is infeasible. In this paper we exploit a novel approach based on a frequentist reinterpretation of Approximate Bayesian Computation for approximating the maximum likelihood estimates of the gh distribution. The method is appealing because it only requires the ability to sample from the distribution. We discuss the choice of the input parameters by means of simulation experiments and provide evidence of superior performance, in terms of root mean square error, with respect to the standard quantile estimator. Finally, we give an application to operational risk measurement.
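
    The abstract does not spell out the frequentist reinterpretation, so the sketch below is a generic rejection-ABC stand-in that, as the abstract notes, needs only the ability to sample the distribution; the proposal ranges, quantile-grid summary, and acceptance fraction are assumptions.

        import numpy as np

        def gh_sample(n, g, h, rng):
            # Tukey gh variate: quantile transformation of a standard normal,
            # ((exp(g z) - 1) / g) * exp(h z^2 / 2) with z ~ N(0, 1).
            z = rng.standard_normal(n)
            core = np.expm1(g * z) / g if abs(g) > 1e-8 else z
            return core * np.exp(0.5 * h * z * z)

        rng = np.random.default_rng(3)
        data = gh_sample(2_000, g=0.5, h=0.2, rng=rng)   # stand-in observed data

        probs = np.linspace(0.05, 0.95, 19)              # summary: a quantile grid
        s_obs = np.quantile(data, probs)

        # Rejection ABC: draw (g, h) from flat proposals and keep the draws
        # whose simulated quantiles land closest to the observed ones.
        n_draws, keep = 5_000, 100
        g_prop = rng.uniform(0.0, 1.0, n_draws)
        h_prop = rng.uniform(0.0, 0.5, n_draws)
        dist = np.empty(n_draws)
        for i in range(n_draws):
            sim = gh_sample(2_000, g_prop[i], h_prop[i], rng)
            dist[i] = np.sum((np.quantile(sim, probs) - s_obs) ** 2)
        best = np.argsort(dist)[:keep]
        print("g_hat:", g_prop[best].mean(), "h_hat:", h_prop[best].mean())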

    Estimating large losses in insurance analytics and operational risk using the g-and-h distribution

    In this paper, we study the estimation of parameters for g-and-h distributions. These distributions find applications in modeling highly skewed and fat-tailed data, such as extreme losses in the banking and insurance sectors. We first introduce two estimation methods: a numerical maximum likelihood technique and an indirect inference approach with a bootstrap weighting scheme. In a realistic simulation study, we show that indirect inference is computationally more efficient and provides better estimates than maximum likelihood when the data exhibit extreme features. Empirical illustrations on insurance and operational losses confirm these findings.
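
    A minimal indirect inference sketch under stated assumptions: the auxiliary statistics below are simple quantile-based measures chosen for illustration (not the paper's auxiliary model), and the bootstrap weighting scheme is replaced by identity weights.

        import numpy as np
        from scipy.optimize import minimize

        def gh_sample(n, g, h, rng):
            # g-and-h variate via the quantile transformation of a standard normal.
            z = rng.standard_normal(n)
            core = np.expm1(g * z) / g if abs(g) > 1e-8 else z
            return core * np.exp(0.5 * h * z * z)

        def auxiliary(x):
            # Auxiliary statistics: quantile-based location, spread, skew, tails.
            q = np.quantile(x, [0.10, 0.25, 0.50, 0.75, 0.90])
            iqr = q[3] - q[1]
            return np.array([q[2], iqr,
                             (q[3] + q[1] - 2 * q[2]) / iqr,   # Bowley skewness
                             (q[4] - q[0]) / iqr])             # tail heaviness

        rng = np.random.default_rng(4)
        data = gh_sample(5_000, g=0.4, h=0.15, rng=rng)  # stand-in observed losses
        beta_obs = auxiliary(data)

        def loss(theta):
            g, h = theta
            if h < 0 or h > 1:
                return 1e12
            # Common random numbers: a fixed seed keeps the objective smooth.
            sim = gh_sample(20_000, g, h, np.random.default_rng(42))
            diff = auxiliary(sim) - beta_obs
            return float(diff @ diff)   # identity weights, not the bootstrap scheme

        res = minimize(loss, x0=[0.1, 0.1], method="Nelder-Mead")
        print("g_hat, h_hat:", res.x)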

    Estimating Value-at-Risk for the g-and-h distribution: an indirect inference approach

    The g-and-h distribution handles the complex behavior of loss data well; applied to operational losses, it shows that indirect inference estimators of VaR outperform quantile-based estimators.
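
    Once the four parameters are estimated (for instance by indirect inference), VaR at level alpha is available in closed form as the g-and-h quantile at alpha. A small sketch with hypothetical fitted values, not taken from the paper:

        import numpy as np
        from scipy.stats import norm

        def gh_quantile(p, a, b, g, h):
            # g-and-h quantile: a + b * ((exp(g z) - 1) / g) * exp(h z^2 / 2),
            # where z = Phi^{-1}(p).
            z = norm.ppf(p)
            core = np.expm1(g * z) / g if abs(g) > 1e-8 else z
            return a + b * core * np.exp(0.5 * h * z * z)

        # Hypothetical fitted parameters for a loss distribution.
        a_hat, b_hat, g_hat, h_hat = 10.0, 5.0, 0.6, 0.25
        print("99.9% VaR:", gh_quantile(0.999, a_hat, b_hat, g_hat, h_hat))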

    Testing liquidity: A statistical theory based on asset staleness
