
    Estimating population means in covariance stationary process

    In simple random sampling, the basic assumption when estimating the standard error of the sample mean and constructing the corresponding confidence interval for the population mean is that the observations in the sample are independent. In a number of cases, however, the validity of this assumption is questionable; examples include the dependent quantities generated in jackknife estimation, or the evolution through time of a social quantitative indicator in longitudinal studies. For the case of covariance stationary processes, in this paper we explore the consequences of estimating the standard error of the sample mean in the classical way, based on the independence assumption. As criteria we use the degree of bias in estimating the standard error, and the actual confidence level attained by the confidence interval, that is, the actual probability that the interval contains the true mean. These two criteria are computed analytically under different sample sizes in the stationary ARMA(1,1) process, which can generate different forms of autocorrelation structure between observations at different lags.
    Keywords: Jackknife estimation; ARMA; Longitudinal data; Actual confidence level
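    For a covariance stationary process, the variance of the sample mean is Var(x̄) = (γ₀/n)[1 + 2 Σ_{k=1}^{n-1} (1 − k/n) ρ_k], so the classical σ/√n formula is off whenever the autocorrelations ρ_k are non-zero. The sketch below computes the resulting approximate coverage of the nominal z-interval under an ARMA(1,1). It is our own illustration, not the paper's code, and it treats the process standard deviation as known, whereas the paper's analytic treatment also covers the bias of the estimated standard error.

    ```python
    import numpy as np
    from scipy.stats import norm

    def arma11_autocorr(phi, theta, max_lag):
        """Autocorrelations rho_k of a stationary ARMA(1,1):
        rho_1 = (1 + phi*theta)(phi + theta) / (1 + 2*phi*theta + theta^2),
        rho_k = phi * rho_{k-1} for k >= 2."""
        rho = np.empty(max_lag + 1)
        rho[0] = 1.0
        rho[1] = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta**2)
        for k in range(2, max_lag + 1):
            rho[k] = phi * rho[k - 1]
        return rho

    def actual_confidence_level(phi, theta, n, nominal=0.95):
        """Approximate coverage of the classical z-interval when the data
        actually follow an ARMA(1,1) (process sd treated as known)."""
        rho = arma11_autocorr(phi, theta, n - 1)
        k = np.arange(1, n)
        # Var(x_bar) relative to the iid value sigma^2 / n
        inflation = 1 + 2 * np.sum((1 - k / n) * rho[1:])
        z = norm.ppf(0.5 + nominal / 2)
        # Classical half-width is z*sigma/sqrt(n); the true sd of x_bar
        # is sqrt(inflation) times larger, shrinking the coverage.
        return 2 * norm.cdf(z / np.sqrt(inflation)) - 1

    print(actual_confidence_level(phi=0.7, theta=0.2, n=100))
    ```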

    Non-Nested Models and the Likelihood Ratio Statistic: A Comparison of Simulation and Bootstrap Based Tests

    We consider an alternative use of simulation in the context of using the likelihood-ratio statistic to test non-nested models. To date, simulation has been used to estimate the Kullback-Leibler measure of closeness between two densities, which in turn 'mean adjusts' the likelihood-ratio statistic. Given that this adjustment is still based upon asymptotic arguments, an alternative procedure is to use bootstrap procedures to construct the empirical density. To our knowledge this study represents the first comparison of the properties of bootstrap- and simulation-based tests applied to non-nested hypotheses. More specifically, the design of the experiments allows us to comment on the relative performance of these two testing frameworks across models with varying degrees of nonlinearity. In this respect, although the primary focus of the paper is the relative evaluation of simulation- and bootstrap-based non-nested procedures in testing across a class of nonlinear threshold models, the inclusion of a similar analysis of the more standard linear/log-linear models provides a point of comparison.
    Keywords: Non-nested tests, Simulation-based inference, Bootstrap tests, Nonlinear threshold models
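    As a concrete illustration of the bootstrap route, the sketch below computes a parametric-bootstrap p-value for the log-likelihood ratio between two non-nested models. It is a generic outline, not the paper's experimental design: fit_f, fit_g, and simulate_f are placeholder callables the user must supply for the specific model pair (e.g., linear vs. log-linear, or competing threshold specifications).

    ```python
    import numpy as np

    def bootstrap_nonnested_pvalue(y, fit_f, fit_g, simulate_f, B=999, seed=None):
        """Parametric-bootstrap p-value for T = loglik_f - loglik_g.

        fit_f(y) / fit_g(y) return (params, loglik); simulate_f(params, rng)
        draws a sample from the fitted null model f.  All four callables
        are placeholders for the user's model pair.
        """
        rng = np.random.default_rng(seed)
        params_f, llf = fit_f(y)
        _, llg = fit_g(y)
        t_obs = llf - llg
        t_boot = np.empty(B)
        for b in range(B):
            yb = simulate_f(params_f, rng)
            _, llf_b = fit_f(yb)
            _, llg_b = fit_g(yb)
            t_boot[b] = llf_b - llg_b
        # Reject f when the observed ratio falls in the lower tail of
        # the bootstrap distribution generated under f.
        return (1 + np.sum(t_boot <= t_obs)) / (B + 1)
    ```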

    Confidence intervals in stationary autocorrelated time series

    In this study we examine, for covariance stationary time series, the consequences of constructing confidence intervals for the population mean using the classical methodology based on the hypothesis of independence. As criteria we use the actual probability that the confidence interval of the classical methodology includes the population mean (the actual confidence level), and the ratio of the sampling error of the classical methodology to the actual one that would equate the actual and nominal confidence levels. These criteria are computed analytically under different sample sizes and for different autocorrelation structures. For the AR(1) case, we find significant differences in the values taken by the two criteria depending upon the structure and the degree of autocorrelation. In the case of MA(1), and especially for positive autocorrelation, we always find actual confidence levels lower than the corresponding nominal ones, while the differentiation between these two levels is much smaller than in the AR(1) case.
    Keywords: Covariance stationary time series; Variance of the sample mean; Actual confidence level
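    A minimal numerical sketch of the second criterion (our illustration, not the paper's code): for AR(1), ρ_k = φ^k; for MA(1), ρ_1 = θ/(1+θ²) and ρ_k = 0 for k ≥ 2. The ratio of the classical sampling error to the actual one is 1/√I, where I is the variance-inflation factor of the sample mean.

    ```python
    import numpy as np

    def variance_inflation(rho, n):
        """Var(x_bar) under autocorrelations rho[1..n-1], relative to the
        iid value sigma^2 / n."""
        k = np.arange(1, n)
        return 1 + 2 * np.sum((1 - k / n) * rho[1:n])

    n = 50
    phi, theta = 0.6, 0.6
    rho_ar1 = phi ** np.arange(n)                  # AR(1): rho_k = phi^k
    rho_ma1 = np.zeros(n)                          # MA(1): only rho_1 is non-zero
    rho_ma1[0], rho_ma1[1] = 1.0, theta / (1 + theta**2)

    for name, rho in [("AR(1)", rho_ar1), ("MA(1)", rho_ma1)]:
        infl = variance_inflation(rho, n)
        # Values below 1 mean the classical interval is too narrow,
        # i.e. the actual confidence level falls below the nominal one.
        print(name, "se ratio:", 1 / np.sqrt(infl))
    ```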

    Robust optimization in simulation: Taguchi and response surface methodology

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by Response Surface Methodology (RSM). George Box originated RSM, and Douglas Montgomery recently extended RSM to robust optimization of real (non-simulated) systems. We combine Taguchi's view with RSM for simulated systems. We illustrate the resulting methodology through classic Economic Order Quantity (EOQ) inventory models, which demonstrate that robust optimization may require order quantities that differ from the classic EOQ.
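    The classic EOQ minimizes C(Q; D) = aD/Q + hQ/2 at Q* = √(2aD/h), for setup cost a, demand rate D, and holding cost h. The toy sketch below conveys the Taguchi-style robust idea, picking Q to minimize mean cost under demand uncertainty subject to a cap on cost variability; it is a simplified stand-in for the article's RSM machinery, and all parameter values are invented for illustration.

    ```python
    import numpy as np

    a, h = 100.0, 2.0                               # illustrative setup/holding costs
    rng = np.random.default_rng(1)
    D = rng.normal(1000.0, 200.0, size=10_000)      # uncertain demand scenarios

    Q_grid = np.linspace(100.0, 600.0, 501)
    cost = a * D[:, None] / Q_grid + h * Q_grid / 2  # scenarios x grid
    mean_c, sd_c = cost.mean(axis=0), cost.std(axis=0)

    q_classic = np.sqrt(2 * a * D.mean() / h)       # EOQ at the mean demand
    feasible = sd_c <= 1.05 * sd_c.min()            # cap on cost variability
    q_robust = Q_grid[feasible][np.argmin(mean_c[feasible])]
    print(f"classic EOQ {q_classic:.1f}, robust choice {q_robust:.1f}")
    ```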

    Quantifying statistical uncertainty in the attribution of human influence on severe weather

    Event attribution in the context of climate change seeks to understand the role of anthropogenic greenhouse gas emissions in extreme weather events, either specific events or classes of events. A common approach to event attribution uses climate model output under factual (real-world) and counterfactual (a world that might have been without anthropogenic greenhouse gas emissions) scenarios to estimate the probabilities of the event of interest under the two scenarios. Event attribution is then quantified by the ratio of the two probabilities. While this approach has been applied many times in the last 15 years, the statistical techniques used to estimate the risk ratio based on climate model ensembles have not drawn on the full set of methods available in the statistical literature, and have in some cases used and interpreted the bootstrap method in non-standard ways. We present a precise frequentist statistical framework for quantifying the effect of sampling uncertainty on estimation of the risk ratio, propose the use of statistical methods that are new to event attribution, and evaluate a variety of methods using statistical simulations. We conclude that existing statistical methods not yet in use for event attribution have several advantages over the widely used bootstrap, including better statistical performance in repeated samples and robustness to small estimated probabilities. Software implementing the methods is available through the climextRemes package for R or Python. While we focus on frequentist statistical methods, Bayesian methods are likely to be particularly useful when considering sources of uncertainty beyond sampling uncertainty.
    Comment: 41 pages, 11 figures, 1 table
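    For concreteness, here is one standard non-bootstrap interval of the kind the paper compares: with event counts x₁ of n₁ factual runs and x₀ of n₀ counterfactual runs, the risk ratio is p̂₁/p̂₀ with a Wald interval on the log scale. This sketch is our own illustration (not the climextRemes API), and it degenerates when x₀ = 0, i.e. exactly the small-probability regime where the paper recommends more robust methods.

    ```python
    import numpy as np
    from scipy.stats import norm

    def risk_ratio_ci(x1, n1, x0, n0, level=0.95):
        """Risk ratio p1/p0 from factual (x1 of n1) and counterfactual
        (x0 of n0) ensemble event counts, with a log-scale Wald interval
        (delta-method variance for the log risk ratio)."""
        p1, p0 = x1 / n1, x0 / n0
        rr = p1 / p0
        se_log = np.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
        z = norm.ppf(0.5 + level / 2)
        return rr, rr * np.exp(-z * se_log), rr * np.exp(z * se_log)

    print(risk_ratio_ci(x1=40, n1=400, x0=10, n0=400))
    ```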

    Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinite. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
    Comment: 28 pages, 4 figures, 3 tables
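    A schematic version of the quantile-based rescaling (our sketch, not the authors' implementation): locate the observed event's exceedance probability in the observational record, translate it into the equivalent threshold in the factual model run, and evaluate both model runs at that adjusted threshold. Empirical quantiles stand in here for the extreme-value fits a real analysis would use.

    ```python
    import numpy as np

    def quantile_adjusted_rr(obs, model_factual, model_counterfactual, event_value):
        """Risk ratio after quantile-matching the event threshold from
        observations into the factual model run."""
        p_obs = np.mean(obs >= event_value)                   # event probability in obs
        thresh = np.quantile(model_factual, 1 - p_obs)        # matching model threshold
        p1 = np.mean(model_factual >= thresh)                 # factual probability
        p0 = np.mean(model_counterfactual >= thresh)          # counterfactual probability
        return p1 / p0
    ```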

    Unemployment and Hysteresis: A Nonlinear Unobserved Components Approach

    A new test for hysteresis based on a nonlinear unobserved components model is proposed. Observed unemployment rates are decomposed into a natural rate component and a cyclical component. Threshold-type nonlinearities are introduced by allowing past cyclical unemployment to have a different impact on the natural rate depending on the regime of the economy. The impact of lagged cyclical shocks on the current natural component is the measure of hysteresis. To derive an appropriate p-value for a test for hysteresis, two alternative bootstrap algorithms are proposed: the first is valid under homoskedastic errors and the second allows for heteroskedasticity of unknown form. A Monte Carlo simulation study shows the good performance of both bootstrap algorithms. The bootstrap testing procedure is applied to data from Italy, France and the United States. We find evidence of hysteresis for all countries under study.
    Keywords: Hysteresis, Unobserved Components Model, Threshold Autoregressive Models, Nuisance parameters, Bootstrap
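    To illustrate the heteroskedasticity-robust option, here is a generic wild-bootstrap p-value routine of the kind the second algorithm relies on. It is a sketch, not the authors' algorithm: refit_stat is a placeholder for the model-specific step that rebuilds the unobserved-components series from sign-flipped null residuals and recomputes the hysteresis test statistic.

    ```python
    import numpy as np

    def wild_bootstrap_pvalue(resid_null, stat_obs, refit_stat, B=999, seed=None):
        """Wild-bootstrap p-value, robust to heteroskedasticity of
        unknown form.  resid_null: residuals estimated under the
        no-hysteresis null; refit_stat: placeholder callable that
        rebuilds pseudo-data and returns the test statistic."""
        rng = np.random.default_rng(seed)
        exceed = 0
        for _ in range(B):
            # Rademacher weights preserve each residual's variance,
            # which is what makes the scheme heteroskedasticity-robust.
            signs = rng.choice([-1.0, 1.0], size=resid_null.shape)
            if refit_stat(resid_null * signs) >= stat_obs:
                exceed += 1
        return (1 + exceed) / (B + 1)
    ```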