Simple and accurate one-sided inference from signed roots of likelihood ratios
The authors propose two methods based on the signed root of the likelihood ratio statistic for one-sided testing of a simple null hypothesis about a scalar parameter in the presence of nuisance parameters. Both methods are third-order accurate and utilise simulation to avoid the need for onerous analytical calculations characteristic of competing saddlepoint procedures. Moreover, the new methods do not require specification of ancillary statistics. The methods respect the conditioning associated with similar tests up to an error of third order, and the conditioning on ancillary statistics up to an error of second order.
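The signed root underlying this abstract is the standard first-order quantity r = sign(θ̂ − θ₀)·√(2{ℓ_p(θ̂) − ℓ_p(θ₀)}), which is approximately N(0, 1) under the null. The sketch below computes it for a normal mean with the variance profiled out as a nuisance parameter; it is only the first-order baseline, and the paper's third-order simulation-based corrections are not implemented here.

```python
import numpy as np
from scipy.stats import norm

def signed_root_pvalue(x, mu0):
    """One-sided p-value for H0: mu = mu0 vs mu > mu0 in a normal model,
    with the variance treated as a nuisance parameter via profiling.
    First-order version only: r is referred to the N(0,1) distribution."""
    n = len(x)
    mu_hat = x.mean()

    def profile_loglik(mu):
        # sigma^2 is profiled out: its MLE at fixed mu is mean((x - mu)^2)
        return -0.5 * n * np.log(np.mean((x - mu) ** 2))

    # Likelihood ratio statistic and its signed root
    w = 2.0 * (profile_loglik(mu_hat) - profile_loglik(mu0))
    r = np.sign(mu_hat - mu0) * np.sqrt(max(w, 0.0))
    return 1.0 - norm.cdf(r)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=30)
p = signed_root_pvalue(x, mu0=0.0)
```

When the hypothesised value equals the MLE, r = 0 and the one-sided p-value is exactly 0.5, which gives a quick sanity check on the implementation.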
Improving weighted least squares inference
These days, it is common practice to base inference about the coefficients in a heteroskedastic linear model on the ordinary least squares estimator in conjunction with heteroskedasticity-consistent standard errors. Even when the true form of heteroskedasticity is unknown, heteroskedasticity-consistent standard errors can also be used to base valid inference on a weighted least squares estimator, and using such an estimator can provide large gains in efficiency over the ordinary least squares estimator. However, intervals based on asymptotic approximations with plug-in standard errors often have coverage below the nominal level, especially for small sample sizes. Similarly, tests can have null rejection probabilities above the nominal level. It is shown that, under unknown heteroskedasticity, a bootstrap approximation to the sampling distribution of the weighted least squares estimator is valid, which allows for inference with improved finite-sample properties. For testing linear constraints, permutation tests are proposed which are exact when the error distribution is symmetric and asymptotically valid otherwise. Another concern that has discouraged the use of weighting is that the weighted least squares estimator may be less efficient than the ordinary least squares estimator when the model used to estimate the unknown form of the heteroskedasticity is misspecified. To address this problem, a new estimator is proposed that is asymptotically at least as efficient as both the ordinary and the weighted least squares estimators. Simulation studies demonstrate the attractive finite-sample properties of this new estimator, as well as the improvements in performance realized by bootstrap confidence intervals.
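A minimal sketch of the bootstrap idea for a weighted least squares coefficient is the pairs (case-resampling) percentile bootstrap below. The weights are treated as fixed and known for simplicity; the paper's setting re-estimates the skedastic function on each resample and its specific bootstrap scheme may differ, so this is only an illustration of the general mechanism.

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares: solve (X'WX) b = X'Wy with diagonal W."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

def bootstrap_ci(X, y, w, coef=1, B=2000, alpha=0.05, seed=0):
    """Percentile pairs-bootstrap CI for one WLS coefficient.
    Resamples (x_i, y_i, w_i) triples with replacement; weights are
    taken as given rather than re-estimated, unlike the paper."""
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        boot[b] = wls_fit(X[idx], y[idx], w[idx])[coef]
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Simulated heteroskedastic data: noise scale grows with |x|
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
sigma = 0.1 + 0.5 * np.abs(x)
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma ** 2
lo, hi = bootstrap_ci(X, y, w)
```

With the true skedastic function used as weights, the interval for the slope should sit tightly around the true value of 2; with misspecified weights, the efficiency concern discussed in the abstract comes into play.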
Detection and Mitigation of Algorithmic Bias via Predictive Rate Parity
Recently, numerous studies have demonstrated the presence of bias in machine
learning powered decision-making systems. Although most definitions of
algorithmic bias have solid mathematical foundations, the corresponding bias
detection techniques often lack statistical rigor, especially for non-iid data.
We fill this gap in the literature by presenting a rigorous non-parametric
testing procedure for bias according to Predictive Rate Parity, a commonly
considered notion of algorithmic bias. We adapt traditional asymptotic results
for non-parametric estimators to test for bias in the presence of dependence
commonly seen in user-level data generated by technology industry applications
and illustrate how these approaches can be leveraged for mitigation. We further
propose modifications of this methodology to address bias measured through
marginal outcome disparities in classification settings and extend notions of
predictive rate parity to multi-objective models. Experimental results on real
data show the efficacy of the proposed detection and mitigation methods.
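Predictive rate parity requires the positive predictive value P(Y=1 | Ŷ=1) to be equal across protected groups. The sketch below is the simplest iid baseline, a two-sample z-test for a difference in precision; the paper's contribution is precisely that this iid assumption fails for dependent user-level data, so treat this as a naive reference point rather than the proposed procedure.

```python
import numpy as np
from scipy.stats import norm

def predictive_parity_test(y, yhat, group):
    """Two-sample z-test comparing precision P(Y=1 | Yhat=1) across
    two groups (coded 0 and 1). Assumes iid observations, which the
    paper's non-parametric procedure specifically relaxes."""
    rates = []
    for g in (0, 1):
        mask = (group == g) & (yhat == 1)   # predicted positives in group g
        rates.append((y[mask].mean(), mask.sum()))
    (p0, n0), (p1, n1) = rates
    pooled = (p0 * n0 + p1 * n1) / (n0 + n1)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1))
    z = (p0 - p1) / se
    return 2.0 * (1.0 - norm.cdf(abs(z)))   # two-sided p-value

# Synthetic example with a large precision gap between groups
rng = np.random.default_rng(2)
n = 500
y = np.concatenate([(rng.random(n) < 0.9).astype(int),
                    (rng.random(n) < 0.5).astype(int)])
yhat = np.ones(2 * n, dtype=int)
group = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
p = predictive_parity_test(y, yhat, group)
```

Under dependence (e.g. many rows per user), the standard error above is too small and the test over-rejects, which is the gap the abstract's methodology addresses.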
Bootstrap confidence intervals for predicted rainfall quantiles
Rainfall probability charts have been used to quantify the effect of the Southern Oscillation Index (SOI) on rainfall for many years. To better understand the effect of the SOI phases, we discuss forming confidence intervals on the predicted rainfall quantiles using percentile bootstrap methods.
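The percentile bootstrap for a quantile is straightforward to sketch: resample the observations with replacement, recompute the quantile on each resample, and read the interval endpoints off the empirical percentiles of those replicates. The gamma-distributed synthetic rainfall below is purely illustrative, not the paper's data.

```python
import numpy as np

def percentile_bootstrap_quantile_ci(rain, q=0.5, B=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for the
    q-th quantile of a rainfall sample."""
    rng = np.random.default_rng(seed)
    n = len(rain)
    boot = np.array([np.quantile(rng.choice(rain, size=n, replace=True), q)
                     for _ in range(B)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Illustrative right-skewed "rainfall" sample (gamma distribution)
rng = np.random.default_rng(3)
rain = rng.gamma(shape=2.0, scale=10.0, size=80)
lo, hi = percentile_bootstrap_quantile_ci(rain, q=0.5)
```

In the paper's setting one such interval would be formed per SOI phase, making the phase-to-phase differences in predicted quantiles easier to judge against sampling variability.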
Improved testing inference in mixed linear models
Mixed linear models are commonly used in repeated measures studies. They
account for the dependence amongst observations obtained from the same
experimental unit. Oftentimes, the number of observations is small, and it is
thus important to use inference strategies that incorporate small sample
corrections. In this paper, we develop modified versions of the likelihood
ratio test for fixed effects inference in mixed linear models. In particular,
we derive a Bartlett correction to such a test and also to a test obtained from
a modified profile likelihood function. Our results generalize those in Zucker
et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by
allowing the parameter of interest to be vector-valued. Additionally, our
Bartlett corrections allow for random effects nonlinear covariance matrix
structure. We report numerical evidence which shows that the proposed tests
display superior finite sample behavior relative to the standard likelihood
ratio test. An application is also presented and discussed. Comment: 17 pages, 1 figure.
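A Bartlett correction rescales the likelihood ratio statistic so that its null mean matches the degrees of freedom of its chi-square reference, which improves the chi-square approximation in small samples. The toy sketch below estimates the scale factor by simulation for a simple normal-mean test; it illustrates the correction's logic only, not the paper's analytical derivation for mixed linear models.

```python
import numpy as np

def bartlett_factor(simulate_lr_under_null, q, B=2000, seed=4):
    """Simulation-based Bartlett factor: rescale the LR statistic so its
    estimated null mean equals its chi-square degrees of freedom q."""
    rng = np.random.default_rng(seed)
    lrs = np.array([simulate_lr_under_null(rng) for _ in range(B)])
    return q / lrs.mean()

def lr_normal_mean(rng, n=10):
    """LR statistic for H0: mu = 0 in a normal model with unknown
    variance (a toy stand-in for the paper's mixed-model setting)."""
    x = rng.normal(size=n)
    s2_0 = np.mean(x ** 2)                 # variance MLE under H0
    s2_1 = np.mean((x - x.mean()) ** 2)    # unrestricted variance MLE
    return n * np.log(s2_0 / s2_1)

c = bartlett_factor(lr_normal_mean, q=1)
# The corrected statistic c * LR is then referred to the chi2(1) distribution.
```

In small samples the raw LR statistic overshoots its chi-square mean, so the estimated factor c falls below 1, shrinking the statistic and pulling the test's null rejection rate back toward the nominal level.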