A generic algorithm for reducing bias in parametric estimation
A general iterative algorithm is developed for the computation of reduced-bias parameter estimates in regular statistical models through adjustments to the score function. The algorithm unifies, and provides an appealing new interpretation of, iterative methods that have been published previously for some specific model classes. The new algorithm can usefully be viewed as a series of iterative bias corrections, thus facilitating the adjusted score approach to bias reduction in any model for which the first-order bias of the maximum likelihood estimator has already been derived. The method is tested by application to a logit-linear multiple regression model with beta-distributed responses; the results confirm the effectiveness of the new algorithm and also reveal some important errors in the existing literature on beta regression.
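The "series of iterative bias corrections" view suggests a simple fixed-point scheme: starting from the maximum likelihood estimate, repeatedly subtract the first-order bias evaluated at the current iterate. Below is a minimal sketch of that idea, applied to a scalar exponential-rate model for which the first-order bias b(λ) = λ/n of the MLE is known in closed form; the function names are illustrative and this is not the paper's beta-regression implementation.

```python
import numpy as np

def iterated_bias_correction(theta_mle, bias_fn, tol=1e-10, max_iter=100):
    """Fixed-point iteration theta <- theta_mle - b(theta): a sketch of
    the 'series of iterative bias corrections' reading of the adjusted
    score approach, given a known first-order bias function b."""
    theta = theta_mle
    for _ in range(max_iter):
        theta_new = theta_mle - bias_fn(theta)
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Example: exponential rate model, where the MLE lambda_hat = 1/xbar
# has first-order bias b(lambda) = lambda / n.
rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=50)   # true rate 2.0
n, lam_mle = len(x), 1.0 / x.mean()
lam_br = iterated_bias_correction(lam_mle, lambda lam: lam / n)
print(lam_mle, lam_br)  # fixed point is n/(n+1) * lam_mle here
```

At the fixed point the estimate solves λ = λ̂ − b(λ); for richer models only the bias function changes, which is why the scheme applies wherever the first-order bias has been derived.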
Boundary kernels for adaptive density estimators on regions with irregular boundaries
In some applications of kernel density estimation the data may have a highly non-uniform distribution and be confined to a compact region. Standard fixed-bandwidth density estimates can struggle to cope with the spatially variable smoothing requirements, and will be subject to excessive bias at the boundary of the region. While adaptive kernel estimators can address the first of these issues, the study of boundary kernel methods has been restricted to the fixed-bandwidth context. We propose a new linear boundary kernel which reduces the asymptotic order of the bias of an adaptive density estimator at the boundary, and is simple to implement even on an irregular boundary. The properties of this adaptive boundary kernel are examined theoretically. In particular, we demonstrate that the asymptotic performance of the density estimator is maintained when the adaptive bandwidth is defined in terms of a pilot estimate rather than the true underlying density. We examine the performance for finite sample sizes numerically through analysis of simulated and real data sets.
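For orientation, the sketch below implements the classical fixed-bandwidth linear boundary kernel on the half-line [0, ∞), the construction the paper extends to adaptive bandwidths and irregular boundaries: near the boundary the kernel is rescaled by a linear factor chosen so that its zeroth moment is one and its first moment is zero, removing the leading-order boundary bias. It is an illustration under simplified assumptions, not the authors' adaptive estimator.

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def boundary_kde(x_grid, data, h):
    # Fixed-bandwidth KDE on [0, inf) with a linear boundary kernel.
    # Near the boundary (x < h) the usual kernel loses mass; multiplying
    # it by a linear factor built from its truncated moments a0, a1, a2
    # restores a unit zeroth moment and a zero first moment.
    u = np.linspace(-1.0, 1.0, 2001)
    du = u[1] - u[0]
    est = np.empty(len(x_grid))
    for j, x in enumerate(x_grid):
        c = min(x / h, 1.0)                    # boundary distance in bandwidths
        w = epanechnikov(u) * (u <= c)         # kernel truncated at the boundary
        a0, a1, a2 = (np.sum(u**m * w) * du for m in range(3))
        v = (x - data) / h
        kv = (a2 - a1 * v) * epanechnikov(v) * (v <= c) / (a0 * a2 - a1**2)
        est[j] = kv.mean() / h
    return est

data = np.random.default_rng(1).exponential(size=500)
fhat = boundary_kde(np.linspace(0.0, 3.0, 61), data, h=0.3)
```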
Testing futures returns predictability: implications for hedgers.
The predictability of futures returns is investigated using a semiparametric approach where it is assumed that the expected returns depend nonparametrically on a combination of predictors. We first collapse the forecasting variables into a single index variable where the weights are identified up to scale, using the average derivative estimator proposed by Stoker (1986). We then use the Nadaraya-Watson kernel estimator to calculate (and visually depict) the relation between the estimated index and the expected futures returns. An application to four agricultural commodity futures illustrates the technique. The results indicate that for each of the commodities considered, the estimated index contains statistically significant information regarding the expected futures returns. Economic implications for a non-infinitely risk-averse hedger are also discussed.
Keywords: average derivative estimates; futures markets; hedging.
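As a sketch of the two-step approach, the code below first estimates the index weights with a Stoker-type average derivative estimator, delta_j = -E[(d log f/d x_j)(X) Y], built from a leave-one-out kernel density estimate, and then applies the Nadaraya-Watson estimator to the resulting index. The bandwidths, trimming choices, and simulated data are illustrative assumptions, not the paper's empirical settings.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def average_derivative_index(X, y, h):
    # Stoker-type average derivative estimate of the index weights,
    # delta_j = -E[(d log f/d x_j)(X) * Y], via a leave-one-out product
    # Gaussian kernel density estimate (trimming omitted in this sketch).
    n, d = X.shape
    delta = np.zeros(d)
    for i in range(n):
        u = (X[i] - np.delete(X, i, axis=0)) / h       # (n-1, d)
        k = np.prod(gauss(u), axis=1)                  # product kernel weights
        f = k.sum() / ((n - 1) * h**d)                 # density at X[i]
        grad = -(k[:, None] * u).sum(axis=0) / ((n - 1) * h**(d + 1))
        delta += -(grad / f) * y[i] / n
    return delta / np.linalg.norm(delta)               # identified up to scale

def nadaraya_watson(z_grid, z, y, h):
    # Kernel regression of returns on the estimated single index.
    w = gauss((z_grid[:, None] - z[None, :]) / h)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
y = np.sin(X @ np.array([0.8, 0.6, 0.0])) + 0.3 * rng.normal(size=300)
delta = average_derivative_index(X, y, h=0.8)
m_hat = nadaraya_watson(np.linspace(-2, 2, 41), X @ delta, y, h=0.4)
```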
Sequential Empirical Bayes method for filtering dynamic spatiotemporal processes
We consider online prediction of a latent dynamic spatiotemporal process and estimation of the associated model parameters based on noisy data. The problem is motivated by the analysis of spatial data arriving in real time, where the current parameter estimates and predictions must be updated with the new data at a fixed computational cost. Estimation and prediction are performed within an empirical Bayes framework with the aid of Markov chain Monte Carlo samples. Samples for the latent spatial field are generated using a sampling importance resampling algorithm with a skewed-normal proposal, and samples for the temporal parameters using Gibbs sampling, with their full conditionals written in terms of sufficient quantities which are updated online. The spatial range parameter is estimated by a novel online implementation of an empirical Bayes method, called herein the sequential empirical Bayes method. A simulation study shows that our method gives results similar to those of an offline Bayesian method. We also find that the skewed-normal proposal improves over the traditional Gaussian proposal. The application of our method is demonstrated for online monitoring of radiation after the Fukushima nuclear accident.
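A univariate sketch of the sampling importance resampling step with a skew-normal proposal is given below, using scipy.stats.skewnorm. In the paper the latent spatial field is high-dimensional and the proposal is tuned from the current filter state, so the loc, scale, and alpha arguments here are placeholder assumptions.

```python
import numpy as np
from scipy import stats

def sir_step(log_target, loc, scale, alpha, n_draws=2000, rng=None):
    # One sampling-importance-resampling update with a skew-normal
    # proposal: draw from the proposal, weight by target/proposal
    # density ratio, then resample in proportion to the weights.
    rng = rng or np.random.default_rng()
    prop = stats.skewnorm(alpha, loc=loc, scale=scale)
    draws = prop.rvs(size=n_draws, random_state=rng)
    logw = log_target(draws) - prop.logpdf(draws)   # importance log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_draws, size=n_draws, p=w)    # resample by weight
    return draws[idx]

# Example: target is a Gamma(3, 1) density for a positive quantity.
post = sir_step(stats.gamma(3.0).logpdf, loc=2.0, scale=1.5, alpha=2.0)
print(post.mean())  # close to 3.0, the target mean
```

The skewed proposal is useful exactly when the target is asymmetric: a Gaussian proposal wastes draws in the thin tail, inflating the variance of the importance weights.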
Second-Order Accurate Inference on Simple, Partial, and Multiple Correlations
This article develops confidence interval procedures for functions of simple, partial, and squared multiple correlation coefficients. It is assumed that the observed multivariate data represent a random sample from a distribution that possesses finite moments, but there is no requirement that the distribution be normal. The coverage error of conventional one-sided large-sample intervals decreases at rate 1/√n as n increases, where n is an index of sample size. The coverage error of the proposed intervals decreases at rate 1/n as n increases. The results of a simulation study that evaluates the performance of the proposed intervals are reported, and the intervals are illustrated on a real data set.
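To make the rates concrete: a conventional one-sided large-sample interval for a simple correlation can be built from Fisher's z transform, and its coverage checked by Monte Carlo. This is the O(1/√n) baseline the article improves on, not the proposed second-order procedure; a minimal sketch, assuming bivariate normal data for the simulation:

```python
import numpy as np
from scipy import stats

def fisher_z_lower_bound(r, n, alpha=0.05):
    # One-sided lower confidence bound for a simple correlation via
    # Fisher's z transform: the conventional large-sample construction
    # whose one-sided coverage error shrinks at rate 1/sqrt(n).
    return np.tanh(np.arctanh(r) - stats.norm.ppf(1.0 - alpha) / np.sqrt(n - 3))

# Monte Carlo check of one-sided coverage under bivariate normality.
rng = np.random.default_rng(2)
rho, n, hits, reps = 0.5, 30, 0, 5000
cov = [[1.0, rho], [rho, 1.0]]
for _ in range(reps):
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    r = np.corrcoef(x.T)[0, 1]
    hits += fisher_z_lower_bound(r, n) <= rho
print(hits / reps)  # compare with the nominal 0.95
```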
Likelihood-based inference for the power regression model
In this paper we investigate an extension of the power-normal model, called the alpha-power model, and specialize it to linear and nonlinear regression models, with and without correlated errors. Maximum likelihood estimation is considered, with explicit derivation of the observed and expected Fisher information matrices. Applications are considered for the Australian athletes data set and also for a data set studied in Xie et al. (2009). The main conclusion is that the proposed model can be a viable alternative in situations where the normal distribution is not the most adequate model.
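The alpha-power (power-normal) density has the form α φ(z) Φ(z)^(α−1), so the regression log-likelihood is available in closed form and can be maximised numerically. A minimal sketch, with σ and α parameterised on the log scale for unconstrained optimisation; the simulated data and starting values are illustrative assumptions:

```python
import numpy as np
from scipy import stats, optimize

def alpha_power_negloglik(params, X, y):
    # Negative log-likelihood of linear regression with alpha-power
    # errors: density alpha * phi(z) * Phi(z)**(alpha - 1) for the
    # standardised residual z = (y - X beta) / sigma.
    beta, log_sigma, log_alpha = params[:-2], params[-2], params[-1]
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)
    z = (y - X @ beta) / sigma
    return -np.sum(np.log(alpha) - np.log(sigma)
                   + stats.norm.logpdf(z)
                   + (alpha - 1.0) * stats.norm.logcdf(z))

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=200)
res = optimize.minimize(alpha_power_negloglik, x0=np.zeros(4),
                        args=(X, y), method="BFGS")
print(res.x)  # beta estimates, then log sigma and log alpha
```

Starting at log α = 0 (i.e. α = 1) begins the search at the ordinary normal regression model, the nested special case.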
Some contributions to the analysis of skew data on the line and circle
In the first part of this thesis we consider the skew-normal class of distributions on the line and its limiting general half-normal distribution. Inferential procedures based on the methods of moments and maximum likelihood are developed and their performance assessed using simulation. Data on the strength of glass fibre and the body fat of elite athletes are used to illustrate some of the inferential issues raised. The second part of the thesis is devoted to a consideration of the analysis of skew circular data. First, we derive the large-sample distribution of certain key circular statistics and show how this result provides a basis for inference for the corresponding population measures. Next, tests for circular reflective symmetry about an unknown central direction are investigated. A large-sample test and computer-intensive variants of it are developed, and their operating characteristics explored both theoretically and empirically. Subsequently, we consider tests for circular reflective symmetry about a known or specified median axis. Two new procedures are developed for testing for symmetry about a known median axis against skew alternatives, and their operating characteristics compared in a simulation experiment with those of the circular analogues of three linear tests. On the basis of the results obtained from the latter, a simple testing strategy is identified. The performance of the tests against rotation alternatives is also investigated. Throughout, the use of the various tests of symmetry is illustrated using a wide range of circular data sets. Finally, we propose the wrapped skew-normal distribution on the circle as a potential model for circular data. The distribution’s fundamental properties are presented and inference based on the methods of moments and maximum likelihood is explored. Tests for limiting cases of the class are proposed, and a potential use of the distribution is illustrated in the mixture-based modelling of data on bird migration.
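The wrapped skew-normal density mentioned in the final part is obtained by summing the linear skew-normal density over the wrappings θ + 2πk. A minimal sketch, truncating the rapidly converging sum at an assumed k_max:

```python
import numpy as np
from scipy import stats

def wrapped_skewnorm_pdf(theta, xi, omega, alpha, k_max=20):
    # Wrapped skew-normal density on [0, 2*pi): the linear skew-normal
    # density summed over wrappings theta + 2*pi*k; k_max truncates the
    # sum (an assumed tolerance, not a value from the thesis).
    theta = np.asarray(theta, dtype=float)
    ks = np.arange(-k_max, k_max + 1)
    pts = theta[..., None] + 2.0 * np.pi * ks
    return stats.skewnorm.pdf(pts, alpha, loc=xi, scale=omega).sum(axis=-1)

thetas = np.linspace(0.0, 2.0 * np.pi, 400)
dens = wrapped_skewnorm_pdf(thetas, xi=np.pi, omega=1.0, alpha=3.0)
print(np.sum(dens) * (thetas[1] - thetas[0]))  # ~1: unit mass on the circle
```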
An investigation into the likelihood-based procedures for the construction of confidence intervals for the common odds ratio in K 2 x 2 contingency tables.
This study was undertaken to construct confidence intervals for the common odds ratio using several likelihood-based procedures. The likelihood-based procedures for the construction of confidence intervals for the common odds ratio in K 2 x 2 contingency tables are derived. Simulations are performed to study the properties of these procedures in terms of the tail and coverage probabilities and the average lengths of the confidence intervals, and the results are presented. Based on the simulation results obtained in this study, it is concluded that the Bartlett method (B) is most suitable for constructing confidence intervals for the common odds ratio in large-sample designs.
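As an illustration of the general likelihood-based approach, the sketch below profiles out each table's nuisance log-odds numerically and inverts the likelihood-ratio statistic to obtain an interval for the common odds ratio. The counts, search brackets, and function names are assumptions for illustration, and this is a generic profile-likelihood construction rather than any specific procedure compared in the thesis (such as the Bartlett method).

```python
import numpy as np
from scipy import optimize, stats

def profile_loglik(log_psi, tables):
    # Profile log-likelihood of a common log odds ratio across K 2x2
    # tables (each table: a, b = row-1 cases/non-cases; c, d = row-2
    # cases/non-cases); each nuisance log-odds is profiled out numerically.
    total = 0.0
    for a, b, c, d in tables:
        def nll(alpha_k):
            p1 = 1.0 / (1.0 + np.exp(-(alpha_k + log_psi)))
            p2 = 1.0 / (1.0 + np.exp(-alpha_k))
            return -(a * np.log(p1) + b * np.log(1.0 - p1)
                     + c * np.log(p2) + d * np.log(1.0 - p2))
        total -= optimize.minimize_scalar(nll, bounds=(-10.0, 10.0),
                                          method="bounded").fun
    return total

def lr_confint(tables, level=0.95):
    # Invert the likelihood-ratio statistic for a CI on the odds ratio;
    # the bracket (-5, 5) on the log scale is an assumed search range.
    opt = optimize.minimize_scalar(lambda lp: -profile_loglik(lp, tables),
                                   bounds=(-5.0, 5.0), method="bounded")
    lmax, cut = -opt.fun, stats.chi2.ppf(level, 1) / 2.0

    def g(lp):
        return profile_loglik(lp, tables) - (lmax - cut)

    lo = optimize.brentq(g, -5.0, opt.x)
    hi = optimize.brentq(g, opt.x, 5.0)
    return np.exp(lo), np.exp(hi)

tables = [(12, 8, 5, 15), (9, 11, 4, 16)]  # illustrative counts only
print(lr_confint(tables))
```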