
    A generic algorithm for reducing bias in parametric estimation

    A general iterative algorithm is developed for the computation of reduced-bias parameter estimates in regular statistical models through adjustments to the score function. The algorithm unifies, and provides an appealing new interpretation of, iterative methods that have been published previously for some specific model classes. The new algorithm can usefully be viewed as a series of iterative bias corrections, thus facilitating the adjusted-score approach to bias reduction in any model for which the first-order bias of the maximum likelihood estimator has already been derived. The method is tested by application to a logit-linear multiple regression model with beta-distributed responses; the results confirm the effectiveness of the new algorithm, and also reveal some important errors in the existing literature on beta regression.
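
    The fixed-point view of iterative bias correction can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: the exponential-rate example, where the MLE 1/x-bar has first-order bias lambda/n, is a standard textbook case chosen only to show the iteration theta_{k+1} = theta_hat - b(theta_k).

```python
import numpy as np

def bias_reduced_estimate(theta_mle, bias_fn, tol=1e-10, max_iter=100):
    """Iterate theta_{k+1} = theta_mle - bias_fn(theta_k) to a fixed point."""
    theta = theta_mle
    for _ in range(max_iter):
        theta_new = theta_mle - bias_fn(theta)
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Illustrative example: exponential rate. The MLE 1/mean(x) has
# first-order bias b(lam) = lam / n, so the fixed point is
# lam_mle * n / (n + 1).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=50)   # true rate = 1
n = len(x)
lam_mle = 1.0 / x.mean()
lam_rb = bias_reduced_estimate(lam_mle, lambda lam: lam / n)
```

    At each step the bias adjustment is evaluated at the current iterate rather than at the MLE, which is what distinguishes the series of corrections from a one-shot bias subtraction.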

    Boundary kernels for adaptive density estimators on regions with irregular boundaries

    In some applications of kernel density estimation the data may have a highly non-uniform distribution and be confined to a compact region. Standard fixed-bandwidth density estimates can struggle to cope with the spatially variable smoothing requirements, and will be subject to excessive bias at the boundary of the region. While adaptive kernel estimators can address the first of these issues, the study of boundary kernel methods has been restricted to the fixed-bandwidth context. We propose a new linear boundary kernel which reduces the asymptotic order of the bias of an adaptive density estimator at the boundary, and is simple to implement even on an irregular boundary. The properties of this adaptive boundary kernel are examined theoretically. In particular, we demonstrate that the asymptotic performance of the density estimator is maintained when the adaptive bandwidth is defined in terms of a pilot estimate rather than the true underlying density. We examine the performance for finite sample sizes numerically through analysis of simulated and real data sets.
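
    The pilot-based adaptive bandwidth construction mentioned above can be illustrated with a generic Abramson-style sketch: per-point bandwidths are scaled by the inverse square root of a fixed-bandwidth pilot estimate. This sketch omits the paper's boundary kernel entirely, and the pilot bandwidth h0 is an assumed value chosen for illustration.

```python
import numpy as np

def adaptive_kde(x_eval, data, h0=0.5):
    """Abramson-style adaptive KDE: bandwidth at data point i is
    h0 * sqrt(g / pilot_i), where pilot is a fixed-bandwidth Gaussian
    KDE at the data points and g is its geometric mean."""
    data = np.asarray(data, dtype=float)
    diff = (data[:, None] - data[None, :]) / h0
    pilot = np.exp(-0.5 * diff ** 2).mean(axis=1) / (h0 * np.sqrt(2 * np.pi))
    g = np.exp(np.mean(np.log(pilot)))
    h = h0 * np.sqrt(g / pilot)               # per-point adaptive bandwidths
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    k = np.exp(-0.5 * ((x_eval[:, None] - data[None, :]) / h[None, :]) ** 2)
    return (k / (h[None, :] * np.sqrt(2 * np.pi))).mean(axis=1)

rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-8.0, 8.0, 801)
dens = adaptive_kde(grid, data)
```

    Because each rescaled kernel still integrates to one, the adaptive estimate remains a proper density; the boundary problem the paper addresses arises when part of that kernel mass falls outside the support region.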

    Testing futures returns predictability: implications for hedgers.

    The predictability of futures returns is investigated using a semiparametric approach in which the expected returns are assumed to depend nonparametrically on a combination of predictors. We first collapse the forecasting variables into a single index variable, with weights identified up to scale, using the average derivative estimator proposed by Stoker (1986). We then use the Nadaraya-Watson kernel estimator to calculate (and visually depict) the relation between the estimated index and the expected futures returns. An application to four agricultural commodity futures illustrates the technique. The results indicate that, for each of the commodities considered, the estimated index contains statistically significant information regarding the expected futures returns. Economic implications for a non-infinitely risk-averse hedger are also discussed.
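
    The two-step procedure (collapse the predictors into a single index, then smooth the response on that index) can be sketched as follows. The data, index weights, and bandwidth here are all synthetic illustrations; in the paper the weights would come from Stoker's average derivative estimator rather than being treated as known.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x] at points x0 (Gaussian kernel)."""
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Toy single-index data: E[y | X] = m(X @ beta), with beta known here
# only for illustration (identified up to scale in the semiparametric setup).
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
beta = np.array([1.0, -0.5, 0.25])
index = X @ beta
y = np.sin(index) + 0.1 * rng.normal(size=400)

# Smooth the returns on the estimated index to recover the link function.
m_hat = nadaraya_watson(index, index, y, h=0.3)
```

    Plotting m_hat against the index is exactly the kind of visual depiction of the index-return relation the abstract describes.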

    Sequential Empirical Bayes method for filtering dynamic spatiotemporal processes

    We consider online prediction of a latent dynamic spatiotemporal process and estimation of the associated model parameters based on noisy data. The problem is motivated by the analysis of spatial data arriving in real time, where the current parameter estimates and predictions are updated using the new data at a fixed computational cost. Estimation and prediction are performed within an empirical Bayes framework with the aid of Markov chain Monte Carlo samples. Samples for the latent spatial field are generated using a sampling importance resampling algorithm with a skewed-normal proposal, and samples for the temporal parameters using Gibbs sampling, with the full conditionals written in terms of sufficient quantities that are updated online. The spatial range parameter is estimated by a novel online implementation of an empirical Bayes method, called herein the sequential empirical Bayes method. A simulation study shows that our method gives results similar to those of an offline Bayesian method. We also find that the skewed-normal proposal improves over the traditional Gaussian proposal. The application of our method is demonstrated for online monitoring of radiation after the Fukushima nuclear accident.
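
    The sampling importance resampling step can be illustrated generically: draw from a proposal, weight each draw by the target/proposal density ratio, then resample in proportion to the weights. This sketch uses a plain Gaussian proposal on a one-dimensional toy target, not the paper's skewed-normal proposal for a latent spatial field; all densities here are invented for illustration.

```python
import numpy as np

def sir(rng, target_logpdf, proposal_sample, proposal_logpdf, n=5000):
    """Sampling importance resampling: approximate draws from the target
    by resampling proposal draws with normalized importance weights."""
    x = proposal_sample(rng, n)
    logw = target_logpdf(x) - proposal_logpdf(x)
    w = np.exp(logw - logw.max())        # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(n, size=n, replace=True, p=w)
    return x[idx]

# Toy illustration: target N(2, 1) (unnormalized log-density is fine,
# since constants cancel in the normalized weights), proposal N(0, 3).
rng = np.random.default_rng(2)
target_logpdf = lambda x: -0.5 * (x - 2.0) ** 2
proposal_sample = lambda rng, n: rng.normal(0.0, 3.0, size=n)
proposal_logpdf = lambda x: -0.5 * (x / 3.0) ** 2 - np.log(3.0)
draws = sir(rng, target_logpdf, proposal_sample, proposal_logpdf)
```

    The quality of the resampled draws depends on how well the proposal covers the target, which is the motivation for the paper's skewed-normal proposal over a symmetric Gaussian one.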

    Second-Order Accurate Inference on Simple, Partial, and Multiple Correlations

    This article develops confidence interval procedures for functions of simple, partial, and squared multiple correlation coefficients. It is assumed that the observed multivariate data represent a random sample from a distribution that possesses finite moments, but there is no requirement that the distribution be normal. The coverage error of conventional one-sided large-sample intervals decreases at rate 1/√n as n increases, where n is an index of sample size. The coverage error of the proposed intervals decreases at rate 1/n as n increases. The results of a simulation study that evaluates the performance of the proposed intervals are reported, and the intervals are illustrated on a real data set.
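
    The kind of coverage simulation described above can be sketched for the conventional baseline interval that the article improves on. This is not the article's second-order procedure: it checks a standard one-sided Fisher z-transform bound for a simple correlation, and it assumes bivariate normal data even though the article itself does not require normality.

```python
import numpy as np

def corr_ci_upper(x, y):
    """Conventional one-sided 95% upper confidence bound for a simple
    correlation, via the large-sample Fisher z-transform."""
    r = np.corrcoef(x, y)[0, 1]
    return np.tanh(np.arctanh(r) + 1.6449 / np.sqrt(len(x) - 3))

# Monte Carlo coverage check with bivariate normal data, rho = 0.5.
rng = np.random.default_rng(3)
rho, n, reps = 0.5, 50, 2000
cov = [[1.0, rho], [rho, 1.0]]
hits = 0
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    hits += corr_ci_upper(x, y) >= rho
coverage = hits / reps
```

    Repeating such a simulation over a range of n values is how one would see the 1/√n versus 1/n decay of coverage error that distinguishes the conventional and proposed intervals.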

    Likelihood-based inference for the power regression model

    In this paper we investigate an extension of the power-normal model, called the alpha-power model, and specialize it to linear and nonlinear regression models, with and without correlated errors. Maximum likelihood estimation is considered, with explicit derivation of the observed and expected Fisher information matrices. Applications are considered for the Australian athletes data set and for a data set studied in Xie et al. (2009). The main conclusion is that the proposed model can be a viable alternative in situations where the normal distribution is not the most adequate model.

    An investigation into the likelihood-based procedures for the construction of confidence intervals for the common odds ratio in K 2 x 2 contingency tables.

    This study was undertaken to construct confidence intervals for the common odds ratio using several likelihood-based procedures. The likelihood-based procedures for the construction of confidence intervals for the common odds ratio in K 2 x 2 contingency tables are derived. Simulations are performed to study the properties of these procedures in terms of tail and coverage probabilities and average lengths of the confidence intervals, and the results are presented. Based on the simulation results obtained in this study, it is concluded that the Bartlett method (B) is most suitable for constructing a confidence interval for the common odds ratio in large-sample designs. Thesis (M.Sc.), Dept. of Mathematics and Statistics, University of Windsor (Canada), 1994. Adviser: S. R. Paul.