
    Flexible Tweedie regression models for continuous data

    Tweedie regression models provide a flexible family of distributions for non-negative, highly right-skewed data as well as symmetric and heavy-tailed data, and can handle continuous data with probability mass at zero. Estimation and inference for Tweedie regression models based on maximum likelihood are challenged by the presence of an infinite sum in the probability function and by non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting Tweedie regression models, namely quasi-likelihood and pseudo-likelihood. We discuss the asymptotic properties of the two approaches and perform simulation studies to compare our methods with the maximum likelihood method. In particular, we show that the quasi-likelihood method provides asymptotically efficient estimation of the regression parameters. The computational implementation of the alternative methods is faster and easier than that of orthodox maximum likelihood, relying on a simple Newton scoring algorithm. Simulation studies show that the quasi- and pseudo-likelihood approaches yield estimates, standard errors and coverage rates similar to those of the maximum likelihood method. Furthermore, the second-moment assumptions required by the quasi- and pseudo-likelihood methods enable us to extend Tweedie regression models to a class of quasi-Tweedie regression models in Wedderburn's style. They also allow us to eliminate the non-trivial restriction on the power parameter space, providing a flexible regression model for continuous data. We provide an \texttt{R} implementation and illustrate the application of Tweedie regression models using three data sets.
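
    For orientation, the second-moment (quasi-likelihood) specification behind the alternative fitting approaches can be sketched as follows; the notation is generic and added here for illustration, not taken from the paper:

        % Mean and power variance function of a Tweedie-type regression model
        \[
        \mathrm{E}(Y_i) = \mu_i, \qquad g(\mu_i) = x_i^\top \beta, \qquad
        \operatorname{Var}(Y_i) = \phi\, \mu_i^{p},
        \]
        % Wedderburn-style quasi-score equation for the regression parameters
        \[
        \psi(\beta) = \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta}\,
        \frac{y_i - \mu_i}{\phi\, \mu_i^{p}} = 0 .
        \]

    Because only these two moments are used, no Tweedie density is evaluated, which is the sense in which the restriction of the power parameter to values for which a Tweedie distribution exists can be dropped.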

    Quasi Score is more efficient than Corrected Score in a general nonlinear measurement error model

    We compare two consistent estimators of the parameter vector beta of a general exponential family measurement error model with respect to their relative efficiency. The quasi score (QS) estimator uses the distribution of the regressor; the corrected score (CS) estimator does not make use of this distribution and is therefore more robust. However, if the regressor distribution is known, QS is asymptotically more efficient than CS. In some cases it is, in fact, strictly more efficient, in the sense that the difference of the asymptotic covariance matrices of CS and QS is positive definite.
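
    As a rough sketch of the two constructions and of the efficiency statement (generic notation, added for illustration): the corrected score is built so that averaging over the measurement error reproduces the error-free score, and the efficiency claim is an ordering of asymptotic covariance matrices.

        % CS: unbiasedness after averaging over the error in the surrogate W for x
        \[
        \mathrm{E}\!\left[\psi_{\mathrm{CS}}(y, W; \beta) \mid y, x\right]
        = \psi(y, x; \beta),
        \]
        % Efficiency ordering when the regressor distribution is known
        % (Loewner order; "strictly more efficient" = strict positive definiteness)
        \[
        \operatorname{ACov}\!\big(\hat\beta_{\mathrm{CS}}\big)
        - \operatorname{ACov}\!\big(\hat\beta_{\mathrm{QS}}\big) \succeq 0 .
        \]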

    Comparing the efficiency of structural and functional methods in measurement error models

    The paper surveys recent investigations by the authors and others into the relative efficiencies of structural and functional estimators of the regression parameters in a measurement error model. While structural methods, in particular the quasi-score (QS) method, take advantage of knowledge of the regressor distribution (if available), functional methods, in particular the corrected score (CS) method, discard such knowledge and work even when it is not available. Among other results, it has been shown that QS is more efficient than CS as long as the regressor distribution is completely known. However, if nuisance parameters in the regressor distribution have to be estimated, this is no longer true in general; by modifying the QS method, though, the adverse effect of the nuisance parameters can be overcome. For small measurement errors, the efficiencies of QS and CS become almost indistinguishable, whether or not nuisance parameters are present. QS is (asymptotically) biased if the regressor distribution has been misspecified, while CS is always consistent and thus more robust than QS.
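
    The structural/functional contrast can be made concrete with a generic sketch (notation is illustrative, not the authors'): the quasi-score works with the conditional moments of the response given the observed, error-prone regressor W, which require the distribution of the true regressor, whereas the corrected score only adjusts the naive score using the known measurement error distribution.

        % Structural (QS) ingredients: conditional moments given the surrogate W,
        % computable only if the distribution of the true regressor X is known.
        \[
        m(w; \beta) = \mathrm{E}(Y \mid W = w), \qquad
        v(w; \beta) = \operatorname{Var}(Y \mid W = w),
        \]
        \[
        \psi_{\mathrm{QS}}(y, w; \beta)
        = \frac{\partial m(w; \beta)}{\partial \beta}\,
          \frac{y - m(w; \beta)}{v(w; \beta)} .
        \]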

    Calculation of LTC Premiums based on direct estimates of transition probabilities

    In this paper we model the life history of LTC patients using a Markovian multi-state model in order to calculate premiums for a given LTC plan. Instead of estimating the transition intensities of this model, we use the approach suggested by Andersen et al. (2003) for direct estimation of the transition probabilities. Based on the Aalen-Johansen estimator, an almost unbiased estimator of the transition matrix of a Markovian multi-state model, we calculate so-called pseudo-values, known from jackknife methods. Further, we assume that the relationship between these pseudo-values and the covariates of our data is given by a GLM with the logit link function. Since GLMs do not allow for correlation between successive observations, we instead use generalized estimating equations (GEEs) to estimate the parameters of our regression model. The approach is illustrated using a representative sample from a German LTC portfolio.
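
    The pseudo-value step itself is just the jackknife construction; the following minimal Python sketch uses a crude empirical state-occupation proportion as a stand-in for the Aalen-Johansen estimate, with illustrative names throughout (it is not the paper's code):

        import numpy as np

        def pseudo_values(data, estimator):
            """Jackknife pseudo-values: theta_i = n*theta_hat - (n-1)*theta_hat_(-i)."""
            n = len(data)
            theta_full = estimator(data)
            pv = np.empty(n)
            for i in range(n):
                loo = np.delete(data, i, axis=0)          # leave observation i out
                pv[i] = n * theta_full - (n - 1) * estimator(loo)
            return pv

        # Stand-in estimator: proportion of subjects observed in the care state at a
        # fixed time point (the paper uses the Aalen-Johansen transition matrix instead).
        rng = np.random.default_rng(1)
        in_care_at_t0 = rng.integers(0, 2, size=200)
        pv = pseudo_values(in_care_at_t0, lambda d: d.mean())
        # These pseudo-values would then be regressed on covariates with a logit-link
        # GEE, as described in the abstract.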

    Estimating Functions and Equations: An Essay on Historical Developments with Applications to Econometrics

    The idea of using estimating functions goes a long way back, at least to Karl Pearson's introduction of the method of moments in 1894. It is now a very active area of research in the statistics literature. One aim of this chapter is to provide an account of the developments relating to the theory of estimating functions. Starting from the simple case of a single parameter under independence, we cover the multiparameter case, the presence of nuisance parameters, and dependent data. Application of the estimating function technique to econometrics is still in its infancy. We illustrate, however, how this estimation approach can be used in a number of time series models, such as random coefficient, threshold, bilinear and autoregressive conditional heteroscedasticity models, as well as in models for spatial and longitudinal data and in median regression analysis. The chapter concludes with some remarks on the place of estimating functions in the history of estimation.
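
    For readers new to the topic, the basic single-parameter objects can be written as follows (standard textbook notation, added here for orientation rather than taken from the chapter): an unbiased estimating function, the estimating equation it induces, and Godambe's optimality criterion.

        % Unbiased estimating function and the resulting estimating equation
        \[
        \mathrm{E}_{\theta}\!\left[g(Y; \theta)\right] = 0, \qquad
        \sum_{i=1}^{n} g(y_i; \hat\theta) = 0 ,
        \]
        % Godambe criterion: the optimal g minimizes the variance of the standardized
        % estimating function; the score attains the minimum when the likelihood is known.
        \[
        \operatorname{Eff}(g) =
        \frac{\mathrm{E}_{\theta}\!\left[g(Y; \theta)^{2}\right]}
             {\left(\mathrm{E}_{\theta}\!\left[\partial g(Y; \theta)/\partial \theta\right]\right)^{2}} .
        \]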

    Some Recent Advances in Measurement Error Models and Methods

    A measurement error model is a regression model with (substantial) measurement errors in the variables. Disregarding these measurement errors when estimating the regression parameters results in asymptotically biased estimators. Several methods have been proposed to eliminate, or at least reduce, this bias, and the relative efficiency and robustness of these methods have been compared. The paper gives an account of these endeavors. In another context, when data are of a categorical nature, classification errors play a role similar to that of measurement errors in continuous data. The paper also reviews some recent advances in this field.
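
    The bias from disregarding measurement error is easy to reproduce in the simplest case. The simulation below (illustrative code, not from the paper: simple linear model, classical additive error, error variance assumed known) shows the naive slope shrinking toward zero by the reliability ratio and a corrected estimator removing that asymptotic bias.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        beta0, beta1 = 1.0, 2.0
        sigma_x2, sigma_u2 = 1.0, 0.5                  # variances of true regressor and error

        x = rng.normal(0.0, np.sqrt(sigma_x2), n)      # unobserved true regressor
        w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)  # observed error-prone surrogate
        y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)

        # Naive estimator: regress y on w as if it were error-free.
        beta1_naive = np.cov(w, y)[0, 1] / np.var(w)

        # Correction for attenuation, assuming the error variance sigma_u2 is known:
        # the naive slope converges to beta1 * sigma_x2 / (sigma_x2 + sigma_u2).
        reliability = (np.var(w) - sigma_u2) / np.var(w)
        beta1_corrected = beta1_naive / reliability

        print(beta1_naive, beta1_corrected)  # roughly 1.33 (biased) vs 2.0 (corrected)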

    Quasi-likelihood for Spatial Point Processes

    Fitting regression models for the intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation, which in practice is solved numerically. The approximate solution is equivalent to a quasi-likelihood for binary spatial data, and we therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient.
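
    In the notation commonly used for such intensity models (a sketch with generic symbols, not lifted from the paper; the paper's exact specification may differ): with a log-linear intensity $\rho(u;\beta)=\exp\{z(u)^{\top}\beta\}$, first-order estimating functions balance a sum over observed points against an intensity-weighted integral, and the optimal weight function solves a Fredholm integral equation of the second kind involving the pair correlation function $g$.

        % Class of unbiased first-order estimating functions (unbiased by Campbell's theorem)
        \[
        e_h(\beta) = \sum_{u \in X \cap W} h(u; \beta)
        - \int_{W} h(u; \beta)\, \rho(u; \beta)\, \mathrm{d}u ,
        \]
        % Fredholm equation (second kind) characterizing the optimal weight h
        \[
        h(u; \beta) + \int_{W} h(v; \beta)\,\big(g(u, v) - 1\big)\, \rho(v; \beta)\, \mathrm{d}v
        = \frac{\partial \log \rho(u; \beta)}{\partial \beta} .
        \]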