
    Random variate generation and connected computational issues for the Poisson–Tweedie distribution

    After providing a systematic outline of the stochastic genesis of the Poisson–Tweedie distribution, some computational issues are considered. More specifically, we introduce a closed form for the probability function, as well as its corresponding integral representation, which may be useful for large argument values. Several algorithms for generating Poisson–Tweedie random variates are also suggested. Finally, count data connected to the citation profiles of two statistical journals are modeled and analyzed by means of the Poisson–Tweedie distribution.
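
    As a concrete illustration of the mixture genesis outlined above, the following minimal Python sketch draws Poisson–Tweedie variates by first sampling a Tweedie (compound Poisson-gamma) intensity and then a Poisson count given that intensity. The (mu, phi, p) parameterisation and the restriction to 1 < p < 2 are our simplifying assumptions, not necessarily the parameterisation used in the paper.

        import numpy as np

        def rtweedie_cpg(n, mu, phi, p, rng):
            """Tweedie (compound Poisson-gamma) variates for 1 < p < 2,
            built as a Poisson number of gamma jumps (assumed parameterisation)."""
            lam = mu ** (2 - p) / (phi * (2 - p))      # expected number of gamma jumps
            alpha = (2 - p) / (p - 1)                  # shape of each gamma jump
            scale = phi * (p - 1) * mu ** (p - 1)      # scale of each gamma jump
            counts = rng.poisson(lam, size=n)
            out = np.zeros(n)                          # zero jumps -> an exact zero
            pos = counts > 0
            out[pos] = rng.gamma(shape=counts[pos] * alpha, scale=scale)
            return out

        def rpoisson_tweedie(n, mu, phi, p, rng):
            """Poisson-Tweedie counts: N | Lambda ~ Poisson(Lambda), Lambda ~ Tweedie."""
            return rng.poisson(rtweedie_cpg(n, mu, phi, p, rng))

        rng = np.random.default_rng(0)
        sample = rpoisson_tweedie(10_000, mu=2.0, phi=1.5, p=1.6, rng=rng)
        print(sample.mean(), sample.var())             # about mu and mu + phi * mu**p

    Given the jump count, the sum of independent gamma jumps with a common scale is again gamma distributed, which is why a single gamma draw per observation suffices.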

    Generalised linear models for aggregate claims; to Tweedie or not?

    The compound Poisson distribution with gamma claim sizes is a very common model for premium estimation in Property and Casualty insurance. Under this distributional assumption, generalised linear models (GLMs) are used to estimate the mean claim frequency and severity, and these estimators are then simply multiplied to estimate the mean aggregate loss. The Tweedie distribution makes it possible to parametrise the compound Poisson-gamma (CPG) distribution as a member of the exponential dispersion family and then fit a GLM with a CPG distribution for the response. Thus, with the Tweedie distribution it is possible to estimate the mean aggregate loss using GLMs directly, without needing to first estimate the mean frequency and severity separately. The purpose of this educational note is to explore the differences between these two estimation methods, contrasting the advantages and disadvantages of each.
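
    The two routes contrasted in the note can be sketched on simulated policy-level data. The snippet below uses statsmodels; the single rating factor, the severity distribution and the fixed var_power = 1.5 are illustrative assumptions of ours, not choices taken from the note.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 5_000
        x = rng.binomial(1, 0.4, n)                    # one illustrative rating factor
        X = sm.add_constant(pd.DataFrame({"x": x}))

        freq = rng.poisson(np.exp(-1.0 + 0.5 * x))     # claim counts per policy
        agg = np.array([rng.gamma(2.0, 500.0, k).sum() for k in freq])  # aggregate loss

        # Route 1: separate frequency and severity GLMs, fitted means multiplied.
        m_freq = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
        claims = freq > 0
        m_sev = sm.GLM(agg[claims] / freq[claims], X[claims],
                       family=sm.families.Gamma(link=sm.families.links.Log()),
                       freq_weights=freq[claims]).fit()
        pure_premium_fs = m_freq.predict(X) * m_sev.predict(X)

        # Route 2: a single Tweedie GLM on the aggregate loss (power parameter fixed here).
        m_tw = sm.GLM(agg, X,
                      family=sm.families.Tweedie(var_power=1.5,
                                                 link=sm.families.links.Log())).fit()
        pure_premium_tw = m_tw.predict(X)

        print(pure_premium_fs[:3])
        print(pure_premium_tw[:3])

    In practice the Tweedie power parameter would usually be estimated, for example by profiling over a grid, rather than fixed in advance as it is in this sketch.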

    Flexible Tweedie regression models for continuous data

    Tweedie regression models provide a flexible family of distributions to deal with non-negative, highly right-skewed data as well as symmetric and heavy-tailed data, and can handle continuous data with probability mass at zero. The estimation and inference of Tweedie regression models based on the maximum likelihood method are challenged by the presence of an infinite sum in the probability function and non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting Tweedie regression models, namely quasi- and pseudo-likelihood. We discuss the asymptotic properties of the two approaches and perform simulation studies to compare our methods with the maximum likelihood method. In particular, we show that the quasi-likelihood method provides asymptotically efficient estimation for regression parameters. The computational implementation of the alternative methods is faster and easier than the orthodox maximum likelihood, relying on a simple Newton scoring algorithm. Simulation studies showed that the quasi- and pseudo-likelihood approaches give estimates, standard errors and coverage rates similar to the maximum likelihood method. Furthermore, the second-moment assumptions required by the quasi- and pseudo-likelihood methods enable us to extend the Tweedie regression models to the class of quasi-Tweedie regression models in Wedderburn's style. Moreover, they allow the non-trivial restriction on the power parameter space to be eliminated, and thus provide a flexible regression model to deal with continuous data. We provide an R implementation and illustrate the application of Tweedie regression models using three data sets.
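
    In our notation (not necessarily the paper's), the second-moment assumptions that drive the quasi-likelihood approach, and the estimating equations they lead to for the regression coefficients, can be sketched as

        \mathrm{E}(Y_i) = \mu_i = g^{-1}(x_i^\top \beta), \qquad
        \operatorname{Var}(Y_i) = \phi\, \mu_i^{\,p},

        \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta}\,
        \frac{y_i - \mu_i}{\phi\, \mu_i^{\,p}} = 0 .

    Because only the mean and the power-variance relationship enter these equations, the variance power p is not restricted to values for which a genuine Tweedie density exists, which is the sense in which the quasi-likelihood route relaxes the parameter-space restriction; it is also why a Newton or Fisher scoring iteration needs nothing beyond these two moments.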

    A generalized Fellner-Schall method for smoothing parameter estimation with application to Tweedie location, scale and shape models

    We consider the estimation of smoothing parameters and variance components in models with a regular log-likelihood subject to quadratic penalization of the model coefficients, via a generalization of the method of Fellner (1986) and Schall (1991). In particular: (i) we generalize the original method to the case of penalties that are linear in several smoothing parameters, thereby covering the important cases of tensor product and adaptive smoothers; (ii) we show why the method's steps increase the restricted marginal likelihood of the model, show that it tends to converge faster than the EM algorithm or obvious accelerations of it, and investigate its relation to Newton optimization; (iii) we generalize the method to any Fisher regular likelihood. The method represents a considerable simplification over existing methods of estimating smoothing parameters in the context of regular likelihoods, without sacrificing generality: for example, it is only necessary to compute with the same first and second derivatives of the log-likelihood required for coefficient estimation, and not with the third- or fourth-order derivatives required by alternative approaches. Examples are provided which would have been impossible or impractical with pre-existing Fellner-Schall methods, along with an example of a Tweedie location, scale and shape model which would be a challenge for alternative methods.
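
    As a rough sketch in our own notation (the precise statement and regularity conditions are in the paper), with penalized log-likelihood \ell(\beta) - \tfrac{1}{2}\sum_j \lambda_j \beta^\top S_j \beta, the generalized Fellner-Schall step updates each smoothing parameter multiplicatively:

        \lambda_j^{\text{new}} \;=\; \lambda_j\,
        \frac{\operatorname{tr}\!\left(S_\lambda^{-} S_j\right)
              - \operatorname{tr}\!\left\{(\hat{\mathcal{H}} + S_\lambda)^{-1} S_j\right\}}
             {\hat\beta^\top S_j \hat\beta},
        \qquad S_\lambda = \sum_j \lambda_j S_j,

    where \hat{\mathcal{H}} denotes the negative Hessian of the log-likelihood at the current penalized estimate \hat\beta and S_\lambda^{-} is a generalized inverse of the total penalty matrix. Only \hat{\mathcal{H}} and \hat\beta, the same quantities already needed for coefficient estimation, appear in the update, which is the point made above about avoiding third- and fourth-order derivatives.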