
    Flexible Tweedie regression models for continuous data

    Tweedie regression models provide a flexible family of distributions for non-negative, highly right-skewed data as well as symmetric and heavy-tailed data, and can handle continuous data with probability mass at zero. Estimation and inference for Tweedie regression models based on the maximum likelihood method are challenged by the presence of an infinite sum in the probability function and non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting Tweedie regression models, namely quasi-likelihood and pseudo-likelihood. We discuss the asymptotic properties of the two approaches and perform simulation studies to compare our methods with the maximum likelihood method. In particular, we show that the quasi-likelihood method provides asymptotically efficient estimation of the regression parameters. The computational implementation of the alternative methods is faster and easier than orthodox maximum likelihood, relying on a simple Newton scoring algorithm. Simulation studies show that the quasi- and pseudo-likelihood approaches yield estimates, standard errors and coverage rates similar to those of the maximum likelihood method. Furthermore, the second-moment assumptions required by the quasi- and pseudo-likelihood methods enable us to extend Tweedie regression models to the class of quasi-Tweedie regression models in Wedderburn's style. Moreover, they allow us to eliminate the non-trivial restriction on the power parameter space, and thus provide a flexible regression model for continuous data. We provide an \texttt{R} implementation and illustrate the application of Tweedie regression models using three data sets.
    Comment: 34 pages, 8 figures
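
    As a rough illustration of the kind of model this paper targets (not the authors' quasi-/pseudo-likelihood implementation), the sketch below fits a Tweedie GLM with a log link and a fixed power parameter in Python using statsmodels; the simulated response and the choice var_power=1.5 are assumptions for the example only, whereas the paper's methods also estimate the power parameter.

        # Minimal sketch: Tweedie GLM with a fixed power parameter.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        X = sm.add_constant(rng.normal(size=(n, 2)))   # design matrix with intercept
        mu = np.exp(X @ np.array([0.5, 0.3, -0.2]))    # log-link mean
        y = rng.gamma(shape=2.0, scale=mu / 2.0)       # right-skewed stand-in data
        y[rng.random(n) < 0.1] = 0.0                   # probability mass at zero

        # var_power is held fixed at 1.5 here; with 1 < p < 2 the family allows exact zeros.
        model = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5))
        result = model.fit()
        print(result.params)                           # regression coefficients
        print(result.bse)                              # standard errors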

    Disturbing Extremal Behavior of Spot Rate Dynamics

    This paper presents a study of extreme interest rate movements in the U.S. federal funds market over almost half a century of daily observations, from the mid-1950s through the end of 2000. We analyze the fluctuations of the maximal and minimal changes in short-term interest rates and test the significance of time-varying paths followed by the mean and volatility of extremes. We formally determine the relevance of introducing trend and serial correlation in the mean, and of incorporating the level and GARCH effects in the volatility of extreme changes in the federal funds rate. The empirical findings indicate the existence of volatility clustering in the standard deviation of extremes, and a significantly positive relationship between the level and the volatility of extremes. The results point to the presence of an autoregressive process in the means of both local maxima and local minima. The paper proposes a conditional extreme value approach to calculating value at risk (VaR) by specifying the location and scale parameters of the generalized Pareto distribution as functions of past information. Based on the estimated VaR thresholds, the statistical theory of extremes is found to provide more accurate estimates of the rate of occurrence and the size of extreme observations.
    Keywords: extreme value theory, volatility, interest rates, value at risk
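
    To make the VaR construction concrete, the sketch below computes an unconditional peaks-over-threshold VaR from a fitted generalized Pareto distribution; the simulated heavy-tailed changes, the 95% threshold and the 99% VaR level are assumptions for the example, and the paper's conditional approach additionally lets the GPD location and scale depend on past information.

        # Minimal sketch: static peaks-over-threshold VaR from a fitted GPD.
        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(1)
        changes = 0.05 * rng.standard_t(df=3, size=12000)  # stand-in for daily rate changes

        u = np.quantile(changes, 0.95)                     # threshold for the upper tail
        exceed = changes[changes > u] - u                  # exceedances over the threshold
        xi, _, sigma = genpareto.fit(exceed, floc=0.0)     # GPD shape and scale (location fixed at 0)

        q = 0.99                                           # VaR confidence level
        n, n_u = changes.size, exceed.size
        # POT formula: VaR_q = u + (sigma/xi) * (((n/n_u)*(1-q))**(-xi) - 1)
        var_q = u + (sigma / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
        print(round(var_q, 4))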

    Multilevel IRT Modeling in Practice with the Package mlirt

    Variance component models are generally accepted for the analysis of hierarchically structured data. A shortcoming is that the outcome variables are still treated as measured without error. Unreliable outcome variables produce biases in the estimates of the other model parameters. The variability of the relationships across groups and the group effects on individuals' outcomes can differ substantially once the measurement error in the dependent variable of the model is taken into account. The multilevel model can be extended to handle measurement error using an item response theory (IRT) model, leading to a multilevel IRT model. This extended multilevel model is particularly suitable for the analysis of educational response data where students are nested within schools and schools are nested within cities or countries.
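
    A minimal sketch of such a model, assuming a Rasch measurement model and students nested within schools (the notation is illustrative, not the \texttt{mlirt} package's):

        \[
          \Pr(Y_{pij} = 1 \mid \theta_{ij}) = \frac{\exp(\theta_{ij} - b_p)}{1 + \exp(\theta_{ij} - b_p)},
        \]
        \[
          \theta_{ij} = \beta_{0j} + e_{ij}, \quad e_{ij} \sim N(0, \sigma^2),
          \qquad
          \beta_{0j} = \gamma_{00} + u_{0j}, \quad u_{0j} \sim N(0, \tau^2),
        \]

    for item $p$, student $i$ and school $j$: the IRT measurement model replaces the error-prone observed score by the latent ability $\theta_{ij}$, which then enters the variance component (multilevel) model.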

    AIC, Cp and estimators of loss for elliptically symmetric distributions

    In this article, we develop a modern perspective on Akaike's Information Criterion (AIC) and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. Moreover, this family of laws allows for some dependence between the observations, a case not often studied.
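
    For concreteness, in the Gaussian linear model with known variance $\sigma^2$, a submodel with $p$ regressors and residual sum of squares $\mathrm{RSS}_p$ gives (a standard computation sketched here, not taken from the paper):

        \[
          C_p = \frac{\mathrm{RSS}_p}{\sigma^2} - n + 2p,
          \qquad
          \mathrm{AIC} = \frac{\mathrm{RSS}_p}{\sigma^2} + 2p + \mathrm{const},
          \qquad
          \hat{L}_p = \mathrm{RSS}_p + (2p - n)\,\sigma^2,
        \]

    so that $\hat{L}_p = \sigma^2 C_p$ and $\mathrm{AIC} = C_p + n + \mathrm{const}$; all three criteria are minimized by the same submodel, which is the equivalence the paper starts from.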

    Cramér-Rao-Type Bounds for Sparse Bayesian Learning

    In this paper, we derive Hybrid, Bayesian and Marginalized Cramér-Rao lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement vector Sparse Bayesian Learning (SBL) problem of estimating compressible vectors and their prior distribution parameters. We assume the unknown vector to be drawn from a compressible Student-t prior distribution. We derive CRBs that encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We extend the MCRB to the case where the compressible vector is distributed according to a general compressible prior distribution, of which the generalized Pareto distribution is a special case. We use the derived bounds to uncover the relationship between compressibility and the Mean Square Error (MSE) of the estimates. Further, we illustrate the tightness and utility of the bounds through simulations, by comparing them with the MSE performance of two popular SBL-based estimators. We find that the MCRB is generally the tightest among the derived bounds and that the MSE performance of the Expectation-Maximization (EM) algorithm coincides with the MCRB for the compressible vector. Through simulations, we demonstrate the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector for several values of the number of observations and at different signal powers.
    Comment: Accepted for publication in the IEEE Transactions on Signal Processing; 11 pages, 10 figures
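
    For reference, the general shape such bounds take (our notation, not the paper's): for a random vector $x$ with prior $p(x)$ and observations $y$, the Bayesian CRB reads

        \[
          \mathbb{E}\big[(\hat{x} - x)(\hat{x} - x)^{\mathsf{T}}\big] \succeq J_B^{-1},
          \qquad
          J_B = \mathbb{E}\!\left[ -\frac{\partial^2 \log p(y, x)}{\partial x \, \partial x^{\mathsf{T}}} \right],
        \]

    with the expectation taken over the joint density $p(y, x)$; the hybrid and marginalized variants differ in which parameters are treated as random and in whether nuisance parameters are marginalized out before forming the information matrix.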

    A Flexible Parametric GARCH Model with an Application to Exchange Rates

    Keywords: International Relations/Trade, Research Methods/Statistical Methods