
    Statistical inference for negative binomial processes with applications to market research

    The negative binomial distribution (NBD) and negative binomial processes have been used as natural models for events occurring in fields such as accident proneness, accidents and sickness, market research, and insurance and risk theory. The fitting of negative binomial processes in practice has mainly focussed on fitting the one-dimensional distribution, namely the NBD, to data. In practice, the parameters of the NBD are usually estimated by inefficient moment-based methods, because moment-based estimators are easier to compute than maximum likelihood estimators. This thesis develops efficient moment-based estimation methods for the parameters of the NBD that can be easily implemented in practice. These estimators, called power method estimators, are almost as efficient as maximum likelihood estimators when the sample is independent and identically distributed. For dependent NBD samples, the power method estimators are more efficient than the commonly used method of moments and zero term method estimators. Fitting the one-dimensional marginal distribution of a negative binomial process to data gives only partial information about the adequacy of the process being fitted. This thesis therefore further develops methods of statistical inference for data generated by negative binomial processes by comparing the dynamical properties of the process to those of the data. For negative binomial autoregressive processes, the dynamical properties may be checked using the autocorrelation function. The dynamical properties of the gamma Poisson process are considered by deriving the asymptotic covariance and correlation structures of estimators and functionals of the process and verifying these structures against data. The adequacy of two negative binomial processes, namely the gamma Poisson process and the negative binomial first-order autoregressive process, as models for consumer buying behavior is considered. The models are fitted to market research data kindly provided by ACNielsen BASES.
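    As a concrete illustration of the moment-based estimation the abstract contrasts with maximum likelihood, here is a minimal sketch of the classical method-of-moments fit for an i.i.d. NBD sample (the parameter values, sample size, and parametrization by mean `mu` and shape `k` are illustrative choices, not taken from the thesis):

    ```python
    import numpy as np

    def nbd_method_of_moments(x):
        """Method-of-moments estimates for an NBD with mean mu and shape k,
        using Var(X) = mu + mu**2 / k."""
        mu_hat = x.mean()
        s2 = x.var(ddof=1)
        if s2 <= mu_hat:
            raise ValueError("sample shows no overdispersion; NBD fit is degenerate")
        k_hat = mu_hat**2 / (s2 - mu_hat)
        return mu_hat, k_hat

    # Simulate an i.i.d. NBD sample. NumPy parametrizes by (n, p) with
    # mean n*(1-p)/p, so p = k / (k + mu) gives mean mu and shape k.
    rng = np.random.default_rng(0)
    mu, k = 2.0, 1.5
    x = rng.negative_binomial(n=k, p=k / (k + mu), size=200_000)

    mu_hat, k_hat = nbd_method_of_moments(x)
    ```

    The appeal mentioned in the abstract is visible here: the estimates require only the sample mean and variance, whereas maximum likelihood needs a numerical solver for the shape parameter.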

    Breaking the Waves: A Poisson Regression Approach to Schumpeterian Clustering of Basic Innovations

    The Schumpeterian theory of long waves has given rise to an intense debate on the existence of clusters of basic innovations. Silverberg and Lehnert have criticized the empirical part of this literature on several methodological accounts. In this paper, we propose the methodology of Poisson regression as a logical way to incorporate this criticism. We construct a new time series for basic innovations (based on previously used time series), and use this to test the hypothesis that basic innovations cluster in time. We define the concept of clustering in various precise ways before undertaking the statistical tests. The evidence we find only supports the ‘weakest’ of our clustering hypotheses, i.e., that the data display overdispersion. We thus conclude that the authors who have argued that a long wave in economic life is driven by clusters of basic innovations have stretched the statistical evidence too far.
    Keywords: research and development
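    The ‘weakest’ hypothesis the abstract refers to, overdispersion relative to a Poisson model, can be checked with a standard chi-square dispersion test. The sketch below is a generic version of such a test, not the paper's own regression specification, and the simulated series are placeholders for the basic-innovation counts:

    ```python
    import numpy as np
    from scipy import stats

    def dispersion_test(counts):
        """Chi-square dispersion test against the Poisson null.
        Under Poisson, D = sum((x - xbar)^2 / xbar) ~ chi2(n - 1)."""
        counts = np.asarray(counts, dtype=float)
        n, xbar = len(counts), counts.mean()
        D = ((counts - xbar) ** 2).sum() / xbar
        p_value = stats.chi2.sf(D, df=n - 1)
        return D, p_value

    rng = np.random.default_rng(1)
    # Overdispersed series (negative binomial, mean 3, variance 7.5).
    over = rng.negative_binomial(n=2.0, p=0.4, size=100)
    # Equidispersed series (Poisson) with the same mean.
    equi = rng.poisson(lam=3.0, size=100)

    _, p_over = dispersion_test(over)
    _, p_equi = dispersion_test(equi)
    ```

    A small p-value flags variance in excess of the mean; by itself, as the paper argues, this is far weaker evidence than genuine temporal clustering.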

    Treating missing values in INAR(1) models

    Time series models for count data have attracted increased interest in recent years. The existing literature refers to the case of fully observed data. In the present paper, methods for estimating the parameters of the first-order integer-valued autoregressive (INAR(1)) model in the presence of missing data are proposed. The first method maximizes a conditional likelihood constructed via the observed data, based on the k-step-ahead conditional distributions, to account for the gaps in the data. The second approach is based on an iterative scheme in which missing values are imputed in order to update the estimated parameters. The first method is useful when the predictive distributions have simple forms; we derive this approach in full detail when the innovations are assumed to follow a finite mixture of Poisson distributions. The second method is applicable when there are no closed-form expressions for the conditional likelihood, or they are hard to derive. Simulation results and comparisons of the methods are reported. The proposed methods are applied to a data set concerning syndromic surveillance during the Athens 2004 Olympic Games.
    Keywords: imputation; Markov chain EM algorithm; mixed Poisson; discrete-valued time series
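    For readers unfamiliar with the INAR(1) model underlying the abstract, here is a minimal sketch with plain Poisson innovations (not the finite-mixture innovations of the paper): binomial-thinning simulation, simple moment estimation, and the k-step-ahead conditional mean that drives gap-filling ideas like those described above. All parameter values are arbitrary illustrative choices.

    ```python
    import numpy as np

    def simulate_inar1(alpha, lam, T, rng):
        """INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is binomial
        thinning and eps_t ~ Poisson(lam)."""
        x = np.empty(T, dtype=int)
        x[0] = rng.poisson(lam / (1 - alpha))   # start near the stationary mean
        for t in range(1, T):
            x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
        return x

    def inar1_moment_estimates(x):
        """Moment estimators: alpha from the lag-1 autocorrelation,
        lam from the stationary mean lam / (1 - alpha)."""
        xbar = x.mean()
        alpha_hat = ((x[:-1] - xbar) * (x[1:] - xbar)).sum() / ((x - xbar) ** 2).sum()
        lam_hat = xbar * (1 - alpha_hat)
        return alpha_hat, lam_hat

    def conditional_mean_imputation(x_last, k, alpha, lam):
        """k-step-ahead conditional mean, usable to fill a gap of length k:
        E[X_{t+k} | X_t] = alpha**k * X_t + lam * (1 - alpha**k) / (1 - alpha)."""
        return alpha**k * x_last + lam * (1 - alpha**k) / (1 - alpha)

    rng = np.random.default_rng(2)
    x = simulate_inar1(alpha=0.5, lam=2.0, T=50_000, rng=rng)
    alpha_hat, lam_hat = inar1_moment_estimates(x)
    ```

    The conditional mean decays geometrically from the last observed value toward the stationary mean, which is why the paper's first method needs tractable k-step-ahead conditional distributions.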

    Latent Gaussian Count Time Series Modeling

    This paper develops theory and methods for the copula modeling of stationary count time series. The techniques use a latent Gaussian process and a distributional transformation to construct stationary series with very flexible correlation features that can have any pre-specified marginal distribution, including the classical Poisson, generalized Poisson, negative binomial, and binomial count structures. A Gaussian pseudo-likelihood estimation paradigm, based only on the mean and autocovariance function of the count series, is developed via some new Hermite expansions. Particle filtering methods are studied to approximate the true likelihood of the count series. Here, connections to hidden Markov models and other copula likelihood approximations are made. The efficacy of the approach is demonstrated, and the methods are used to analyze a count series containing the annual number of no-hitter baseball games pitched in Major League Baseball since 1893.
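    The construction in the abstract can be sketched in a few lines: drive a latent stationary Gaussian AR(1) process, push it through the standard normal CDF, and apply the quantile function of the target count marginal. This is a generic illustration of the latent Gaussian transform with a Poisson marginal and AR(1) latent dynamics; the parameter values are arbitrary, and the paper's estimation machinery (Hermite expansions, particle filtering) is not shown.

    ```python
    import numpy as np
    from scipy import stats

    def latent_gaussian_counts(lam, phi, T, rng):
        """Count series with a Poisson(lam) marginal driven by a latent
        stationary Gaussian AR(1): Z_t = phi * Z_{t-1} + sqrt(1-phi^2) * e_t,
        then X_t = F^{-1}(Phi(Z_t)) with F the Poisson(lam) CDF."""
        z = np.empty(T)
        z[0] = rng.standard_normal()
        innov_sd = np.sqrt(1 - phi**2)      # keeps Var(Z_t) = 1 for all t
        for t in range(1, T):
            z[t] = phi * z[t - 1] + innov_sd * rng.standard_normal()
        u = stats.norm.cdf(z)                # uniform marginals via Phi
        return stats.poisson.ppf(u, lam).astype(int)

    rng = np.random.default_rng(3)
    x = latent_gaussian_counts(lam=3.0, phi=0.7, T=100_000, rng=rng)
    ```

    Because the transform is monotone, the Poisson marginal is preserved exactly, while the count autocorrelation inherits (a slightly attenuated version of) the latent AR(1) correlation.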