
    An investigation of L-moments and the generalized logistic distribution: applied as a new way to model ice strength

    In cold ocean research, the development of statistical techniques useful in the analysis of cold ocean data such as ice strength is of considerable practical concern. A central interest is the identification and fitting of suitable models for the analysis of such data. In this practicum we study the theory of L-Moments as a method of distributional identification and parameter estimation. In particular, the Generalized Logistic Distribution (GLD) is fitted to nine data sets consisting of breaking strength measurements of different types of ice using the method of L-Moments. The results compare favorably to the original analysis of the data, which was based on Maximum Likelihood fitting of the Weibull distribution. The asymptotic distribution of the L-Moment estimators is derived, and a test for the symmetry of the GLD, based on these asymptotic results, is developed. A Monte Carlo simulation study demonstrates the performance of the method of L-Moments for the estimation of the parameters of the GLD and compares it to the method of Maximum Likelihood and the method of Moments. L-Moment estimators are easy to compute and perform consistently well across a wide range of parameter values. The method was found to be simple and reliable for estimation and distributional identification, and thus provides an attractive alternative to the standard techniques. The application of this method to real data illustrates its implementation and the contexts in which it is useful.
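    The abstract does not spell out the estimation formulas. Below is a minimal sketch, in Python, of the method of L-Moments applied to the generalized logistic distribution, assuming the standard Hosking-style parameterization (location xi, scale alpha, shape k) and the usual probability-weighted-moment estimators of the sample L-moments; the data array `ice_strength` is hypothetical, and the k -> 0 (ordinary logistic) limit is not handled.

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def fit_glo(x):
    """Fit the generalized logistic distribution (xi, alpha, k) by matching
    the first three L-moments (standard Hosking-style relations assumed):
        tau_3 = -k,  lambda_2 = alpha*k*pi/sin(k*pi),
        lambda_1 = xi + alpha*(1/k - pi/sin(k*pi))."""
    l1, l2, l3 = sample_l_moments(x)
    k = -l3 / l2                                  # shape from L-skewness
    alpha = l2 * np.sin(k * np.pi) / (k * np.pi)  # scale
    xi = l1 - alpha * (1.0 / k - np.pi / np.sin(k * np.pi))  # location
    return xi, alpha, k

# usage with a hypothetical array of breaking-strength measurements:
# xi, alpha, k = fit_glo(ice_strength)
```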

    A comparison of the L2 minimum distance estimator and the EM-algorithm when fitting k-component univariate normal mixtures

    The method of maximum likelihood using the EM-algorithm for fitting finite mixtures of normal distributions has been the accepted method of estimation ever since it was shown to be superior to the method of moments; recent books testify to this. There has, however, been criticism of the method of maximum likelihood for this problem, the main criticism being that when the variances of the component distributions are unequal the likelihood is in fact unbounded and there can be multiple local maxima. Another major criticism is that the maximum likelihood estimator is not robust. Several alternative minimum distance estimators have since been proposed as a way of dealing with the first problem. This paper deals with one of these estimators, which is not only superior in terms of robustness but can also have an advantage in numerical studies even at the model distribution. Importantly, robust alternatives to the EM-algorithm, ostensibly fitting t distributions when in fact the data are mixtures of normals, are also not competitive at the normal mixture model when compared to the chosen minimum distance estimator. It is argued, for instance, that natural processes should lead to mixtures whose component distributions are normal as a result of the Central Limit Theorem. On the other hand, data can be contaminated by extraneous sources, as is typically assumed in robustness studies. This calls for a robust estimator.
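    The paper's exact estimator is not reproduced here; the following minimal sketch assumes the L2 (integrated squared error) criterion for a k-component univariate normal mixture, which admits the closed-form squared-density integral used below. The optimizer choice, parameterization and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def l2_criterion(theta, x, k):
    """L2 (integrated squared error) criterion for a k-component normal mixture,
    up to a constant not depending on theta:
        int f_theta^2 dx - (2/n) * sum_i f_theta(x_i).
    theta packs (unnormalized log-weights, means, log-standard-deviations)."""
    w = np.exp(theta[:k]); w /= w.sum()
    mu = theta[k:2 * k]
    sd = np.exp(theta[2 * k:3 * k])
    # closed form: int N(.;mu_j,sd_j^2) N(.;mu_l,sd_l^2) dx = N(mu_j; mu_l, sd_j^2+sd_l^2)
    int_f2 = sum(w[j] * w[l] * norm.pdf(mu[j], mu[l], np.sqrt(sd[j]**2 + sd[l]**2))
                 for j in range(k) for l in range(k))
    f_x = sum(w[j] * norm.pdf(x, mu[j], sd[j]) for j in range(k))
    return int_f2 - 2.0 * np.mean(f_x)

def fit_l2_mixture(x, k, start):
    """Minimize the L2 criterion from a user-supplied starting vector."""
    return minimize(l2_criterion, start, args=(np.asarray(x, float), k),
                    method="Nelder-Mead")

# usage for k = 2 with hypothetical data `x` and illustrative starting values:
# res = fit_l2_mixture(x, 2, np.r_[0.0, 0.0, 0.0, 3.0, 0.0, 0.0])
```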

    Econometric analysis of measurement error in panel data

    Panel data consist of measurements taken from several individuals over time. Correlations among measurements taken from the same individual are often accounted for using random effect and random coefficient models. Panel data analysis that accounts for measurement error in the explanatory variables has not been thoroughly studied. This dissertation investigates statistical issues associated with two types of measurement error models for panel data. The first paper considers identification and estimation of a random effect model when some explanatory variables are measured with error. Here, individual heterogeneity is assumed to be manifested in intercepts that randomly differ across individuals. Identification of model parameters given the first two moments of the observed variables is examined, and relatively unrestrictive sufficient conditions for identification are obtained. Estimation based on maximum normal likelihood is proposed. This method can be easily implemented using available computer packages that perform moment structure analysis. Compared to the only existing procedure, which is based on instrumental variables, the new method is shown to be more efficient and to have much wider applicability. Standard error estimates and goodness-of-fit statistics obtained under the assumption of normally distributed observations are shown to be asymptotically valid for a broad class of non-normal observations. Simulation results demonstrating the efficiency and usefulness of the new procedure are presented. The second paper deals with the random coefficient model with measurement error, where all regression coefficients randomly differ across individuals. Two procedures are proposed for model fitting and estimation. The generalized least squares method is developed for the first two sample moments with a distribution-free estimate of the weight. Since this method tends to yield very variable estimates in small samples, an alternative method, the pseudo maximum normal likelihood procedure, is also developed. The latter, obtained by maximizing a hypothetical normal likelihood for the first two sample moments, produces relatively stable estimates in most samples. Asymptotic properties of the two procedures are derived and are used to obtain valid standard errors of the estimators. Numerical results showing the finite-sample properties of these estimators are also reported.
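    The dissertation's moment-structure and GLS estimators are not reproduced here; as a minimal sketch of the underlying problem, the following simulation (all parameter values hypothetical) shows how measurement error in the explanatory variable of a random-intercept panel attenuates the naive pooled slope estimate toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_id, n_t = 500, 5                                # individuals, time periods (hypothetical)
beta = 1.0                                        # true slope
sigma_u, sigma_e, sigma_m = 1.0, 0.5, 0.7         # random intercept, noise, measurement error

a = rng.normal(0, sigma_u, n_id)[:, None]         # random intercepts
x_true = rng.normal(0, 1, (n_id, n_t))            # true regressor, unit variance
y = a + beta * x_true + rng.normal(0, sigma_e, (n_id, n_t))
x_obs = x_true + rng.normal(0, sigma_m, (n_id, n_t))  # observed with error

# Naive pooled OLS on the observed regressor: the slope is attenuated by roughly
# var(x_true) / (var(x_true) + var(measurement error)).
b_naive = np.polyfit(x_obs.ravel(), y.ravel(), 1)[0]
print(round(b_naive, 3), "vs expected ~", round(beta / (1 + sigma_m**2), 3))
```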

    Dark matter distribution in dwarf spheroidal galaxies

    We study the distribution of dark matter in dwarf spheroidal galaxies by modelling the moments of their line-of-sight velocity distributions. We discuss different dark matter density profiles, both cuspy and possessing flat density cores. The predictions are made in the framework of standard dynamical theory of two-component (stars and dark matter) spherical systems with different velocity distributions. We compare the predicted velocity dispersion profiles to observations in the case of the Fornax and Draco dwarfs. For isotropic models the dark haloes with cores are found to fit the data better than those with cusps. Anisotropic models are studied by fitting two parameters, dark mass and velocity anisotropy, to the data. In this case all profiles yield good fits, but the steeper the cusp of the profile, the more tangential the velocity distribution required to fit the data. To resolve this well-known degeneracy between density profile and velocity anisotropy we obtain predictions for the kurtosis of the line-of-sight velocity distribution for the models found to provide the best fits to the velocity dispersion profiles. It turns out that profiles with cores typically yield higher values of kurtosis, which decrease more steeply with distance than for the cuspy profiles; this will make it possible to discriminate between the profiles once kurtosis measurements become available. We also show that with the present quality of the data the alternative explanation of the velocity dispersions in terms of Modified Newtonian Dynamics cannot yet be ruled out. Comment: 13 pages, 9 figures, 3 tables, accepted for publication in MNRAS. Significantly revised, conclusions weakened, predictions for the kurtosis of the line-of-sight velocity distribution added.
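    The paper's full line-of-sight projection and anisotropic modelling are not reproduced here; the sketch below only illustrates the isotropic spherical Jeans equation for the radial velocity dispersion of a tracer population in a dark halo, assuming a Plummer stellar profile and an NFW halo with purely illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def plummer_density(r, a=0.5):
    """Plummer stellar density profile (assumed tracer), scale radius a in kpc."""
    return 3.0 / (4 * np.pi * a**3) * (1 + (r / a)**2) ** (-2.5)

def m_nfw(r, rho_s=1e7, r_s=1.0):
    """Enclosed mass of an NFW halo in Msun; r and r_s in kpc (illustrative values)."""
    x = r / r_s
    return 4 * np.pi * rho_s * r_s**3 * (np.log(1 + x) - x / (1 + x))

def sigma_r(r, r_max=50.0):
    """Isotropic radial dispersion from the spherical Jeans equation:
       rho_*(r) sigma_r^2(r) = int_r^{r_max} rho_*(s) G M(s) / s^2 ds."""
    integrand = lambda s: plummer_density(s) * G * m_nfw(s) / s**2
    val, _ = quad(integrand, r, r_max)
    return np.sqrt(val / plummer_density(r))

# dispersion profile in km/s at a few radii (kpc)
print([round(sigma_r(r), 1) for r in np.linspace(0.1, 2.0, 8)])
```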

    A semi-parametric model for circular data based on mixtures of beta distributions

    This paper introduces a new, semi-parametric model for circular data, based on mixtures of shifted, scaled beta (SSB) densities. This model is more general than the Bernstein polynomial density model, which is well known to provide good approximations to any density with finite support, and it is shown that, as for the Bernstein polynomial model, all trigonometric moments of the SSB mixture model can be derived. Two methods of fitting the SSB mixture model are considered. First, a classical maximum likelihood approach for fitting mixtures with a given number of SSB components is introduced, with the Bayesian information criterion used for model selection. Second, a Bayesian approach using Gibbs sampling is considered, in which the number of mixture components is selected via an appropriate deviance information criterion. Both approaches are illustrated with real data sets and the results are compared with those obtained using Bernstein polynomials and mixtures of von Mises distributions.
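    The paper derives the trigonometric moments of the SSB mixture analytically; the following minimal sketch only defines a mixture of shifted, scaled beta densities on the circle and computes its first trigonometric moment numerically, with hypothetical component parameters.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def ssb_mixture_pdf(theta, weights, shifts, a, b):
    """Mixture of shifted, scaled beta (SSB) densities on [0, 2*pi):
    each component is a Beta(a_j, b_j) density scaled to the circumference
    and shifted by shifts[j], wrapping modulo 2*pi."""
    theta = np.asarray(theta, float)
    dens = np.zeros_like(theta)
    for w, s, aj, bj in zip(weights, shifts, a, b):
        u = np.mod(theta - s, 2 * np.pi) / (2 * np.pi)   # map back to [0, 1)
        dens += w * beta.pdf(u, aj, bj) / (2 * np.pi)
    return dens

# hypothetical two-component mixture and its first trigonometric moment E[exp(i*Theta)]
w, s, a_, b_ = [0.6, 0.4], [0.0, np.pi], [2.0, 5.0], [3.0, 2.0]
re, _ = quad(lambda t: np.cos(t) * ssb_mixture_pdf(t, w, s, a_, b_), 0, 2 * np.pi)
im, _ = quad(lambda t: np.sin(t) * ssb_mixture_pdf(t, w, s, a_, b_), 0, 2 * np.pi)
print("first trigonometric moment:", complex(re, im))
```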

    Rethinking CMB foregrounds: systematic extension of foreground parameterizations

    Future high-sensitivity measurements of the cosmic microwave background (CMB) anisotropies and energy spectrum will be limited by our understanding and modeling of foregrounds. Not only does more information need to be gathered and combined, but novel approaches for the modeling of foregrounds, commensurate with the vast improvements in sensitivity, also have to be explored. Here, we study the inevitable effects of spatial averaging on the spectral shapes of typical foreground components, introducing a moment approach, which naturally extends the list of foreground parameters that have to be determined through measurements or constrained by theoretical models. Foregrounds are thought of as a superposition of individual emitting volume elements along the line of sight and across the sky, which are then observed through an instrumental beam. The beam and line-of-sight averages are inevitable. Instead of assuming a specific model for the distributions of physical parameters, our method identifies natural new spectral shapes for each foreground component that can be used to extract parameter moments (e.g., mean, dispersion, cross-terms, etc.). The method is illustrated for the superposition of power-laws, free-free spectra, gray-body and modified blackbody spectra, but can be applied to more complicated fundamental spectral energy distributions. Here, we focus on intensity signals, but the method can be extended to the case of polarized emission. The averaging process automatically produces scale-dependent spectral shapes, and the moment method can be used to propagate the required information across scales in power spectrum estimates. The approach is not limited to applications to CMB foregrounds but could also be useful for the modeling of X-ray emission in clusters of galaxies. Comment: 19 pages, 8 figures, accepted by MNRAS, minor revision.
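    As a minimal numerical illustration of the moment idea for the simplest case mentioned in the abstract, the sketch below averages power-law spectra over an assumed Gaussian spread of spectral indices and compares the exact average with a second-order moment expansion; the frequencies and index parameters are illustrative, not taken from the paper.

```python
import numpy as np

nu0 = 30e9                                  # reference frequency in Hz (illustrative)
nu = np.array([30e9, 100e9, 217e9, 353e9])  # observing frequencies (illustrative)
beta_mean, beta_sigma = -3.0, 0.2           # synchrotron-like index with assumed spatial spread

x = np.log(nu / nu0)
# Exact beam/line-of-sight average for a Gaussian distribution of indices:
#   <(nu/nu0)^beta> = exp(beta_mean * x + 0.5 * sigma^2 * x^2)
exact = np.exp(beta_mean * x + 0.5 * beta_sigma**2 * x**2)
# Second-order moment expansion around the mean index:
#   (nu/nu0)^beta_mean * (1 + 0.5 * <(delta beta)^2> * ln^2(nu/nu0))
moment2 = (nu / nu0) ** beta_mean * (1 + 0.5 * beta_sigma**2 * x**2)
print(np.round(exact / moment2, 4))         # ratio -> 1 where the expansion is accurate
```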

    Flexible Tweedie regression models for continuous data

    Tweedie regression models provide a flexible family of distributions for dealing with non-negative, highly right-skewed data as well as symmetric and heavy-tailed data, and can handle continuous data with probability mass at zero. The estimation and inference of Tweedie regression models based on the maximum likelihood method are challenged by the presence of an infinite sum in the probability function and non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting Tweedie regression models, namely quasi- and pseudo-likelihood. We discuss the asymptotic properties of the two approaches and perform simulation studies to compare our methods with the maximum likelihood method. In particular, we show that the quasi-likelihood method provides asymptotically efficient estimation of the regression parameters. The computational implementation of the alternative methods is faster and easier than orthodox maximum likelihood, relying on a simple Newton scoring algorithm. Simulation studies showed that the quasi- and pseudo-likelihood approaches produce estimates, standard errors and coverage rates similar to the maximum likelihood method. Furthermore, the second-moment assumptions required by the quasi- and pseudo-likelihood methods enable us to extend Tweedie regression models to the class of quasi-Tweedie regression models in Wedderburn's style. Moreover, they allow us to eliminate the non-trivial restriction on the power parameter space, and thus provide a flexible regression model for dealing with continuous data. We provide an R implementation and illustrate the application of Tweedie regression models using three data sets. Comment: 34 pages, 8 figures.
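    The authors' R implementation is not reproduced here; the following is a minimal Python sketch of a quasi-likelihood fit by plain Fisher/Newton scoring with a Tweedie-type variance function and log link, holding the power parameter fixed and estimating the dispersion by a simple Pearson-type average.

```python
import numpy as np

def tweedie_quasi_fit(X, y, p=1.5, n_iter=50, tol=1e-8):
    """Quasi-likelihood fit of a Tweedie-type regression with log link:
    E[y] = exp(X beta), Var[y] = phi * mu^p.  Simple Fisher/Newton scoring
    (IRLS) loop; the power p is treated as fixed for simplicity."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-8)          # crude start; assumes X[:, 0] is an intercept
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        w = mu ** (2.0 - p)                    # (dmu/deta)^2 / V(mu) with log link, phi dropped
        z = eta + (y - mu) / mu                # working response
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    phi = np.mean((y - mu) ** 2 / mu ** p)     # Pearson-type dispersion (no df correction)
    return beta, phi

# usage with hypothetical design matrix X (intercept first) and response y:
# beta_hat, phi_hat = tweedie_quasi_fit(X, y, p=1.6)
```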