
    Pairwise likelihood ratio tests and model selection criteria for structural equation models with ordinal variables

    Correlated multivariate ordinal data can be analysed with structural equation models. Parameter estimation has been tackled in the literature using limited-information methods, including three-stage least squares, and pseudo-likelihood estimation methods such as pairwise maximum likelihood estimation. In this paper, two likelihood ratio test statistics and their asymptotic distributions are derived for testing overall goodness-of-fit and nested models, respectively, under the pairwise maximum likelihood estimation framework. Simulation results show satisfactory type I error and power performance for the proposed test statistics, and also suggest that their performance is similar to that of the test statistics derived under three-stage diagonally weighted and unweighted least squares. Furthermore, the corresponding model selection criteria under the pairwise framework, AIC and BIC, show satisfactory results in selecting the right model in our simulation examples. The derivation of the likelihood ratio test statistics and model selection criteria under the pairwise framework, together with pairwise estimation, provides a flexible framework for fitting and testing structural equation models for ordinal as well as other types of data. The test statistics and model selection criteria are applied to data on 'trust in the police' selected from the 2010 European Social Survey. The proposed test statistics and model selection criteria have been implemented in the R package lavaan.
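The paper derives its tests under pairwise maximum likelihood for ordinal data; as a much simpler illustration of the general likelihood-ratio machinery for nested models (full Gaussian likelihood and a toy linear model, not the paper's pairwise ordinal setting — all values below are hypothetical), one can sketch:

```python
import math
import random

random.seed(0)

# Simulated data from a linear model (hypothetical, for illustration only)
n = 50
x = [i / 10 for i in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 1) for xi in x]

def rss(resid):
    return sum(r * r for r in resid)

# Null model: intercept only
mu = sum(y) / n
rss0 = rss([yi - mu for yi in y])

# Alternative model: intercept + slope, fitted by ordinary least squares
xbar = sum(x) / n
b = sum((xi - xbar) * (yi - mu) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = mu - b * xbar
rss1 = rss([yi - (a + b * xi) for xi, yi in zip(x, y)])

# Gaussian likelihood-ratio statistic for nested models:
# LR = n * log(RSS_null / RSS_alt), asymptotically chi-squared with 1 df
lr_stat = n * math.log(rss0 / rss1)
reject = lr_stat > 3.84  # 5% critical value of chi-squared(1)
```

With a strong true slope, the statistic lands far in the rejection region; the pairwise versions in the paper replace the full likelihood with a sum of bivariate likelihoods but are used in the same way.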

    AIC, Cp and estimators of loss for elliptically symmetric distributions

    In this article, we develop a modern perspective on Akaike's Information Criterion (AIC) and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. This family of laws also allows for some dependence between the observations, a case not often studied.
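The AIC/Cp equivalence in Gaussian linear regression can be checked directly in the known-variance case: the two criteria then differ by a constant that does not depend on the model, so they always rank models identically. A minimal pure-Python sketch (the simulated data and the set of three nested candidate models are hypothetical):

```python
import math
import random

random.seed(1)

n = 40
sigma2 = 1.0  # noise variance, treated as known
x = [i / 8 for i in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, math.sqrt(sigma2)) for xi in x]

def rss(resid):
    return sum(r * r for r in resid)

# Three nested candidate models: zero mean, intercept only, intercept + slope
r_zero = y[:]
mu = sum(y) / n
r_mean = [yi - mu for yi in y]
xbar = sum(x) / n
b = sum((xi - xbar) * (yi - mu) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = mu - b * xbar
r_line = [yi - (a + b * xi) for xi, yi in zip(x, y)]

models = [(r_zero, 0), (r_mean, 1), (r_line, 2)]  # (residuals, #mean params)

def aic(resid, p):  # Gaussian AIC with known variance
    return n * math.log(2 * math.pi * sigma2) + rss(resid) / sigma2 + 2 * p

def cp(resid, p):   # Mallows' Cp with known variance
    return rss(resid) / sigma2 - n + 2 * p

# AIC - Cp = n * (log(2*pi*sigma2) + 1) for every model, so rankings agree
diffs = [aic(r, p) - cp(r, p) for r, p in models]
best_aic = min(range(3), key=lambda i: aic(*models[i]))
best_cp = min(range(3), key=lambda i: cp(*models[i]))
```

The article's point is that this agreement is not an accident of the Gaussian case: the unbiased loss estimator keeps the same form across the whole spherically symmetric family.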

    A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation

    Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually-exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.

    Comment: Published in Statistical Science (http://www.imstat.org/sts/), http://dx.doi.org/10.1214/12-STS406, by the Institute of Mathematical Statistics (http://www.imstat.org)
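The basic ABC rejection mechanism the review builds on fits in a few lines; here the summary statistic is simply the sample mean and the model is a unit-variance Gaussian with unknown mean (a deliberately trivial toy where the summary is sufficient, not one of the article's dimension-reduction settings — all values are hypothetical):

```python
import random
import statistics

random.seed(2)

# "Observed" data: a Gaussian with unknown mean theta (toy example)
n = 100
theta_true = 3.0
observed = [random.gauss(theta_true, 1) for _ in range(n)]
s_obs = statistics.mean(observed)  # low-dimensional summary of the full data

# ABC rejection: draw theta from the prior, simulate a data set, and keep
# the draw only if the simulated summary lands within eps of the observed one
eps = 0.1
accepted = []
while len(accepted) < 200:
    theta = random.uniform(0, 6)  # uniform prior on theta
    simulated = [random.gauss(theta, 1) for _ in range(n)]
    if abs(statistics.mean(simulated) - s_obs) < eps:
        accepted.append(theta)

posterior_mean = statistics.mean(accepted)
```

The accepted draws approximate the posterior given the summary; the dimension-reduction question arises when no low-dimensional summary is sufficient and one must be constructed with minimal information loss.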

    Material parameter estimation and hypothesis testing on a 1D viscoelastic stenosis model: Methodology

    This is the post-print version of the final published paper, available from the link below. Copyright @ 2013 Walter de Gruyter GmbH.

    Non-invasive detection, localization and characterization of an arterial stenosis (a blockage or partial blockage in an artery) continues to be an important problem in medicine. Partial-blockage stenoses are known to generate disturbances in blood flow, which produce shear waves in the chest cavity. We examine a one-dimensional viscoelastic model that incorporates Kelvin–Voigt damping and internal variables, and develop a proof-of-concept methodology using simulated data. We first develop an estimation procedure for the material parameters. We use this procedure to determine confidence intervals for the estimated parameters, which indicates the efficacy of finding parameter estimates in practice; confidence intervals are computed using asymptotic error theory as well as bootstrapping. We then develop a model comparison test for determining whether a particular data set came from a low or a high input amplitude, which we anticipate will aid in detecting when a stenosis is present. Together these two thrusts serve as the methodological basis for our continuing analysis of experimental data currently being collected.

    Funding: National Institute of Allergy and Infectious Diseases, Air Force Office of Scientific Research, Department of Education, and Engineering and Physical Sciences Research Council.
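Of the two interval methods mentioned (asymptotic error theory and bootstrapping), the bootstrap is easy to sketch generically. This toy computes a percentile bootstrap interval for a sample mean and compares its width to the normal-theory interval; the data are simulated, not from the stenosis model, and all values are hypothetical:

```python
import math
import random
import statistics

random.seed(3)

data = [random.gauss(5.0, 2.0) for _ in range(60)]
theta_hat = statistics.mean(data)

# Percentile bootstrap: resample with replacement, re-estimate, take quantiles
B = 1000
boot = sorted(
    statistics.mean([random.choice(data) for _ in data]) for _ in range(B)
)
boot_lo, boot_hi = boot[int(0.025 * B)], boot[int(0.975 * B)]

# Asymptotic (normal-theory) 95% interval for comparison
se = statistics.stdev(data) / math.sqrt(len(data))
asym_lo, asym_hi = theta_hat - 1.96 * se, theta_hat + 1.96 * se
```

For a well-behaved estimator like the mean, the two intervals nearly coincide; the bootstrap earns its keep for estimators, such as fitted material parameters, whose sampling distribution has no convenient closed form.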

    A Generic Path Algorithm for Regularized Statistical Estimation

    Regularization is widely used in statistics and machine learning to prevent overfitting and to steer solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints lead to the popular lasso, fused lasso, and other generalized l_1 penalized regression methods. Although there has been a lot of research in this area, developing efficient optimization methods for many nonseparable penalties remains a challenge. In this article we propose an exact path solver based on ordinary differential equations (EPSODE) that works for any convex loss function and can deal with generalized l_1 penalties as well as more complicated regularization such as the inequality constraints encountered in shape-restricted regressions and nonparametric density estimation. In the path-following process, the solution path hits, exits, and slides along the various constraints, vividly illustrating the trade-offs between goodness of fit and model parsimony. In practice, EPSODE can be coupled with AIC, BIC, C_p or cross-validation to select an optimal tuning parameter. Our applications to generalized l_1 regularized generalized linear models, shape-restricted regressions, Gaussian graphical models, and nonparametric density estimation showcase the potential of the EPSODE algorithm.

    Comment: 28 pages, 5 figures
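EPSODE follows the exact path by solving ordinary differential equations; a much simpler special case shows what "a solution path coupled with an AIC-style criterion" means. For the lasso with an orthogonal design, the path is available in closed form via soft-thresholding, so one can sweep the tuning parameter and score each point (the coefficient values and noise variance below are hypothetical):

```python
import math

def soft(z, lam):
    """Soft-thresholding operator: the closed-form lasso solution
    for a single coefficient under an orthogonal design."""
    return math.copysign(max(abs(z) - lam, 0.0), z)

# Hypothetical OLS coefficient estimates (orthogonal design)
z = [3.1, 0.2, -1.6, 0.05, 0.4]
sigma2 = 0.04  # assumed noise variance of each estimate

# Sweep the tuning parameter and record the piecewise-linear solution path
grid = [0.1 * k for k in range(31)]
path = [[soft(zj, lam) for zj in z] for lam in grid]
nonzero = [sum(1 for bj in beta if bj != 0.0) for beta in path]

# AIC-style score along the path: fit term plus 2 * sigma2 * df,
# where df counts the nonzero coefficients
def score(beta):
    fit = sum((zj - bj) ** 2 for zj, bj in zip(z, beta))
    df = sum(1 for bj in beta if bj != 0.0)
    return fit + 2 * sigma2 * df

best_lam = min(grid, key=lambda lam: score([soft(zj, lam) for zj in z]))
```

Along this path coefficients leave the model one by one and never re-enter, so sparsity is monotone in the tuning parameter; the paper's solver generalizes this path-following idea to nonseparable penalties and inequality constraints, where no closed form exists.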