
    Comments on the Meehl-Waller (2002) procedure for appraisal of path analysis models.


    Exploratory factor analysis in behavior genetics research: Factor recovery with small sample sizes

    Results of a Monte Carlo study of exploratory factor analysis demonstrate that in studies characterized by low sample sizes the population factor structure can be adequately recovered if communalities are high, model error is low, and few factors are retained. These are conditions likely to be encountered in behavior genetics research involving mean scores obtained from sets of inbred strains. Such studies are often characterized by a large number of measured variables relative to the number of strains used, highly reliable data, and high levels of communality. This combination of characteristics has special consequences for conducting factor analysis and interpreting results. Given that limitations on sample size are often unavoidable, it is recommended that researchers limit the number of expected factors as much as possible.
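
    A minimal sketch of how factor recovery of the kind studied here is commonly quantified, using Tucker's congruence coefficient to compare sample loadings against population loadings. The loading matrix, dimensions, and noise perturbation below are illustrative assumptions, not values taken from the study; in an actual simulation the sample loadings would come from factoring sampled data.

        import numpy as np

        def tucker_congruence(a, b):
            # Tucker's congruence coefficient between two columns of factor loadings
            return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

        # Hypothetical population loadings: 9 variables, 3 factors, high communalities
        pop_loadings = np.array([
            [.85, .05, .05], [.80, .10, .00], [.90, .00, .05],
            [.05, .85, .05], [.10, .80, .00], [.00, .90, .05],
            [.05, .05, .85], [.10, .00, .80], [.00, .05, .90],
        ])

        # Stand-in for loadings estimated from a small sample (e.g. a handful of
        # inbred strain means); here simply the population values plus noise
        rng = np.random.default_rng(0)
        sample_loadings = pop_loadings + rng.normal(0.0, 0.08, pop_loadings.shape)

        # Column-wise congruence; values near 1.0 indicate close recovery, and
        # cutoffs in the low-to-mid .90s are often cited as acceptable
        for k in range(pop_loadings.shape[1]):
            phi = tucker_congruence(pop_loadings[:, k], sample_loadings[:, k])
            print(f"factor {k + 1}: congruence = {phi:.3f}")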

    Repairing Tom Swift's electric factor analysis machine

    Proper use of exploratory factor analysis (EFA) requires the researcher to make a series of careful decisions. Despite attempts by Floyd and Widaman (1995), Fabrigar, Wegener, MacCallum, and Strahan (1999), and others to elucidate critical issues involved in these decisions, examples of questionable use of EFA are still common in the applied factor analysis literature. Poor decisions regarding the model to be used, the criteria used to decide how many factors to retain, and the rotation method can have drastic consequences for the quality and meaningfulness of factor analytic results. One commonly used approach--principal components analysis, retention of components with eigenvalues greater than 1.0, and varimax rotation of these components--is shown to have potentially serious negative consequences. In addition, choosing arbitrary thresholds for factor loadings to be considered large, using single indicators for factors, and violating the linearity assumptions underlying EFA can have negative consequences for interpretation of results. It is demonstrated that, when decisions are carefully made, EFA can yield unambiguous and meaningful results.
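
    The retention rule criticized above (keep components with eigenvalues greater than 1.0) can be contrasted with Horn's parallel analysis, one of the alternatives commonly recommended in this literature. A minimal sketch, assuming standardized data in a NumPy array X; the simulated data, percentile, and simulation count are illustrative, not prescriptions from the article.

        import numpy as np

        def eigenvalues_of_correlation(X):
            # Eigenvalues of the correlation matrix of the columns of X, largest first
            return np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

        def parallel_analysis(X, n_sims=500, percentile=95, seed=0):
            # Horn's parallel analysis: keep a factor only while its observed eigenvalue
            # exceeds the chosen percentile of eigenvalues from random data of the same shape
            rng = np.random.default_rng(seed)
            n, p = X.shape
            observed = eigenvalues_of_correlation(X)
            random_eigs = np.array([eigenvalues_of_correlation(rng.normal(size=(n, p)))
                                    for _ in range(n_sims)])
            threshold = np.percentile(random_eigs, percentile, axis=0)
            n_retain = 0
            for obs, thr in zip(observed, threshold):
                if obs <= thr:
                    break
                n_retain += 1
            return n_retain, observed

        # Illustrative data; replace with the researcher's own variables
        X = np.random.default_rng(1).normal(size=(200, 12))
        n_parallel, observed = parallel_analysis(X)
        n_kaiser = int(np.sum(observed > 1.0))  # the eigenvalues-greater-than-1 rule
        print(f"parallel analysis retains {n_parallel} factor(s); Kaiser rule retains {n_kaiser}")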

    Alternatives to traditional model comparison strategies for covariance structure models

    This work was funded in part by National Institute on Drug Abuse Grant DA16883 awarded to the first author while at the University of North Carolina at Chapel Hill.

    On the practice of dichotomization of quantitative variables

    The authors examine the practice of dichotomization of quantitative measures, wherein relationships among variables are examined after 1 or more variables have been converted to dichotomous variables by splitting the sample at some point on the scale(s) of measurement. A common form of dichotomization is the median split, where the independent variable is split at the median to form high and low groups, which are then compared with respect to their means on the dependent variable. The consequences of dichotomization for measurement and statistical analyses are illustrated and discussed. The use of dichotomization in practice is described, and justifications that are offered for such usage are examined. The authors present the case that dichotomization is rarely defensible and often will yield misleading results.
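
    A minimal simulation sketch of the information loss a median split produces, assuming bivariate normal data; the sample size and population correlation below are illustrative, not taken from the article.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, rho = 100_000, 0.40  # illustrative sample size and population correlation

        # Bivariate normal x and y with correlation rho
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

        # Correlation using the full quantitative predictor
        r_full = stats.pearsonr(x, y)[0]

        # Correlation after a median split of x into "low" and "high" groups
        x_split = (x > np.median(x)).astype(float)
        r_split = stats.pearsonr(x_split, y)[0]

        # For a normally distributed predictor the expected attenuation factor is
        # sqrt(2 / pi), roughly 0.80
        print(f"full-scale r = {r_full:.3f}, median-split r = {r_split:.3f}")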

    Sample size in factor analysis: The role of model error

    This article examines effects of sample size and other design features on correspondence between factors obtained from analysis of sample data and those present in the population from which the samples were drawn. We extend earlier work on this question by examining these phenomena in the situation in which the common factor model does not hold exactly in the population. We present a theoretical framework for representing such lack of fit and examine its implications in the population and sample. Based on this approach, we hypothesize that lack of fit of the model in the population will not, on the average, influence recovery of population factors in analysis of sample data, regardless of degree of model error and regardless of sample size. Rather, such recovery will be affected only by phenomena related to sampling error which have been studied previously. These hypotheses are investigated and verified in two sampling studies, one using artificial data and one using empirical data.
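
    A minimal sketch of the kind of sampling study described here, assuming model error is introduced by adding many small minor factors to a population correlation matrix built from known major-factor loadings, and recovery is scored by congruence after a Procrustes rotation to the population target. The dimensions, loading values, error magnitude, and estimator below are illustrative assumptions, not the study's actual design.

        import numpy as np
        from scipy.linalg import orthogonal_procrustes
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)

        # Hypothetical population: 12 variables, 3 major factors, loadings of .75
        major = np.zeros((12, 3))
        major[0:4, 0] = major[4:8, 1] = major[8:12, 2] = 0.75

        # Model error: many minor factors with small, haphazard loadings
        minor = rng.normal(0.0, 0.04, size=(12, 50))

        # Population correlation matrix: common part plus uniquenesses chosen
        # so the diagonal equals 1
        common = major @ major.T + minor @ minor.T
        sigma = common + np.diag(1.0 - np.diag(common))

        def mean_congruence(n, n_reps=50):
            # Draw samples of size n, factor-analyze them, rotate the sample loadings
            # toward the population target, and average the per-factor congruence
            phis = []
            for _ in range(n_reps):
                X = rng.multivariate_normal(np.zeros(12), sigma, size=n)
                loadings = FactorAnalysis(n_components=3).fit(X).components_.T
                R, _ = orthogonal_procrustes(loadings, major)
                rotated = loadings @ R
                phi = np.sum(rotated * major, axis=0) / np.sqrt(
                    np.sum(rotated ** 2, axis=0) * np.sum(major ** 2, axis=0))
                phis.append(phi.mean())
            return float(np.mean(phis))

        for n in (30, 60, 200, 400):
            print(f"N = {n:4d}: mean congruence = {mean_congruence(n):.3f}")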

    Use of the extreme groups approach: A critical reexamination and new recommendations

    Analysis of continuous variables sometimes proceeds by selecting individuals on the basis of extreme scores of a sample distribution and submitting only those extreme scores to further analysis. This sampling method is known as the extreme groups approach (EGA). EGA is often used to achieve greater statistical power in subsequent hypothesis tests. However, there are several largely unrecognized costs associated with EGA that must be considered. The authors illustrate the effects EGA can have on power, standardized effect size, reliability, model specification, and the interpretability of results. Finally, the authors discuss alternative procedures, as well as possible legitimate uses of EGA. The authors urge researchers, editors, reviewers, and consumers to carefully assess the extent to which EGA is an appropriate tool in their own research and in that of others. This work was funded in part by National Institute on Drug Abuse Grant DA16883, awarded to Kristopher J. Preacher.
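
    A minimal simulation sketch of the standardized-effect-size inflation discussed here, assuming extreme groups formed from the upper and lower quartiles of a continuous predictor; the cutoffs, sample size, and population correlation are illustrative, not values from the article.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, rho = 100_000, 0.30  # illustrative sample size and population correlation

        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

        # Full-sample estimate of the x-y relationship
        r_full = stats.pearsonr(x, y)[0]

        # Extreme groups approach: keep only the lower and upper quartiles of x
        lo, hi = np.quantile(x, [0.25, 0.75])
        keep = (x <= lo) | (x >= hi)
        r_ega = stats.pearsonr(x[keep], y[keep])[0]

        # Standardized mean difference on y between the two extreme groups
        y_low, y_high = y[x <= lo], y[x >= hi]
        pooled_sd = np.sqrt((y_low.var(ddof=1) + y_high.var(ddof=1)) / 2)
        d = (y_high.mean() - y_low.mean()) / pooled_sd

        print(f"full-sample r = {r_full:.3f}, extreme-groups r = {r_ega:.3f}, d = {d:.2f}")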

    On the Back Reaction Problem for Gravitational Perturbations

    We derive the effective energy-momentum tensor for cosmological perturbations and prove its gauge invariance. The result is applied to study the influence of perturbations on the behaviour of the Friedmann background in inflationary Universe scenarios. We find that the back reaction of cosmological perturbations on the background can become important already at energies below the self-reproduction scale. Comment: 4 pages, uses LaTeX

    Applications of a New Proposal for Solving the "Problem of Time" to Some Simple Quantum Cosmological Models

    We apply a recent proposal for defining states and observables in quantum gravity to simple models. First, we consider a Klein-Gordon particle in an external potential in Minkowski space and compare our proposal to the theory obtained by deparametrizing with respect to a time slicing prior to quantization. We show explicitly that the dynamics of the deparametrization approach depends on the time slicing. Our proposal yields a dynamics independent of the choice of time slicing at intermediate times, but after the potential is turned off, the dynamics does not return to the free particle dynamics. Next we apply our proposal to the closed Robertson-Walker quantum cosmology with a massless scalar field, with the size of the universe as our time variable, so the only dynamical variable is the scalar field. We show that the resulting theory has the semi-classical behavior up to the classical turning point from expansion to contraction, i.e., given a classical solution which expands for much longer than the Planck time, there is a quantum state whose dynamical evolution closely approximates this classical solution during the expansion. However, when the "time" gets larger than the classical maximum, the scalar field becomes "frozen" at its value at the maximum expansion. We also obtain similar results in the Taub model. In an Appendix we derive the form of the Wheeler-DeWitt equation for the Bianchi models by performing a proper quantum reduction of the momentum constraints; this equation differs from the usual one obtained by solving the momentum constraints classically, prior to quantization. Comment: 30 pages, LaTeX, 3 figures (postscript file or hard copy) available upon request, BUTP-94/1