659 research outputs found

    El canonista "Alexander Carrerius"


    Efficiency of public and publicly-subsidised high schools in Spain. Evidence from PISA 2006

    The purpose of this paper is to compare the efficiency of Spanish public and publicly-subsidised private high schools using Data Envelopment Analysis (DEA) fed by the results of a hierarchical linear model (HLM) applied to PISA 2006 (Programme for International Student Assessment) microdata. The study places special emphasis on estimating the determinants of school outcomes: the educational production function is estimated through an HLM that takes into account the nested nature of PISA data. Inefficiencies are then measured through DEA and decomposed into managerial inefficiency (related to individual performance) and programme inefficiency (related to structural differences between management models), following the approach of Silva Portela and Thanassoulis (2001). Once differences in pupils' background and individual managerial inefficiencies have been eliminated, the results reveal that Spanish public high schools are more efficient than publicly-subsidised private ones.
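    The DEA step described above can be illustrated with a minimal input-oriented CCR model solved as a linear programme. This is a generic sketch of the technique, not the paper's actual specification (which combines DEA with HLM residuals and the Silva Portela and Thanassoulis decomposition); the function name and toy data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        """Input-oriented CCR efficiency scores for n decision-making units.
        X: (n, m) array of inputs, Y: (n, s) array of outputs.
        For each unit o, solve: min theta subject to
          sum_j lambda_j * x_ij <= theta * x_io   (inputs)
          sum_j lambda_j * y_rj >= y_ro           (outputs)
          lambda_j >= 0."""
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            # decision variables: [theta, lambda_1, ..., lambda_n]
            c = np.zeros(1 + n)
            c[0] = 1.0
            A_ub, b_ub = [], []
            for i in range(m):   # input constraints, rewritten as <= 0
                A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):   # output constraints, rewritten as <= -y_ro
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[o, r])
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=[(0, None)] * (1 + n), method="highs")
            scores.append(res.x[0])
        return np.array(scores)
    ```

    A score of 1 marks a unit on the efficient frontier; scores below 1 measure how much a unit could proportionally shrink its inputs while keeping its outputs.
    
    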

    Comparison of maximum likelihood and unweighted least squares estimation methods in confirmatory factor analysis by Monte Carlo simulation

    This article examines the recovery of weak factors in the context of confirmatory factor analysis; previous research is limited to exploratory factor analysis. The study was conducted by Monte Carlo simulation under the following conditions: comparison of maximum likelihood (ML) and unweighted least squares (ULS) estimation methods, sample size (100, 300, 500, 1,000 and 2,000), and level of factor weakness (loadings of 0.25, 0.40 and 0.50). Results show that with small sample sizes ULS recovered the weak factor in many cases where ML failed. This advantage is related to the occurrence of Heywood cases. Recovery of the weak factor also improves as the level of weakness decreases and as the model includes more factors. This work was funded by projects PBR-541A-2-640 of the DGICYT and 06/HSE/0005/2004 of the Comunidad de Madrid.
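    The contrast between the two estimators comes down to which discrepancy function is minimised: ULS minimises the squared difference between the sample and model-implied covariance matrices, while ML minimises the likelihood-based discrepancy. A minimal sketch for a one-factor model is below; the function name and direct numerical minimisation are illustrative, not the estimation machinery used in the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_one_factor(S, method="ULS"):
        """Fit a one-factor model Sigma = lam lam' + diag(psi) to a
        covariance matrix S by minimising the ULS or ML discrepancy."""
        p = S.shape[0]

        def implied(theta):
            lam = theta[:p]
            psi = np.exp(theta[p:])          # keep uniquenesses positive
            return np.outer(lam, lam) + np.diag(psi)

        def discrepancy(theta):
            Sig = implied(theta)
            if method == "ULS":
                # F_ULS = sum of squared residual covariances
                return np.sum((S - Sig) ** 2)
            # F_ML = log|Sigma| + tr(S Sigma^-1) - log|S| - p
            return (np.linalg.slogdet(Sig)[1]
                    + np.trace(S @ np.linalg.inv(Sig))
                    - np.linalg.slogdet(S)[1] - p)

        x0 = np.concatenate([np.full(p, 0.5), np.zeros(p)])
        res = minimize(discrepancy, x0, method="L-BFGS-B")
        return np.abs(res.x[:p])             # factor sign is arbitrary
    ```

    Fitting the population covariance of a one-factor model recovers the generating loadings; a Heywood case corresponds to a uniqueness driven to zero (or, without the positivity constraint above, below zero) during estimation.
    
    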

    El análisis factorial de datos ipsativos: un estudio de simulación

    Factor analysis of ipsative data: a simulation study. This paper summarizes how to conduct a factor analysis when the input data are ipsative. Classical factor analysis procedures cannot be used because the covariance matrix is singular. Additionally, previous research on the optimal conditions for factor analyzing ipsatized data is reviewed, and the results of a simulation study exploring different conditions are presented: sample size, model complexity, and model specification (correct vs. incorrect). The results suggest that researchers should be cautious when factor analyzing ipsatized data, particularly if they suspect that the model is incorrectly specified and includes too few factors. This work was funded in part by projects CCG08-UAM/ESP-3951 of the Comunidad de Madrid and PSI2008-01685/PSIC of the Ministerio de Ciencia e Innovación.
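    The singularity mentioned above is easy to demonstrate: ipsatizing (centering each respondent's scores on their own mean) forces every row to sum to zero, so the data live in a (p − 1)-dimensional subspace and the covariance matrix loses a rank. A minimal sketch with simulated data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    raw = rng.normal(size=(500, 5))          # 500 respondents, 5 items

    # Ipsatize: subtract each respondent's own mean, so rows sum to zero
    ips = raw - raw.mean(axis=1, keepdims=True)

    cov = np.cov(ips, rowvar=False)
    rank = np.linalg.matrix_rank(cov)        # p - 1 = 4, not 5: singular
    ```

    Because the rank is p − 1, the covariance matrix has a zero eigenvalue and cannot be inverted, which is what breaks classical factoring procedures such as ML estimation.
    
    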

    Bayesian dimensionality assessment for the multidimensional nominal response model

    This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factor model, is that each response category has its own slope on the latent dimensions, instead of slopes being associated with the items. This extended parameterization requires large samples for estimation: when the sample size is moderate or small, some parameters may be only weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO), which indicate the relative merit of different models, and the standardized generalized discrepancy measure, which requires resampling data and is computationally more involved. A simulation study comparing these two approaches shows that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model, whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data. This research was partially supported by grants PSI2012-31958 and PSI2015-66366-P from the Ministerio de Economía y Competitividad (Spain).
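    The category-level slopes described above mean the model is, in effect, a softmax over the latent dimensions: each category k gets probability proportional to exp(a_k · θ + c_k). A minimal sketch of the category-probability computation (function name and toy parameters are illustrative):

    ```python
    import numpy as np

    def nominal_probs(theta, A, c):
        """Category probabilities under a multidimensional nominal
        response model: P(k | theta) proportional to exp(a_k . theta + c_k),
        with one slope vector a_k and one intercept c_k per category.
        theta: (d,) latent traits; A: (K, d) slopes; c: (K,) intercepts."""
        z = A @ theta + c
        z = z - z.max()          # guard against overflow in exp
        e = np.exp(z)
        return e / e.sum()
    ```

    Note that one category's slopes and intercept must be fixed (e.g. to zero) for identifiability, since adding a constant to every z leaves the probabilities unchanged; that is one reason the parameterization is only weakly identified in small samples.
    
    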

    Application of a polytomous IRT model to obtain commensurate unit measures in a person-organization fit scale

    This article explores the development of direct methods for meeting the 'commensurate units' criterion in person-environment fit measures by applying a polytomous latent trait model. Bock's (1972; 1997) nominal response model was used to assess the statistical goodness of fit of two parallel P and O measures of a person-organization (P-O) fit questionnaire with real data from 591 subjects. Results indicated that the proposed method is an appropriate first approximation to the problem, because it produces normalized scores that allow an easier interpretation of the units on P and O; however, it also has some limitations that should be addressed in future research. This research was funded in part by projects SEC93-0801 and PB97-0049 of the DGICYT.

    La composición del decreto de Graciano
