    A Validation Of Critical Constructs Of Essential Evaluator Competency And Evaluation Practice: An Application Of Structural Equation Modeling

    The study examines the interplay of two critical constructs in evaluation: essential evaluator competency and evaluator practice. The research questions, following Smith (2008), address what he defined as “fundamental issues in evaluation.” These issues fall into one or more of the four aspects identified in the fundamental-issues-in-evaluation framework: theory, practice, method, and profession. The intertwined nature of these aspects implies interactive relationships between the two constructs. The study uses structural equation modeling (SEM), first to examine the construct validity and psychometric properties of the measurement scales, and then to explore how the two latent variables, evaluator competencies and evaluator practice, interact when evaluators conduct evaluations. A random sample of 2,000 was drawn from the American Evaluation Association membership directory (N = 7,700), and 459 evaluators from a variety of backgrounds responded. After exploratory, confirmatory, and structural phases of analysis, the study confirmed five competency dimensions: evaluative practice, meta-competencies, evaluation knowledge base, project management, and professional development. Analytical results also confirmed the factor structures of the eight evaluator practice subscales and revealed four distinct practice patterns, consistent with previous research (Shadish & Epstein, 1987). Despite a small number of significant effects of covariates such as years of experience and evaluation background, multiple indicators multiple causes (MIMIC) model results indicated that the measurement models were largely invariant across population groups. Lastly, the structural-phase analyses showed that the relationship between evaluator self-assessed competencies and evaluator practice patterns is interactive.
The findings from the SEM model with self-assessed competencies as predictors indicated that evaluators with higher self-assessed evaluative practice competencies tend to engage in the academic and method-driven practice patterns, while evaluators with higher self-assessed meta-competencies tend to engage more frequently in the use-driven practice pattern. Conversely, when evaluator practice patterns served as predictors, the results showed that evaluators engaging more often in the academic pattern tended to rate their evaluative practice, meta, and evaluation knowledge base competencies higher, and evaluators engaging in the use-driven practice pattern tended to rate their competencies higher in all areas except the evaluation knowledge base. The study extends previous research by confirming the factor structures of two critical constructs in the evaluation field and providing empirical support for future studies. The findings contribute to a better understanding of several fundamental issues in evaluation, including evaluation professionalization and the general knowledge base of the field.

    Factor retention revised: analyzing current practice and developing new methods


    Risky business: factor analysis of survey data – assessing the probability of incorrect dimensionalisation

    This paper undertakes a systematic assessment of the extent to which factor analysis identifies the correct number of latent dimensions (factors) when applied to ordered categorical survey items (so-called Likert items). We simulate 2,400 data sets of uni-dimensional Likert items that vary systematically over a range of conditions, such as the underlying population distribution, the number of items, the level of random error, and the characteristics of items and item-sets. Each of these datasets is factor analysed in a variety of ways that are frequently used in the extant literature or recommended in current methodological texts. These include exploratory factor retention heuristics such as Kaiser’s criterion, parallel analysis, and a non-graphical scree test, as well as (for exploratory and confirmatory analyses) evaluations of model fit. These analyses are conducted on the basis of both Pearson and polychoric correlations. We find that, irrespective of the particular mode of analysis, factor analysis applied to ordered-categorical survey data very often leads to over-dimensionalisation. The magnitude of this risk depends on the specific way in which factor analysis is conducted, the number of items, the properties of the item-set, and the underlying population distribution. The paper concludes with a discussion of the consequences of over-dimensionalisation and a brief mention of alternative modes of analysis that are much less prone to such problems.
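    As a minimal sketch of the kind of simulation this paper describes, the following numpy code generates uni-dimensional 5-point Likert items from a single latent factor and applies Kaiser’s criterion to the Pearson correlation matrix. The sample size, loading, and category thresholds are illustrative assumptions, not the paper’s actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: ONE latent factor drives all items,
# so the true dimensionality is 1.
n, p = 500, 10
loading = 0.7
latent = rng.normal(size=n)
continuous = loading * latent[:, None] + rng.normal(size=(n, p)) * np.sqrt(1 - loading**2)

# Discretise into 5-point Likert items (ordered categories 1..5).
# Shifting these thresholds per item (inducing skew) is one of the
# conditions under which Pearson-based analyses tend to over-extract.
thresholds = [-1.5, -0.5, 0.5, 1.5]
likert = np.digitize(continuous, thresholds) + 1

# Kaiser's criterion on the Pearson correlation matrix:
# retain every component with eigenvalue > 1.
eigs = np.linalg.eigvalsh(np.corrcoef(likert, rowvar=False))[::-1]
kaiser_count = int(np.sum(eigs > 1.0))
```

Under these benign symmetric-threshold conditions Kaiser’s criterion typically recovers a single factor; the paper’s point is that skewed category distributions, small item sets, and high error rates frequently push the retained count above the true dimensionality.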

    Comparison of three computational procedures for solving the number of factors problem in exploratory factor analysis

    Three computational solutions to the number-of-factors problem were investigated over a wide variety of typical psychometric situations using Monte Carlo simulated population matrices with known characteristics. The standard error scree, the minimum average partials test, and the technique of parallel analysis were evaluated head-to-head for accuracy. The question of using principal components-based eigenvalues versus common factors-based eigenvalues in the analyses was also investigated. As a benchmark, the commonly used eigenvalues-greater-than-one criterion was included. Across all conditions, the principal components-based version of parallel analysis was found to most accurately recover dimensionality using sample correlation matrices drawn from populations with known, simple factor structures. The high degree of accuracy observed for this method suggests that a workable solution to the age-old number-of-factors problem may be close at hand.
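    The principal components-based parallel analysis that performed best here can be sketched in a few lines of numpy: retain components whose sample eigenvalue exceeds a chosen percentile of eigenvalues from random uncorrelated data of the same size. The two-factor simulated dataset, sample size, and loadings below are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a sample with a known two-factor structure (hypothetical data).
n, p = 500, 8
loadings = np.zeros((p, 2))
loadings[:4, 0] = 0.7   # items 1-4 load on factor 1
loadings[4:, 1] = 0.7   # items 5-8 load on factor 2
factors = rng.normal(size=(n, 2))
errors = rng.normal(size=(n, p)) * np.sqrt(1 - 0.7**2)
X = factors @ loadings.T + errors

def parallel_analysis(data, n_reps=100, percentile=95, seed=0):
    """Principal components-based parallel analysis: retain components
    whose observed eigenvalue exceeds the chosen percentile of
    eigenvalues obtained from random (uncorrelated) data."""
    r = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.empty((n_reps, p))
    for i in range(n_reps):
        rand = r.normal(size=(n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)
    # Count leading components that beat the random-data threshold.
    retained = 0
    for obs, thr in zip(obs_eigs, threshold):
        if obs > thr:
            retained += 1
        else:
            break
    return retained

n_factors = parallel_analysis(X)
```

With this clean structure the procedure retains both factors, whereas the eigenvalues-greater-than-one benchmark makes no reference to sampling error at all, which is why parallel analysis tends to win such comparisons.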

    Investigating the functionality of a self-report instrument to detect autistic traits in a non-clinical college population: Psychometric properties of the short version of autism-spectrum quotient (AQ-26)

    The present study investigated the dimensionality of the short version of the Autism-Spectrum Quotient (AQ-26; Baron-Cohen et al., 2001) via confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). Designed to screen for autistic traits in a non-clinical adult population, the AQ-26 can potentially be a very useful tool in both research and practice. However, evidence pertaining to the structural validity of the AQ-26 is scarce and inconclusive. Competing factor structure models based on previous research were specified and tested using an American college student sample. None of the theoretically specified models provided adequate fit for the data, and the focus of the analysis switched to exploring alternative models and analyzing misfit. Although the structural validity of the AQ-26 was not supported, suggestions for future instrument revisions were made based on the results. Additionally, two scoring schemes were deemed not interchangeable, and the implications of using them were discussed. In summary, the analyses indicated that the AQ-26 needs substantial revision before it can be used in research or practice.

    How many dimensions are really being measured?

    This paper assesses the validity of the perception-based governance indicators used by the US Millennium Challenge Account (MCA) for aid allocation decisions. By conducting exploratory and confirmatory factor analysis of data from 1996 to 2009, we find that although the MCA purports to measure seven distinct dimensions of governance, only two discrete underlying dimensions, the ‘participatory dimension of governance’ and the ‘overall quality of governance,’ can be identified. Our results also show that some of the doubts that have been raised concerning the validity of perception-based governance indicators are less warranted when the indicators are applied exclusively to developing countries.

    Student engagement and post-college outcomes: A comparison of formative and reflective models

    Student engagement is a complex construct that is thought to be related to positive outcomes during and after college. Previous research has defined engagement in diverse ways, and there are inconsistencies in the models used to measure the construct. Many studies have used a reflective measurement model (i.e., exploratory or confirmatory factor analysis), wherein changes in a latent construct are thought to precede and, in some sense, explain variation in observed variables. Others have argued that engagement is best measured using a formative model, in which the relationship flows in the opposite direction: variation in observed indicators precedes, and in some sense creates or causes, the construct. A clear rationale has not been provided for the use of either measurement model. In the current study, I therefore sought to compare a series of reflective and formative measurement models using the Gallup-Purdue Index (GPI; Gallup-Purdue, 2014), an under-examined national instrument that defines student engagement as three inter-related, albeit distinct, latent constructs: institutional support, institutional attachment, and experiential learning. Data were collected from alumni who attended a mid-sized southeastern university and graduated with a bachelor’s degree between 1996 and 2005. The study proceeded in three stages. First, an exploratory factor analysis of GPI engagement items was conducted on a random subsample of 349 respondents. Second, three competing models were tested using confirmatory factor analysis on a random subsample of 700 students. Finally, three formative models were examined using the second subsample. Results of the analyses provided support for a reflective model of the GPI engagement items.
Implications are offered regarding the use of formative and reflective approaches and the conceptualization of student engagement.
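    The reflective/formative distinction the abstract turns on can be made concrete with a small numpy simulation. In this hypothetical sketch (the item names, loadings, and weights are invented for illustration, not taken from the GPI), a reflective construct causes its indicators, so they must inter-correlate; a formative construct is merely a weighted composite of indicators that need not correlate at all:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Reflective model: the latent construct CAUSES the indicators.
engagement = rng.normal(size=n)                 # latent "engagement" (simulated)
loadings = np.array([0.8, 0.7, 0.6])
items = engagement[:, None] * loadings + rng.normal(size=(n, 3)) * 0.5

# Because the indicators share a common cause, a reflective model
# implies substantial inter-item correlation.
item_corr = np.corrcoef(items, rowvar=False)

# Formative model: the indicators COMPOSE the construct.
weights = np.array([0.5, 0.3, 0.2])
indicators = rng.normal(size=(n, 3))            # may be mutually uncorrelated
construct = indicators @ weights                # construct is a weighted composite
```

This asymmetry is why factor analysis (which presumes shared variance among indicators) is only defensible under the reflective interpretation the study ultimately supports.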

    New product development in an emerging economy: analysing the role of supplier involvement practices by using Bayesian Markov chain Monte Carlo technique

    The research question is whether the positive relationship found between supplier involvement practices and new product development (NPD) performance in developed economies also holds in emerging economies. The role of supplier involvement practices in NPD performance has yet to be substantially investigated in emerging economies other than China. This premise was examined by distributing a survey instrument (Jayaram’s (2008) published instrument, previously used in developed economies) to Malaysian manufacturing companies. Structural equation modelling was adopted to gauge the relationship between supplier involvement practices and NPD project performance across 146 companies. Our findings show that supplier involvement practices have a significant positive impact on NPD project performance in an emerging economy with respect to quality, design, cost, and time-to-market objectives. Further analysis using a Bayesian Markov chain Monte Carlo algorithm, which yielded a more credible and fine-grained differentiation, confirmed these results and indicated that these practices explain 28% of the variance in NPD project performance. This considerable effect implies that supplier involvement is a must-have, although further research is needed to identify the contingencies for its practices.
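    The Bayesian MCMC approach mentioned here can be illustrated with a deliberately simplified numpy sketch. The study fits a full SEM; the toy model below instead samples the posterior of a single regression slope (a hypothetical "supplier involvement → NPD performance" effect, with simulated data, a vague normal prior, and known unit error variance) using a random-walk Metropolis algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: NPD performance vs. a supplier-involvement score.
n = 200
x = rng.normal(size=n)
true_beta = 0.8
y = true_beta * x + rng.normal(size=n)       # error sd = 1 (assumed known)

def log_posterior(beta):
    """Log posterior for the slope: N(0, 10^2) prior, unit error variance."""
    resid = y - beta * x
    log_lik = -0.5 * np.sum(resid**2)
    log_prior = -0.5 * (beta / 10.0) ** 2
    return log_lik + log_prior

# Random-walk Metropolis sampler.
beta, samples = 0.0, []
for _ in range(5000):
    prop = beta + rng.normal(scale=0.2)      # symmetric proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(beta):
        beta = prop
    samples.append(beta)

posterior = np.array(samples[1000:])         # discard burn-in
posterior_mean = posterior.mean()
```

The payoff the abstract alludes to is that the retained draws give a full posterior distribution (credible intervals, not just point estimates) for each effect, which is what allows a "more credible differentiation" among the practice effects.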