Why It Is Hard to Find Genes Associated With Social Science Traits: Theoretical and Empirical Considerations
OBJECTIVES:
We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects, rather than a few loci with large effects, and why such an architecture makes it difficult to find robust associations between traits and genes.
METHODS:
We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated the predictions that alleles with large effect sizes would be rare and that most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher's geometric model of natural selection, and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies.
RESULTS:
Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects.
CONCLUSIONS:
The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains. Unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes, as well as continuing with traditional epidemiological studies.
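The sample-size argument running through this abstract can be made concrete with a back-of-the-envelope power calculation. This is a hedged sketch of my own, not a computation from the study: under a normal approximation to a correlation test, detecting a variant that explains a fraction r² of trait variance at the conventional genome-wide significance level α = 5×10⁻⁸ requires roughly n ≈ (z_{α/2} + z_β)² / r² participants.

```python
from statistics import NormalDist

def required_n(r2, alpha=5e-8, power=0.80):
    """Approximate sample size needed to detect a variant explaining a
    fraction r2 of trait variance at two-sided significance alpha.
    Uses the Fisher-z normal approximation for a correlation test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value at genome-wide alpha
    z_b = NormalDist().inv_cdf(power)          # standard-normal quantile for power
    # correlation r = sqrt(r2); test statistic ~ r * sqrt(n - 3), so solve for n
    return (z_a + z_b) ** 2 / r2 + 3

# a variant explaining 0.1% of trait variance needs on the order of 40,000
# subjects -- far beyond typical candidate-gene sample sizes
n_needed = required_n(0.001)
```

This is why a sample size typical of candidate gene studies replicates large-effect physical-trait associations but finds almost nothing for behavioral traits with tiny per-locus effects.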
Computational Psychometrics for Item-based Computerized Adaptive Learning
With advances in computer technology and expanded access to educational data, psychometrics faces new opportunities and challenges for enhancing pattern discovery and decision-making in testing and learning. In this dissertation, I introduced three computational psychometrics studies that solve technical problems in item-based computerized adaptive learning (CAL) systems related to dynamic measurement, diagnosis, and recommendation, based on Bayesian item response theory (IRT).
For the first study, I introduced a new knowledge tracing (KT) model, dynamic IRT (DIRT), which can iteratively update the posterior distribution of latent ability via moment-matching approximation and capture the uncertainty of ability change during the learning process. For dynamic measurement, DIRT has advantages in interpretation, flexibility, computational cost, and implementability. For the second study, I proposed a new measurement model, multilevel and multidimensional item response theory with a Q matrix (MMIRT-Q), to provide fine-grained diagnostic feedback, and introduced sequential Monte Carlo (SMC) for online estimation of latent abilities.
For the third study, I proposed the maximum expected ratio of posterior variance reduction (MERPV) criterion for testing purposes and the maximum expected improvement in posterior mean (MEIPM) criterion for learning purposes under the unified IRT framework. With these computational psychometrics solutions, we can improve students' learning and testing experience through accurate psychometric measurement, timely diagnostic feedback, and efficient item selection.
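The flavor of posterior-variance-based item selection can be illustrated with a generic grid-based sketch. This is my own minimal illustration under a 2PL IRT model with hypothetical item parameters, not the dissertation's MERPV/MEIPM implementation: maintain a posterior over ability on a grid, then administer the pool item that minimizes the expected posterior variance of ability.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ability_posterior(grid, prior, responses, items):
    """Grid posterior over ability given scored responses to (a, b) items."""
    post = prior.copy()
    for y, (a, b) in zip(responses, items):
        p = p_correct(grid, a, b)
        post *= p if y == 1 else 1.0 - p
    return post / post.sum()

def variance(grid, w):
    mean = (grid * w).sum()
    return ((grid - mean) ** 2 * w).sum()

def expected_posterior_variance(grid, post, a, b):
    """Posterior variance after the item, averaged over both possible outcomes."""
    p = p_correct(grid, a, b)
    total = 0.0
    for like in (1.0 - p, p):                  # outcome y = 0, then y = 1
        marg = (post * like).sum()             # predictive probability of outcome
        total += marg * variance(grid, post * like / marg)
    return total

grid = np.linspace(-4, 4, 201)
prior = np.exp(-0.5 * grid ** 2)
prior /= prior.sum()                           # discretized standard-normal prior
post = ability_posterior(grid, prior, [1, 0], [(1.0, -1.0), (1.1, 0.3)])
pool = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]   # hypothetical (a, b) item pool
best = min(pool, key=lambda ab: expected_posterior_variance(grid, post, *ab))
```

By the law of total variance, administering any informative item reduces posterior variance in expectation, so the `min` over the pool picks the most informative next item for the current ability estimate.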
A preliminary investigation into the patterns of performance on a computerized adaptive test battery: implications for admissions and placement
The fallibility of human judgment requires the use of tests to enhance decision-making. Although testing is surrounded by issues of bias and fairness, it remains the best means of facilitating decisions, compared with more subjective alternatives. As a country in transition, South Africa is being transformed in all facets of its society. The changes taking place within the tertiary education system to redress the legacy of Apartheid coincide with an international trend of transforming higher education. One important area being transformed relates to university entrance requirements and admissions procedures. In South Africa, these were traditionally based on matriculation performance, which has been found to be a more variable predictor of academic success for historically disadvantaged students. Alternative or revised admissions procedures have been implemented at universities throughout the country, in conjunction with academic development programmes. However, it is argued in this dissertation that a paradigm shift is necessary to conceptualise admissions and placement assessment in a developmentally oriented way. Furthermore, it is argued that, to enhance the effectiveness of selecting and placing learners in tertiary programmes, test development should keep abreast of advances in theory, such as item response theory (IRT), and in technology, such as computerized adaptive testing (CAT). This study investigates the use of the Accuplacer Computerized Placement Tests (CPTs), an adaptive test battery developed in the USA, to facilitate unbiased and fair admissions, placement, and development decisions in the transforming South African context.
The battery was implemented at a university in the Eastern Cape, and its usefulness was investigated for 193 participants, divided into two groups of degree programmes depending on whether admission required mathematics as a matriculation subject. Learners in mathematics-based degree programmes (n = 125) wrote three tests of the Accuplacer battery, and learners in non-mathematics-based programmes (n = 68) wrote two. Correlations were computed between the Accuplacer scores and matriculation performance, and between the Accuplacer scores, matriculation performance, and academic results. All yielded significant positive relationships, except for one subtest of the Accuplacer with academic performance in the non-mathematics-based degree group. Multiple correlations for both groups indicated that the Accuplacer scores and matriculation results contribute unique information about academic performance. Cluster analysis for both groups yielded three underlying patterns of performance in the data sets. An attempt was made to validate the cluster groups internally through a MANOVA and single-factor ANOVAs. It was found that the Accuplacer subtests and matriculation results do discriminate to an extent among clusters of learners in both groups of degree programmes investigated. Clusters were described in terms of demographic information, and it was determined that the factors of culture and home language, and how they relate to cluster group membership, need further investigation. The main suggestion flowing from these findings is that an attempt be made to confirm the results with a larger sample and for different cultural and language groups.
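The correlate-then-cluster workflow described above can be sketched generically. This sketch uses synthetic standardized scores and a hand-rolled k-means, not the study's actual Accuplacer data or analysis software:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical standardized scores: [Accuplacer subtest, matric mark, first-year mark]
# three planted performance patterns, 20 learners each
scores = rng.normal(size=(60, 3)) + np.repeat(2.0 * np.eye(3), 20, axis=0)

# correlation of each predictor (first two columns) with academic results (last)
r = np.corrcoef(scores, rowvar=False)[:-1, -1]

def kmeans(x, k, iters=50, seed=1):
    """Plain k-means: assign points to the nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.stack([x[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(scores, k=3)   # three performance patterns, as in the study
```

In the study itself the recovered clusters were then validated externally with a MANOVA and single-factor ANOVAs; the sketch stops at cluster assignment.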
Technology and Testing
From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have an immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issues. Including an overview of existing literature and ground-breaking research, each chapter considers the technological, practical, and ethical considerations of this rapidly changing area. Ideal for researchers and professionals in testing and assessment, Technology and Testing provides a critical and in-depth look at one of the most pressing topics in educational testing today.
Within-Item Interactions in Bifactor Models for Ordered-Categorical Item Responses
Recent research in multidimensional item response theory has introduced within-item interaction effects between latent dimensions in the prediction of item responses. The objective of this study was to extend this research to bifactor models by including an interaction effect between the general and specific latent variables measured by an item. Specifically, this research investigates model-building approaches to be used when estimating these effects in empirical data, and the potential adverse impact of ignoring interaction effects when they are present in items modeled with the bifactor model. Two simulation studies were conducted with data generated to follow a bifactor 2-parameter normal ogive model and a bifactor graded response model, without interaction effects and with varying numbers of items with interaction effects. Model parameters were then estimated from a bifactor model without interactions, with all possible interactions, and with interactions estimated to match the data-generated interactions. The data-generating model was generally favored in relative model comparisons, indexed by the deviance information criterion (DIC). Item and respondent parameters were recovered best when the estimated model matched the generating model across all data-generating conditions. Item interaction parameters had small bias; absolute bias and root mean squared errors decreased with larger sample sizes. Regarding model refinement strategies, highest density intervals and credible intervals correctly identified noninteracting items as not having an interaction at a higher rate than they identified interacting items as having one. A bifactor model with all, none, and a reduced set of interactions was estimated in two empirical data sets with applications in educational measurement and psychological assessment. Results were evaluated in light of the poor performance of the parameter refinement and model comparison strategies investigated in the simulation studies. Implications of this research and future directions of study are discussed.
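The interaction effect at the heart of this study can be written down compactly. As a hedged sketch (the parameter names are mine, not the paper's notation), a bifactor 2-parameter normal ogive item with a general-by-specific interaction has response probability Φ(a_g·θ_g + a_s·θ_s + a_gs·θ_g·θ_s + d); setting a_gs = 0 recovers the ordinary bifactor model.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard-normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bifactor_2pno(theta_g, theta_s, a_g, a_s, a_gs, d):
    """P(correct) for a bifactor normal-ogive item with a general-by-specific
    interaction slope a_gs; a_gs = 0 gives the standard bifactor model."""
    return norm_cdf(a_g * theta_g + a_s * theta_s + a_gs * theta_g * theta_s + d)

# ignoring a true positive interaction understates the predicted probability
# for examinees who are high on both the general and the specific dimension
with_int = bifactor_2pno(1.0, 1.0, a_g=1.0, a_s=0.7, a_gs=0.5, d=0.0)
without = bifactor_2pno(1.0, 1.0, a_g=1.0, a_s=0.7, a_gs=0.0, d=0.0)
```

The gap between `with_int` and `without` at extreme ability combinations is exactly the misfit the simulation studies probe when interaction effects are ignored.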
Theoretical and Practical Advances in Computer-based Educational Measurement
This open access book presents a large number of innovations in the world of operational testing. It brings together different but related areas and provides insight into their possibilities, advantages, and drawbacks. The book addresses not only improvements in the quality of educational measurement and innovations in (inter)national large-scale assessments, but also several advances in psychometrics and improvements in computerized adaptive testing, and it offers examples of the impact of new technology on assessment. Due to its nature, the book will appeal to a broad audience within the educational measurement community. It contributes to theoretical knowledge while also paying attention to the practical implementation of innovations in testing technology.
Fairness in Educational Assessment and Measurement
The importance of fairness, validity, and accessibility in assessment is greater than ever as testing expands to include more diverse populations, more complex purposes, and more sophisticated technologies. This book offers a detailed account of fairness in assessment, and illustrates the interplay between assessment and broader changes in education. In 16 chapters written by leading experts, this volume explores the philosophical, technical, and practical questions surrounding fair measurement. Fairness in Educational Assessment and Measurement addresses issues pertaining to the construction, administration, and scoring of tests; the comparison of performance across test takers, grade levels, and tests; and the uses of educational test scores. Perfect for researchers and professionals in test development, design, and administration, Fairness in Educational Assessment and Measurement presents a diverse array of perspectives on this topic of enduring interest.
Advancing Human Assessment: The Methodological, Psychological and Policy Contributions of ETS
This book describes the extensive contributions made toward the advancement of human assessment by scientists from one of the world’s leading research institutions, Educational Testing Service. The book’s four major sections detail research and development in measurement and statistics, education policy analysis and evaluation, scientific psychology, and validity. Many of the developments presented have become de facto standards in educational and psychological measurement, including in item response theory (IRT), linking and equating, differential item functioning (DIF), and educational surveys like the National Assessment of Educational Progress (NAEP), the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS). In addition to its comprehensive coverage of contributions to the theory and methodology of educational and psychological measurement and statistics, the book gives significant attention to ETS work in cognitive, personality, developmental, and social psychology, and to education policy analysis and program evaluation. The chapter authors are long-standing experts who provide broad coverage and thoughtful insights built upon decades of experience in research and best practices for measurement, evaluation, scientific psychology, and education policy analysis. Opening with a chapter on the genesis of ETS and closing with a synthesis of the enormously diverse set of contributions made over its 70-year history, the book is a useful resource for all interested in the improvement of human assessment.