190 research outputs found

    Retest effects in operational selection settings: Development and test of a framework

    This study proposes a framework for examining the effects of retaking tests in operational selection settings. A central feature of this framework is the distinction between within-person and between-person retest effects. This framework is used to develop hypotheses about retest effects for exemplars of 3 types of tests (knowledge tests, cognitive ability tests, and situational judgment tests) and to test these hypotheses in a high-stakes selection setting (admission to medical studies in Belgium). Analyses of within-person retest effects showed that mean scores of repeat test takers were one-third of a standard deviation higher for the knowledge test and situational judgment test and one-half of a standard deviation higher for the cognitive ability test. The validity coefficients for the knowledge test differed significantly depending on whether examinees' test scores on the first versus second administration were used, with the latter being more valid. Analyses of between-person retest effects on the prediction of academic performance showed that the same test score led to higher levels of performance for those passing on the first attempt than for those passing on the second attempt. The implications of these results are discussed in light of extant retesting practice.

    In employment settings, the Uniform Guidelines on Employee Selection Procedures (1978) state that organizations should provide a reasonable opportunity to test takers for retesting. Hence, most organizations in the private and public sector have installed retesting policies in promotion and hiring situations (e.g., Campbell, 2004; McElreath, Bayless, Reilly, & Hayes, 2004). In the educational domain, the Standards for Educational and Psychological Testing (APA/AERA/NCME, 1999) state that retest opportunities should be provided for tests used for promotion or graduation decisions. The opportunity for retesting is also mandated for tests used in making admission, licensing, or certification decisions. A previous version of this manuscript was presented at the Annual Convention of th
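    The abstract above reports within-person retest effects as standardized mean score gains and compares validity coefficients across first and second administrations. The sketch below is not taken from the paper; it uses simulated paired scores and an assumed criterion variable purely to illustrate how such quantities are commonly computed.

```python
# Illustrative sketch (not from the paper): a within-person retest effect as a
# standardized mean difference between first and second attempts, plus validity
# coefficients (correlations with a criterion) for each administration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired scores for repeat test takers (attempt 1 vs. attempt 2);
# the modest practice gain is an assumption, not a result from the study.
time1 = rng.normal(50, 10, size=500)
time2 = time1 + rng.normal(3, 5, size=500)

# Within-person retest effect: mean gain scaled by the attempt-1 standard deviation.
retest_effect = (time2.mean() - time1.mean()) / time1.std(ddof=1)

# Hypothetical criterion (e.g., later academic performance) used to compare the
# validity coefficients of the two administrations.
criterion = 0.4 * time2 + rng.normal(0, 8, size=500)
validity_t1 = np.corrcoef(time1, criterion)[0, 1]
validity_t2 = np.corrcoef(time2, criterion)[0, 1]

print(f"retest effect d = {retest_effect:.2f}")
print(f"validity r (1st attempt) = {validity_t1:.2f}, (2nd attempt) = {validity_t2:.2f}")
```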

    Creating an Instrument to Measure Student Response to Instructional Practices

    Background: Calls for the reform of education in science, technology, engineering, and mathematics (STEM) have inspired many instructional innovations, some research based. Yet adoption of such instruction has been slow. Research has suggested that students' response may significantly affect an instructor's willingness to adopt different types of instruction.
    Purpose: We created the Student Response to Instructional Practices (StRIP) instrument to measure the effects of several variables on student response to instructional practices. We discuss the step-by-step process for creating this instrument.
    Design/Method: The development process had six steps: item generation and construct development, validity testing, implementation, exploratory factor analysis, confirmatory factor analysis, and instrument modification and replication. We discuss pilot testing of the initial instrument, construct development, and validation using exploratory and confirmatory factor analyses.
    Results: This process produced 47 items measuring three parts of our framework. Types of instruction separated into four factors (interactive, constructive, active, and passive); strategies for using in-class activities into two factors (explanation and facilitation); and student responses to instruction into five factors (value, positivity, participation, distraction, and evaluation).
    Conclusions: We describe the design process and final results for our instrument, a useful tool for understanding the relationship between type of instruction and students' response.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/136692/1/jee20162_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/136692/2/jee20162.pd
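    The Design/Method section above lists exploratory and confirmatory factor analysis among the validation steps. The sketch below is not the authors' code; it simulates item responses and runs an exploratory factor analysis with the third-party factor_analyzer package to show the kind of analysis involved. Item names, factor counts, and loadings are all illustrative assumptions.

```python
# Illustrative sketch (not the StRIP authors' analysis): exploratory factor
# analysis of simulated survey items using the factor_analyzer package.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)

# Simulate responses to 12 items driven by two latent factors.
n = 400
loadings = np.zeros((12, 2))
loadings[:6, 0] = 0.7
loadings[6:, 1] = 0.7
latent = rng.normal(size=(n, 2))
items = pd.DataFrame(latent @ loadings.T + rng.normal(0, 0.5, size=(n, 12)),
                     columns=[f"item{i + 1}" for i in range(12)])

# Fit a two-factor EFA with oblique rotation and inspect the loading pattern.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns).round(2))
```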
