    From byproduct to design factor: on validating the interpretation of process indicators based on log data

    International large-scale assessments such as PISA or PIAAC have started to provide public or scientific use files for log data; that is, events, event-related attributes and timestamps of test-takers’ interactions with the assessment system. Log data and the process indicators derived from it can be used for many purposes. However, the intended uses and interpretations of process indicators require validation, which here means a theoretical and/or empirical justification that inferences about (latent) attributes of the test-taker’s work process are valid. This article reviews and synthesizes measurement concepts from various areas, including the standard assessment paradigm, the continuous assessment approach, the evidence-centered design (ECD) framework, and test validation. Based on this synthesis, we address the questions of how to ensure the valid interpretation of process indicators by means of an evidence-centered design of the task situation, and how to empirically challenge the intended interpretation of process indicators by developing and implementing correlational and/or experimental validation strategies. For this purpose, we explicate the process of reasoning from log data to low-level features and process indicators as the outcome of evidence identification. In this process, contextualizing information from log data is essential in order to reduce interpretative ambiguities regarding the derived process indicators. Finally, we show that empirical validation strategies can be adapted from classical approaches investigating the nomothetic span and construct representation. Two worked examples illustrate possible validation strategies for the design phase of measurements and their empirical evaluation.
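
    The abstract above describes reasoning from raw log events (events, event-related attributes, and timestamps) to low-level features and process indicators. As a purely illustrative sketch, not drawn from the article itself, the Python snippet below assumes a minimal event schema (item id, event type, timestamp in seconds) and derives two indicators commonly discussed in this literature, time on task and interaction count, per item.

```python
from collections import defaultdict

# Illustrative only: an assumed minimal log-data schema, not the article's format.
# Each event is (item_id, event_type, timestamp_in_seconds).
events = [
    ("item1", "item_start", 0.0),
    ("item1", "click", 4.5),
    ("item1", "text_input", 9.0),
    ("item1", "item_end", 15.0),
    ("item2", "item_start", 15.0),
    ("item2", "click", 21.5),
    ("item2", "item_end", 30.5),
]

def derive_indicators(events):
    """Derive simple per-item process indicators: time on task and interaction count."""
    start, end = {}, {}
    interactions = defaultdict(int)
    for item_id, event_type, ts in events:
        if event_type == "item_start":
            start[item_id] = ts
        elif event_type == "item_end":
            end[item_id] = ts
        else:
            # Every event between start and end is counted as one interaction.
            interactions[item_id] += 1
    return {
        item: {"time_on_task": end[item] - start[item], "n_interactions": interactions[item]}
        for item in start if item in end
    }

print(derive_indicators(events))
# {'item1': {'time_on_task': 15.0, 'n_interactions': 2},
#  'item2': {'time_on_task': 15.5, 'n_interactions': 1}}
```

    As the article argues, such indicators only acquire a valid interpretation once the task situation and the contextualizing information in the log justify reading, say, long time on task as engagement rather than disengagement.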

    Validation of Score Meaning for the Next Generation of Assessments

    Despite developments in research and practice on using examinee response process data in assessment design, the use of such data in test validation is rare. Validation of Score Meaning in the Next Generation of Assessments Using Response Processes highlights the importance of validity evidence based on response processes and provides guidance to measurement researchers and practitioners in creating and using such evidence as a regular part of the assessment validation process. Response processes refer to approaches and behaviors of examinees when they interpret assessment situations and formulate and generate solutions as revealed through verbalizations, eye movements, response times, or computer clicks. Such response process data can provide information about the extent to which items and tasks engage examinees in the intended ways. With contributions from the top researchers in the field of assessment, this volume includes chapters that focus on methodological issues and on applications across multiple contexts of assessment interpretation and use. In Part I of this book, contributors discuss the framing of validity as an evidence-based argument for the interpretation of the meaning of test scores, the specifics of different methods of response process data collection and analysis, and the use of response process data relative to issues of validation as highlighted in the joint standards on testing. In Part II, chapter authors offer examples that illustrate the use of response process data in assessment validation. These cases are provided specifically to address issues related to the analysis and interpretation of performance on assessments of complex cognition, assessments designed to inform classroom learning and instruction, and assessments intended for students with varying cultural and linguistic backgrounds.

    ALT-C 2010 - Conference Introduction and Abstracts

    Examining the Effects of Discussion Strategies and Learner Interactions on Performance in Online Introductory Mathematics Courses: An Application of Learning Analytics

    This dissertation study explored: 1) instructors’ use of discussion strategies that enhance meaningful learner interactions in online discussions and student performance, and 2) learners’ interaction patterns in online discussions that lead to better student performance in online introductory mathematics courses. In particular, the study applied a set of data mining techniques to a large-scale dataset automatically collected by the Canvas Learning Management System (LMS) over five consecutive years at a public university in the U.S., covering 2,869 students enrolled in 72 courses. First, the study found that courses that posted more open-ended prompts, evaluated the discussion messages students posted, used focused discussion settings (i.e., allowing a single response and replies to that response), and provided more elaborated feedback had higher final student grades than those that did not. Second, the results showed that instructors’ use of discussion strategies (discussion structures) influenced the quantity (volume of discussion), breadth (distribution of participation throughout the discussion), and quality (levels of knowledge construction) of learner interactions in online discussions. Lastly, the results also revealed that students’ messages related to allocentric elaboration (i.e., taking up other peers’ contributions in argumentative or evaluative ways) and application (i.e., application of new knowledge) had the highest predictive value for their course performance. The findings from this study suggest that, in introductory mathematics courses, it is important to provide opportunities for learners to freely discuss course content rather than to create discussion tasks aimed at producing a correct answer. Other findings reported in the study can also serve as guidance for instructors and instructional designers on how to design better online mathematics courses.
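
    The final analysis described above relates counts of coded discussion-message types (e.g., allocentric elaboration and application) to course performance. The sketch below is a hedged illustration of that kind of model only; the data, variable names, and the use of ordinary least squares are assumptions, not the dissertation's actual dataset or procedure.

```python
import numpy as np

# Hypothetical per-student counts of coded discussion messages (invented data):
# columns = [allocentric_elaboration, application, other_messages].
X = np.array([
    [5, 3, 10],
    [1, 0, 12],
    [7, 4,  6],
    [2, 1,  9],
    [6, 5,  8],
], dtype=float)
grades = np.array([88.0, 70.0, 93.0, 75.0, 90.0])  # hypothetical final course grades

# Ordinary least squares with an intercept column.
X_design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X_design, grades, rcond=None)

names = ["intercept", "allocentric_elaboration", "application", "other_messages"]
for name, b in zip(names, coef):
    print(f"{name}: {b:+.2f}")
```

    In a model of this kind, comparatively large positive coefficients on the elaboration and application counts would be consistent with the study's finding that those message types carry the most predictive value for course performance.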