
    MEASURING LEARNING PROGRESSIONS USING BAYESIAN MODELING IN COMPLEX ASSESSMENTS

    This research examines issues of model estimation and robustness in the use of Bayesian Inference Networks (BINs) for measuring Learning Progressions (LPs). It provides background on LPs and how they might be used in practice. Two simulation studies are performed, along with real data examples. The first study examines the case of using a BIN to measure one LP, while the items in the second study are designed to measure two LPs. For each study, data are generated under four alternative models, and each model is fit to the data. The results are compared in terms of fit, parameter recovery, and classification accuracy for individuals. When one LP was measured, two models provided high correct classification rates. When two LPs were measured, classification rates were not high, although an unconstrained model with freely estimated conditional probabilities had slightly higher rates than a constrained model in which the conditional probabilities were given by lower-dimensional functions. Overall, while BINs show promise for modeling LPs, further research is needed to determine the conditions under which this modeling approach is appropriate.
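    As a rough illustration of the classification step such a BIN supports, the sketch below (plain Python) scores one examinee against a single LP. The three levels, the prior, and the item conditional probabilities are made-up assumptions for illustration, not values from the study; in the study's terms, an unconstrained model would estimate each table entry freely, while a constrained model would generate the entries from a lower-dimensional function.

        # Minimal sketch: classify an examinee's LP level with a one-LP BIN.
        # Levels, prior, and conditional probabilities are illustrative only.

        PRIOR = {"low": 1/3, "middle": 1/3, "high": 1/3}

        # P(correct response | LP level) for each dichotomous item.
        P_CORRECT = {
            "item1": {"low": 0.2, "middle": 0.6, "high": 0.90},
            "item2": {"low": 0.1, "middle": 0.5, "high": 0.80},
            "item3": {"low": 0.3, "middle": 0.7, "high": 0.95},
        }

        def classify(responses):
            """Return the MAP LP level and posterior for {item: 0/1} responses."""
            posterior = dict(PRIOR)
            for item, x in responses.items():
                for level in posterior:
                    p = P_CORRECT[item][level]
                    posterior[level] *= p if x == 1 else 1 - p
            total = sum(posterior.values())
            posterior = {lvl: w / total for lvl, w in posterior.items()}
            return max(posterior, key=posterior.get), posterior

        level, post = classify({"item1": 1, "item2": 0, "item3": 1})
        print(level, post)

    Classification accuracy in the simulations corresponds to how often this MAP level matches the level under which a simulee's responses were generated.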

    Assessing model-based reasoning using evidence-centered design: a suite of research-based design patterns

    This Springer Brief provides theory, practical guidance, and support tools to help designers create complex, valid assessment tasks for hard-to-measure, yet crucial, science education standards. Understanding, exploring, and interacting with the world through models characterizes science in all its branches and at all levels of education. Model-based reasoning is therefore central to science education and to science assessment. Interest in developing and using models has increased with the release of the Next Generation Science Standards, which identify this as one of the eight practices of science and engineering. However, the interactive, complex, and often technology-based tasks needed to assess model-based reasoning in its fullest forms are difficult to develop. Building on research in assessment, science education, and the learning sciences, this Brief describes a suite of design patterns that can help assessment designers, researchers, and teachers create tasks for assessing aspects of model-based reasoning: Model Formation, Model Use, Model Elaboration, Model Articulation, Model Evaluation, Model Revision, and Model-Based Inquiry. Each design pattern lays out considerations concerning targeted knowledge and ways of capturing and evaluating students’ work. The design patterns are available at http://design-drk.padi.sri.com/padi/do/NodeAction?state=listNodes&NODE_TYPE=PARADIGM_TYPE. The ideas are illustrated with examples from existing assessments and the research literature.

    On the Roles of External Knowledge Representations in Assessment Design

    People use external knowledge representations (EKRs) to identify, depict, transform, store, share, and archive information. Learning how to work with EKRs is central to becoming proficient in virtually every discipline. As such, EKRs play central roles in curriculum, instruction, and assessment. Five key roles of EKRs in educational assessment are described: (1) An assessment is itself an EKR, which makes explicit the knowledge that is valued, the ways it is used, and the standards of good work. (2) The analysis of any domain in which learning is to be assessed must include the identification and analysis of the EKRs in that domain. (3) Assessment tasks can be structured around the knowledge, relationships, and uses of domain EKRs. (4) "Design EKRs" can be created to organize knowledge about a domain in forms that support the design of assessment. (5) EKRs in the discipline of assessment design can guide and structure the domain analyses (#2), task construction (#3), and the creation and use of design EKRs (#4). The third and fourth roles are discussed and illustrated in greater detail, through the perspective of an "evidence-centered" assessment design framework that reflects the fifth role. Connections with automated task construction and scoring are highlighted. The ideas are illustrated with two examples: "generate examples" tasks and simulation-based tasks for assessing computer network design and troubleshooting skills.