Evaluating parent comprehension of measurement error information presented in score reports
Individual-student score reports sometimes include information about the precision of scores (i.e., measurement error). In this study, we investigated whether parents understand this information when it is presented. We conducted an online experimental study in which 196 parents of middle school children, from various parts of the country, were randomly assigned to three conditions with different amounts of measurement error information. Parents in all conditions answered a series of comprehension questions about a student's performance on a hypothetical test. Results indicate that when information about error was presented, parents showed a significantly better understanding of score variability. Moreover, when asked about their preference for such information, parents across all three conditions indicated that they would like it to be included in their child's report. Results from this study highlight the importance of clear communication of technical information to stakeholders, particularly parents, who are a diverse stakeholder group.
Improving Test Score Reporting: Perspectives From the ETS Score Reporting Conference
This volume includes 3 papers based on presentations at a workshop on communicating assessment information to particular audiences, held at Educational Testing Service (ETS) on November 4, 2010, to explore some issues that influence score reports and new advances that contribute to the effectiveness of these reports. Jessica Hullman, Rebecca Rhodes, Fernando Rodriguez, and Priti Shah present the results of recent research on graph comprehension and data interpretation, especially the role of presentation format, the impact of prior quantitative literacy and domain knowledge, the trade-off between reducing cognitive load and increasing active processing of data, and the affective influence of graphical displays. Rebecca Zwick and Jeffrey Sklar present the results of the Instructional Tools in Educational Measurement and Statistics for School Personnel (ITEMS) project, funded by the National Science Foundation and conducted at the University of California, Santa Barbara, to develop and evaluate 3 web-based instructional modules intended to help educators interpret test scores. Zwick and Sklar discuss the modules and the procedures used to evaluate their effectiveness. Diego Zapata-Rivera presents a new framework for designing and evaluating score reports, based on work on designing and evaluating score reports for particular audiences in the context of the CBAL (Cognitively Based Assessment of, for, and as Learning) project (Bennett & Gitomer, 2009), which has been applied in the development and evaluation of reports for various audiences, including teachers, administrators, and students.
Exploring the Use of Conversations with Virtual Agents in Assessment Contexts
Conversations with computer agents can be used to measure skills that may be difficult to capture using traditional multiple-choice assessments. In order to achieve natural conversations in this form of assessment, we are exploring issues related to how test-takers interact with computer agents, such as which dialogue moves lead to interpretable responses, the influence of "cognitive characteristics" of computer agents, how the system should adapt to test-taker responses, and how these interactions impact test-taker emotions and affect. In this presentation we will discuss our current research addressing these questions, illustrating important dimensions involved in designing a conversation space and how each design decision can impact multiple factors within assessment contexts.
Radiative neutrino masses in the singlet-doublet fermion dark matter model with scalar singlets
ABSTRACT: In view of the lack of signals of new physics in strong production at the LHC, there is growing interest in simplified models where new particles are produced only through electroweak processes and are therefore subject to weaker constraints from LHC limits. In particular, there are simple standard model (SM) extensions with dark matter (DM) candidates, such as the singlet scalar dark matter (SSDM) model [1–3] or the singlet-doublet fermion dark matter (SDFDM) model [4–9]. In models of this kind, the prospects for signals at the LHC are in general limited because of the softness of the final SM particles, which results from the small charged-to-neutral mass gaps of the new particles, usually required to obtain the proper relic density. In this sense, the addition of new particles, motivated for example by neutrino physics, could open new detection possibilities, either through new decay channels or through additional mixings that increase the mass gaps.
Source Expertise and Question Type Effects in Conversation-Based Assessment
Conversational discourse is a cognitive and social process influenced by both discourse content and pragmatic factors, such as the participants' prior knowledge; these factors may also affect how simulated conversations with virtual agents unfold, with implications for design. This study explored the effects of question content and the perceived expertise of a virtual agent on students' interactions with a conversation-based assessment (CBA) measuring science inquiry skills. Twenty-four middle school students were randomly assigned to work with a High- or Low-Knowledge virtual peer to collect data and generate weather predictions. Students evaluated their own data relative to the peer's; they could either "Choose" which note to keep, or "Agree/Disagree" with the peer's suggested choice of note. Students rated the peer as more expert in the High-Knowledge condition, but peer expertise did not affect performance. However, the Agree/Disagree condition improved students' accuracy in their note choice and yielded marginally higher pre-post learning gains.
Caring assessments: challenges and opportunities
Caring assessments is an assessment design framework that considers the learner as a whole and can be used to design assessment opportunities that learners find engaging and appropriate for demonstrating what they know and can do. This framework considers learners' cognitive, meta-cognitive, and intra- and inter-personal skills, aspects of the learning context, and cultural and linguistic backgrounds as ways to adapt assessments. Extending previous work on intelligent tutoring systems that "care" from the field of artificial intelligence in education (AIEd), this framework can inform research and development of personalized and socioculturally responsive assessments that support students' needs. In this article, we (a) describe the caring assessment framework and its unique contributions to the field, (b) summarize current and emerging research on caring assessments related to students' emotions, individual differences, and cultural contexts, and (c) discuss challenges and opportunities for future research on caring assessments in the service of developing and implementing personalized and socioculturally responsive interactive digital assessments.