The Challenge of Assessing Reflection: The Open University's Access Programme
Scoping a vision for formative e-assessment: a project report for JISC
Assessment is an integral part of teaching and learning. If the relationship between teaching and learning were causal, i.e. if students always mastered the intended learning outcomes of a particular sequence of instruction, assessment would be superfluous. Experience and research suggest this is not the case: what is learnt can often be quite different from what is taught. Formative assessment is motivated by a concern with the elicitation of relevant information about student understanding and/or achievement, its interpretation, and an exploration of how it can lead to actions that result in better learning. In the context of a policy drive towards technology-enhanced approaches to teaching and learning, the question of the role of digital technologies is key, and it is the latter on which this project particularly focuses. The project and its deliverables have been informed by recent and relevant literature, in particular recent work by Black and Wiliam. In this work, they put forward a framework which suggests that assessment for learning (their term for formative assessment) can be conceptualised as consisting of a number of key aspects and five key strategies. The key aspects revolve around where the learner is going, where the learner is right now and how she can get there, and examine the roles played by the teacher, peers and the learner. Language: English. Keywords: assessments, case studies, design patterns, e-assessment
A comparison of integrated testlet and constructed-response question formats
Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of time, cost, and scoring-reliability constraints associated with this format, CR questions are increasingly being replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently developed question structure designed to provide a proxy of the pedagogical advantages of CR questions while procedurally functioning as a set of MC questions. ITs utilize an answer-until-correct response format that provides immediate confirmatory or corrective feedback, and they thus allow not only for the granting of partial credit in cases of initially incorrect reasoning, but furthermore for the ability to build cumulative question structures. Here, we report on a study that directly compares the functionality of ITs and CR questions in introductory physics exams. To do this, CR questions were converted to concept-equivalent ITs, and both sets of questions were deployed in midterm and final exams. We find that both question types provide adequate discrimination between stronger and weaker students, with CR questions discriminating slightly better than the ITs. Meanwhile, an analysis of inter-rater scoring of the CR questions raises serious concerns about the reliability of the granting of partial credit when this traditional assessment technique is used in a realistic (but non-optimized) setting. Furthermore, we show evidence that partial credit is granted in a valid manner in the ITs. Thus, together with consideration of the vastly reduced costs of administering IT-based examinations compared to CR-based examinations, our findings indicate that ITs are viable replacements for CR questions in formal examinations where it is desirable both to assess concept integration and to reward partial knowledge, while efficiently scoring examinations.
Comment: 14 pages, 3 figures, with appendix. Accepted for publication in PRST-PER (August 2014)
Learning design approaches for personalised and non-personalised e-learning systems
Recognizing the powerful role that technology plays in the lives of people, researchers are increasingly focusing on the most effective uses of technology to support learning and teaching. Technology-enhanced learning (TEL) has the potential to support and transform students’ learning and allows them to choose when, where and how to learn. This paper describes two different approaches to the design of personalised and non-personalised online learning environments, which have been developed to investigate whether personalised e-learning is more efficient than non-personalised e-learning, and discusses some of the students’ experiences and assessment test results based on the experiments conducted so far.
Context-aware Assessment Using QR-codes
In this paper we present the implementation of a general mechanism to deliver tests based on mobile devices and matrix codes. The system is an extension of Siette, and has not been specifically developed for any subject matter. To evaluate the performance of the system and show some of its capabilities, we developed a test for a second-year college course on Botany at the School of Forestry Engineering. Students were equipped with iPads and took an outdoor test on plant species identification. All students were able to take and complete the test in a reasonable time. Opinions expressed anonymously by the students in a survey about the usability of the system and the usefulness of the test were very favorable. We think that the application presented in this paper can broaden the applicability of automatic assessment techniques. The presentation of this work has been co-funded by the Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
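In a deployment like the one described, each matrix code typically encodes a URL that identifies both the test and the physical context (e.g. the plant specimen station) where the code is posted, so scanning it opens the right item in context. A minimal sketch of such a payload follows; the base URL and parameter names are illustrative assumptions, not Siette's actual API:

```python
from urllib.parse import parse_qs, urlencode, urlparse


def qr_payload(base_url, test_id, station_id):
    """Build the URL a matrix code would encode for one outdoor station.

    Scanning the code opens the assessment system with the test and the
    physical context (the specimen station) pre-selected.
    """
    query = urlencode({"test": test_id, "station": station_id})
    return f"{base_url}?{query}"


def decode_station(url):
    """Recover the station identifier server-side from a scanned URL."""
    return parse_qs(urlparse(url).query)["station"][0]
```

One code is printed per station, so the same generic delivery mechanism works for any subject: only the identifiers embedded in the URL change.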
A Construct-Modeling Approach to Develop a Learning Progression of how Students Understand the Structure of Matter
This paper builds on the current literature base about learning progressions in science to address the question, “What is the nature of the learning progression in the content domain of the structure of matter?” We introduce a learning progression in response to that question and illustrate a methodology, the Construct Modeling (Wilson, 2005) approach, for investigating the progression through a developmentally based iterative process. This study puts forth a progression of how students understand the structure of matter by empirically inter-relating constructs of different levels of sophistication, using a sample of 1,087 middle grade students from a large diverse public school district in the western part of the United States. The study also shows that student thinking can be more complex than hypothesized, as in the case of our discovery of a substructure of understanding in a single construct within the larger progression. Data were analyzed using a multidimensional Rasch model. Implications for teaching and learning are discussed: we suggest that the teacher’s choice of instructional approach needs to be fashioned in terms of a model, grounded in evidence, of the paths through which learning might best proceed, working toward the desired targets by a pedagogy which also cultivates students’ development as effective learners. This research sheds light on the need for assessment methods to be used as guides for formative work and as tools to ensure the learning goals have been achieved at the end of the learning period. The development and investigation of a learning progression of how students understand the structure of matter using the Construct Modeling approach makes an important contribution to the research on learning progressions and serves as a guide to the planning and implementation in the teaching of this topic. © 2017 Wiley Periodicals, Inc. J Res Sci Teach 54: 1024–1048, 2017
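The Rasch analysis mentioned above models the probability that a student at proficiency θ answers an item of difficulty b correctly; in the multidimensional case used in the study, each construct carries its own proficiency dimension. A minimal sketch of the dichotomous form only (an illustration of the model family, not the study's estimation procedure):

```python
import math


def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model:

        P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))

    theta: student proficiency on one construct; b: item difficulty.
    Both live on the same logit scale, so a student located at an item's
    difficulty has exactly a 50% chance of answering it correctly.
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

Ordering items by estimated difficulty along each construct is what lets a Rasch analysis operationalize the levels of a learning progression: items pitched at higher levels should be empirically harder.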
The role of pedagogical tools in active learning: a case for sense-making
Evidence from the research literature indicates that both audience response systems (ARS) and guided inquiry worksheets (GIW) can lead to greater student engagement, learning, and equity in the STEM classroom. We compare the use of these two tools in large-enrollment STEM courses delivered in different contexts, one in biology and one in engineering. The instructors studied utilized each of the active learning tools differently. In the biology course, ARS questions were used mainly to check in with students and assess whether they were correctly interpreting and understanding worksheet questions. The engineering course presented ARS questions that afforded students the opportunity to apply learned concepts to new scenarios, towards improving students’ conceptual understanding. In the biology course, the GIWs were primarily used in stand-alone activities, and most of the information necessary for students to answer the questions was contained within the worksheet, in a context that aligned with a disciplinary model. In the engineering course, the instructor intended for students to reference their lecture notes and rely on their conceptual knowledge of fundamental principles from the previous ARS class session in order to successfully answer the GIW questions. However, while their specific implementation structures and practices differed, both instructors used these tools to build towards the same basic disciplinary thinking and sense-making processes of conceptual reasoning, quantitative reasoning, and metacognitive thinking.
Comment: 20 pages, 5 figures