32 research outputs found

    A preliminary evaluation of using WebPA for online peer assessment of collaborative performance by groups of online distance learners

    Collaborative assessment has well-recognised benefits in higher education and, in online distance learning, this type of assessment may be integral to collaborative e-learning and may strongly influence the student’s relationship with learning. While there are known benefits associated with collaborative assessment, the main drawback is that students perceive that their individual contribution to the assessment is not recognised. Several methods can be used to overcome this; for example, something as simple as the teacher evaluating each individual’s contribution. However, teacher assessment can be deemed unreliable by students, since most group work is not done in the presence of the teacher (Loddington, Pond, Wilkinson, & Wilmot, 2009). Therefore, students’ assessment of their own and their peers’ performance and contribution in relation to the assessment task, also known as peer moderation, can be a more suitable alternative. A number of tools can be used to facilitate peer moderation online, such as WebPA, a free, open-source, online peer assessment tool developed by Loughborough University. This paper is a preliminary evaluation of online peer assessment of collaborative work undertaken by groups of students studying online at a distance at a large UK university, where WebPA was used to facilitate the process. Students’ feedback on the use of WebPA was mixed, although most of the students found the software easy to use, with few technical issues, and the majority reported that they would be happy to use it again. The authors reported WebPA to be a beneficial peer assessment tool.
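    To illustrate the idea of peer moderation described above, the sketch below scales a shared group mark by a weighting derived from the peer scores each member receives. It is a minimal illustration under assumed names and an assumed averaging rule, not WebPA's documented algorithm.

# Illustrative sketch of peer moderation: each group member's individual mark is
# the group mark scaled by a weighting derived from the peer scores they received.
# The function names and the weighting rule are assumptions for illustration,
# not WebPA's documented algorithm.

def peer_weightings(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """scores[assessor][assessee] -> score awarded (self-assessment included)."""
    received = {name: 0.0 for assessor in scores for name in scores[assessor]}
    for assessor, awarded in scores.items():
        for assessee, score in awarded.items():
            received[assessee] += score
    mean = sum(received.values()) / len(received)
    # A weighting above 1 means the group rated this member above the group average.
    return {name: total / mean for name, total in received.items()}

def moderated_marks(group_mark: float, scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Scale the shared group mark by each member's weighting, capped at 100."""
    return {name: min(100.0, group_mark * w) for name, w in peer_weightings(scores).items()}

# Example: a group of three students whose submission earned a group mark of 65.
scores = {
    "ana": {"ana": 4, "ben": 5, "cai": 3},
    "ben": {"ana": 4, "ben": 4, "cai": 2},
    "cai": {"ana": 5, "ben": 5, "cai": 3},
}
print(moderated_marks(65.0, scores))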

    Evolution in the Design and Functionality of Rubrics: from “Square” Rubrics to “Federated” Rubrics

    The assessment of learning remains one of the most controversial and challenging aspects of teaching. Among recent technical solutions, methods and techniques such as eRubrics have emerged in an attempt to address this situation. Recognising that all teaching contexts are different and that no single solution fits every case, specific measures are adapted to contexts in which teachers receive support from institutions and communities of practice. This paper presents the evolution of the eRubric service [1], which started from a first experience with paper rubrics and, over time and after several I+D+i educational projects [2], has evolved thanks to the support of a community of practice [3] and the exchange of experiences between teachers and researchers. The paper reports the results and functionality of the eRubric service up to the date of publication. a) Project I+D+i EDU2010-15432: eRubric federated service for assessing university learning, http://erubrica.uma.es/?page_id=434. b) Centre for the Design of eRubrics, National Distance Education System (Sined), Mexico, http://erubrica.uma.es/?page_id=389.
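    As a concrete illustration of what an electronic rubric captures, the sketch below encodes a rubric as weighted criteria with levelled descriptors and computes a score from a chosen level per criterion. The structure and the example criteria are assumptions for illustration, not the data model of the eRubric service described in the paper.

# Minimal sketch of how an electronic rubric can be represented and applied.
# The weighted-criteria structure and the example criteria are illustrative
# assumptions, not the eRubric service's actual data model.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float           # relative weight of this criterion
    levels: dict[int, str]  # level score -> descriptor

@dataclass
class Rubric:
    title: str
    criteria: list[Criterion]

    def score(self, marks: dict[str, int]) -> float:
        """Weighted score in [0, 1] given one chosen level per criterion."""
        total_weight = sum(c.weight for c in self.criteria)
        return sum(
            c.weight * marks[c.name] / max(c.levels) for c in self.criteria
        ) / total_weight

rubric = Rubric("Essay rubric", [
    Criterion("Argument", 2.0, {1: "Unsupported claims", 2: "Some evidence", 3: "Well-evidenced"}),
    Criterion("Structure", 1.0, {1: "Disorganised", 2: "Mostly clear", 3: "Clear and logical"}),
])
print(rubric.score({"Argument": 3, "Structure": 2}))  # -> 0.888...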

    Peer Review in the Classroom: The Benefits of Free Selection in a Time-Restricted Setting

    This study examines the potential of a peer review approach in the time-restricted setting of a class session. In the free-selection setting we explored, students had access to all peer work and were allowed to select which work they wanted to read and review. The study was conducted during the 8th week of the course, right after the students’ first deliverable. A total of 18 Master’s students were asked to provide structured feedback to their peers using a review template. In the 2-hour period of the peer review activity, students had to review two peer deliverables: one that was randomly assigned to them and one they could choose freely from the remaining set. Analysis of the results showed that while half of the students followed a minimum-effort strategy, reading and reviewing only two peer deliverables, the other half read several deliverables before deciding which one to review. We maintain that reviewing peer work can be beneficial for students, offering them multiple perspectives (i.e., those of the reviewees). As such, the suggested approach could prove more beneficial for students than the widely applied paired approach, in which two students review each other’s work. The study also examines the criteria students use for selecting which peer work to review and comments on the limited overhead imposed on the teacher, making the method a useful and efficient instructional tool.
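    The allocation described above (one randomly assigned deliverable plus one freely chosen from the remaining pool) could be set up along the lines of the sketch below; the function names and the retry-based random assignment are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of the review allocation: each student reviews one randomly
# assigned peer deliverable and may freely pick a second one from the remaining pool.
import random

def assign_random_reviews(students: list[str]) -> dict[str, str]:
    """Randomly assign each student one peer deliverable that is not their own."""
    while True:
        shuffled = random.sample(students, len(students))
        # Retry until nobody is assigned their own work (a simple derangement check).
        if all(s != d for s, d in zip(students, shuffled)):
            return dict(zip(students, shuffled))

def selectable_pool(student: str, assigned: dict[str, str], students: list[str]) -> list[str]:
    """Deliverables the student may still browse and choose to review freely."""
    return [s for s in students if s != student and s != assigned[student]]

students = ["s01", "s02", "s03", "s04", "s05"]
assigned = assign_random_reviews(students)
for s in students:
    print(s, "-> assigned:", assigned[s], "| free choice from:", selectable_pool(s, assigned, students))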

    Peer feedback: moving from assessment of learning to assessment for learning

    The following case study showcases a model of academic peer learning that has demonstrated clear links to learning development in final-year students. The paper discusses the introduction of a new assessment structure for a 15-credit course unit usually taken by 90 students: a short discursive essay (500 words) due early in the first teaching term. This essay is peer assessed in groups of four students. The peer feedback and tutor mark then serve as formative feedback (feed-forward) for the main, longer essay (2,000-2,500 words) due in the second teaching term. Following the outline of the case background, details of the peer assessment are provided, including its development and structure. The new assessment structure has resulted in deeper learning, positive student feedback, fewer student complaints regarding grades received for the main essay, and better preparation for the final exam. Reflections are offered in the conclusion.
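    One simple way the cohort could be partitioned into peer-assessment groups of four is sketched below; the shuffling step and the remainder handling are illustrative assumptions rather than the course's actual procedure.

# Sketch of splitting a cohort into peer-assessment groups of four, folding any
# leftover students into the previous group so no one is left alone.
import random

def make_groups(students: list[str], size: int = 4) -> list[list[str]]:
    pool = students[:]
    random.shuffle(pool)
    groups = [pool[i:i + size] for i in range(0, len(pool), size)]
    # If the final group is too small, merge it into the one before it.
    if len(groups) > 1 and len(groups[-1]) < 3:
        groups[-2].extend(groups.pop())
    return groups

cohort = [f"student{i:02d}" for i in range(1, 91)]  # roughly 90 students take the unit
groups = make_groups(cohort)
print(len(groups), "groups; sizes:", sorted({len(g) for g in groups}))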

    An International Experiment with eRubrics: An Approach to Educational Assessment in Two Courses of the Early Childhood Education Degree

    The present project aims at assessing the eRubric tool [1, 2] within a teacher education programme in Early Childhood Education. The eRubric is a tool and a method for teacher training and educational assessment. eRubrics can create collaborative learning environments, raise awareness in students about their own learning process and promote active participation in class in order to ensure learning quality. The planned methods were interwoven with the tool and with the control groups that were involved in the institutional platform at the University of Stockholm. In order to provide an international angle, the university is actively cooperating with the University of Malaga through the Gtea [3] group. The project started during the academic year 2012 and was divided into three stages: implementation, development and evaluation. According to the teachers, the results obtained show that eRubrics offer educational benefits in terms of competences and evidence of learning, students’ active participation in their tasks and peer feedback, even though teachers still show some reluctance in implementing this tool and methodology. For their part, students also undertake a process of reflection and collaborative learning and obtain positive results, while nonetheless experiencing some difficulties and limitations. Bergman, M.E. (2014). An International Experiment with eRubrics: An Approach to Educational Assessment in Two Courses of the Early Childhood Education Degree. REDU. Revista de Docencia Universitaria, 12(1), 99-110. https://doi.org/10.4995/redu.2014.6409

    Probing the Landscape: Toward a Systematic Taxonomy of Online Peer Assessment Systems in Education

    We present a research framework for a taxonomy of online educational peer-assessment systems. This framework enables researchers in technology-supported peer assessment to understand the current landscape of technologies supporting student peer review and assessment, specifically their affordances and constraints. The framework helps identify the major themes in existing and potential research and formulate an agenda for future studies. It also informs educators and system design practitioners about use cases and design options.
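    By way of illustration, a taxonomy of this kind can be encoded as a set of dimensions against which systems are profiled and compared; the dimensions and example entries below are assumptions for illustration, not the taxonomy proposed in the paper.

# Sketch of profiling peer-assessment systems along taxonomy dimensions.
# Dimension names and the two example systems are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    name: str
    features: dict[str, str] = field(default_factory=dict)  # dimension -> category

DIMENSIONS = ["allocation", "anonymity", "rubric support", "feedback type"]

systems = [
    SystemProfile("ToolA", {"allocation": "random", "anonymity": "double-blind",
                            "rubric support": "yes", "feedback type": "ratings+comments"}),
    SystemProfile("ToolB", {"allocation": "free selection", "anonymity": "single-blind",
                            "rubric support": "no", "feedback type": "comments"}),
]

# Tabulate the landscape along each dimension to spot gaps in coverage.
for dim in DIMENSIONS:
    values = {s.features.get(dim, "unspecified") for s in systems}
    print(f"{dim}: {sorted(values)}")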

    Evolution in the Design and Functionality of Rubrics: from “Square” Rubrics to Federated eRubrics

    The assessment of learning remains one of the most controversial and difficult elements for teachers. Among recent solutions, methodological and technical approaches such as eRubrics have emerged to help address this situation, in the knowledge that teaching contexts differ, so there is no single solution for every case but rather specific measures adapted to the contexts in which teachers draw on institutional support and communities of practice. This paper presents the evolution of an eRubric service [1] that began with a series of educational innovation projects and, later, I+D+i projects [2], and that has evolved with the support of a community of practice [3] and the exchange of experiences between teachers and researchers. The article reports the results and functionality of this service achieved up to the time of publication. Funding: Ministerio de Ciencia e Innovación, Plan Nacional I+D+i 2010-2013, project EDU2010-15432. Resolution of 30 December 2009 (BOE of 31 December). Fondos Feder (CE

    Technology-Enhanced Peer Review: Benefits and Implications of Providing Multiple Reviews

    This study analyses the impact of self and peer feedback in technology-enhanced peer review settings. The impact of receiving peer comments (the “receiver” perspective) is compared to that of reaching one’s own insights by reviewing others’ work (the “giver” perspective). In this study, 38 sophomore students were randomly assigned to two conditions and engaged in a peer review activity, facilitated by a web-based learning environment, that asked them to provide multiple reviews. In the Peer Reviewed (PR) condition, students both reviewed peer work and received peer comments on their own work. By contrast, in the Self Reviewed (SR) condition, students provided peer reviews but did not receive any; instead, they were asked to perform a self-review before proceeding to any revisions of their work. Results showed that the two groups were comparable in all aspects, suggesting that the absence of incoming peer reviews can be efficiently alleviated by other types of scaffolds, such as a scripted self-review process. Overall, the study provides evidence that the review “giver” perspective (as opposed to the typical “receiver” perspective) is a viable option and has noteworthy implications for the design of technological systems that aim to flexibly support more efficient peer review schemes.
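    The two conditions can be pictured schematically as below: every participant gives reviews, but only the PR group also receives them, while the SR group completes a scripted self-review. The routing logic and names are illustrative assumptions, not the authors' web-based environment.

# Schematic sketch of the PR/SR design: all students give peer reviews, only the
# Peer Reviewed (PR) group receives them, and the Self Reviewed (SR) group does a
# scripted self-review instead. Names and routing are illustrative assumptions.
import random

def split_conditions(students: list[str], seed: int = 0) -> dict[str, str]:
    """Randomly split the cohort into the two conditions."""
    rng = random.Random(seed)
    shuffled = rng.sample(students, len(students))
    half = len(shuffled) // 2
    return {s: ("PR" if i < half else "SR") for i, s in enumerate(shuffled)}

def feedback_plan(condition: str) -> dict[str, bool]:
    return {
        "gives_peer_reviews": True,                  # both conditions review others' work
        "receives_peer_reviews": condition == "PR",  # only PR students get comments back
        "does_scripted_self_review": condition == "SR",
    }

students = [f"s{i:02d}" for i in range(1, 39)]  # the 38 participants in the study
conditions = split_conditions(students)
print("s01 is in", conditions["s01"], "->", feedback_plan(conditions["s01"]))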