Improving self- and peer assessment processes with technology
Purpose - As a way of focusing curriculum development and learning outcomes, universities have introduced graduate attributes, which their students should develop during their degree course. Some of these attributes are discipline-specific; others are generic to all professions. The development of these attributes can be promoted by the careful use of self- and peer assessment. The authors have previously reported using the self- and peer assessment software tool SPARK in various contexts to facilitate opportunities to practise, develop, assess and provide feedback on these attributes. This research, and that of the other developers, identified the need to extend the features of SPARK to increase its flexibility and capacity to provide feedback. This paper seeks to report the results of the initial trials investigating the potential of these new features to improve learning outcomes. Design/methodology/approach - The paper reviews some of the key literature on self- and peer assessment, discusses the main aspects of the original online self- and peer assessment tool SPARK and the new version SPARKPLUS, and reports and analyses the results of a series of student surveys to investigate whether the new features and applications of the tool have improved learning outcomes in a large multi-disciplinary Engineering Design subject. Findings - It was found that using self- and peer assessment in conjunction with collaborative peer learning activities increased the benefits to students and improved engagement. Furthermore, the new features available in SPARKPLUS facilitated the efficient implementation of additional self- and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes. The trials demonstrated that the tool assisted in improving students' engagement with and learning from peer learning exercises, in collecting and distributing feedback, and in helping students to identify their individual strengths and weaknesses. Practical implications - SPARKPLUS facilitates the efficient management of self- and peer assessment processes even in large classes, allowing assessments to be run multiple times a semester without an excessive burden on the coordinating academic. While SPARKPLUS has enormous potential to provide significant benefits to both students and academics, it must be cautioned that, although a powerful tool, its successful use requires thoughtful and reflective application combined with good assessment design. Originality/value - It was found that the new features available in SPARKPLUS efficiently facilitated the development of new self- and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes. © Emerald Group Publishing Limited
Combining flipped instruction and multiple perspectives to develop cognitive and affective processes
Using self assessment to integrate graduate attribute development with discipline content delivery
Professionals, in addition to being technically competent, require skills in collaboration and communication and the ability to work in teams [1,2]. There is a reported competency gap between these skills required by employers and those developed by students during their undergraduate courses [3,4]. In response to this gap, universities have introduced graduate attributes which their students should develop during the course of their degree. Some of these attributes are discipline-specific; others are generic to all professions. Generic attributes include teamwork skills, being able to think both critically and independently, being able to critically appraise one's own work and the work of others, and an appreciation of the need for and value of critical reflection in one's academic, personal and professional life. The development of all these attributes can be promoted by employing self and peer assessment. Thoughtful use provides opportunities to practise, develop, assess and provide feedback on these attributes and to develop students' judgement [5], even within subjects where traditional discipline content is taught. Our research involves using two assessment metrics produced from confidential student self and peer evaluations. These metrics are shared between all group members in structured feedback sessions several times a semester. This allows students to identify their individual strengths and weaknesses and address any competency gaps in their development. These metrics also allow progress to be assessed not only within a single subject but throughout an entire degree program.
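The two assessment metrics referred to above are not defined in this abstract. As a rough, hypothetical illustration of how factors of this kind can be derived from a team's confidential self and peer ratings, the sketch below computes a contribution weight and a self-versus-peer ratio; the function names, formulas and sample data are assumptions for illustration only, not a description of the authors' actual metrics or of SPARK/SPARKPLUS.

```python
# Illustrative sketch only: assumed forms of two self/peer assessment factors,
# not the metrics defined by the authors or implemented in SPARK/SPARKPLUS.
from math import sqrt
from statistics import mean


def contribution_weight(ratings_for_student: list[float],
                        ratings_for_team: list[list[float]]) -> float:
    """Assumed contribution factor: a student's total ratings relative to the
    team average, square-rooted to soften extreme values."""
    student_total = sum(ratings_for_student)
    team_average = mean(sum(r) for r in ratings_for_team)
    return sqrt(student_total / team_average)


def self_vs_peer_ratio(self_rating: float, peer_ratings: list[float]) -> float:
    """Assumed self-versus-peer factor: values above 1 suggest the student
    rates their own contribution higher than their peers do."""
    return self_rating / mean(peer_ratings)


# Hypothetical data: ratings received by each member of a three-person team,
# with each inner list ordered as [self rating, peer rating, peer rating].
team = [[4.0, 3.5, 4.5], [3.0, 3.0, 2.5], [4.5, 4.0, 4.0]]
print(round(contribution_weight(team[0], team), 2))          # ~1.04
print(round(self_vs_peer_ratio(team[0][0], team[0][1:]), 2))  # 1.0
```

In a scheme like this, the contribution weight could moderate a shared team mark, while the self-versus-peer ratio gives each student feedback on how their self-perception compares with their peers' view of their contribution.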
Self and peer assessment: A necessary ingredient in developing and tracking students' graduate attributes
Recently there has been a shift towards assessing students' learning outcomes in terms of graduate attributes which they should develop and demonstrate during the course of their degree. A number of universities have tried to address these issues, for example by using software tools such as ReView to track attribute development or by producing both academic and professional skill development transcripts. However, many attributes, such as teamwork and the ability to give and receive feedback, are typically practised in collaborative peer exercises. Furthermore, these exercises are often conducted outside of regular class sessions; hence a thorough assessment of these attributes should include input from both individual students and their peers. We therefore propose that any method of developing and tracking students' graduate attributes should include self and peer assessment. © 2009 Keith Willey & Anne Gardner
Investigating the capacity of self and peer assessment activities to engage students and promote learning
The authors have previously reported the effectiveness of using self and peer assessment to improve learning outcomes by providing opportunities to practise, assess and provide feedback on students' attribute development. Despite this work and the research of others, a significant number of students and, indeed, many academics focus on the free-rider deterrent capability of self and peer assessment, rather than on its capacity to provide opportunities for developing judgement and facilitating reflection and feedback to complete the learning cycle. The advent of web-based tools such as SPARKPLUS allows the frequent and efficient implementation of self and peer assessment activities, even in large classes. This article reports the results of an investigation into whether the regular use of self and peer assessment in different contexts promoted effective peer learning, increased engagement and encouraged students to learn. © 2010 SEFI
Developing team skills with self- and peer assessment: Are benefits inversely related to team function?
Purpose - Self- and peer assessment has proved effective in promoting the development of teamwork and other professional skills in undergraduate students. However, in previous research approximately 30 percent of students reported that its use produced no perceived improvement in their teamwork experience. It was hypothesised that a significant number of these students were probably members of a team that would have functioned well without self- and peer assessment, and hence the process did not improve their teamwork experience. This paper aims to report the testing of this hypothesis. Design/methodology/approach - The paper reviews some of the literature on self- and peer assessment, outlines the online self- and peer assessment tool SPARKPLUS, and analyses the results of a post-subject survey of students in a large multi-disciplinary engineering design subject. Findings - It was found that students who were neutral as to whether self- and peer assessment improved their teamwork experience cannot be assumed to be members of well-functioning teams. Originality/value - To increase the benefits for all students, it is recommended that self- and peer assessment focus on collaborative peer learning, not just assessment of team contributions. Furthermore, it is recommended that feedback sessions focus on learning, not just assessment outcomes, and that graduate attribute development be recorded and tracked by linking development to the categories required for professional accreditation. © Emerald Group Publishing Limited
Mapping the landscape of engineering education research: An Australian perspective
The landscape model presented in this paper stimulated dialogue around the nature of topics and research in our community and allowed participants to find a place to belong. We argue that such a dialogue will help us identify, develop and grow our research domain and support those seeking to participate in or move within it. We propose a developmental model that combines the landscape with the active pursuit of the characteristics exhibited in quality research. We found that one indication of an emerging researcher's progress on their developmental journey is their use of multiple perspectives, interpretations and dimensions in their research. We suggest that such a model would encourage improvements in the quality of studies in all areas of the landscape, rather than the perception that improvement can be achieved by adopting a specific approach or type of research. A practice-versus-research dichotomy is ultimately divisive and does little to assist researchers in developing their expertise. We believe national conferences should provide a forum for all authors in an environment aimed at improving the quality of research and publications and the development of academics, wherever they are on the landscape.
Changing student's perceptions of self and peer assessment
The authors have previously reported the effectiveness of using self and peer assessment to improve learning outcomes by providing opportunities to practise, assess and provide feedback on students' learning and development. Despite this work and the research of others, we found that a significant number of students perceive self and peer assessment to be an instrument to facilitate fairness, focusing on its free-rider deterrent capacity rather than on the opportunities it provides for reflection and feedback to complete the learning cycle. We assumed that these perceptions were reinforced by the fact that the main use of self and peer assessment was to moderate marks and provide feedback to individuals on their contribution to team tasks. We hypothesised that these perceptions would change if students were provided with opportunities to use self and peer assessment for different purposes. In this paper we report testing this hypothesis by using self and peer assessment multiple times a semester, not only to assess team contributions but also to assess individual student assignments and to run benchmarking exercises. Our aim was to test whether this approach would assist students to gain more benefit from self and peer assessment processes while simultaneously breaking down their narrow focus on fairness. © 2009 Keith Willey & Anne Gardner
Does pre-feedback self reflection improve student engagement, learning outcomes and tutor facilitation of group feedback sessions?
The authors have previously reported the effectiveness of using self and peer assessment to improve learning outcomes by providing opportunities to practise, assess and provide feedback on students' learning and development. Despite this work and the research of others, we observed that some students felt they had nothing to learn from feedback sessions. Hence they missed the opportunity to reflect and to receive feedback to complete the learning cycle. This behaviour suggested that students needed more guidance to facilitate deeper engagement. We hypothesised that student engagement would increase if students were provided with guiding 'feedback catalyst questions' to initiate reflection and facilitate effective feedback on learning outcomes. In this paper we report testing whether this approach assisted students to gain more benefit from the self and peer assessment feedback sessions. In our investigation both students and tutors were asked to evaluate the effectiveness of the feedback catalyst questions in improving student engagement and learning. We found that the pre-feedback self reflection exercise improved learning outcomes and student engagement, with more than 80% of students reporting multiple benefits. Furthermore, tutors reported that the exercise assisted them in facilitating their sessions. However, not surprisingly, the degree of success was related in part to the tutor's attitude to the exercise. This suggests that while the feedback catalyst questions were extremely effective, there is no substitute for enthusiastic and engaging tutorial staff. © 2010 Gardner & Willey
Authors' perceptions of peer review of conference papers and how they characterise a 'good' one
This paper examines the individual's experience of the peer review process to explore implications for the wider engineering education research community. A thematic analysis of interview transcripts showed that providing feedback to authors in reviews was mentioned as frequently as the role of quality assurance of the conference papers. We used responses from participants from various levels of expertise and types of universities to identify what were, for them, the elements of a quality conference paper and a quality review. For a conference paper these included that it should be relevant, situate itself relative to existing literature, state the purpose of the research, describe a sound methodology with a logically developed argument, have conclusions supported by evidence and use language of a professional standard. A quality review should start on a positive note, suggest additional literature, critique the methodology and written expression, and unambiguously explain what the reviewer means. The lists of characteristics of a good paper and a good review share elements such as attention to relevant literature and methodology. There is also substantial overlap between how our participants characterise quality papers and reviews and the review criteria used for the AAEE conference, and for such publication outlets as the European Journal of Engineering Education (EJEE) and the Journal of Engineering Education (JEE). This suggests some level of agreement in the community about the elements that indicate quality. However, we need to continue discussions about what we mean by 'sound' methodology and 'good' evidence, as well as establishing some shared language and understanding of the standards required in regard to the review criteria. The results of this study represent the first steps in improving our shared understanding of what constitutes quality research in engineering education for our community, and how we might better convey that when offering constructive advice to authors in a review of a conference paper. Since the peer review process has implications for the development of individual researchers in the field, and hence for the field overall, it seems reasonable to ask reviewers to pay attention to how they write reviews so that they create the potential for engineering academics to successfully transition into this different research paradigm.