This is an account of the issues and findings of a project, funded by the Centre for Excellence in Preparing for Academic Practice, which has been enquiring into the evaluation of development provision for postgraduate and early career researchers and academics with respect to participant learning and attainment, asking: to what extent can assessment of participants be deployed in development events, especially those which do not lead to an award?

At the most general level, assessment within courses is one criterion for the evaluation of courses and programmes. Students and other participants in courses and other learning activities learn more satisfactorily if the purpose of the activity, and what and how they need to learn, is clear to them. Assessment, aligned with the aims, objectives and learning outcomes of the activity, reinforces the direction that learning can and should take (cf. Biggs 2003, p141). In accredited programmes which lead to an award, such as higher education qualifications, assessment facilitates measures of student attainment, ultimately in pass/fail rates and graded results. For instance, assuming that the programmes in question are not deploying norm-referenced assessment, we may judge the effectiveness of our PGCert programmes in Learning & Teaching / Higher Education partly in terms of the proportion of students who complete the programme successfully. The competence of doctoral supervisors, and the quality of departmental and institutional research study programmes, is judged in part according to rates of submission and successful completion. There is a diverse range of non-discipline-specific researcher support provision in the UK higher education sector, for staff and for students, such as workshops supported by the Roberts Agenda money: development events not leading to an award, running alongside any award-bearing courses, but where there is a presumption that learning will result.
Notwithstanding the multitude of other methods of evaluating their usefulness, the question remains how we can gauge research student learning and attainment through those development events: “[…] how are generic research skills to be judged […]?” (Leonard 2001, p239), i.e. if they do not themselves yield assessment outcomes as a measure of evaluation. Assessment is not the norm (albeit with notable exceptions) in such development events for staff or students, but the suggestion has been made (Gough & Denicolo 2007, pp5-6, pp24-27; following Kent & Guzkowska 1998) that it does have a place in initial and continuing professional development for both groups. Assessment for encouraging self-knowledge and personal development is recommended by some (cf. Light & Cox 2001, p187) for HE students generally, with schemas articulated at least for more general “transferable” skills and for progress files (Dickinson 2000; James 2000; Drew, Thorpe & Bannister 2002; Toohey 2002; Jackson & Ward 2004). There is a suggestion of “a strong link between students' perception of the importance of skills and the degree to which the skills are assessed” (Leggett et al. 2004, abstract). It is fair to say that the assessment of relatively non-discipline-specific components is easier to integrate into comprehensively assessed, discipline-based undergraduate degree programmes. We may learn from other educational levels about achieving this in non-award-bearing programmes (cf. Greenwood & Wilson 2004). However, the question has been posed, for instance within The Rugby Team, whether assessment of the doctorate itself should take into account the development of generic/transferable skills (Park 2007, p32).
The project, through exploratory methodology and mixed methods, has been enquiring into the evaluation of development events (of all types) for early career academics (ECAs, which includes research students), with respect to participant learning and attainment (impact levels 2 and 3 of the Rugby Team Impact Framework, expounded in Bromley, after Kirkpatrick), asking the question: to what extent can assessment of participants be deployed in development events, especially those which do not lead to an award? The fieldwork, comprising group discussions, individual interviews and questionnaires, has focused upon the experiences of participants in a diverse range of development events, including some staged to include additional activities and assignments for assessment. The project finds that participating is seen as valuable in its own right for learning, with assessment seen as unnecessary by some, and grading as offensive by those evoking a ‘romantic’ narrative. Other participants, especially those in development events which are not normally complemented by an assignment task, are largely open variously to assessment both for learning and of learning. A conclusion is that the tail of summative assessment, where that is defined according to the needs of accredited and award-bearing provision, does not wag the dog of learning through committed, purposive participation. At the same time, there is a strong indication that the selective integration of tasks into development provision, as means by which the performance and competence of participants may be assessed, would constitute a fruitful investment of resources, and an enhancement across disciplines.