
    Towards a competency model for adaptive assessment to support lifelong learning

    Adaptive assessment provides efficient and personalised routes to establishing the proficiencies of learners. We can envisage a future in which learners are able to maintain and expose their competency profile to multiple services throughout their lives, and those services will use the competency information in the model to personalise assessment. Current competency standards tend to oversimplify the representation of competency and the knowledge domain. This paper presents a competency model that evaluates learned capability by considering achieved competencies, in order to support adaptive assessment for lifelong learning. The model offers a multidimensional view of competencies and provides for interoperability between systems as the learner progresses through life. The proposed competency model is being developed and implemented in the JISC-funded Placement Learning and Assessment Toolkit (mPLAT) project at the University of Southampton. This project, which takes a Service-Oriented approach, will contribute to the JISC community by adding mobile assessment tools to the E-framework.
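    The learner-owned, multidimensional competency profile envisaged above could be sketched along the following lines. This is a minimal illustration only: the field names (subject, skill, context, proficiency) and the JSON serialisation are assumptions for exposition, not the mPLAT or E-framework schema.

```python
# Minimal sketch of a multidimensional, service-consumable competency profile.
# Field names are illustrative assumptions, not the mPLAT schema.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class Competency:
    subject: str        # knowledge-domain topic the competency belongs to
    skill: str          # observable capability that was demonstrated
    context: str        # setting in which the evidence was gathered
    proficiency: float  # 0.0 (novice) .. 1.0 (expert)

@dataclass
class CompetencyProfile:
    learner_id: str
    competencies: list[Competency] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the profile so other assessment services can consume it."""
        return json.dumps(asdict(self), indent=2)

profile = CompetencyProfile(
    learner_id="learner-042",
    competencies=[
        Competency("wound care", "perform dressing change", "clinical placement", 0.7),
        Competency("wound care", "document treatment", "clinical placement", 0.4),
    ],
)
print(profile.to_json())
```

    A profile held in a neutral, serialisable form like this is one way a learner could expose the same competency data to several assessment services over time.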

    The illusion of competency versus the desirability of expertise: Seeking a common standard for support professions in sport

    In this paper we examine and challenge the competency-based models which currently dominate accreditation and development systems in sport support disciplines, largely the sciences and coaching. Through consideration of exemplar shortcomings, the limitations of competency-based systems are presented as failing to cater for the complexity of decision making and the need for proactive experimentation essential to effective practice. To provide a better fit with the challenges of the various disciplines in their work with performers, an alternative approach is presented which focuses on the promotion, evaluation and elaboration of expertise. Such an approach resonates with important characteristics of professions, whilst also providing for the essential ‘shades of grey’ inherent in work with human participants. Key differences between the approaches are considered through exemplars of evaluation processes. The expertise-focused method, although inherently more complex, is seen as offering a less ambiguous and more positive route, both through more accurate representation of essential professional competence and through facilitation of future growth in proficiency and evolution of expertise in practice. Examples from the literature are also presented, offering further support for the practicalities of this approach.

    Knowledge representation and evaluation: an ontology-based knowledge management approach

    Competition between Higher Education Institutions is increasing at an alarming rate, while changes in the surrounding environment and the demands of the labour market are frequent and substantial. Universities must meet the requirements of both national and European legislative environments. The Bologna Declaration aims to provide guidelines and solutions for these problems and challenges of European Higher Education. One of its main goals is the introduction of a common framework of transparent and comparable degrees that ensures the recognition of knowledge and qualifications of citizens all across the European Union. This paper discusses a knowledge management approach that highlights the importance of knowledge representation tools such as ontologies. The discussed ontology-based model supports the creation of transparent curriculum content (Educational Ontology) and the promotion of reliable knowledge testing (Adaptive Knowledge Testing System).
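    One plausible way to picture the ontology-based approach is as a set of subject-predicate-object triples linking curriculum concepts, which a testing system can then query. The concept names and relations below are illustrative assumptions, not the Educational Ontology described in the paper.

```python
# Toy educational ontology expressed as subject-predicate-object triples.
# Concepts and relation names are assumed for illustration only.
from collections import defaultdict

triples = [
    ("Databases",     "is_part_of",    "BSc Computer Science"),
    ("Normalisation", "is_concept_of", "Databases"),
    ("SQL querying",  "is_concept_of", "Databases"),
    ("Normalisation", "requires",      "Relational model"),
]

# Index prerequisite relations so an adaptive testing system could check
# what a learner should already know before asking about a concept.
requires = defaultdict(list)
for subj, pred, obj in triples:
    if pred == "requires":
        requires[subj].append(obj)

print(requires["Normalisation"])  # ['Relational model']
```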

    Developing professional recognition of systems thinking in practice: an interim report

    The interim report on developing a competency framework for systems thinking in practice (STiP) provides a step towards possibly developing professional recognition of STiP. The report provides feedback to the initial co-respondents involved with phase 1 of this wider inquiry, and provides a platform to a wider audience for initiating a second phase of the inquiry. The phase 1 study had the following objectives: (1) to scope relevant examples of work aimed at giving professional recognition to systems thinking, and (2) to capture some perspectives on the challenges and opportunities facing the task of giving professional recognition to systems thinking. Phase 2 of the wider inquiry aims firstly to consolidate the findings from phase 1, but also to focus more on moves towards collaborative modelling of a STiP competency framework. The research is carried out by members of the Applied Systems Thinking in Practice (ASTiP) Group at The Open University (UK) with funding from OU eSTEeM (OU Centre for STEM Pedagogy). The research team for phase 1 comprised Rupesh Shah (Associate Lecturer), who carried out the core research activities, in collaboration with Martin Reynolds (Senior Lecturer), who is overseeing both phases of the wider inquiry, including support for reporting on research outcomes. The findings reported in sections 4, 5 and 6 remain largely unrefined and in sketch (bullet) form at this interim stage of reporting. The interim report comprises a brief background to the wider inquiry before outlining the approach taken to the phase 1 study. The findings are reported in relation to each of the two study objectives. Three themes arising from the study, as identified by Rupesh, are then discussed. Finally, some concluding ideas are presented for taking forward the outcomes from this study towards a second phase of the inquiry.

    Assessing collaborative learning: big data, analytics and university futures

    Traditionally, assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion, by individuals under controlled conditions, of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address 21st Century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.
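    As a rough illustration of the analytics idea, team-level and individual-level indicators can be derived from the same activity data. The log format and the simple averaging below are hypothetical; real learning-analytics pipelines are considerably richer.

```python
# Toy sketch: derive collective (team) and individual indicators from one
# activity log. The (student, team, contribution_score) format is assumed.
from collections import defaultdict

activity_log = [
    ("alice", "team-1", 0.8),
    ("bob",   "team-1", 0.5),
    ("carol", "team-2", 0.9),
]

team_scores = defaultdict(list)
student_scores = defaultdict(list)
for student, team, score in activity_log:
    team_scores[team].append(score)
    student_scores[student].append(score)

collective = {t: sum(s) / len(s) for t, s in team_scores.items()}
individual = {p: sum(s) / len(s) for p, s in student_scores.items()}
print(collective)   # e.g. {'team-1': 0.65, 'team-2': 0.9}
print(individual)
```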

    Transforming a competency model to assessment items

    The problem of comparing and matching different learners’ knowledge arises when assessment systems use a one-dimensional numerical value to represent “knowledge level”. Such assessment systems may measure inconsistently because they estimate this level differently and inadequately. The multi-dimensional competency model called COMpetence-Based learner knowledge for personalized Assessment (COMBA) is being developed to represent a learner’s knowledge in a multi-dimensional vector space. The heart of this model is to treat knowledge not as a possession, but as a contextualized space of capability, either actual or potential. The paper discusses the automatic generation of an assessment from the COMBA competency model as a “guide-on-the-side”.
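    The idea of holding learner knowledge as a multi-dimensional vector and using it to drive item selection can be sketched as follows. The dimension names, item bank and selection rule are assumptions for illustration; they are not the COMBA algorithm itself.

```python
# Sketch: learner knowledge as a vector over competency dimensions, with a
# simple "guide on the side" rule that targets the weakest dimension.
# Dimension names and the selection heuristic are assumed, not COMBA's.
learner = {"recall": 0.8, "application": 0.4, "analysis": 0.2}

item_bank = [
    {"id": "q1", "targets": "recall",      "difficulty": 0.5},
    {"id": "q2", "targets": "application", "difficulty": 0.5},
    {"id": "q3", "targets": "analysis",    "difficulty": 0.3},
]

def next_item(knowledge, items):
    """Pick an item for the weakest dimension, closest to the learner's level."""
    weakest = min(knowledge, key=knowledge.get)
    candidates = [i for i in items if i["targets"] == weakest]
    return min(candidates, key=lambda i: abs(i["difficulty"] - knowledge[weakest]))

print(next_item(learner, item_bank))  # -> {'id': 'q3', ...}
```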

    The systemic implications of constructive alignment of higher education level learning outcomes and employer or professional body based competency frameworks

    The past 50 years have seen the development of schemes in higher education, employment and professional work that identify what people should know and/or what they should be able to do with what they have learned and experienced. Within higher education this is usually equated with the learning outcomes students are expected to achieve at the end of studying a course, module or qualification, and increasingly the teaching, learning and assessment strategies of those courses, modules or qualifications are being designed to align with those learning outcomes. In employment, there has been the emergence of job and role specifications setting out the knowledge and skills required of incumbents and recruits alike. Where professional bodies confer (often statutorily recognised) status in employment sectors, they also increasingly set out their expectations of members through competency frameworks. This paper explores the varied relationships between these three means of measuring knowledge and skills within people, including the nature and the specificity of the knowledge and skills being measured, using the case study of environmental management in the UK. It then argues that there needs to be a more constructive alignment between these three forms of measurement, achieved through a dynamic conversation between all concerned. It also argues that such alignment needs to recognise the importance of less tangible ‘systems thinking’ abilities alongside the more tangible ‘technical’ and ‘managerial’ abilities, and that some abilities emerge from the trajectories of praxis and cannot readily be specified as an outcome in advance.

    Mining web data for competency management

    We present CORDER (COmmunity Relation Discovery by named Entity Recognition), an unsupervised machine learning algorithm that exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments.
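    The co-occurrence intuition behind this kind of relation discovery can be illustrated very simply: count how often a person's name appears in the same document as a topic term, then rank the strongest pairs. The documents and scoring below are toy assumptions, not the published CORDER algorithm.

```python
# Toy illustration of expertise discovery via named-entity co-occurrence.
# Documents are pre-tagged here; a real system would run NER first.
from collections import Counter
from itertools import product

documents = [
    {"people": ["J. Smith"],             "topics": ["ontologies", "semantic web"]},
    {"people": ["J. Smith", "A. Jones"], "topics": ["ontologies"]},
    {"people": ["A. Jones"],             "topics": ["machine learning"]},
]

cooccurrence = Counter()
for doc in documents:
    for person, topic in product(doc["people"], doc["topics"]):
        cooccurrence[(person, topic)] += 1

# Rank each person's strongest topic associations as a crude expertise profile.
for (person, topic), count in cooccurrence.most_common():
    print(f"{person} -> {topic}: {count}")
```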