    The validity of collaborative assessment for learning

    This article explores features bearing on the validity of assessment for learning, and in particular of collaborative assessment for learning, which is of interest because of the learning benefits associated with collaborative learning. The article indicates what some of the learning benefits of a highly valid collaborative assessment for learning might be, assuming that a valid assessment for learning actually promotes learning. It explores the idea that, for an assessment for learning to be valid, its learning outcomes must be socially appropriate for learners of the twenty-first century. The article illustrates some of these conceptual points using descriptions of three collaborative assessments for learning currently being practised. Two of the illustrations are taken from the UK and one from the Eastern Caribbean.

    Collaborative assessment of information provider's reliability and expertise using subjective logic

    Q&A social media have gained a lot of attention in recent years. People rely on these sites to obtain information because of a number of advantages they offer compared to conventional sources of knowledge (e.g., asynchronous and convenient access). However, for the same question one may find highly contradictory answers, creating ambiguity about which information is correct. This can be attributed to the presence of unreliable and/or non-expert users. These two attributes (reliability and expertise) significantly affect the quality of the answer/information provided. We present a novel approach for estimating these user characteristics that relies on human cognitive traits. In brief, we propose that each user monitor the activity of her peers (on the basis of responses to questions asked by her) and observe their compliance with predefined cognitive models. These observations lead to local assessments that can be further fused to obtain a reliability and expertise consensus for every other user in the social network (SN). For the aggregation part we use subjective logic. To the best of our knowledge, this is the first study of this kind in the context of Q&A SNs. Our proposed approach is highly distributed: each user can individually estimate the expertise and the reliability of her peers using her direct interactions with them and our framework. The online SN (OSN), which can be considered a distributed database, performs continuous data aggregation of users' expertise and reliability assessments in order to reach a consensus. We emulate a Q&A SN to examine various performance aspects of our algorithm (e.g., convergence time, responsiveness, etc.). Our evaluations indicate that it can accurately assess the reliability and the expertise of a user with a small number of samples and can successfully react to the latter's behavior change, provided that the cognitive traits hold in practice. © 2011 ICST.
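
    The abstract does not specify which subjective-logic fusion operator the authors use, so the following is only a minimal sketch, assuming binomial opinions and the standard cumulative fusion operator of subjective logic; the opinion values and names are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial subjective-logic opinion about a proposition such as
    'peer P is reliable'. Invariant: belief + disbelief + uncertainty == 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expected(self) -> float:
        # Projected probability: E(x) = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fuse(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative fusion of two independent local assessments of the same peer."""
    k = x.uncertainty + y.uncertainty - x.uncertainty * y.uncertainty
    if k == 0.0:  # both opinions are dogmatic (u == 0); fall back to averaging
        return Opinion((x.belief + y.belief) / 2,
                       (x.disbelief + y.disbelief) / 2,
                       0.0, (x.base_rate + y.base_rate) / 2)
    return Opinion(
        (x.belief * y.uncertainty + y.belief * x.uncertainty) / k,
        (x.disbelief * y.uncertainty + y.disbelief * x.uncertainty) / k,
        (x.uncertainty * y.uncertainty) / k,
        x.base_rate,  # assume both observers share the same base rate
    )

# Two users' local assessments of the same answerer's reliability:
local_a = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
local_b = Opinion(belief=0.5, disbelief=0.3, uncertainty=0.2)
consensus = cumulative_fuse(local_a, local_b)
print(f"fused reliability estimate: {consensus.expected():.3f}")  # 0.722
```

    Fusing additional observations shrinks the uncertainty component of the consensus opinion, which is consistent with the abstract's claim that the assessment becomes accurate with a small number of samples.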

    Developing and enhancing biodiversity monitoring programmes: a collaborative assessment of priorities

    1. Biodiversity is changing at unprecedented rates, and it is increasingly important that these changes are quantified through monitoring programmes. Previous recommendations for developing or enhancing these programmes focus either on the end goals, that is, the intended use of the data, or on how these goals are achieved, for example through volunteer involvement in citizen science, but not both. These recommendations are rarely prioritized.
    2. We used a collaborative approach, involving 52 experts in biodiversity monitoring in the UK, to develop a list of attributes of relevance to any biodiversity monitoring programme and to order these attributes by their priority. We also ranked the attributes according to their importance in monitoring biodiversity in the UK. Experts involved included data users, funders, programme organizers and participants in data collection. They covered expertise in a wide range of taxa.
    3. We developed a final list of 25 attributes of biodiversity monitoring schemes, ordered from the most elemental (those essential for monitoring schemes; e.g. articulate the objectives and gain sufficient participants) to the most aspirational (e.g. electronic data capture in the field, reporting change annually). This ordered list is a practical framework which can be used to support the development of monitoring programmes.
    4. People's ranking of attributes revealed a difference between those who considered attributes with benefits to end users to be most important (e.g. people from governmental organizations) and those who considered attributes with greatest benefit to participants to be most important (e.g. people involved with volunteer biological recording schemes). This reveals a distinction between focussing on aims and the pragmatism in achieving those aims.
    5. Synthesis and applications. The ordered list of attributes developed in this study will assist in prioritizing resources to develop biodiversity monitoring programmes (including citizen science). The potential conflict between end users of data and participants in data collection that we discovered should be addressed by involving the diversity of stakeholders at all stages of programme development. This will maximize the chance of successfully achieving the goals of biodiversity monitoring programmes.
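
    The abstract does not say how the 52 experts' individual orderings were combined into one prioritized list; a Borda-style count is one common way to aggregate such rankings, sketched below with attribute names borrowed from the abstract (the ballots themselves are hypothetical).

```python
from collections import defaultdict

def borda_aggregate(rankings: list[list[str]]) -> list[str]:
    """Combine several ranked lists (most important first) into one consensus
    ordering: an item at position p in a list of n items scores n - p points."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, attribute in enumerate(ranking):
            scores[attribute] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ballots from three experts:
ballots = [
    ["articulate the objectives", "gain sufficient participants", "electronic data capture"],
    ["gain sufficient participants", "articulate the objectives", "reporting change annually"],
    ["articulate the objectives", "reporting change annually", "gain sufficient participants"],
]
print(borda_aggregate(ballots))
```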

    Scalable human-computer collaborative assessment

    Human-computer collaborative assessment (HCCA) is an approach to e-assessment which emphasises the role of the human expert in making judgements. This approach is embodied in the Assessment21 software: for instance, we take a very conservative approach to automatic marking, but provide flexible tools to aid the human marker.
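
    Assessment21's actual marking pipeline is not described in this abstract; the sketch below only illustrates the conservative strategy it mentions, under the assumption that answers are auto-marked solely on an exact normalised match and everything else is queued for the human expert (all names are hypothetical).

```python
def auto_mark(answer: str, model_answers: set[str]) -> float | None:
    """Conservative automatic marking: award the mark only on an exact
    (case- and whitespace-normalised) match; otherwise return None to defer."""
    if answer.strip().lower() in model_answers:
        return 1.0
    return None  # uncertain: route to the human marker

def triage(submissions: dict[str, str], model_answers: set[str]):
    """Split submissions into auto-marked results and a human-marking queue."""
    auto_marked, human_queue = {}, []
    for student, answer in submissions.items():
        score = auto_mark(answer, model_answers)
        if score is None:
            human_queue.append(student)
        else:
            auto_marked[student] = score
    return auto_marked, human_queue

marked, queue = triage({"alice": "photosynthesis", "bob": "fotosynthesis"},
                       {"photosynthesis"})
print(marked)  # {'alice': 1.0}
print(queue)   # ['bob'] (deferred to the human expert)
```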

    A preliminary evaluation of using WebPA for online peer assessment of collaborative performance by groups of online distance learners

    Collaborative assessment has well-recognised benefits in higher education and, in online distance learning, this type of assessment may be integral to collaborative e-learning and may have a strong influence on the student's relationship with learning. While there are known benefits associated with collaborative assessment, the main drawback is that students perceive that their individual contribution to the assessment is not recognised. Several methods can be used to overcome this; for example, something as simple as the teacher evaluating an individual's contribution. However, teacher assessment can be deemed unreliable by students, since the majority of group work is not usually done in the presence of the teacher (Loddington, Pond, Wilkinson, & Wilmot, 2009). Therefore, students' assessment of their own and their peers' performance and contribution to the assessment task, also known as peer moderation, can be a more suitable alternative. There are a number of tools that can be used to facilitate peer moderation online, such as WebPA, a free, open-source, online peer assessment tool developed by Loughborough University. This paper is a preliminary evaluation of online peer assessment of collaborative work undertaken by groups of students studying online at a distance at a large UK university, where WebPA was used to facilitate this process. Students' feedback on the use of WebPA was mixed, although most of the students found the software easy to use, with few technical issues, and the majority reported that they would be happy to use it again. The authors found WebPA to be a beneficial peer assessment tool.
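
    WebPA's published algorithm includes refinements (such as a configurable peer-assessed weighting proportion) that are not reproduced here; the sketch below shows only the basic peer-moderation idea that each member's share of the peer scores scales the group mark, with hypothetical names and scores.

```python
def peer_weighted_marks(peer_scores: dict[str, dict[str, float]],
                        group_mark: float) -> dict[str, float]:
    """Each assessor's awards are normalised to sum to 1; a member's weighting
    is the sum of the fractions received (1.0 for an average contributor),
    and their individual mark is the group mark scaled by that weighting."""
    weights: dict[str, float] = {}
    for assessor, awarded in peer_scores.items():
        total = sum(awarded.values())
        for member, score in awarded.items():
            weights[member] = weights.get(member, 0.0) + score / total
    return {member: round(group_mark * w, 1) for member, w in weights.items()}

# Each member (including themselves) rates every member's contribution out of 5:
scores = {
    "alice": {"alice": 4, "bob": 4, "carol": 2},
    "bob":   {"alice": 5, "bob": 3, "carol": 2},
    "carol": {"alice": 4, "bob": 4, "carol": 2},
}
print(peer_weighted_marks(scores, group_mark=60))
# {'alice': 78.0, 'bob': 66.0, 'carol': 36.0}
```

    A real deployment would also need to cap marks and handle members who fail to submit scores; the sketch omits those details.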

    Diagram matching for human-computer collaborative assessment

    Diagrams are an important part of many assessments. When diagrams consisting of boxes joined by connectors are drawn on a computer, the resulting structures can be matched against each other to determine similarity. This paper discusses ways of doing such matching and its application in the context of human-computer collaborative assessment. Results show that a simple heuristic process is effective in finding similarities in such diagrams. Its practical usefulness varies across contexts, as students often produce remarkably dissimilar diagrams.
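
    The paper's own matching heuristic is not reproduced in this abstract; the following is a minimal sketch, assuming boxes are paired greedily by label similarity and the connector overlap of the induced mapping is then scored (all names and the threshold are hypothetical).

```python
from difflib import SequenceMatcher

def box_similarity(a: str, b: str) -> float:
    """Text similarity between two box labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_diagrams(boxes_a, boxes_b, edges_a, edges_b, threshold=0.6):
    """Greedy heuristic: pair each box in A with the most similar unmatched
    box in B, then score how many connectors the pairing preserves."""
    mapping, used = {}, set()
    for a in boxes_a:
        best, best_score = None, threshold
        for b in boxes_b:
            if b in used:
                continue
            s = box_similarity(a, b)
            if s > best_score:
                best, best_score = b, s
        if best is not None:
            mapping[a] = best
            used.add(best)
    # Connectors in A whose mapped endpoints are also joined in B:
    mapped_edges = {(mapping.get(u), mapping.get(v)) for u, v in edges_a}
    preserved = sum(1 for e in edges_b
                    if e in mapped_edges or e[::-1] in mapped_edges)
    denom = max(len(edges_a), len(edges_b)) or 1
    return mapping, preserved / denom

# Two small diagrams drawn by different students:
boxes_1 = ["Customer", "Order", "Product"]
boxes_2 = ["customer", "orders", "item"]
edges_1 = [("Customer", "Order"), ("Order", "Product")]
edges_2 = [("customer", "orders"), ("orders", "item")]
mapping, score = match_diagrams(boxes_1, boxes_2, edges_1, edges_2)
print(mapping, f"connector overlap: {score:.2f}")
```

    The example deliberately leaves one box unmatched ("Product" vs "item"), echoing the abstract's observation that students often produce remarkably dissimilar diagrams.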