4 research outputs found

    Towards a quantitative evaluation of the relationship between the domain knowledge and the ability to assess peer work

    In this work we present the preliminary results of a statistical modeling of the cognitive relationship between knowledge about a topic and the ability to assess peer achievements on the same topic. Our starting point is Bloom's taxonomy of educational objectives in the cognitive domain, and our outcomes confirm the hypothesized ranking. A further consideration that can be derived is that meta-cognitive abilities (e.g., assessment) require deeper domain knowledge.
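
    A minimal sketch of the kind of quantitative check this abstract describes: rank-correlating per-student domain knowledge with peer-grading accuracy. All numbers below are hypothetical illustrations, not the paper's data or model.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-student scores: knowledge of the topic (0-10) and
    # grading error (mean absolute deviation from the teacher's grade;
    # lower means better assessment of peer work).
    knowledge = np.array([3.0, 5.5, 6.0, 7.5, 8.0, 9.0])
    grading_error = np.array([3.2, 2.5, 2.1, 1.4, 1.0, 0.6])

    rho, p_value = spearmanr(knowledge, grading_error)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
    # A strongly negative rho would be consistent with the hypothesis that
    # deeper domain knowledge accompanies better peer assessment.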

    Effects of network topology on the OpenAnswer’s Bayesian model of peer assessment

    The paper investigates if and how the topology of the peer assessment network affects the performance of the Bayesian model adopted in OpenAnswer. Performance is evaluated by comparing predicted grades with the actual teacher's grades. The global network is built by interconnecting smaller subnetworks, one per student, where intra-subnetwork nodes represent the student's characteristics, while peer assessment assignments make up inter-subnetwork connections and determine evidence propagation. A subset of teacher-graded answers is dynamically determined by suitable selection and stop rules. The research questions addressed are: RQ1) "does the topology (diameter) of the network negatively influence the precision of predicted grades?" and, in the affirmative case, RQ2) "can we reduce the negative effects of high-diameter networks through an appropriate choice of the subset of students to be corrected by the teacher?" We show that RQ1) OpenAnswer is less effective on higher-diameter topologies, and RQ2) this can be avoided if the subset of corrected students is chosen taking the network topology into account.
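
    A minimal sketch of the topology question, assuming a peer-assessment graph whose nodes are students and whose edges are grading assignments. The selection rule shown (closeness centrality) is an illustrative stand-in for OpenAnswer's actual selection and stop rules, which the abstract does not detail.

    import networkx as nx

    def peer_assessment_graph(assignments):
        """assignments: iterable of (grader, author) pairs."""
        g = nx.Graph()
        g.add_edges_from(assignments)
        return g

    # Hypothetical assignments: student i grades student j's answer.
    assignments = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (2, 5)]
    g = peer_assessment_graph(assignments)
    print("diameter:", nx.diameter(g))  # RQ1: larger diameter, slower evidence propagation

    # RQ2 (illustrative rule): have the teacher grade the students with the
    # highest closeness centrality, so their evidence is near every node.
    by_closeness = sorted(nx.closeness_centrality(g).items(),
                          key=lambda kv: kv[1], reverse=True)
    teacher_graded = [node for node, _ in by_closeness[:2]]
    print("grade first:", teacher_graded)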

    Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer

    Questions with open answers are rarely used as e-learning assessment tools because of the high workload they impose on the teacher/tutor who must grade them. This can be mitigated by having students grade each other's answers, but the uncertainty about the quality of the resulting grades can be high. In our OpenAnswer system we have modeled peer assessment as a Bayesian network connecting a set of sub-networks (each representing a participating student) to the corresponding answers of her graded peers. The model has shown good ability to predict (without further information from the teacher) the exact teacher mark, and a very good ability to predict it within 1 mark of the right one (ground truth). From the available datasets we noticed that different teachers sometimes disagree in their assessment of the same answer. For this reason, in this paper we explore how the model can be tailored to a specific teacher to improve its prediction ability. To this aim, we parametrically define the CPTs (Conditional Probability Tables) describing the probabilistic dependence of a Bayesian variable on others in the modeled network, and we optimize the parameters generating the CPTs to obtain the smallest average difference between the predicted grades and the teacher's marks (ground truth). The optimization is carried out either separately for each teacher available in our datasets, or with respect to the whole dataset. The paper discusses the results and shows that the prediction performance of our model, when optimized separately for each teacher, improves over the case in which the model is globally optimized with respect to the whole dataset, which in turn improves over the predictions of raw peer assessment. The improved prediction would allow us to use OpenAnswer, without teacher intervention, as a class monitoring and diagnostic tool.
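
    A minimal sketch of the CPT-tuning idea, assuming (for illustration only) that a single spread parameter sigma generates P(peer grade | true grade) as a discretized Gaussian and is fit per teacher by minimizing the mean absolute error against that teacher's marks. OpenAnswer's actual parameterization and network are richer than this one-knob reduction.

    import numpy as np
    from scipy.optimize import minimize_scalar

    GRADES = np.arange(0, 11)  # grade scale 0..10

    def cpt(sigma):
        """CPT[t, p] = P(peer grade p | true grade t), rows normalized."""
        diff = GRADES[:, None] - GRADES[None, :]
        m = np.exp(-0.5 * (diff / sigma) ** 2)
        return m / m.sum(axis=1, keepdims=True)

    def predict(peer_grades, sigma):
        """Posterior-mean grade from independent peer grades, uniform prior."""
        table = cpt(sigma)
        log_post = np.log(table[:, peer_grades]).sum(axis=1)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        return float(post @ GRADES)

    # Hypothetical dataset: (peer grades for an answer, teacher's mark).
    answers = [([6, 7, 5], 6), ([9, 8, 10], 9), ([3, 5, 4], 3)]

    def mae(sigma):
        return np.mean([abs(predict(p, sigma) - t) for p, t in answers])

    best = minimize_scalar(mae, bounds=(0.3, 5.0), method="bounded")
    print(f"per-teacher optimum: sigma = {best.x:.2f}, MAE = {best.fun:.2f}")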

    Supporting mediated peer-evaluation to grade answers to open-ended questions

    We show an approach to the semi-automatic grading of answers given by students to open-ended questions (open answers). We use both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, is represented by a Bayesian network in which the grades of the answers and the elements of the learner models are variables, each with values in a probability distribution. The initial state of the network is determined by the peer-assessment data. Then, each teacher's grading of an answer triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of correctness of system grading, amount of teacher work required, and correlation of system outputs with teacher's grades and students' final exam grades.
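
    A minimal sketch of such a network in pgmpy (class names vary slightly across pgmpy versions), with binary variables: the author's Knowledge (K) drives the true Grade (G) of her answer, and the peer's reported grade (PG) depends on G and on the peer's Judgment (J). All CPT numbers are illustrative, not the paper's.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    net = BayesianNetwork([("K", "G"), ("G", "PG"), ("J", "PG")])
    net.add_cpds(
        TabularCPD("K", 2, [[0.5], [0.5]]),  # prior on author's Knowledge
        TabularCPD("J", 2, [[0.5], [0.5]]),  # prior on peer's Judgment
        TabularCPD("G", 2, [[0.8, 0.2], [0.2, 0.8]],  # P(G | K)
                   evidence=["K"], evidence_card=[2]),
        # P(PG | G, J): good judgment (J=1) tracks the true grade closely.
        TabularCPD("PG", 2,
                   [[0.6, 0.9, 0.4, 0.1],
                    [0.4, 0.1, 0.6, 0.9]],
                   evidence=["G", "J"], evidence_card=[2, 2]),
    )
    net.check_model()

    infer = VariableElimination(net)
    # Initial state: only the peer's grade (PG = high) is observed.
    print(infer.query(["K"], evidence={"PG": 1}))
    # The teacher grades the answer high (G = 1): the evidence propagates
    # both to the author's Knowledge and to the peer's Judgment.
    print(infer.query(["K", "J"], evidence={"PG": 1, "G": 1}))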