6 research outputs found

    Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer

    Questions with open answers are rarely used as e-learning assessment tools because of the high workload they impose on the teacher/tutor who must grade them. This can be mitigated by having students grade each other's answers, but the uncertainty about the quality of the resulting grades can be high. In our OpenAnswer system we model peer assessment as a Bayesian network connecting a set of sub-networks (each representing a participating student) to the corresponding answers of her graded peers. The model has shown a good ability to predict (without further information from the teacher) the exact teacher mark, and a very good ability to predict it within 1 mark of the correct one (ground truth). From the available datasets we noticed that different teachers sometimes disagree in their assessment of the same answer. For this reason, in this paper we explore how the model can be tailored to a specific teacher to improve its predictive ability. To this aim, we parametrically define the CPTs (Conditional Probability Tables) describing the probabilistic dependence of a Bayesian variable on others in the modeled network, and we optimize the parameters generating the CPTs to obtain the smallest average difference between the predicted grades and the teacher's marks (ground truth). The optimization is carried out separately with respect to each teacher available in our datasets, or with respect to the whole dataset. The paper discusses the results and shows that the prediction performance of our model, when optimized separately for each teacher, improves over the case in which the model is globally optimized with respect to the whole dataset, which in turn improves over the predictions of the raw peer assessment. The improved prediction would allow us to use OpenAnswer, without teacher intervention, as a class monitoring and diagnostic tool.
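
    The per-teacher optimization described above can be illustrated with a minimal sketch. The data, the single `bias` parameter, and the prediction rule are all illustrative assumptions standing in for the paper's parametric CPTs; the actual OpenAnswer model is a full Bayesian network.

    ```python
    # Toy ground truth: (peer grades for one answer, the teacher's mark),
    # marks on a 0-10 scale. All numbers are invented for illustration.
    DATA = [
        ([6, 7, 8], 8),
        ([4, 5, 5], 4),
        ([9, 9, 10], 9),
        ([3, 6, 6], 5),
    ]

    def predict(peer_grades, bias):
        """Predict the teacher mark as the peer mean shifted by a
        teacher-specific parameter (a stand-in for the CPT parameters)."""
        mean = sum(peer_grades) / len(peer_grades)
        return max(0, min(10, int(mean + bias + 0.5)))

    def mean_abs_error(bias):
        """Average |predicted - teacher mark| over the dataset."""
        return sum(abs(predict(p, bias) - t) for p, t in DATA) / len(DATA)

    # Grid-search the parameter to minimise the average gap to this
    # teacher's marks, mirroring the paper's per-teacher optimisation.
    best = min((b / 2 for b in range(-10, 11)), key=mean_abs_error)
    print("best bias:", best, "MAE:", mean_abs_error(best))
    # best bias: -0.5 MAE: 0.25
    ```

    Repeating the search on each teacher's own data yields a personalized parameter set, whereas a single search over the pooled data gives the globally optimized variant the paper compares against.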

    Supporting mediated peer-evaluation to grade answers to open-ended questions

    We present an approach to semi-automatic grading of answers given by students to open-ended questions (open answers), using both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, is represented by a Bayesian network in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data. Then, each teacher's grading of an answer triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of correctness of system grading, amount of teacher's work required, and correlation of system outputs with the teacher's grades and the students' final exam grades.
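
    The evidence-propagation step can be sketched with a single Bayesian update. The binary Judgment variable and all probabilities below are assumed numbers for illustration, not the actual network: observing how close a peer's grade was to a teacher's mark updates the belief in that peer's Judgment, which can then re-weight the peer's other grades.

    ```python
    def bayes_update(prior_good, likelihood_good, likelihood_poor):
        """Posterior P(Judgment = good | evidence) via Bayes' rule."""
        num = prior_good * likelihood_good
        den = num + (1 - prior_good) * likelihood_poor
        return num / den

    # Prior: the peer is a good judge with probability 0.5.
    p_good = 0.5

    # The teacher grades an answer this peer also graded; the peer's grade
    # was within 1 mark of the teacher's. Assume (invented numbers) good
    # judges land that close 80% of the time, poor judges 30% of the time.
    p_good = bayes_update(p_good, 0.8, 0.3)
    print(f"P(good judge) after evidence: {p_good:.3f}")
    # P(good judge) after evidence: 0.727
    ```

    In the full network this posterior would itself propagate further, adjusting the predicted grades of every other answer the peer assessed.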

    Design and dissemination of online resources to manage students' cognitive diversity and foster their success in learning algebra

    The Pépite, LINGOT, and PépiMEP projects are the result of a long collaboration between researchers in mathematics education, researchers in computer science, teachers, and teacher trainers. These projects aim, on the one hand, to design tools for teachers to manage the heterogeneity of students' learning of elementary algebra at the end of compulsory schooling and, on the other hand, to evaluate their actual use in class and their influence on students' activity in algebra. The PépiMEP project enabled the dissemination, on the LaboMEP online platform of the Sésamath association, freely accessible at large scale, of results from the initial research on diagnosis and on teaching adapted to students' identified needs

    Evaluating the Performance of a Diagnosis System in School Algebra

    This paper deals with PépiMep, a diagnosis system in school algebra. Our proposal to evaluate students' open-ended answers is based on a mixed theoretical and empirical approach. First, researchers in Math Education list different types of anticipated patterns of answers and the way to evaluate them. Then, this information is stored in an XML file used by the system to match a student's input with an anticipated answer. Third, as it is impossible to anticipate every student's answer, the system can improve: when an unknown form is detected, it is added to the XML file after expert inspection. Results from testing 360 students showed that, in comparison with human experts, PépiMep (1) was very effective in recognizing the different types of solutions when the student's input was an algebraic expression, (2) but was less effective when students entered a reasoned response expressed as a mix of algebraic expressions and natural-language utterances.
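
    The XML-driven matching step can be sketched as follows. The file schema, tag names, and the whitespace-stripping normalization are assumptions for illustration, not PépiMep's actual format; a real system would compare algebraic expressions structurally rather than textually.

    ```python
    import xml.etree.ElementTree as ET

    # Anticipated answer patterns, each carrying an evaluation code assigned
    # by the Math Education experts (schema and codes are invented).
    PATTERNS = """
    <patterns question="factor x^2-9">
      <answer code="correct">(x-3)(x+3)</answer>
      <answer code="correct">(x+3)(x-3)</answer>
      <answer code="sign-error">(x-3)(x-3)</answer>
    </patterns>
    """

    def normalize(expr):
        # Naive normalization: drop whitespace before comparing.
        return expr.replace(" ", "")

    def diagnose(student_input, xml_text):
        """Return the expert code of the matched pattern, or None for an
        unknown form (to be added to the XML after expert inspection)."""
        root = ET.fromstring(xml_text)
        for ans in root.iter("answer"):
            if normalize(ans.text) == normalize(student_input):
                return ans.get("code")
        return None

    print(diagnose("(x - 3)(x + 3)", PATTERNS))  # prints "correct"
    print(diagnose("x^2 - 9", PATTERNS))         # unknown form: prints "None"
    ```

    The `None` branch is where the reported improvement loop attaches: the unmatched input is queued for expert inspection and, once classified, appended to the pattern file.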