
    Automated assessment of open-ended student answers in tutorial dialogues using Gaussian Mixture Models

    Open-ended student answers often need to be assessed in context. However, few previous works consider context when automatically assessing student answers. Furthermore, student responses vary significantly in their explicit content and writing style, which leads to a wide range of assessment scores within the same qualitative assessment category, e.g. correct answers vs. incorrect answers. In this paper, we propose an approach to assessing student answers that takes context into account and handles this variability using probabilistic Gaussian Mixture Models (GMMs). We developed the model using a recently released corpus called DT-Grade, which was manually annotated with four different levels of answer correctness, taking context into account. Our best GMM model outperforms the baseline model by a margin of 9% in accuracy.
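
    The abstract does not specify the paper's features or model configuration, so the following is only a minimal Python sketch of a generic per-class GMM classifier using scikit-learn. The feature vectors, the number of mixture components, and the toy data are all assumptions; the four labels follow the DT-Grade annotation scheme (correct, correct-but-incomplete, contradictory, incorrect). One GMM is fit per correctness level, and a new answer is assigned the label whose mixture gives it the highest log-likelihood.

    # Sketch of per-class GMM scoring; features and settings are hypothetical,
    # not the paper's actual method.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Correctness levels as defined in the DT-Grade corpus.
    LABELS = ["correct", "correct_but_incomplete", "contradictory", "incorrect"]

    def train_per_class_gmms(X, y, n_components=2):
        """Fit one GMM per correctness label on that label's feature vectors.

        X: (n_samples, n_features) array of answer representations
           (e.g. context-aware similarity features; placeholder assumption).
        y: (n_samples,) array of labels drawn from LABELS.
        """
        models = {}
        for label in LABELS:
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="full", random_state=0)
            gmm.fit(X[y == label])
            models[label] = gmm
        return models

    def classify(models, x):
        """Return the label whose GMM assigns x the highest log-likelihood."""
        x = np.asarray(x).reshape(1, -1)
        return max(models, key=lambda label: models[label].score(x))

    if __name__ == "__main__":
        # Toy demonstration on synthetic feature vectors, for illustration only.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = np.array([LABELS[i % 4] for i in range(200)])
        models = train_per_class_gmms(X, y)
        print(classify(models, X[0]))

    Scoring each class with its own mixture lets a single qualitative category (e.g. "correct") cover several distinct clusters of content and writing style, which is the variability the abstract says GMMs are meant to absorb.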