27 research outputs found

    Neural Automated Essay Scoring and Coherence Modeling for Adversarially Crafted Input

    We demonstrate that current state-of-the-art approaches to Automated Essay Scoring (AES) are not well-suited to capturing adversarially crafted input of grammatical but incoherent sequences of sentences. We develop a neural model of local coherence that can effectively learn connectedness features between sentences, and propose a framework for integrating and jointly training the local coherence model with a state-of-the-art AES model. We evaluate our approach against a number of baselines and experimentally demonstrate its effectiveness on both the AES task and the task of flagging adversarial input, further contributing to the development of an approach that strengthens the validity of neural essay scoring models.
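    As a hedged illustration of the local-coherence signal such an approach relies on (a heuristic sketch, not the paper's neural architecture), adjacent-sentence embedding similarity can serve as a crude connectedness score. The sketch below assumes the sentence-transformers library and an illustrative model checkpoint:

        # Crude local-coherence proxy (assumption: sentence embeddings via the
        # sentence-transformers library; the paper trains a dedicated neural
        # coherence model rather than using this heuristic).
        from sentence_transformers import SentenceTransformer, util

        _model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

        def local_coherence(sentences):
            # Average cosine similarity between adjacent sentences; grammatical
            # but shuffled (adversarial) essays tend to score lower than the
            # original sentence order.
            if len(sentences) < 2:
                return 0.0
            embs = _model.encode(sentences)
            sims = [float(util.cos_sim(embs[i], embs[i + 1]))
                    for i in range(len(embs) - 1)]
            return sum(sims) / len(sims)

    Shuffling the sentences of a well-formed essay typically lowers this average, which is the kind of cue a learned coherence model can pick up far more reliably than a fixed heuristic.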

    Automatic Essay Scoring Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses

    Deep-learning-based Automatic Essay Scoring (AES) systems are being actively used in various high-stakes applications in education and testing. However, little research has been done to understand and interpret the black-box nature of deep-learning-based scoring algorithms. While previous studies indicate that scoring models can be easily fooled, in this paper we explore the reason behind their surprising adversarial brittleness. We utilize recent advances in interpretability to find the extent to which features such as coherence, content, vocabulary, and relevance are important for automated scoring mechanisms. We use this to investigate the oversensitivity (i.e., a large change in output score with a small change in input essay content) and overstability (i.e., little change in output score with large changes in input essay content) of AES. Our results indicate that autoscoring models, despite being trained as “end-to-end” models with rich contextual embeddings such as BERT, behave like bag-of-words models. A few words determine the essay score without requiring any context, making the models largely overstable. This is in stark contrast to recent probing studies on pre-trained representation learning models, which show that they encode rich linguistic features such as parts of speech and morphology. Further, we find that the models have learnt dataset biases, making them oversensitive: the presence of a few words that co-occur strongly with a certain score class makes the model associate the essay with that score. This causes score changes in ∼95% of samples with the addition of only a few words. To deal with these issues, we propose detection-based protection models that can detect oversensitivity and overstability-causing samples with high accuracy. We find that our proposed models are able to detect unusual attribution patterns and flag adversarial samples successfully.
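    A minimal sketch of the oversensitivity probe the abstract describes: append a few trigger words to each essay and measure how often the score shifts. The score_essay callable and the trigger list are hypothetical placeholders, not the paper's released code:

        # Hypothetical oversensitivity probe: measure how often the predicted
        # score shifts by more than `threshold` when a handful of trigger words
        # is appended. `score_essay` stands in for any trained AES model's
        # predict function; nothing here is the paper's actual implementation.
        def oversensitivity_rate(essays, score_essay, triggers, threshold=1.0):
            flipped = 0
            for essay in essays:
                base = score_essay(essay)
                perturbed = score_essay(essay + " " + " ".join(triggers))
                if abs(perturbed - base) >= threshold:
                    flipped += 1
            return flipped / len(essays)

    Words that co-occur strongly with a given score class in the training data make natural candidates for the trigger list, per the dataset-bias finding above.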

    Automated Short-Essay Scoring Based on Semantic Similarity with SBERT

    Exams in essay form are considered better at measuring understanding than multiple-choice questions. However, essay answers require more time and effort to evaluate, and inconsistencies in grading often occur. An automated essay scoring system is therefore needed to help evaluators assign scores more quickly and more consistently. This study evaluates the performance of an automated essay scoring model in which the student's answer text and the answer key are compared semantically to determine how similar they are. The semantics of the essay text are obtained through word embeddings using the pretrained Siamese-BERT (SBERT) language model, which transforms the essay text into a vector of length 512. The automated scoring process begins with text preprocessing by applying case folding, followed by word embeddings of the preprocessed text with SBERT. The numeric vectors of the answer key and the student's answer produced by the word embeddings are then compared with Cosine Similarity to obtain the semantic similarity, which also serves as the essay score output by the model. The model is evaluated by comparing its scores with those of human evaluators, using Mean Absolute Error (MAE) and Pearson Correlation as performance measures; the results show an average MAE of 0.26 and an average correlation of 0.78.
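    A minimal sketch of this pipeline, assuming the sentence-transformers library; the abstract does not name the exact pretrained SBERT checkpoint, so a multilingual one that happens to produce 512-dimensional vectors is substituted for illustration:

        # Minimal sketch of the described pipeline (assumption: the
        # sentence-transformers library; the checkpoint below is illustrative,
        # chosen because it outputs 512-dimensional vectors as in the abstract,
        # but it is not necessarily the paper's model).
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

        def score_essay(answer_key: str, student_answer: str) -> float:
            # Preprocessing: case folding only, as described.
            key, ans = answer_key.lower(), student_answer.lower()
            # Word embeddings with SBERT, then cosine similarity as the score.
            emb = model.encode([key, ans])
            return float(util.cos_sim(emb[0], emb[1]))

    The resulting scores can then be compared against human-assigned scores with, for example, sklearn.metrics.mean_absolute_error and scipy.stats.pearsonr to reproduce the MAE and Pearson correlation evaluation.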

    Analysis of Discourse Structure and Logical Structure in Argumentative Text

    Tohoku University doctoral thesis (Information Sciences)