
    Automated Japanese essay scoring system:jess

    We have developed an automated Japanese essay scoring system named jess. The system evaluates an essay on three feature groups: (1) Rhetoric: ease of reading, diversity of vocabulary, percentage of big words (long, difficult words), and percentage of passive sentences; (2) Organization: characteristics associated with the orderly presentation of ideas, such as rhetorical features and linguistic cues; (3) Contents: vocabulary related to the topic, such as relevant information and precise or specialized vocabulary. The final score is calculated by deducting from a perfect score assigned by a learning process using editorial …
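
    A minimal sketch of this deduction-style scoring, assuming a perfect score from which weighted penalties are subtracted when an essay's features deviate from a reference profile learned on expert text. The feature names, reference values, and weights below are illustrative assumptions, not jess's actual parameters.

    PERFECT_SCORE = 10.0          # hypothetical full mark

    def score_essay(features, reference, weights):
        """Deduct a weighted penalty for each feature's deviation from the reference."""
        penalty = sum(weights[name] * abs(value - reference[name])
                      for name, value in features.items())
        return max(PERFECT_SCORE - penalty, 0.0)

    # Illustrative feature values for one essay (all numbers are made up).
    essay = {
        "readability": 0.62,          # Rhetoric: ease of reading
        "vocab_diversity": 0.48,      # Rhetoric: diversity of vocabulary
        "big_word_ratio": 0.12,       # Rhetoric: share of long, difficult words
        "passive_ratio": 0.30,        # Rhetoric: share of passive sentences
        "discourse_markers": 0.40,    # Organization: linguistic cues
        "topic_vocab_overlap": 0.55,  # Contents: topic-related vocabulary
    }
    reference = {name: 0.50 for name in essay}   # profile learned from expert prose
    weights = {name: 2.0 for name in essay}      # hypothetical penalty weights

    print(round(score_essay(essay, reference, weights), 2))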

    Handwriting recognition and automatic scoring for descriptive answers in Japanese language tests

    This paper presents an experiment in automatically scoring handwritten descriptive answers from the trial tests for the new Japanese university entrance examination, which were taken by about 120,000 examinees in 2017 and 2018. There are about 400,000 answers with more than 20 million characters. Although all answers have been scored by human examiners, the handwritten characters are not labeled. We present our attempt to adapt deep neural network-based handwriting recognizers, trained on a labeled handwriting dataset, to this unlabeled answer set. Our proposed method combines different training strategies, ensembles multiple recognizers, and uses a language model built from a large general corpus to avoid overfitting to the specific data. In our experiment, the proposed method achieves character accuracy of over 97% using about 2,000 verified labeled answers, which account for less than 0.5% of the dataset. The recognized answers are then fed into a pre-trained automatic scoring system based on the BERT model, without correcting misrecognized characters or providing rubric annotations. The automatic scoring system achieves Quadratic Weighted Kappa (QWK) scores from 0.84 to 0.98. Since QWK exceeds 0.8, this represents acceptable agreement between the automatic scoring system and the human examiners. These results are promising for further research on end-to-end automatic scoring of descriptive answers.
    Comment: Keywords: handwritten Japanese answers, handwriting recognition, automatic scoring, ensemble recognition, deep neural networks. Reported in IEICE technical report, PRMU2021-32, pp. 45-50 (2021.12); published after peer review and presented at ICFHR 2022, Lecture Notes in Computer Science, vol. 13639, pp. 274-284 (2022.11).
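
    Quadratic Weighted Kappa, the agreement metric reported above, can be computed directly from two score vectors. The sketch below uses the standard formulation (observed versus expected confusion matrices with quadratic disagreement weights); the four-level rubric and the toy scores are assumptions for illustration.

    import numpy as np

    def quadratic_weighted_kappa(human, machine, n_classes):
        """QWK between two integer score vectors with values in 0..n_classes-1."""
        observed = np.zeros((n_classes, n_classes), dtype=float)
        for h, m in zip(human, machine):
            observed[h, m] += 1
        # Expected matrix from the outer product of the two marginal histograms.
        expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
        # Quadratic disagreement weights: 0 on the diagonal, 1 in the far corners.
        idx = np.arange(n_classes)
        w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
        return 1.0 - (w * observed).sum() / (w * expected).sum()

    # Toy check with a hypothetical 4-level rubric; the paper treats values
    # over 0.8 as acceptable agreement with human examiners.
    human_scores   = [0, 1, 2, 2, 3, 1, 0, 3]
    machine_scores = [0, 1, 2, 1, 3, 1, 0, 2]
    print(round(quadratic_weighted_kappa(human_scores, machine_scores, n_classes=4), 3))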

    An Exploratory Study of the Inputs for Ensemble Clustering Technique as a Subset Selection Problem

    Ensemble and Consensus Clustering address the problem of unifying multiple clustering results into a single output that best reflects the agreement of the input methods. They can be used to obtain more stable and robust clustering results than a single clustering approach. In this study, we propose a novel subset selection method that controls the number of clustering inputs and datasets in an efficient way, together with a number of manual selection and heuristic search techniques to perform the selection. Our investigation and experiments demonstrate very promising results: using these techniques can ensure a better selection of methods and datasets for Ensemble and Consensus Clustering and thus more efficient clustering results.
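
    A minimal sketch of consensus clustering by evidence accumulation, assuming the common co-association formulation: count the fraction of base clusterings that group each pair of points together, then cluster on the resulting distances. The base clusterings and parameters below are toy values, and the paper's subset-selection heuristics are not reproduced here.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, KMeans

    # Two well-separated toy blobs in 2-D.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

    # Ensemble of base clusterings: k-means with different k and seeds.
    labelings = [KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
                 for k, seed in [(2, 0), (3, 1), (4, 2)]]

    # Co-association matrix: fraction of base clusterings placing i and j together.
    coassoc = np.zeros((len(X), len(X)))
    for labels in labelings:
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= len(labelings)

    # Consensus partition: average-linkage clustering on 1 - co-association.
    consensus = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - coassoc)
    print(consensus)

    On scikit-learn versions before 1.2 the precomputed distance matrix is passed via the older affinity parameter rather than metric.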

    A Study on Clustering Method by Self-Organizing Map and Information Criteria

    No full text

    A Method of Lifetime Analysis Based on Small Censored Data

    No full text