
    "Paperless Grading" of Handwritten Homework: Electronic Process and Assessment

    The typical engineering homework assignment may involve sketches, formulas with special symbols, and calculation steps. The most time-efficient way for students to do this work is by hand, on paper. When grading such assignments, it is faster and easier for instructors to handwrite comments than to add typewritten comments via text box, sticky note, etc. The computer and printing technologies available to instructors and students have progressed to the point where electronic submission and grading of assigned work is viable, both in ease of use and in the benefits accrued, even for handwritten assignments. The goal of this project is to implement and assess a "paperless grading" process for handwritten homework assignments that allows both electronic submission and return of the assignments, while still letting the grader "mark up" the papers with handwritten comments. In spring 2013, the 33 students in the "Engineering Systems" class participated in a semester-long trial of a paperless grading process for their homework assignments, using an iPad as the grading platform coupled with the university's course management system. At the end of the semester, the students completed a survey asking them to compare the paperless process with the more traditional homework submission and grading process, to give their opinions on possible benefits or disadvantages of the paperless process, and to suggest improvements. Student feedback from this first trial was used to enhance the paperless process, which was repeated in the spring 2014 offering of the class with 60 students, who then completed the same survey used in 2013.
This study shows that the ultimate "holdout" of going paperless, handwritten homework, can be accomplished with the hybrid process described in this paper. All participants in both semesters (students, graders, and instructor) concur that paperless grading is the way to go. The paper presents a more detailed description of the paperless grading process for handwritten homework, discusses the quantitative results of the project evaluation for both semesters, and offers suggestions for future improvements in the process.

    Informing Writing: The Benefits of Formative Assessment

    Examines whether classroom-based formative writing assessment, designed to provide students with feedback and modified instruction as needed, improves student writing, and how teachers can improve such assessment. Suggests best practices.

    Indonesian Automated Essay Scoring with Bag of Word and Support Vector Regression

    The essay is one type of test question used to measure students' understanding of learning material. Respondents can organize their answers to each question in their own language style, so making corrections takes time. A system is therefore needed that can assess essay answers automatically, quickly, and accurately. Automated Essay Scoring (AES) is a tool that can assign grades or scores to essay-form answers automatically. To grade automatically, AES requires machine learning with training data containing answers that have already been scored by an assessor. In this study, AES was used to assess Indonesian-language midterm exams using Bag of Words feature extraction and Support Vector Regression. The Root Mean Square Error obtained when evaluating the AES was 1.99.

    An automated essay evaluation system using natural language processing and sentiment analysis

    An automated essay evaluation system is a machine-based approach leveraging a long short-term memory (LSTM) model to award grades to essays written in English. Natural language processing (NLP) is used to extract feature representations from the essays. The LSTM network learns from the extracted features and generates parameters for testing and validation. The main objectives of the research include proposing and training an LSTM model using a dataset of manually graded essays with scores. Sentiment analysis is performed to classify the sentiment of each essay as positive, negative, or neutral. The Twitter sample dataset is used to build a sentiment classifier that analyzes sentiment based on the student's approach towards a topic. Additionally, each essay is checked for syntactical errors and subjected to a plagiarism check to determine its novelty. The overall grade is calculated from the quality of the essay, the number of syntactic errors, the percentage of plagiarism found, and the sentiment of the essay. The corrected essay is provided as feedback to the students. This essay grading model achieved an average quadratic weighted kappa (QWK) score of 0.911, with 99.4% accuracy for the sentiment analysis classifier.
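The QWK score reported above is a standard agreement metric between model-assigned and human-assigned grades; a plain-Python sketch of its computation follows, with the two rating lists invented for illustration.

```python
# Hedged sketch: quadratic weighted kappa (QWK) between two raters on the
# same integer scale. The human/model rating lists below are invented.

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """QWK between two equal-length lists of integer ratings."""
    n = max_rating - min_rating + 1
    # observed agreement matrix
    observed = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1
    # marginal histograms give the chance-agreement (expected) matrix
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    total = len(rater_a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2   # quadratic penalty
            expected = hist_a[i] * hist_b[j] / total
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

human = [3, 4, 2, 5, 3, 4]
model = [3, 4, 3, 5, 2, 4]
print(round(quadratic_weighted_kappa(human, model, 1, 5), 3))  # 0.818
```

A QWK of 1.0 means perfect agreement, 0.0 means chance-level agreement, and the quadratic weights penalize large disagreements more than near-misses.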

    Enhancing Essay Scoring with Adversarial Weights Perturbation and Metric-specific AttentionPooling

    The objective of this study is to improve automated feedback tools designed for English Language Learners (ELLs) through data science techniques encompassing machine learning, natural language processing, and educational data analytics. Automated essay scoring (AES) research has made strides in evaluating written essays, but it often overlooks the specific needs of ELLs in language development. This study explores the application of BERT-related techniques to enhance the assessment of ELLs' writing proficiency within AES. To address the specific needs of ELLs, we propose the use of DeBERTa, a state-of-the-art neural language model, for improving automated feedback tools. DeBERTa, pretrained on large text corpora using self-supervised learning, learns universal language representations adaptable to various natural language understanding tasks. The model incorporates several innovative techniques, including adversarial training through Adversarial Weights Perturbation (AWP) and Metric-specific AttentionPooling (six kinds of AP, one for each label in the competition). The primary focus of this research is to investigate the impact of hyperparameters, particularly the adversarial learning rate, on the performance of the model. By refining the hyperparameter tuning process, including the influence of 6AP and AWP, the resulting models can provide more accurate evaluations of language proficiency and support tailored learning tasks for ELLs. This work has the potential to significantly benefit ELLs by improving their English language proficiency and facilitating their educational journey.
Comment: This article was accepted by the 2023 International Conference on Information Network and Computer Communications (INCC).
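The abstract names Metric-specific AttentionPooling but does not spell it out; as a rough illustration only (not the paper's implementation), attention pooling collapses a sequence of token vectors into one vector via learned softmax weights. The token hidden states and the scorer weights below are invented; a model like the one described would learn a separate scorer per target metric.

```python
# Illustrative sketch of attention pooling over token representations.
# All numbers are invented; a real model learns score_weights per metric.
import math

def attention_pool(hidden_states, score_weights):
    """Weighted average of token vectors; weights come from a learned
    scoring vector passed through a softmax over the sequence."""
    scores = [sum(w * h for w, h in zip(score_weights, state))
              for state in hidden_states]
    max_s = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(attn[t] * hidden_states[t][d] for t in range(len(attn)))
            for d in range(dim)]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # per-token hidden states
weights = [0.5, 0.5]                             # scorer for one metric
pooled = attention_pool(tokens, weights)
print([round(x, 3) for x in pooled])  # [0.726, 0.726]
```

Running one such pooling head per label ("metric-specific") lets each predicted score attend to different parts of the essay.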

    (AI)N’T NOBODY HELPING ME? – DESIGN AND EVALUATION OF A MACHINE-LEARNING-BASED SEMI-AUTOMATIC ESSAY SCORING SYSTEM

    Education is increasingly delivered digitally these days, whether due to the COVID-19 pandemic or the growing popularity of MOOCs. The increasing number of participants poses a challenge for institutions in balancing didactical and financial demands, especially for exams. The overall goal of this design science research project is to design and evaluate a semi-automatic, machine-learning-based scoring system for essays. We focus on the design of a functional software artifact, including the required design principles and an exemplary implementation of an algorithm, whose optimization, however, is not part of this project and is a subject for future research. Our results show that such a system is suitable both for scoring essay assignments and for documenting these scorings. In addition to the software artifact, we document our results using the work of Gregor et al. (2020). This provides a first step towards a design theory for semi-automatic, machine-learning-based scoring systems for essays.

    Using digital pens to expedite the marking procedure

    This is the post-print version of the article; the official published version can be accessed from the link below. Copyright @ 2010 Inderscience Publishers.
Digital pens have been introduced over the last six years and have demonstrated that they can be used effectively for collecting, processing, and storing data. These properties make them ideal for use in education, particularly in the marking procedure for multiple-choice questions (MCQ). In this report, we present a system designed to expedite the marking procedure for MCQs, for use at any educational level. The main element of the system is a digital pen, which is given to the students prior to the examination. On return of the pen, the system immediately recognises the students' answers and produces their results. In this research, four groups of students were studied and a variety of data were collected concerning issues such as accuracy, the time gained by use of the system, and the impressions of the students. The pedagogic value of the system is also presented.
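Once the pen strokes have been recognised as answer choices, the marking step the system automates reduces to comparing answers against a key; a minimal sketch follows, with the question labels and answer key invented for illustration.

```python
# Hedged sketch of the automatic marking step after answer recognition.
# Question IDs and the answer key are invented examples.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

def mark(recognized_answers, key):
    """Count a student's recognized MCQ answers that match the key."""
    return sum(1 for q, a in recognized_answers.items() if key.get(q) == a)

student = {"Q1": "B", "Q2": "C", "Q3": "A"}
print(mark(student, answer_key))  # 2
```

Producing results "immediately on return of the pen" is then just running this comparison for each student's recognised answer sheet.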

    Assessment Of Descriptive Answers in Moodle-based E-learning using Winnowing Algorithm

    Assessment in education allows for obtaining, organizing, and presenting information about how much and how well a student is learning. An automatic evaluation tool is proposed that allows the assessor to evaluate descriptive answers at any time and gives students instant feedback. Because Moodle-based e-learning systems lack descriptive-answer grading, there is a need to build a model and add this feature as a plug-in for the e-learning system. Up until today, most assessors still choose to examine each student's descriptive document manually. This method takes time, as the assessor needs to be focused and thorough while examining a number of descriptive documents, which often makes essay examinations less objective and not optimal. To improve objectivity, time efficiency, and fairness in the assessment of descriptive answers, a system is needed that can automatically assess student documents: a descriptive-answer evaluating system. Such a system works by comparing the student's answer document with a model answer document; the higher the semantic similarity, the higher the score obtained. The purpose of this paper is to check the similarity between the teacher's answer and the student's answer using the Winnowing algorithm, one of the document-fingerprinting algorithms, which detects document similarity using a hashing technique. Document fingerprinting itself is a method used to detect similarities between documents. The Winnowing algorithm fulfils one of the requirements of a plagiarism-detection algorithm, namely whitespace insensitivity: it discards irrelevant characters such as punctuation. The similarity value is calculated using the Jaccard coefficient, and this assessment is then used for grading the student's performance.
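The pipeline described (winnowing fingerprints of the teacher's and student's answers, then Jaccard similarity over the fingerprint sets) can be sketched as follows. The k-gram size, window size, hash function (CRC32), and the two answer texts are illustrative choices, not the paper's exact parameters.

```python
# Hedged sketch of winnowing fingerprinting plus Jaccard similarity.
# Parameters k (gram size) and w (window size) are illustrative.
import re
import zlib

def winnow(text, k=5, w=4):
    """Return the winnowing fingerprint set of a text."""
    # whitespace/punctuation insensitivity: keep letters and digits only
    clean = re.sub(r"[^a-z0-9]", "", text.lower())
    grams = [clean[i:i + k] for i in range(len(clean) - k + 1)]
    hashes = [zlib.crc32(g.encode()) for g in grams]
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        fingerprints.add(min(hashes[i:i + w]))  # keep each window's minimum
    return fingerprints

def jaccard(a, b):
    """Jaccard coefficient of two fingerprint sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

teacher = "Photosynthesis converts light energy into chemical energy."
student = "Photosynthesis converts light into chemical energy stored in sugar."
similarity = jaccard(winnow(teacher), winnow(student))
print(f"similarity: {similarity:.2f}")
```

Identical answers yield a similarity of 1.0; the score assigned to the student would then be scaled from this similarity value.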

    Improving lexical errors in written expression in English as a foreign language through the use of automated corrective feedback

    This research aims at revisiting the role of software when it comes to providing learners with corrective feedback on their writing. The study, based on the analysis of handwritten and software-corrected versions of essays written by 33 undergraduate students enrolled in the degree programme in English Studies at a Spanish university, contributed to confirming the assumption that technology can indeed be a useful tool in the teaching and learning process, including distance-learning contexts. More specifically, this study demonstrated that students could significantly reduce the number of lexical errors in their essays through the autonomous use of error-correction software and that, over time, students can improve their ability to avoid such errors. Nevertheless, the study has also confirmed that software can in no way completely replace teachers, as computer programming is still quite limited and there are errors that only proficient language users can detect and correct.