
    The attitudes of students towards the use of ICT during their studies

    The purpose of this study was to examine students' attitudes towards the use of ICT during their studies. The survey included 285 students from Girne American University, Turkey, drawn from the first to fourth year. The scale consists of 45 statements and is intended to determine students' attitudes towards the use of ICT during their studies. Questionnaires were used to gather the data, which were analysed using the SPSS statistical package (IBM SPSS Statistics Version 20) and a Monte Carlo program for parallel analysis. The study established that students' attitudes towards the use of ICT made a significant contribution to their academic performance. It is suggested that the findings of this study will help students gain insight into the ICT variables that influence academic performance and consequently improve it. Results indicate that after one year of the intervention, there were statistically significant differences between the two groups only in sight vocabulary (at kindergarten and grade 1) and in alphabet knowledge (kindergarten); in all other areas of language development, there were no statistically significant differences between the achievement scores of the two groups. Results show that students appear to respond to the requirements of their courses, programs, and universities. In all cases, there is a clear relationship between students' perceptions of the usefulness of certain ICT resources, and significant numbers of students are keen to see ICT used more fully in the teaching and learning process. There is certainly scope for further research to examine how ICT use interacts across different areas and how cross-application linkage affects its uptake.
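
    The abstract mentions a Monte Carlo program for parallel analysis alongside SPSS. As a rough illustration of the idea behind that step, here is a minimal sketch of Horn's parallel analysis in Python (NumPy only; data shape and variable names are hypothetical, not the study's actual materials):

```python
import numpy as np

def parallel_analysis(data, n_iterations=1000, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed those obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = data.shape
    # Eigenvalues of the observed correlation matrix, largest first.
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    # Eigenvalues from many random datasets of identical dimensions.
    random_eigs = np.empty((n_iterations, n_vars))
    for i in range(n_iterations):
        random_data = rng.standard_normal((n_obs, n_vars))
        random_eigs[i] = np.linalg.eigvalsh(
            np.corrcoef(random_data, rowvar=False))[::-1]

    threshold = np.percentile(random_eigs, percentile, axis=0)
    n_factors = int(np.sum(observed > threshold))
    return n_factors, observed, threshold

# e.g. responses: a (285 students x 45 items) matrix of scale answers
# n_factors, obs, thr = parallel_analysis(responses)
```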

    Investigating a learning analytics interface for automatically marked programming assessments

    Student numbers at the University of Cape Town continue to grow, with an increasing number of students enrolling in programming courses. With this increase, it becomes difficult for lecturers to provide individualised feedback on programming assessments submitted by students. To address this, the university uses an automatic marking tool for marking assignments and providing feedback. Students can submit assignments and receive instant feedback on the marks allocated or on errors in their submissions. This tool saves time, as lecturers spend less time marking, and gives students an immediate opportunity to correct errors in their submitted code. However, most students have identified areas where the interface between the automatic marker and the submitted programs could be improved. This study investigates the potential of a learning analytics inspired dashboard interface to improve the feedback provided to students on their submitted programs. A focus group consisting of computer science class representatives was organised, and its feedback was used to create dashboard mock-ups. These mock-ups were then used to develop high-fidelity learning analytics inspired dashboard prototypes that were tested by first-year computer science students to determine whether the interfaces were useful and usable. The prototypes were built using the Python programming language and the Plotly Python library. User-centred design methods were employed by eliciting constant feedback from students during the prototyping and design of the interface. A usability study was conducted in which students were required to use the dashboard and then provide feedback on its use by completing a questionnaire designed using Nielsen's usability heuristics and AttrakDiff; these methods also assisted in the evaluation of the dashboard design. The research showed that students consider a learning analytics dashboard an essential tool that could help them as they learn to program. Students found the dashboard useful and had a clear understanding of the specific features they would like to see implemented on a learning analytics inspired dashboard used by the automatic marking tool, including overall performance, the duly performed (DP) requirements needed to qualify for exams, highest score, assignment due dates, class average score, and most common errors. This research hopes to provide insight into how automatically marked programming assessments could be displayed to students in a way that supports learning.
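
    The study built its prototypes with Python and Plotly. A minimal sketch of one plausible dashboard panel, a grouped bar chart comparing a student's per-assignment scores with the class average (all data and labels invented for illustration; the study's actual panels are not reproduced here):

```python
import plotly.graph_objects as go

# Hypothetical data: one student's scores vs the class average per assignment.
assignments = ["A1", "A2", "A3", "A4"]
student_scores = [78, 85, 62, 90]
class_average = [70, 74, 68, 81]

fig = go.Figure()
fig.add_trace(go.Bar(name="Your score", x=assignments, y=student_scores))
fig.add_trace(go.Bar(name="Class average", x=assignments, y=class_average))
fig.update_layout(
    barmode="group",                      # side-by-side bars per assignment
    title="Performance per assignment",
    yaxis_title="Score (%)",
)
fig.show()
```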

    Determinants of the Actual Usage of Online Evaluation and Assessment Systems: A Structural Equation Modelling (SEM) Study

    E-learning has been an emerging topic in education for two decades and has accordingly been analysed in all of its dimensions and factors. The evaluation and assessment side of e-learning, however, remains largely unexplored, especially from the students' perspective. The evaluation and assessment of students' cognitive, affective, and behavioural domains involve many variables that affect one another. In this study, the predictors of students' actual usage of online education systems and the relations among them are framed as a new theoretical model and analysed using Structural Equation Modelling (SEM). For this analysis, an online questionnaire was administered to students at a state high school who had been using a purpose-built online evaluation and assessment system (OEAS) for five years. The study found that "self-efficacy" and "user interface design" have a significant effect on "perceived ease of use", while "self-efficacy" and "perceived ease of use" significantly influence "perceived usefulness". In addition, the actual usage of the online evaluation and assessment system is directly and significantly affected by "perceived usefulness", "technical support", and "service quality". In sum, the study's conclusions, taken from the students' perspective, offer practical recommendations for educational technologists.
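
    The paper does not say which SEM tool was used; as a rough illustration of how the reported structural paths could be specified, here is a sketch using the Python semopy package (lavaan-style syntax; all variable and file names are placeholders for the study's constructs, not its actual data):

```python
import pandas as pd
from semopy import Model

# Structural paths mirroring the reported findings (names hypothetical):
#   perceived ease of use <- self-efficacy + user interface design
#   perceived usefulness  <- self-efficacy + perceived ease of use
#   actual usage          <- usefulness + technical support + service quality
model_desc = """
ease_of_use ~ self_efficacy + ui_design
usefulness ~ self_efficacy + ease_of_use
actual_usage ~ usefulness + tech_support + service_quality
"""

data = pd.read_csv("oeas_survey.csv")  # hypothetical questionnaire scores
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```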

    Automatic Short Answer Grading Using Transformers

    Assessment of short natural-language answers is a prevailing need in any educational environment: it helps teachers better understand the successes and failures of their students. Other question types, such as multiple-choice or fill-in-the-gap questions, do not provide adequate clues for evaluating students' proficiency exhaustively, yet they are common means of student evaluation, especially in Massive Open Online Course (MOOC) environments, largely because they are easy to grade automatically. Understanding and marking short answers manually, by contrast, is more challenging and time-consuming, especially as class sizes grow. Automatic Short Answer Grading, usually abbreviated ASAG, is a solution well suited to this problem. In this thesis, we concentrate on classification-based ASAG with nominal grades such as correct or not correct. We propose a reference-based approach built on deep learning models, trained on four state-of-the-art ASAG datasets, namely SemEval-2013 (SciEntsBank and BEETLE), DT-Grade, and a biology dataset. Our approach is based on the BERT Base (cased and uncased) and XLNet Base (cased) models. Our secondary analysis examines how GLUE (General Language Understanding Evaluation) tasks such as question answering, entailment, paraphrase identification, and semantic textual similarity (STS) strengthen performance on the ASAG task, particularly on the SciEntsBank dataset. We show that transformer-based language models such as BERT and XLNet outperform or equal the state-of-the-art feature-based approaches. We further show that the performance of our BERT model increases substantially when it is first fine-tuned on an entailment task such as the GLUE MNLI dataset and then on the ASAG task, compared with the other GLUE fine-tuning regimes.
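
    A minimal sketch of reference-based ASAG framed as BERT sentence-pair classification, as the abstract describes: the reference answer and the student answer are encoded as one pair and the model predicts a nominal grade. This uses the Hugging Face transformers library; the example answers are invented, and the classification head would need fine-tuning on an ASAG dataset before its predictions mean anything:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)  # nominal grades: correct / not correct

reference = "Photosynthesis converts light energy into chemical energy."
student = "Plants turn sunlight into chemical energy they can store."

# Encode the (reference, student) pair as a single input sequence.
inputs = tokenizer(reference, student, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # label-to-grade mapping is task-defined
```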

    Automated analysis of software artefacts - a use case in e-assessment

    Automated grading and feedback generation for programming and modelling exercises has become a common means of supporting teachers and students at universities and schools. Tools used in this context employ general software-engineering techniques for the analysis of software artefacts. Experience with the current state of the art shows good results, but also a gap between the potential power of such techniques and the power actually used in current e-assessment systems. This thesis contributes to closing this gap by developing and testing approaches that are more universal than those currently in use and that provide novel means of feedback generation. The approaches are shown to be effective and efficient for the mass validation of exercises and, according to students' perception, to yield high-quality feedback.
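
    As a toy sketch of one technique such e-assessment tools build on, here is dynamic analysis of a submission: running a student's function against instructor-defined test cases and collecting feedback. All names are hypothetical, and real systems add sandboxing, static analysis, and far richer feedback than this:

```python
def grade(student_fn, test_cases):
    """Run a submitted function against (args, expected) pairs and return
    the fraction of tests passed plus human-readable feedback."""
    feedback, passed = [], 0
    for args, expected in test_cases:
        try:
            result = student_fn(*args)
        except Exception as exc:
            feedback.append(f"{args}: raised {type(exc).__name__}: {exc}")
            continue
        if result == expected:
            passed += 1
        else:
            feedback.append(f"{args}: expected {expected}, got {result}")
    return passed / len(test_cases), feedback

# Example: marking a student's absolute-value implementation.
student_abs = lambda x: x if x > 0 else -x
score, notes = grade(student_abs, [((3,), 3), ((-2,), 2), ((0,), 0)])
```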

    The evaluation of electronic marking of examinations

    This paper discusses an approach to the electronic (automatic) marking of examination papers, in particular the extent to which it is possible to mark a candidate's answers automatically and return, within a very short period of time, a result comparable with a manually produced score. The investigation showed that there are good reasons for manual intervention in a predominantly automatic process. The paper discusses the results of tests of the automatic marking process which, in two experiments, yielded grades for examination scripts comparable with those of human markers (although the automatic grade tends to be the lower of the two). An analysis of the correlations shows highly significant relationships between the human markers (between 0.91 and 0.95) and a significant relationship between the average human marker score and the electronic score (0.86).
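
    The inter-marker agreement the paper reports is a Pearson correlation between score lists. A minimal sketch of that computation with SciPy (the scores below are invented, not the paper's data):

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same scripts from a human marker and the
# automatic marker, illustrating the kind of correlation reported.
human = [62, 71, 55, 80, 48, 90, 67, 73]
automatic = [58, 69, 54, 76, 45, 88, 63, 70]

r, p = pearsonr(human, automatic)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```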
