
    An Automated Grading and Feedback System for a Computer Literacy Course

    Computer Science departments typically offer a computer literacy course that targets a general lay audience. At Appalachian State University, this course is CS1410 - Introduction to Computer Applications. Computer literacy courses have students work with various desktop and web-based software applications, including standard office applications. CS1410 strives to have students use well-known applications in new and challenging ways, as well as to expose them to some unfamiliar applications. These courses can draw large enrollments, which makes efficient and consistent grading difficult. This thesis describes the development and successful deployment of the Automated Grading And Feedback (AGAF) system for CS1410. Specifically, a suite of automated grading tools targeting the different types of CS1410 assignments has been built. The AGAF system tools have been used on actual CS1410 submissions and the resulting grades were verified. AGAF tools exist for Microsoft Office assignments requiring students to upload a submission file. Another AGAF tool accepts a student “online text submission” where the text encodes the URL of a Survey Monkey survey and a blog. Other CS1410 assignments require students to upload an image file. AGAF can process images in multiple ways, including decoding a QR two-dimensional barcode and identifying an expected image pattern.
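
    To make the image-grading step concrete, the following is a minimal, hypothetical sketch of QR-based grading in Python. It assumes the Pillow and pyzbar libraries and an invented grade_qr_submission helper; the thesis does not state which libraries or functions AGAF actually uses.

        # Hypothetical sketch only: decode a QR code from a submitted image and
        # award credit if it matches the expected payload. Library choices
        # (Pillow, pyzbar) are assumptions, not AGAF's documented internals.
        from PIL import Image             # pip install Pillow
        from pyzbar.pyzbar import decode  # pip install pyzbar

        def grade_qr_submission(image_path, expected_text):
            """Return full credit if the image contains a QR code whose payload
            equals the expected text, otherwise zero."""
            decoded = {r.data.decode("utf-8") for r in decode(Image.open(image_path))}
            return 100 if expected_text in decoded else 0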

    Automata Tutor v3

    Computer science class enrollments have rapidly risen in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible, and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version of Automata Tutor supports automated grading and feedback generation for a greatly extended variety of new problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata and Turing machines corresponding to a given description, and problems about converting between equivalent models - e.g., from regular expressions to nondeterministic finite automata. Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
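
    One core grading primitive behind such a tool is deciding whether a student's automaton accepts the same language as a reference automaton. The sketch below shows a product-construction equivalence check for complete DFAs in Python; the encoding and the check are illustrative assumptions, not Automata Tutor's actual implementation.

        # Illustrative DFA equivalence check: explore reachable state pairs and
        # look for a pair where exactly one automaton accepts.
        from collections import deque

        def dfa_equivalent(dfa_a, dfa_b, alphabet):
            """Each DFA is (start, accepting_set, delta) with delta[(state, symbol)] -> state."""
            (sa, acc_a, da), (sb, acc_b, db) = dfa_a, dfa_b
            seen = {(sa, sb)}
            queue = deque(seen)
            while queue:
                qa, qb = queue.popleft()
                if (qa in acc_a) != (qb in acc_b):   # distinguishing pair found
                    return False
                for sym in alphabet:
                    pair = (da[(qa, sym)], db[(qb, sym)])
                    if pair not in seen:
                        seen.add(pair)
                        queue.append(pair)
            return True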

    Computer Aided Design and Grading for an Electronic Functional Programming Exam

    Electronic exams (e-exams) have the potential to substantially reduce the effort required for conducting an exam through automation. Yet, care must be taken to sacrifice neither task complexity nor constructive alignment nor grading fairness in favor of automation. To advance automation in the design and fair grading of (functional programming) e-exams, we introduce the following: a novel algorithm to check Proof Puzzles based on finding correct sequences of proof lines, which improves fairness compared to an existing, edit-distance-based algorithm; an open-source static analysis tool to check source code for task-relevant features by traversing the abstract syntax tree; and a higher-level language and open-source tool to specify regular expressions that makes creating complex regular expressions less error-prone. Our findings are embedded in a complete experience report on transforming a paper exam to an e-exam. We evaluated the resulting e-exam by analyzing the degree of automation in the grading process, asking students for their opinion, and critically reviewing our own experiences. Almost all tasks can be graded automatically at least in part (correct solutions can almost always be detected as such), the students agree that an e-exam is a fitting examination format for the course but are split on how well they can express their thoughts compared to a paper exam, and examiners enjoy a more time-efficient grading process while the point distribution in the exam results was almost exactly the same compared to a paper exam. (Comment: In Proceedings TFPIE 2023, arXiv:2308.0611)
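
    The AST-based feature check can be illustrated with a short sketch. The paper's tool targets a functional language; the Python analogue below, using the standard ast module, is only meant to show the idea of traversing an abstract syntax tree to test for a task-relevant feature (here, the presence of an explicit loop in a task that asks for a recursive solution).

        # Toy analogue of an AST-based feature check, not the paper's actual tool.
        import ast

        def uses_explicit_loop(source: str) -> bool:
            """Return True if the submission contains a for/while loop."""
            return any(isinstance(node, (ast.For, ast.While))
                       for node in ast.walk(ast.parse(source)))

        submission = "def length(xs):\n    n = 0\n    for _ in xs:\n        n += 1\n    return n"
        print(uses_explicit_loop(submission))  # True -> flag for manual review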

    E-assessment: Past, present and future

    This review of e-assessment takes a broad definition, including any use of a computer in assessment, whilst focusing on computer-marked assessment. Drivers include increased variety of assessed tasks and the provision of instantaneous feedback, as well as increased objectivity and resource saving. From the early use of multiple-choice questions and machine-readable forms, computer-marked assessment has developed to encompass sophisticated online systems, which may incorporate interoperability and be used in students’ own homes. Systems have been developed by universities, companies and as part of virtual learning environments. Some of the disadvantages of selected-response question types can be alleviated by techniques such as confidence-based marking. The use of electronic response systems (‘clickers’) in classrooms can be effective, especially when coupled with peer discussion. Student authoring of questions can also encourage dialogue around learning. More sophisticated computer-marked assessment systems have enabled mathematical questions to be broken down into steps and have provided targeted and increasing feedback. Systems that use computer algebra and provide answer matching for short-answer questions are discussed. Computer-adaptive tests use a student’s response to previous questions to alter the subsequent form of the test. More generally, e-assessment includes the use of peer-assessment and assessed e-portfolios, blogs, wikis and forums. Predictions for the future include the use of e-assessment in MOOCs (massive open online courses); the use of learning analytics; a blurring of the boundaries between teaching, assessment and learning; and the use of e-assessment to free human markers to assess what they can assess more authentically.
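
    As a concrete illustration of the computer-algebra answer matching mentioned above, the sketch below uses SymPy to accept any algebraically equivalent form of a short mathematical answer. The library choice and grading rule are assumptions for illustration; the review surveys such systems without prescribing an implementation.

        # Sketch of computer-algebra answer matching, assuming SymPy.
        from sympy import simplify, sympify

        def answers_match(student_answer: str, model_answer: str) -> bool:
            """Treat answers as equivalent if their difference simplifies to zero."""
            return simplify(sympify(student_answer) - sympify(model_answer)) == 0

        print(answers_match("(x + 1)**2", "x**2 + 2*x + 1"))  # True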

    MOOClm: Learner Modelling for MOOCs

    Massively Open Online Learning systems, or MOOCs, generate enormous quantities of learning data. Analysis of this data has considerable potential benefits for learners, educators, teaching administrators and educational researchers. How to realise this potential is still an open question. This thesis explores use of such data to create a rich Open Learner Model (OLM). The OLM is designed to take account of the restrictions and goals of lifelong learner model usage. Towards this end, we structure the learner model around a standard curriculum-based ontology. Since such a learner model may be very large, we integrate a visualisation based on a highly scalable circular treemap representation. The visualisation allows the student to either drill down further into increasingly detailed views of the learner model, or filter the model down to a smaller, selected subset. We introduce the notion of a set of Reference learner models, such as an ideal student, a typical student, or a selected set of learning objectives within the curriculum. Introducing these provides a foundation for a learner to make a meaningful evaluation of their own model by comparing against a reference model. To validate the work, we created MOOClm to implement this framework, then used it in the context of a Small Private Online Course (SPOC) run at the University of Sydney. We also report a qualitative usability study to gain insights into the ways a learner can make use of the OLM. Our contribution is the design and validation of MOOClm, a framework that harnesses MOOC data to create a learner model with an OLM interface for student and educator usage.
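
    One idea from this design, rolling leaf-level mastery evidence up a curriculum ontology so that higher-level nodes of the open learner model can be displayed (for example in a circular treemap), is sketched below. The averaging rule and data layout are invented for illustration and are not MOOClm's actual implementation.

        # Hypothetical roll-up of mastery scores over a curriculum tree.
        def rollup(node, children, mastery):
            """children: dict topic -> sub-topics; mastery: dict leaf -> score in [0, 1]."""
            kids = children.get(node, [])
            if not kids:                          # leaf: use observed mastery
                return mastery.get(node, 0.0)
            return sum(rollup(k, children, mastery) for k in kids) / len(kids)

        curriculum = {"Programming": ["Variables", "Loops"], "Variables": [], "Loops": []}
        print(rollup("Programming", curriculum, {"Variables": 0.9, "Loops": 0.5}))  # mean of the leaves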

    Integrating knowledge tracing and item response theory: A tale of two frameworks

    Traditionally, the assessment and learning science communities rely on different paradigms to model student performance. The assessment community uses Item Response Theory (IRT), which allows modeling different student abilities and problem difficulties, while the learning science community uses Knowledge Tracing, which captures skill acquisition. These two paradigms are complementary - IRT cannot be used to model student learning, while Knowledge Tracing assumes all students and problems are the same. Recently, two highly related models based on a principled synthesis of IRT and Knowledge Tracing were introduced. However, these two models were evaluated on different data sets, using different evaluation metrics and with different ways of splitting the data into training and testing sets. In this paper we reconcile the models' results by presenting a unified view of the two models, and by evaluating the models under a common evaluation metric. We find that both models are equivalent and only differ in their training procedure. Our results show that the combined IRT and Knowledge Tracing models offer the best of assessment and learning sciences - high prediction accuracy like the IRT model, and the ability to model student learning like Knowledge Tracing.
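
    For readers unfamiliar with the two paradigms, the toy sketch below shows the Rasch (1PL) IRT response probability and the standard Bayesian Knowledge Tracing update in their generic textbook form; the parameter names and values are illustrative and do not come from the specific combined models evaluated in the paper.

        # Textbook-style IRT and BKT primitives, for illustration only.
        import math

        def irt_p_correct(ability, difficulty):
            """Rasch model: probability of a correct response."""
            return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

        def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
            """Posterior-then-learn update of the mastery probability."""
            if correct:
                posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
            else:
                posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
            return posterior + (1 - posterior) * learn

        print(irt_p_correct(ability=0.5, difficulty=-0.3))  # ~0.69
        print(bkt_update(p_know=0.4, correct=True))         # mastery estimate rises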

    DeepEval: An Integrated Framework for the Evaluation of Student Responses in Dialogue Based Intelligent Tutoring Systems

    The automatic assessment of student answers is one of the critical components of an Intelligent Tutoring System (ITS) because accurate assessment of student input is needed in order to provide effective feedback that leads to learning. This is a very challenging task because it requires natural language understanding capabilities. The process requires various components: concept identification, co-reference resolution, ellipsis handling, etc. As part of this thesis, we thoroughly analyzed a set of student responses obtained from an experiment with the intelligent tutoring system DeepTutor, in which college students interacted with the tutor to solve conceptual physics problems, designed an automatic answer assessment framework (DeepEval), and evaluated the framework after implementing several important components. To evaluate our system, we annotated 618 responses from 41 students for correctness. Our system performs better than the typical similarity calculation method. We also discuss various issues in automatic answer evaluation.
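
    The kind of word-overlap baseline the abstract compares against can be sketched as a bag-of-words cosine similarity between a student response and a reference answer; the example below is a generic toy version, not DeepEval's actual components.

        # Toy bag-of-words cosine similarity baseline for answer assessment.
        from collections import Counter
        import math

        def cosine_similarity(text_a: str, text_b: str) -> float:
            """Cosine similarity over lowercase word counts; 1.0 means identical bags of words."""
            a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
            dot = sum(a[w] * b[w] for w in a)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        student = "the net force on the ball is zero"
        reference = "the net force acting on the ball equals zero"
        print(round(cosine_similarity(student, reference), 2))  # high overlap -> likely correct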