
    Automata Tutor v3

    Computer science class enrollments have risen rapidly in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible, and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version supports automated grading and feedback generation for a greatly extended variety of new problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata, and Turing machines corresponding to a given description, and problems about converting between equivalent models, e.g., from regular expressions to nondeterministic finite automata. Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
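The conversion problems mentioned above, such as turning a regular expression into a nondeterministic finite automaton, are classically handled by Thompson's construction. The following is a minimal illustrative sketch of that construction for postfix regexes (operators `.` for concatenation, `|` for union, `*` for Kleene star); the names and representation are this sketch's own assumptions, not the actual internals of Automata Tutor.

```python
class State:
    """One NFA state; edges are (symbol, target) pairs, symbol None = epsilon."""
    def __init__(self):
        self.edges = []

def postfix_to_nfa(postfix):
    """Thompson's construction: build an NFA (start, accept) from a postfix regex."""
    stack = []
    for ch in postfix:
        if ch == '.':                      # concatenation: link frag1 accept -> frag2 start
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            a1.edges.append((None, s2))
            stack.append((s1, a2))
        elif ch == '|':                    # union: new start/accept with epsilon branches
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            s, a = State(), State()
            s.edges += [(None, s1), (None, s2)]
            a1.edges.append((None, a))
            a2.edges.append((None, a))
            stack.append((s, a))
        elif ch == '*':                    # Kleene star: loop back plus empty-word bypass
            s1, a1 = stack.pop()
            s, a = State(), State()
            s.edges += [(None, s1), (None, a)]
            a1.edges += [(None, s1), (None, a)]
            stack.append((s, a))
        else:                              # literal symbol
            s, a = State(), State()
            s.edges.append((ch, a))
            stack.append((s, a))
    return stack.pop()

def accepts(nfa, word):
    """Simulate the NFA on a word using epsilon-closure state sets."""
    start, accept = nfa

    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            st = stack.pop()
            for sym, tgt in st.edges:
                if sym is None and tgt not in seen:
                    seen.add(tgt)
                    stack.append(tgt)
        return seen

    current = closure({start})
    for ch in word:
        current = closure({t for s in current for sym, t in s.edges if sym == ch})
    return accept in current

# (ab|b)* written in postfix as ab.b|*
nfa = postfix_to_nfa('ab.b|*')
```

A grader built on such a construction can check a student's regex against a reference language by simulating both automata on test words, which is one plausible way tools in this space compare submissions to specifications.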

    An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder

    Introductory programming courses in higher education institutions commonly have hundreds of participating students eager to learn to program. The manual effort of reviewing the submitted source code and providing feedback can no longer be managed. Manually reviewing the submitted homework can also be subjective and unfair, particularly if many tutors are responsible for grading. Autograders can help in this situation; however, there is a lack of knowledge about how autograders affect students' overall perception of programming classes and teaching. This is relevant for course organizers and institutions that want to keep their programming courses attractive while coping with growing student numbers. This paper studies the answers to the standardized university evaluation questionnaires of multiple large-scale foundational computer science courses that recently introduced autograding, analyzing the differences before and after this intervention. By incorporating additional observations, we hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty. This qualitative study aims to provide hypotheses for future research to define and conduct quantitative surveys and data analyses. The autograder technology can be validated as a teaching method that improves student satisfaction with programming courses.

    Comment: Accepted full paper at IEEE ITHET 202

    Reports on a Course for Prospective High School Mathematics Teachers

    The author describes his design for a course entitled "Secondary School Mathematics from an Advanced Viewpoint." He adds subjective comments on how his design has worked in practice.

    Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation

    In 2014, Massive Open Online Courses (MOOCs) are expected to witness phenomenal growth in student registration compared to previous years (Lee, Stewart, & Claugar-Pop, 2014). As MOOCs continue to grow in number, there has been an increasing focus on assessment and evaluation. Because of the huge enrollments in a MOOC, it is impossible for the instructor to grade homework and evaluate each student. The enormous data generated by learners in a MOOC can be used for developing and refining automated assessment techniques. As a result, "Smart Systems" are being designed to track and predict learner behavior while completing MOOC assessments. These automated assessments for MOOCs can automatically score and provide feedback to students on multiple-choice questions, mathematical problems, and essays. Automated assessments help teachers with grading and also support students in the learning process. These assessments are prompt, consistent, and support objectivity in assessment and evaluation (Ala-Mutka, 2005). This paper reviews the emerging trends in MOOC assessments and their application in supporting student learning and achievement. The paper concludes by describing how assessment techniques in MOOCs can help to maximize learning outcomes.

    From Walls to Steps: Using online automatic homework checking tools to improve learning in introductory programming courses

    We describe the motivation, design, and implementation of a web-based automatic homework checker for Programming I and Programming II courses. Motivated by a problem-based-learning approach, we redesigned our first course to have over 70 short programming assignments. The goal was to change conceptual walls into steps, so that students would not feel overwhelmed at any point in time. At each step along the way, it must be clear where the student is, and the next step must feel attainable. Over the last 3 years, we have learned much about proper step size and the sequencing of problems. We describe how current computer science technologies both hurt and help our students. We conclude with a critique of the system, recommendations for undergraduate programming courses, and our goals for the next release.