Semi Automated Partial Credit Grading of Programming Assignments
The grading of student programs is a time-consuming process. As class sizes continue to grow, especially in entry-level courses, manually grading student programs has become an even more daunting challenge. Adding to the difficulty of grading are the needs of graphical and interactive programs, such as those used as part of the UNH Computer Science curriculum (and in various textbooks).
There are existing tools that support the grading of introductory programming assignments (TAME and Web-CAT). There are also frameworks that can be used to test student code (JUnit, Tester, and TestNG). While these programs and frameworks are helpful, they have little or no support for programs that use real data structures or that have interactive or graphical features. In addition, the automated tests in all these tools provide only “all or nothing” evaluation. This is a significant limitation in many circumstances. Moreover, there is little or no support for dynamic alteration of grading criteria, which means that refactoring of test classes after deployment is not easily done.
Our goal is to create a framework that can address these weaknesses. This framework needs to:
1. Support assignments that have interactive and graphical components.
2. Handle data structures in student programs such as lists, stacks, trees, and hash tables.
3. Be able to assign partial credit automatically when the instructor can predict errors in advance.
4. Provide additional answer clustering information to help graders identify and assign consistent partial credit for incorrect output that was not predefined.
Most importantly, these tools, collectively called RPM (short for Rapid Program Management), should interface effectively with our current grading support framework without requiring large amounts of rewriting or refactoring of test code.
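RPM's actual API is not shown in the abstract. As a minimal illustration of the automated partial-credit idea (requirement 3), a grader can award full credit for correct answers and predefined fractional credit for anticipated wrong answers; all names here are hypothetical:

```python
# Hypothetical sketch (not RPM's actual API): award full credit for
# the correct answer and predefined partial credit for anticipated
# wrong answers, such as a common off-by-one error.

def grade(student_fn, cases):
    """cases: list of (input, expected_output, {wrong_output: credit})."""
    total = 0.0
    for arg, expected, partial in cases:
        result = student_fn(arg)
        if result == expected:
            total += 1.0                       # full credit
        else:
            total += partial.get(result, 0.0)  # predicted error -> partial credit
    return total / len(cases)

# A student solution with an off-by-one bug:
def buggy_len(xs):
    return len(xs) - 1

cases = [([1, 2, 3], 3, {2: 0.5}),   # off-by-one earns half credit
         ([], 0, {-1: 0.5})]
score = grade(buggy_len, cases)       # both answers hit the predicted bug
```

Unpredicted wrong outputs score zero here; in RPM's design those would instead feed the answer-clustering step (requirement 4) so graders can assign consistent credit by hand.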
Automated Clustering and Program Repair for Introductory Programming Assignments
Providing feedback on programming assignments is a tedious task for the
instructor, and even impossible in large Massive Open Online Courses with
thousands of students. Previous research has suggested that program repair
techniques can be used to generate feedback in programming education. In this
paper, we present a novel fully automated program repair algorithm for
introductory programming assignments. The key idea of the technique, which
enables automation and scalability, is to use the existing correct student
solutions to repair the incorrect attempts. We evaluate the approach in two
experiments: (I) We evaluate the number, size and quality of the generated
repairs on 4,293 incorrect student attempts from an existing MOOC. We find that
our approach can repair 97% of student attempts, while 81% of those are small
repairs of good quality. (II) We conduct a preliminary user study on
performance and repair usefulness in an interactive teaching setting. We obtain
promising initial results (an average usefulness grade of 3.4 on a scale from 1
to 5), and conclude that our approach can be used in an interactive setting.
Comment: Extended version of the PLDI paper of the same name.
Peachy Parallel Assignments (EduHPC 2018)
Peachy Parallel Assignments are a resource for instructors teaching parallel and distributed programming. These are high-quality assignments, previously tested in class, that are readily adoptable. This collection of assignments includes implementing a subset of OpenMP using pthreads, creating an animated fractal, image processing using histogram equalization, simulating a storm of high-energy particles, and solving the wave equation in a variety of settings. All of these come with sample assignment sheets and the necessary starter code.
Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
Purpose: to facilitate the inclusion of practical parallel programming exercises in Parallel Computing or high-performance computing (HPC) courses.
Conference communication: description of practical exercises with access to material already developed and tested.
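One of the listed assignments, histogram equalization, reduces to a short serial kernel that students then parallelize; the assignment's own starter code is not reproduced here, but the underlying transform is the classic CDF remapping:

```python
# Minimal serial histogram equalization (the Peachy assignment
# targets a parallel implementation; this shows only the math).

def equalize(pixels, levels=256):
    """Remap grayscale values so their cumulative histogram is ~uniform."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Classic equalization formula: scale the CDF to the full range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

flat = [52, 52, 60, 60, 180, 180, 200, 200]   # low-contrast image, flattened
stretched = equalize(flat)                     # values spread across 0..255
```

The histogram and CDF passes are the natural parallelization targets (per-thread partial histograms followed by a reduction).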
Feedback Generation for Performance Problems in Introductory Programming Assignments
Providing feedback on programming assignments manually is a tedious, error
prone, and time-consuming task. In this paper, we motivate and address the
problem of generating feedback on performance aspects in introductory
programming assignments. We studied a large number of functionally correct
student solutions to introductory programming assignments and observed: (1)
There are different algorithmic strategies, with varying levels of efficiency,
for solving a given problem. These different strategies merit different
feedback. (2) The same algorithmic strategy can be implemented in countless
different ways, which are not relevant for reporting feedback on the student
program.
We propose a light-weight programming language extension that allows a
teacher to define an algorithmic strategy by specifying certain key values that
should occur during the execution of an implementation. We describe a dynamic
analysis based approach to test whether a student's program matches a teacher's
specification. Our experimental results illustrate the effectiveness of both
our specification language and our dynamic analysis. On one of our benchmarks
consisting of 2316 functionally correct implementations to 3 programming
problems, we identified 16 strategies that we were able to describe using our
specification language (in 95 minutes, after inspecting 66 implementations,
i.e., around 3%). Our dynamic analysis correctly matched each implementation
with its corresponding specification, thereby automatically producing the
intended feedback.
Comment: Tech report/extended version of the FSE 2014 paper.
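The paper's specification language and tracer are richer than this, but the core mechanism, a teacher listing key values that must occur during execution, can be sketched with a trace hook and a subsequence check; `observe` and `matches` are hypothetical names:

```python
# Hedged sketch: the teacher specifies key values that should occur,
# in order, during execution; a trace hook records observed values
# and a subsequence check tests whether the strategy matched.

trace = []

def observe(value):
    """Instrumentation point the student's program is run through."""
    trace.append(value)
    return value

def matches(spec, observed):
    """True if spec occurs as a (not necessarily contiguous)
    subsequence of the observed trace."""
    it = iter(observed)
    return all(any(v == o for o in it) for v in spec)

# Student implements an iterative sum; the teacher's spec for the
# 'running total' strategy on input [1, 2, 3] is the partial sums.
def student_sum(xs):
    total = 0
    for x in xs:
        total = observe(total + x)   # key values: 1, 3, 6
    return total

student_sum([1, 2, 3])
ok = matches([1, 3, 6], trace)       # strategy matched
```

A student using a different algorithmic strategy (say, a closed-form formula) would produce a trace without these partial sums and fail the match, triggering strategy-specific feedback.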
Teaching Software Engineering through Robotics
This paper presents a newly-developed robotics programming course and reports
the initial results of software engineering education in robotics context.
Robotics programming, as a multidisciplinary course, puts equal emphasis on
software engineering and robotics. It teaches students proper software
engineering -- in particular, modularity and documentation -- by having them
implement four core robotics algorithms for an educational robot. To evaluate
the effect of software engineering education in robotics context, we analyze
pre- and post-class survey data and the four assignments our students completed
for the course. The analysis suggests that the students acquired an
understanding of software engineering techniques and principles.
Effects of Automated Interventions in Programming Assignments: Evidence from a Field Experiment
A typical problem in MOOCs is the missing opportunity for course conductors
to individually support students in overcoming their problems and
misconceptions. This paper presents the results of automatically intervening on
struggling students during programming exercises and offering peer feedback and
tailored bonus exercises. To improve learning success, we do not want to
abolish instructionally desired trial and error but reduce extensive struggle
and demotivation. Therefore, we developed adaptive automatic just-in-time
interventions to encourage students to ask for help if they require
considerably more than average working time to solve an exercise. Additionally,
we offered students bonus exercises tailored for their individual weaknesses.
The approach was evaluated within a live course with over 5,000 active students
via a survey and metrics gathered alongside. Results show that we can increase
the calls for help by up to 66% and lower the dwelling time until issuing
action. Learnings from the experiments can further be used to pinpoint course
material to be improved and to tailor content to specific audiences.
Comment: 10 pages
Does choice of programming language affect student understanding of programming concepts in a first year engineering course?
Most undergraduate engineering curricula include computer programming to some degree, introducing a structured language such as C, or a computational system such as MATLAB, or both. Many of these curricula include programming in first year engineering courses, integrating the solution of simple engineering problems with an introduction to programming concepts. In line with this practice, Roger Williams University has included an introduction to programming as a part of the first year engineering curriculum for many years. However, recent industry and pedagogical trends have motivated the switch from a structured language (VBA) to a computational system (MATLAB). As a part of the pilot run of this change, the course instructors felt that it would be worthwhile to verify that changing the programming language did not negatively affect students' ability to understand key programming concepts. In particular, it was appropriate to explore students' ability to translate word problems into computer programs containing inputs, decision statements, computational processes, and outputs. To test the hypothesis that programming language does not affect students' ability to understand programming concepts, students from consecutive years were given the same homework assignment, with the first cohort using VBA and the second using MATLAB to solve the assignment. A rubric was developed which allowed the investigators to rate assignments independent of programming language. Results from this study indicate that there is not a significant impact of the change in programming language. These results suggest that the choice of programming language likely does not matter for student understanding of programming concepts. Course instructors should feel free to select programming language based on other factors, such as market demand, cost, or the availability of pedagogical resources.
Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool
The work described in this paper relates to the development and use of a range of initiatives in order to mark complex master's-level assignments related to the development of computer web applications. In the past such assignments have proven difficult to mark since they assess a range of skills including programming, human-computer interaction, and design. Based on the experience of several years marking such assignments, the module delivery team decided to adopt an approach whereby the students marked each other's practical work using an electronic voting system (EVS). The results of this are presented in the paper along with a statistical comparison with the tutors' marking, providing evidence for the efficacy of the approach. The second part of the assignment related to theory and documentation. This was marked by the tutors using an automated feedback tool. It was found that the time to mark the work was reduced by more than 30% in all cases compared to previous years. More importantly, it was possible to provide good-quality individual feedback to learners rapidly. Feedback was delivered to all within three weeks of the test submission date.
Peer reviewed
