Automated Feedback for 'Fill in the Gap' Programming Exercises
Timely feedback is a vital component of the learning process. It is especially important for beginner students in Information Technology, since many have not yet formed an effective internal model of a computer that they can use to construct viable knowledge. Research has shown that learning efficiency increases when students receive immediate feedback. Automatic analysis of student programs has the potential to provide immediate feedback to students and to assist teaching staff in the marking process. This paper describes a 'fill in the gap' programming analysis framework which tests students' solutions, gives feedback on their correctness, detects logic errors, and provides hints on how to fix these errors. Currently, the framework is being used with the Environment for Learning to Programming (ELP) system at Queensland University of Technology (QUT); however, the framework can be integrated into any existing online learning environment or programming Integrated Development Environment (IDE).
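The core idea behind 'fill in the gap' checking can be sketched as follows: the student supplies only a code fragment, which is spliced into an instructor template, executed, and compared against predefined test cases. This is a minimal illustrative sketch, not the ELP framework's actual API; the names `GAP_TEMPLATE`, `TEST_CASES`, and `check_solution` are assumptions.

```python
# Minimal sketch of gap-based checking: splice the student's fragment into
# a template, run it, and emit correctness feedback and hints.
# All names here are hypothetical, for illustration only.

GAP_TEMPLATE = """
def absolute(x):
    {gap}
"""

TEST_CASES = [(-3, 3), (0, 0), (7, 7)]  # (input, expected output)

def check_solution(student_gap: str) -> list[str]:
    """Fill the gap, execute the completed program, and return feedback."""
    source = GAP_TEMPLATE.format(gap=student_gap)
    namespace = {}
    try:
        exec(source, namespace)  # compile and run the completed program
    except SyntaxError as err:
        return [f"Syntax error in your answer: {err.msg}"]
    feedback = []
    for arg, expected in TEST_CASES:
        actual = namespace["absolute"](arg)
        if actual != expected:
            feedback.append(
                f"Logic error: absolute({arg}) returned {actual}, "
                f"expected {expected}. Hint: check how you handle negatives.")
    return feedback or ["All tests passed."]
```

A correct fragment such as `return x if x >= 0 else -x` passes all cases, while `return x` triggers a logic-error hint for the negative input, mirroring the correctness/logic-error/hint distinction the abstract describes.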
A methodology for producing reliable software, volume 1
An investigation into areas affecting the production of reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of that investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.
Semi Automated Partial Credit Grading of Programming Assignments
The grading of student programs is a time-consuming process. As class sizes continue to grow, especially in entry-level courses, manually grading student programs has become an even more daunting challenge. Grading is made harder still by graphical and interactive programs, such as those used in the UNH Computer Science curriculum (and in various textbooks).
There are existing tools that support the grading of introductory programming assignments (TAME and Web-CAT), as well as frameworks that can be used to test student code (JUnit, Tester, and TestNG). While these programs and frameworks are helpful, they have little or no support for programs that use real data structures or that have interactive or graphical features. In addition, the automated tests in all these tools provide only 'all or nothing' evaluation, which is a significant limitation in many circumstances. Moreover, there is little or no support for dynamic alteration of grading criteria, which means that refactoring test classes after deployment is not easily done.
Our goal is to create a framework that can address these weaknesses. This framework needs to:
1. Support assignments that have interactive and graphical components.
2. Handle data structures in student programs such as lists, stacks, trees, and hash tables.
3. Be able to assign partial credit automatically when the instructor can predict errors in advance.
4. Provide additional answer clustering information to help graders identify and assign consistent partial credit for incorrect output that was not predefined.
Most importantly, these tools, collectively called RPM (short for Rapid Program Management), should interface effectively with our current grading support framework without requiring large amounts of rewriting or refactoring of test code.
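Goals 3 and 4 above can be sketched in a few lines: the instructor registers predicted wrong outputs with a partial score, and any remaining unmatched outputs are clustered by value so a grader can make one consistent decision per cluster. This is a hedged sketch of the idea, not the RPM implementation; `grade`, `PREDICTED`, and the scores are illustrative assumptions.

```python
# Sketch of automatic partial credit plus answer clustering.
# Scenario (assumed): students compute the sum 1..n for n = 5.
from collections import defaultdict

FULL_CREDIT = 10
EXPECTED = 15
# Instructor-predicted wrong answers, mapped to partial scores.
PREDICTED = {
    14: 5,   # off-by-one loop (summed 1..n-1): half credit
    120: 3,  # computed the product instead of the sum
}

def grade(outputs: dict[str, int]):
    """Map student -> score; group unpredicted outputs for manual review."""
    scores, clusters = {}, defaultdict(list)
    for student, out in outputs.items():
        if out == EXPECTED:
            scores[student] = FULL_CREDIT
        elif out in PREDICTED:
            scores[student] = PREDICTED[out]
        else:
            clusters[out].append(student)  # one manual decision per cluster
    return scores, dict(clusters)
```

Here `grade({"ann": 15, "bob": 14, "cat": 99, "dan": 99})` scores ann and bob automatically and leaves a single cluster for output 99, so the grader assigns consistent partial credit to cat and dan in one step.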
Spacelab software development and integration concepts study report, volume 1
The proposed software guidelines to be followed by the European Space Research Organization in the development of software for the Spacelab being developed for use as a payload for the space shuttle are documented. Concepts, techniques, and tools needed to assure the success of a programming project are defined as they relate to operation of the data management subsystem, support of experiments and space applications, use with ground support equipment, and for integration testing
A hybrid method for the analysis of learner behaviour in active learning environments
Software-mediated learning requires adjustments to the teaching and learning process. In particular, active learning facilitated through interactive learning software differs from traditional instructor-oriented, classroom-based teaching. We present behaviour analysis techniques for Web-mediated learning. Motivation, acceptance of the learning approach and technology, learning organisation, and actual tool usage are aspects of behaviour that require different analysis techniques. A hybrid method combining survey methods with Web usage mining techniques can provide accurate and comprehensive analysis results. These techniques allow us to evaluate active learning approaches implemented in the form of Web tutorials.
Effects of Automated Interventions in Programming Assignments: Evidence from a Field Experiment
A typical problem in MOOCs is the missing opportunity for course conductors to individually support students in overcoming their problems and misconceptions. This paper presents the results of automatically intervening with struggling students during programming exercises and offering peer feedback and tailored bonus exercises. To improve learning success, we do not want to abolish instructionally desirable trial and error, but rather to reduce extensive struggle and demotivation. Therefore, we developed adaptive automatic just-in-time interventions that encourage students to ask for help if they require considerably more than the average working time to solve an exercise. Additionally, we offered students bonus exercises tailored to their individual weaknesses. The approach was evaluated within a live course with over 5,000 active students via a survey and metrics gathered alongside it. Results show that we can increase calls for help by up to 66% and lower the dwelling time before action is taken. Learnings from the experiments can further be used to pinpoint course material to be improved and to tailor content to the audience.
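The trigger behind such a just-in-time intervention can be sketched very simply: prompt a student to ask for help once their working time on an exercise considerably exceeds the class average. This is an illustrative sketch, not the authors' system; the 1.5 multiplier and the function name are assumptions made for the example.

```python
# Sketch of a just-in-time intervention rule: flag a student whose working
# time exceeds a fixed multiple of the class mean. The threshold value is
# an assumption; the paper's system defines "considerably more" itself.
from statistics import mean

THRESHOLD = 1.5  # "considerably more than average" working time

def should_intervene(student_minutes: float, all_minutes: list[float]) -> bool:
    """Return True when the student's time exceeds THRESHOLD x the mean."""
    return student_minutes > THRESHOLD * mean(all_minutes)
```

With class times of 10, 12, 14, and 20 minutes (mean 14), a student at 25 minutes crosses the 21-minute threshold and would receive the help prompt, while a student at 15 minutes would not.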