
    Semi Automated Partial Credit Grading of Programming Assignments

    The grading of student programs is a time-consuming process. As class sizes continue to grow, especially in entry-level courses, manually grading student programs has become an even more daunting challenge. Grading is made more difficult still by the needs of graphical and interactive programs, such as those used in the UNH Computer Science curriculum (and in various textbooks). There are existing tools that support the grading of introductory programming assignments (TAME and Web-CAT), and there are frameworks that can be used to test student code (JUnit, Tester, and TestNG). While these programs and frameworks are helpful, they have little or no support for programs that use real data structures or that have interactive or graphical features. In addition, the automated tests in all these tools provide only “all or nothing” evaluation, which is a significant limitation in many circumstances. Moreover, there is little or no support for dynamic alteration of grading criteria, which means that refactoring of test classes after deployment is not easily done. Our goal is to create a framework that addresses these weaknesses. This framework needs to:
    1. Support assignments that have interactive and graphical components.
    2. Handle data structures in student programs such as lists, stacks, trees, and hash tables.
    3. Assign partial credit automatically when the instructor can predict errors in advance.
    4. Provide additional answer-clustering information to help graders identify and assign consistent partial credit for incorrect output that was not predefined.
    Most importantly, these tools, collectively called RPM (short for Rapid Program Management), should interface effectively with our current grading support framework without requiring large amounts of rewriting or refactoring of test code.
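    The abstract does not show RPM's API, but the idea behind point 3 can be sketched in a few lines of Java. Everything below (the class name, method names, and credit values) is hypothetical and only illustrates how predicted wrong outputs might be mapped to partial credit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, not RPM's actual API: award partial credit when a
// student's output matches a mistake the instructor predicted in advance.
public class PartialCreditCheck {
    private final String expected;
    // Maps a predicted wrong output to the fraction of credit it earns.
    private final Map<String, Double> predictedErrors = new LinkedHashMap<>();

    public PartialCreditCheck(String expected) {
        this.expected = expected;
    }

    public PartialCreditCheck onError(String wrongOutput, double credit) {
        predictedErrors.put(wrongOutput, credit);
        return this;
    }

    /** Returns the credit fraction in [0, 1] for the student's output. */
    public double grade(String actual) {
        if (expected.equals(actual)) return 1.0;
        // A predicted mistake (e.g., a reversed list) earns partial credit;
        // anything unrecognized earns nothing and is left for clustering.
        return predictedErrors.getOrDefault(actual, 0.0);
    }

    public static void main(String[] args) {
        PartialCreditCheck check = new PartialCreditCheck("[1, 2, 3]")
                .onError("[3, 2, 1]", 0.5)   // reversed list: half credit
                .onError("[1, 2]", 0.25);    // dropped last element: quarter credit
        System.out.println(check.grade("[3, 2, 1]")); // prints 0.5
    }
}
```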

    Developing Applications to Automatically Grade Introductory Visual Basic Courses

    There are many challenges unique to introductory programming courses. For novice programmers, a first programming class can lead to a great deal of stress and frustration. Regular programming assignments are often key to developing an understanding of best practices and the coding process, and students need practice with these new concepts to reinforce the underlying principles. Providing timely and consistent feedback on these assignments can be a challenge for instructors, particularly in large classes, and plagiarism is also a concern. Unfortunately, traditional tools are not well suited to introductory courses. This paper describes how AppGrader, a static code assessment tool, can be used to address the challenges of an introductory programming class. The tool assesses students' understanding and application of programming fundamentals as defined in the current ACM/IEEE Information Technology Curriculum Guidelines. Results from a bench test and directions for future research are provided.

    Teaching how to program using automated assessment and functional glossy games (Experience Report)

    Our department has long been an advocate of the functional-first school of programming and has been teaching Haskell as a first language in introductory programming course units for 20 years. Although the functional style is largely beneficial, it needs to be taught in an enthusiastic and captivating way to fight the unusually high computer science drop-out rates and to appeal to a heterogeneous population of students. This paper reports our experience of restructuring, over the last 5 years, an introductory laboratory course unit that trains hands-on functional programming concepts and good software development practices. We have been using game programming to keep students motivated, and following a methodology that hinges on test-driven development and continuous bidirectional feedback. We summarise successes and missteps, and how we have learned from our experience to arrive at a model for comprehensive and interactive functional game programming assignments and a general functionally-powered automated assessment platform that together provide a more engaging learning experience for students. In our experience, we have been able to teach increasingly advanced functional programming concepts while improving student engagement.
    The authors would like to thank the precursors of the 20-year functional programming culture and FPro unit at our university, and all the instructors and TAs who have been involved in the PLab unit throughout the years. This work is financed by the ERDF (European Regional Development Fund) through the Operational Programme for Competitiveness and Internationalisation (COMPETE 2020) within project POCI-01-0145-FEDER-006961, and by National Funds through the Portuguese funding agency FCT (Fundação para a Ciência e a Tecnologia) as part of project UID/EEA/50014/2013.

    Understanding and Addressing Misconceptions in Introductory Programming: A Data-Driven Approach

    With the expansion of computer science (CS) education, CS teachers in K-12 schools should be cognizant of student misconceptions and be prepared to help students establish accurate understanding of computer science and programming. This exploratory design-based research (DBR) study implemented a data-driven approach to identify secondary school students’ misconceptions using both their compilation and test errors, and to provide targeted feedback to promote students’ conceptual change in introductory programming. Research subjects were two groups of high school students enrolled in two sections of a Java-based programming course in a 2017 summer residential program for gifted and talented students. This study consisted of two stages. In the first stage, students of group 1 took the introductory programming class and used an automated learning system, Mulberry, which collected data on student problem-solving attempts. Data analysis was conducted to identify common programming errors students demonstrated in their programs and relevant misconceptions. In the second stage, targeted feedback to address these misconceptions was designed using principles from conceptual change and feedback theories and added to Mulberry. When students of group 2 took the same introductory programming class and solved programming problems in Mulberry, they received the targeted feedback to address their misconceptions. Data analysis was conducted to assess how the feedback affected the evolution of students’ (mis)conceptions.
    Using students’ erroneous solutions, 55 distinct compilation errors were identified, and 15 of them were categorized as common ones. The 15 common compilation errors accounted for 92% of all compilation errors. Based on the 15 common compilation errors, three underlying student misconceptions were identified: deficient knowledge of fundamental Java program structure, misunderstandings of Java expressions, and confusion about Java variables. In addition, 10 common test errors were identified based on nine difficult problems. The results showed that 54% of all test errors were related to the difficult problems, and the 10 common test errors accounted for 39% of all test errors of the difficult problems. Four common student misconceptions were identified based on the 10 common test errors: misunderstandings of Java input, misunderstandings of Java output, confusion about Java operators, and forgetting to consider special cases.
    Both quantitative and qualitative data analyses were conducted to see whether and how the targeted feedback affected students’ solutions. Quantitative analysis indicated that targeted feedback messages enhanced students’ rates of improving erroneous solutions. Group 2 students showed significantly higher improvement rates in all erroneous solutions and solutions with common errors compared to group 1 students. Within group 2, solutions with targeted feedback messages resulted in significantly higher improvement rates compared to solutions without targeted feedback messages. Results suggest that with targeted feedback messages students were more likely to correct errors in their code. Qualitative analysis of students’ solutions in four selected cases determined that students of group 2, when improving their code, made fewer intermediate incorrect solutions than students in group 1. The targeted feedback messages appear to have helped to promote conceptual change.
    The results of this study suggest that a data-driven approach to understanding and addressing student misconceptions, that is, using student data from automated assessment systems, has the potential to improve students’ learning of programming and may help teachers build better understanding of their students’ common misconceptions and develop their pedagogical content knowledge (PCK). The use of automated assessment systems with misconception identification components may be helpful in pre-college introductory programming courses and so is encouraged as K-12 CS education expands. Researchers and developers of automated assessment systems should develop components that support identifying common student misconceptions using both compilation and non-compilation errors. Future research should continue to investigate the use of targeted feedback in automated assessment systems to address students’ misconceptions and promote conceptual change in computer science education.
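    As an illustration of the kind of targeted feedback described above, the sketch below matches common javac error messages against instructor-authored patterns and returns a misconception-oriented message. The class name, rules, and messages are all assumptions made for illustration; Mulberry's actual implementation is not described in the abstract:

```java
import java.util.List;
import java.util.Optional;
import java.util.regex.Pattern;

// Illustrative sketch: map a compiler error to targeted, misconception-
// oriented feedback instead of echoing the raw javac message.
public class TargetedFeedback {
    record Rule(Pattern errorPattern, String feedback) {}

    private static final List<Rule> RULES = List.of(
        new Rule(Pattern.compile("';' expected"),
                 "Every Java statement ends with a semicolon. Check the end of the flagged line."),
        new Rule(Pattern.compile("cannot find symbol"),
                 "Variables must be declared (with a type) before use, and names are case-sensitive."),
        new Rule(Pattern.compile("incompatible types: .* cannot be converted to"),
                 "The value's type does not match the variable's declared type. Review Java's types.")
    );

    /** Returns targeted feedback for the first matching rule, if any. */
    public static Optional<String> feedbackFor(String compilerMessage) {
        return RULES.stream()
                .filter(r -> r.errorPattern().matcher(compilerMessage).find())
                .map(Rule::feedback)
                .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(
            feedbackFor("Main.java:5: error: ';' expected").orElse("No targeted feedback."));
    }
}
```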

    Critiquing Antipatterns In Novice Code

    Students in introductory computer science courses are learning to program. Indeed, most students perceive that learning to code is the central topic explored in these courses. Students spend an enormous amount of time struggling to learn the syntax and understand the semantics of a particular language. Instructors spend a similar amount of time reading student code and explaining the meaning of the cryptic error messages displayed by compilers. Messages provided by compilers are intended to give feedback on the adherence of one’s code to the language specification and conventions. Unfortunately, these messages are geared towards experts who have a clear understanding of the language syntax and semantics and a deep model of what comprises a program and how a program is developed. These students are novices who lack a fundamental understanding of the structure of a program and have no basic mental model of how a program works. Novices make different kinds of mistakes than experts, and instructors need to spend a lot of time simply assisting novices in using compilers and understanding their output.
    In addition to mastering the syntax and semantics of their first programming language, novices are exposed to the question of what constitutes good design. Instructors can identify virtuous design choices and articulate areas of improvement, but contact time with students is limited, and waiting for in-person feedback or replies to personal messages can be a critical delay. Novices, still struggling to use the compiler, have not yet developed the sophisticated analytical processes employed by experts, and this is reflected in their design choices and the kinds of mistakes they make. When a novice approaches an instructor with a question, the instructor must often provide a balanced critique that assists the student with understanding both the structure and the design aspects of their own code.
    My research has focused on whether we can identify examples of early programming antipatterns that have arisen from our teaching experience, and describe different ways of detecting them automatically. Novice students may produce code that is close to a correct solution but contains syntactic errors; code critiquers attempt to salvage the promising portions of the student’s submission and suggest repairs in ways more meaningful than typical compiler error messages. Alternatively, a student misunderstanding may result in well-formed code that passes unit tests yet contains clear design flaws; through additional analysis, code critiquers can detect and flag these flaws. Finally, certain types of antipatterns can be anticipated and flagged by the instructor, based on the context of the course and the programming activity; code critiquers allow for customizable critique triggers and messages. This dissertation presents several key contributions to our understanding of novice misconceptions and their representation, diagnosis and repair using antipatterns. My research focuses on identifying antipatterns and detecting them in novice code, then using this information to provide the student with a meaningful critique of their work. I have developed WebTA, a tool to critique student programs in introductory computer science courses.
    WebTA is used to teach students test-driven agile development methods through small cycles of teaching, coding integrated with testing, and immediate feedback. Through the use of WebTA in introductory computer science courses since 2014, I have amassed a significant corpus of novice programmer submission data. Lastly, I have compiled a library of antipatterns found in novice code.
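    A minimal sketch of the customizable critique triggers mentioned above: an instructor registers an antipattern (here the classic novice habit of comparing a boolean to a literal) together with a message shown to the student. All names here are illustrative assumptions; WebTA's real analysis is AST-based and considerably richer than this regex example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of a code critiquer with instructor-customizable
// triggers and messages, in the spirit described in the abstract.
public class AntipatternCritiquer {
    record Critique(String name, Pattern trigger, String message) {}

    private final List<Critique> critiques = new ArrayList<>();

    public void register(String name, String regex, String message) {
        critiques.add(new Critique(name, Pattern.compile(regex), message));
    }

    /** Returns one finding per registered antipattern present in the source. */
    public List<String> critique(String source) {
        List<String> findings = new ArrayList<>();
        for (Critique c : critiques) {
            if (c.trigger().matcher(source).find()) {
                findings.add(c.name() + ": " + c.message());
            }
        }
        return findings;
    }

    public static void main(String[] args) {
        AntipatternCritiquer critiquer = new AntipatternCritiquer();
        // A classic novice antipattern: comparing a boolean to a literal.
        critiquer.register("redundant-boolean-comparison", "==\\s*true",
                "A boolean can be used directly; 'if (done)' reads better than 'if (done == true)'.");
        critiquer.critique("if (done == true) { stop(); }").forEach(System.out::println);
    }
}
```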

    Automated Program Analysis for Novice Programmers

    This paper describes how to adapt a static code analyzer to provide feedback to novice programmers and their teachers. Current analyzers have been built to give feedback to experienced programmers who work on software projects or systems. The type of feedback and the type of analysis of these tools focus on mistakes that are relevant within that context and help with debugging software systems. When teaching novice programmers, this type of advice is often not particularly useful; it would instead be more useful to apply these techniques to identify problems in students' understanding of important programming concepts. This paper first explores in what respects static analyzers can support the learning and teaching of programming, and what can be implemented based on existing static analysis technology. It presents an extension of the static analyzer PMD that creates feedback more valuable to novice programmers. To answer the question of whether these techniques can find conceptual mistakes characteristic of novice programmers, we ran the extended analyzer over a number of student projects and compared the results with those from publicly available mature software projects.
    Blok, T.; Fehnker, A. (2017). Automated Program Analysis for Novice Programmers. In Proceedings of the 3rd International Conference on Higher Education Advances, Editorial Universitat Politècnica de València, pp. 1138-1146. https://doi.org/10.4995/HEAD17.2017.5533
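    As a rough illustration of what such an extension can look like, the sketch below follows the shape of a custom rule in PMD 6's Java rule API: a visitor that flags deeply nested if-statements, a pattern often seen in novice code. This rule is an assumption made for illustration, not one of the rules from the paper, and it would still need to be registered in a ruleset XML file to run:

```java
import java.util.List;

import net.sourceforge.pmd.lang.java.ast.ASTIfStatement;
import net.sourceforge.pmd.lang.java.rule.AbstractJavaRule;

// Illustrative PMD 6-style rule (not from the paper): deeply nested ifs
// often signal that a novice has not yet discovered boolean operators
// or early returns.
public class DeeplyNestedIfRule extends AbstractJavaRule {

    private static final int MAX_DEPTH = 3; // illustrative threshold

    @Override
    public Object visit(ASTIfStatement node, Object data) {
        // Count enclosing if-statements to measure nesting depth.
        List<ASTIfStatement> enclosing = node.getParentsOfType(ASTIfStatement.class);
        if (enclosing.size() >= MAX_DEPTH) {
            // Report the finding; the student-facing message is configured
            // in the rule's XML definition.
            addViolation(data, node);
        }
        return super.visit(node, data);
    }
}
```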

    TLAD 2010 Proceedings: 8th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the eighth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2010), which is once again held as a workshop of BNCOD 2010, the 27th International Information Systems Conference. TLAD 2010 is held on 28th June at the beautiful Dudhope Castle at Abertay University, just before BNCOD, and hopes to be just as successful as its predecessors.
    The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year the workshop aims to continue the tradition of bringing together both database teachers and researchers in order to share good learning, teaching and assessment practice and experience, and to further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, through serving on the programme committee, and attending and presenting papers.
    This year, the workshop includes an invited talk given by Richard Cooper (of the University of Glasgow), who will present a discussion and some results from the Database Disciplinary Commons which was held in the UK over the academic year. Due to the healthy number of high-quality submissions this year, the workshop will also present seven peer-reviewed papers and six refereed poster papers. Of the seven presented papers, three will be presented as full papers and four as short papers. These papers and posters cover a number of themes, including: approaches to teaching databases, e.g. group-centered and problem-based learning; use of novel case studies, e.g. forensics and XML data; techniques and approaches for improving teaching and student learning processes; assessment techniques, e.g. peer review; methods for improving students' abilities to develop database queries and E-R diagrams; and e-learning platforms for supporting teaching and learning.