4 research outputs found

    Automatic Evaluation of Python and C Programs with codecheck

    This project enhances the codecheck autograder by implementing automatic evaluation of C and Python programs. Two security approaches are implemented and analyzed to achieve this goal: the first isolates submissions through virtualization, and the second hardens the host operating system. I describe both implementations and measure their performance to determine which approach is more efficient.
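    As a rough illustration of the host-hardening approach, the following sketch (hypothetical, and not codecheck's actual code) runs a submission in a child process under POSIX resource limits so a runaway or hostile program cannot exhaust the grading machine; the file name submission.py and the specific limits are assumptions.

        import resource
        import subprocess

        def limit_resources():
            # Illustrative caps: 2 seconds of CPU time, 256 MB of address space.
            mem = 256 * 1024 ** 2
            resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
            resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

        def run_submission(path, stdin_data):
            # preexec_fn applies the limits inside the child (POSIX only);
            # the wall-clock timeout is a backstop on top of the CPU cap.
            return subprocess.run(
                ["python3", path],
                input=stdin_data,
                capture_output=True,
                text=True,
                timeout=5,
                preexec_fn=limit_resources,
            )

        result = run_submission("submission.py", "3 4\n")
        print(result.returncode, result.stdout)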

    Automating Programming Assignment Marking with AST Analysis

    This thesis presents a novel approach to automatically marking programming assignments. We hypothesize that the ASTs of correct student solutions will be more similar to the ASTs of reference solutions than those of incorrect student solutions, and that these similarities can be quantitatively measured. Our approach first preprocesses the ASTs before computing their tree edit distances. We then aggregate the student's set of edit distances from every reference solution into a final mark for the student. We have implemented our approach in our ClangAutoMarker tool. Our experiments demonstrate promising potential for reducing a human marker's workload, but further refinements are needed before its accuracy is suitable for a live classroom.
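    To make the pipeline concrete, here is a minimal sketch rather than the ClangAutoMarker implementation (which works on C ASTs via Clang): it parses two Python solutions, strips identifiers by keeping only node types, computes a simplified tree edit distance (a cheap stand-in for full Zhang-Shasha tree edit distance), and maps the minimum distance over the reference solutions to a mark; the 100 - 10*d mapping is purely illustrative.

        import ast

        def to_tree(src):
            # Reduce an AST to (node-type label, children) tuples; identifier
            # names are dropped, a simple form of preprocessing.
            def convert(node):
                return (type(node).__name__,
                        [convert(c) for c in ast.iter_child_nodes(node)])
            return convert(ast.parse(src))

        def subtree_size(t):
            return 1 + sum(subtree_size(c) for c in t[1])

        def tree_distance(a, b):
            (la, ca), (lb, cb) = a, b
            relabel = 0 if la == lb else 1
            # Align the two child sequences with a Levenshtein-style DP,
            # charging a deleted or inserted subtree its full size.
            m, n = len(ca), len(cb)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                d[i][0] = d[i - 1][0] + subtree_size(ca[i - 1])
            for j in range(1, n + 1):
                d[0][j] = d[0][j - 1] + subtree_size(cb[j - 1])
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    d[i][j] = min(
                        d[i - 1][j] + subtree_size(ca[i - 1]),      # delete subtree
                        d[i][j - 1] + subtree_size(cb[j - 1]),      # insert subtree
                        d[i - 1][j - 1] + tree_distance(ca[i - 1], cb[j - 1]),
                    )
            return relabel + d[m][n]

        student = to_tree("def add(a, b):\n    return a + b\n")
        references = [
            to_tree("def add(x, y):\n    return x + y\n"),
            to_tree("def add(x, y):\n    total = x + y\n    return total\n"),
        ]
        distances = [tree_distance(student, ref) for ref in references]
        mark = max(0, 100 - 10 * min(distances))   # illustrative mapping only
        print(distances, mark)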

    An Exploration of Student Reasoning about Undergraduate Computer Science Concepts: An Active Learning Technique to Address Misconceptions

    Computer science (CS) is a popular but often challenging major for undergraduates. As the importance of computing in the US and world economies continues to grow, the demand for successful CS majors grows accordingly. However, retention rates are low, particularly for under-represented groups such as women and racial minorities. Computing education researchers have begun to investigate causes and explore interventions to improve the success of CS students, from K-12 through higher education. In the undergraduate CS context, for example, student difficulties with pointers, functions, loops, and control flow have been observed. We and others have used student responses to multiple-choice questions aimed at identifying misconceptions, retroactively examined code samples and design artifacts, and conducted interviews in an attempt to understand the nature of these problems. Interventions to address these problems often apply evidence-based active learning techniques in CS classrooms as a way to engage students and improve learning.

    In this work, I employ a human-centered approach, one in which data collection focuses on student thought processes as evidenced in their speech and writing. I seek to determine what students are thinking not only through what can be surmised in retrospect from the artifacts they create, but also by gaining insight into their thoughts as they engage in the design, implementation, and analysis of those artifacts and as they reflect on those processes and artifacts shortly after. For my dissertation work, I conducted four studies. The first three were formative:

    1. a conceptual assessment survey asking students to "Please explain your reasoning" after each answer to code tracing/execution questions, followed by task-based interviews with a smaller, different group of students;
    2. a "coding in the wild" think-aloud study that recorded the screen and audio of students as they implemented a simple program and explained their thought process;
    3. interview analyses of student design diagrams/documentation in a software engineering course, tasking students to explain their designs and comparing what they believed they had designed with what their submitted documentation actually shows.

    These formative studies led to some key insights, including the benefits students can gain from feedback, students' tendency to avoid complexity when programming or when encountering concepts they do not fully grasp, the nature of student struggles with the planning stages of problem solving, and the fragile understanding of some key CS concepts that students form. Leveraging the benefits of feedback, I used guided prompts based on the misconceptions uncovered in my formative studies to conduct a final, evaluative study. This study evaluates the benefits of a guided feedback intervention for learning introductory programming concepts and compares the costs and benefits associated with two forms of feedback. The first is an active learning technique I developed and call misconception-based feedback (MBF), in which peers working in pairs use prompts based on misconceptions to guide their discussion of a recently completed coding assignment. The second, a human autograder (HAG) group, acts as a control: HAG simulates a typical autograder, supplying test cases and correct solutions, but uses a human stand-in for the computer.

    In both conditions, one student uses the provided prompts to guide the discussion while the other student responds and interacts with their code based on the prompts. I captured screen and audio recordings of these discussions. Participants completed conceptual pre-tests and post-tests that asked them to explain their reasoning. I hypothesized that the MBF intervention would offer a valuable way to increase learning, address misconceptions, and engage students, in a form feasible in CS courses of any size, with benefits over the HAG intervention. Results show that for questions involving parameter passing, particularly pass-by-reference versus pass-by-value semantics with pointers, there were significant improvements in learning outcomes for the MBF group but not the HAG group.
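    For readers unfamiliar with the misconception in question, here is a small illustration, written in Python rather than the C of the original studies since Python has no pointers: rebinding a parameter never affects the caller (as with C pass-by-value), while mutating a shared object does (as with passing a pointer).

        def rebind(x):
            x = 99            # rebinds the local name only; the caller sees no change

        def mutate(items):
            items.append(99)  # mutates the object the caller also references

        n = 1
        rebind(n)
        print(n)      # 1 -- unchanged, analogous to C pass-by-value

        nums = [1]
        mutate(nums)
        print(nums)   # [1, 99] -- visible to the caller, analogous to passing a pointer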

    Automating Feedback to Support Learning in the Programming Problem-Solving Process

    In programming education, developing students' programming skills through practical programming assignments is a fundamental activity. For students to succeed in those assignments, instructors need to guide them, especially novice learners, through the programming process. We consider that this process, in the context of programming education, encompasses the steps needed to solve a computer-programming problem; these steps are executed not linearly but iteratively. We adopt the programming process adapted from Polya (1957) to computer-programming problem solving, which includes the following stages [Pól57]: (1) understand the problem; (2) plan the solution; (3) implement the program; and (4) look back. Focusing on the fourth stage, we want students to become proficient in correcting their strategies and, through critical reflection, able to refactor their code with attention to good programming quality. During this doctoral research, we developed an approach to generate formative feedback that supports programming problem solving in the last stage of the programming process: solution evaluation. The challenge was to provide timely, elaborated feedback on programming assignments that stimulates students to reason about the problem and their solution and to improve their programming skills. As a requirement for generating feedback, we committed not to impose the creation of new artifacts or instructional materials on instructors, instead taking advantage of a resource they already produce when proposing a new programming assignment: the reference solution. We implemented and evaluated our proposal in an introductory programming course in a longitudinal study. The results go beyond the improvement in the assignments' code quality we initially expected: we observed that students felt stimulated and, in fact, improved their programming abilities, driven by the exercise of reasoning about a solution that was already working.
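    A hedged sketch of the general idea, not the thesis's actual tool: with the reference solution as the only instructor-provided artifact, elaborated feedback can be derived by running the student's program and the reference program on the same inputs and turning mismatches into hints rather than a bare pass/fail verdict; the file names and test inputs below are assumptions.

        import subprocess

        def run(path, stdin_data):
            # Run a program and capture what it prints for the given input.
            return subprocess.run(["python3", path], input=stdin_data,
                                  capture_output=True, text=True, timeout=5).stdout

        def feedback(student, reference, test_inputs):
            hints = []
            for case in test_inputs:
                got, expected = run(student, case), run(reference, case)
                if got != expected:
                    hints.append(f"For input {case!r}: expected {expected!r} but your "
                                 f"program printed {got!r}. Re-check how you handle this case.")
            return hints or ["All sampled cases match the reference solution. "
                             "Consider refactoring for readability."]

        for hint in feedback("student.py", "reference.py", ["1 2\n", "0 0\n"]):
            print(hint)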