7 research outputs found

    How teachers would help students to improve their code


    On Novices' Interaction with Compiler Error Messages: A Human Factors Approach

    The difficulty in understanding compiler error messages can be a major impediment to novice student learning. To alleviate this issue, multiple researchers have run experiments enhancing compiler error messages in automated assessment tools for programming assignments. The conclusions reached by these published experiments appear to conflict. We examine these experiments and propose five potential reasons for the inconsistent conclusions concerning enhanced compiler error messages: (1) students do not read them, (2) researchers are measuring the wrong thing, (3) the effects are hard to measure, (4) the messages are not properly designed, (5) the messages are properly designed, but students do not understand them in context due to increased cognitive load. We constructed mixed-methods experiments designed to address reasons 1 and 5 with a specific automated assessment tool, Athene, that previously reported inconclusive results. Testing student comprehension of the enhanced compiler error messages outside the context of an automated assessment tool demonstrated their effectiveness over standard compiler error messages. Quantitative results from a 60-minute one-on-one think-aloud study with 31 students did not show a substantial increase in student learning outcomes over the control. However, qualitative results from the think-aloud study indicated that most students read the enhanced compiler error messages and generally make effective changes after encountering them.
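The message-enhancement idea this abstract tests can be sketched as a small rule table that appends a novice-oriented hint to a raw compiler error. The rules and hint wording below are illustrative assumptions, not Athene's actual messages.

```python
import re

# Hypothetical rules mapping raw compiler-error patterns to novice-oriented
# hints; real ECEM tools curate these from human-factors research.
ENHANCEMENT_RULES = [
    (re.compile(r"';' expected"),
     "Java statements must end with a semicolon; check the end of the "
     "reported line for a missing ';'."),
    (re.compile(r"cannot find symbol"),
     "The compiler does not recognise a name you used; check its spelling "
     "and make sure it is declared before this line."),
]

def enhance(raw_message: str) -> str:
    """Append a hint when a raw error matches a rule, else pass it through."""
    for pattern, hint in ENHANCEMENT_RULES:
        if pattern.search(raw_message):
            return raw_message + "\n  Hint: " + hint
    return raw_message  # fall back to the standard compiler message

print(enhance("Main.java:3: error: ';' expected"))
```

The fallback branch matters: an enhancement layer should never hide the original message, since students eventually must learn to read standard compiler output.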

    Beyond Automated Assessment: Building Metacognitive Awareness in Novice Programmers in CS1

    The primary task of learning to program in introductory computer science courses (CS1) cognitively overloads novices and must be better supported. Several recent studies have attempted to address this problem by understanding the role of metacognitive awareness in novices learning programming. These studies have focused on teaching metacognitive awareness to students by helping them understand the six stages of learning, so that students know where they are in the problem-solving process, but these approaches are not scalable. One way to address scalability is to implement features in an automated assessment tool (AAT) that build metacognitive awareness in novice programmers. Currently, AATs that provide feedback messages to students can be said to implement the fifth and sixth learning stages integral to metacognitive awareness: implement solution (compilation) and evaluate implemented solution (test cases). The computer science education (CSEd) community is actively engaged in research on the efficacy of compiler error messages (CEMs) and how best to enhance them to maximize student learning; whether enhanced compiler error messages (ECEMs) in AATs actually improve student learning is currently heavily disputed. The discussion on the effectiveness of ECEMs in AATs remains focused on only one learning stage critical to metacognitive awareness in novices: implement solution. This research carries out an ethnomethodologically informed study of CS1 students via think-aloud studies and interviews in order to propose a framework for designing an AAT that builds metacognitive awareness by supporting novices through all six stages of learning. The results of this study provide two important contributions. The first is confirmation that ECEMs designed from a human-factors approach are more helpful to students than standard compiler error messages. The second is that the observations and post-assessment interviews revealed the difficulties novice programmers often face in developing metacognitive awareness when using an AAT. Understanding these barriers revealed concrete ways to help novice programmers through all six stages of the problem-solving process, presented as a framework of features which, when implemented properly, provides a scalable way to implicitly produce metacognitive awareness in novice programmers.

    Semi-automated assessment of programming languages for novice programmers

    There has recently been an increased emphasis on the importance of learning programming languages, not only in higher education but also in secondary schools. Students in a variety of departments, such as physics, mathematics and engineering, have also started learning programming languages as part of their academic courses. Assessment of students' programming solutions is therefore important for developing their programming skills. Many Computer Based Assessment (CBA) systems utilise multiple-choice questions (MCQs) to evaluate students' performance. However, MCQs lack the ability to comprehensively assess students' knowledge. Thus, other forms of programming solutions are required to assess students' knowledge. This research aims to develop a semi-automated assessment framework for novice programmers, utilising a computer to support the marking process. The research also focuses on ensuring the consistency of feedback. A novel marking process model is developed based on the semi-automated assessment approach, which supports a new way of marking termed "segmented marking". A study is carried out to investigate and demonstrate the feasibility of the segmented marking technique. In addition, the new marking process model is developed based on the results of the feasibility study, and two novel marking process models are presented based on segmented marking, namely the full-marking and partial-marking process models. The Case-Based Reasoning (CBR) cycle is adopted in the marking process models in order to ensure the consistency of feedback. User interfaces of the prototype marking tools (full and partial) are designed and developed based on the marking process models and the user interface design requirements. The experimental results show that the full and partial marking techniques are feasible for use in formative assessment. Furthermore, the results highlight that the tools are capable of providing consistent and personalised feedback and that they considerably reduce markers' workload.

    Automating feedback to support learning in the programming problem-solving process

    In programming education, the development of students' programming skills through practical programming assignments is a fundamental activity. In order to succeed in those assignments, instructors need to provide guidance, especially to novice learners, about the programming process. We consider that this process, in the context of programming education, encompasses the steps needed to solve a computer-programming problem, executed not linearly but iteratively. We took into consideration the programming process adapted from Polya (1957) to computer-programming problem solving, which includes the following stages [Pól57]: (1) Understand the problem; (2) Plan the solution; (3) Implement the program; and (4) Look back. Focusing on the fourth stage, we want students to be proficient in correcting their strategies and, through critical reflection, able to refactor their code with attention to good programming quality. During this doctoral research, we developed an approach to generate formative feedback to support programming problem solving in the last stage of the programming process: solution evaluation. The challenge was to provide timely and elaborated feedback on programming assignments that stimulates students to reason about the problem and their solution, with the aim of improving their programming skills. As a requirement for generating feedback, we committed to not imposing the creation of new artifacts or instructional materials on instructors, instead taking advantage of a resource already created when a new programming assignment is proposed: the reference solution. We implemented and evaluated our proposal in an introductory programming course in a longitudinal study. The results go beyond what we initially expected, the improved code quality of the assignments: we observed that students felt stimulated and in fact improved their programming abilities, driven by the exercise of reasoning about a solution that was already working.
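The idea of deriving feedback from the reference solution alone, with no extra instructor-authored artifacts, can be sketched as follows. The function names and message wording are hypothetical illustrations, not the tool built in the thesis.

```python
# A minimal sketch, assuming feedback is generated by comparing a student's
# function against the instructor's reference solution on sample inputs.
def reference_feedback(student_fn, reference_fn, sample_inputs):
    """Compare a student's function to the reference on sample inputs."""
    messages = []
    for args in sample_inputs:
        expected = reference_fn(*args)
        actual = student_fn(*args)
        if actual != expected:
            messages.append(
                f"For input {args}, your solution returned {actual!r}, "
                f"but the reference solution returns {expected!r}.")
    if not messages:
        # The solution already works: prompt reflection on code quality,
        # matching the abstract's focus on the "Look back" stage.
        messages.append("Your solution matches the reference on all sampled "
                        "inputs; consider refactoring it for readability.")
    return messages

reference = lambda x: x * x   # instructor's reference solution
attempt = lambda x: x + x     # a student's buggy attempt
print(reference_feedback(attempt, reference, [(3,), (4,)]))
```

Note that even a correct submission still receives a prompt, which is how feedback can target the fourth, reflective stage rather than stopping at test-case pass/fail.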

    An Empirical Study of Iterative Improvement in Programming Assignments

    As automated tools for grading programming assignments become more widely used, it is imperative that we better understand how students are utilizing them. Other researchers have provided helpful data on the role automated assessment tools (AATs) have played in the classroom. In order to investigate improved practices in using AATs for student learning, we sought to better understand how students iteratively modify their programs toward a solution by analyzing more than 45,000 student submissions over 7 semesters of an introductory (CS1) programming course. The resulting metrics allowed us to study what steps students took toward solutions for programming assignments. This paper considers the incremental changes students make and the corresponding score between sequential submissions, measured by metrics including source lines of code, cyclomatic (McCabe) complexity, state space, and the 6 Halstead complexity measures of the program. We demonstrate the value of throttling submissions and show that generating software metrics for analysis can help instructors better guide student learning.
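Two of the metrics named above can be approximated in a few lines. This is a minimal sketch using keyword counting on a Python submission rather than a real parser, so the values are approximations of what a production tool would compute.

```python
import re

# Approximate decision points by keyword; \b avoids matching "if" inside
# identifiers such as "classify".
DECISION_RE = re.compile(r"\b(if|elif|for|while|and|or)\b")

def source_lines_of_code(source: str) -> int:
    """Count non-blank, non-comment lines."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def cyclomatic_complexity(source: str) -> int:
    """McCabe complexity approximated as 1 + number of decision points."""
    return 1 + len(DECISION_RE.findall(source))

submission = """\
# student submission
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(source_lines_of_code(submission))   # 6
print(cyclomatic_complexity(submission))  # 3
```

Tracking how these numbers change between a student's sequential submissions is what lets an instructor see whether the student is making small iterative refinements or wholesale rewrites.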