146 research outputs found

    Program Comprehension: Identifying Learning Trajectories for Novice Programmers

    This working group asserts that Program Comprehension (PC) plays a critical part in the writing process. For example, this abstract is written from a basic draft that we have edited and revised until it clearly presents our idea. Similarly, a program is written in an incremental manner, with each step being tested, debugged and extended until the program achieves its goal. Novice programmers should develop their program comprehension as they learn to code, so that they are able to read and reason about code while they are writing it. To foster such competencies our group has identified two main goals: (1) to collect and define learning activities that explicitly cover key components of program comprehension and (2) to define possible learning trajectories that will guide teachers using those learning activities in their CS0/CS1 or K-12 courses. [...]

    Crowdsourcing Content Creation for SQL Practice

    Crowdsourcing refers to the act of using the crowd to create content or to collect feedback on particular tasks or ideas. Within computer science education, crowdsourcing has been used -- for example -- to create rehearsal questions and programming assignments. As a part of their computer science education, students often learn about relational databases and how to work with them using SQL statements. In this article, we describe a system for practicing SQL statements. The system uses teacher-provided topics and assignments, augmented with crowdsourced assignments and reviews. We study how students use the system, what sort of feedback students provide on the teacher-generated and crowdsourced assignments, and how practice affects the feedback. Our results suggest that students rate assignments highly, and there are only minor differences between assignments generated by students and assignments generated by the instructor.

    Experience Report: Thinkathon -- Countering an "I Got It Working" Mentality with Pencil-and-Paper Exercises

    Goal-directed problem-solving labs can lead a student to believe that the most important achievement in a first programming course is to get programs working. This is counter to research indicating that code comprehension is an important developmental step for novice programmers. We observed this in our own CS-0 introductory programming course, and furthermore observed that students were not making the connection between code comprehension in labs and a final examination that required solutions to pencil-and-paper comprehension and writing exercises, where sound understanding of programming concepts is essential. Realising these deficiencies late in our course, we ran three 3-hour optional revision evenings just days before the exam. Based on a mastery learning philosophy, students were expected to work through a bank of around 200 pencil-and-paper exercises. By analogy with a machine-based hackathon, we called this a Thinkathon. Students completed pre and post questionnaires about their experience of the Thinkathon. While we find that Thinkathon attendance positively influences final grades, we believe our reflection on the overall experience is of greater value. We report that: respected methods for developing code comprehension may not be enough on their own; novices must exercise their developing skills away from machines; and there are social learning outcomes in programming courses, currently implicit, that we should make explicit.

    Experimenting with Model Solutions as a Support Mechanism

    We describe an experiment from an introductory programming course where we provided students an opportunity to access model solutions of programming assignments they had not yet completed. Access to model solutions was controlled with coins, which students collected by completing programming assignments. The quantity of coins was limited so that students could buy solutions to at most one tenth of the course assignments. Compared to the traditional approach, where access to model solutions is granted only after the assignment is completed or the assignment deadline has passed, students seemed to enjoy the opportunity more, and collecting coins motivated some students to complete more assignments. Collected coins were mostly spent close to deadlines and on more difficult assignments. Overall, the use of coins and model solutions may be a viable option for providing students additional support. Data from the use of coins and model solutions could also be used to identify students who could benefit from additional guidance.
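    The coin mechanic described in this abstract can be illustrated with a minimal sketch. All names here are hypothetical, and the cap of "one tenth of the course assignments" is modelled as a limit on purchases; the paper's actual implementation may differ.

    ```python
    class CoinEconomy:
        """Hypothetical model of the coin-for-solutions mechanic:
        one coin per completed assignment, solutions purchasable
        for at most one tenth of all course assignments."""

        def __init__(self, total_assignments: int, coin_price: int = 1):
            self.coin_price = coin_price
            # Cap: model solutions for at most 10% of assignments.
            self.purchase_cap = total_assignments // 10
            self.coins = 0
            self.completed = set()
            self.purchased = set()

        def complete(self, assignment_id: int) -> None:
            # Earn one coin per newly completed assignment.
            if assignment_id not in self.completed:
                self.completed.add(assignment_id)
                self.coins += 1

        def buy_solution(self, assignment_id: int) -> bool:
            # Unlock a model solution for a not-yet-completed assignment.
            if (assignment_id in self.completed
                    or assignment_id in self.purchased
                    or len(self.purchased) >= self.purchase_cap
                    or self.coins < self.coin_price):
                return False
            self.coins -= self.coin_price
            self.purchased.add(assignment_id)
            return True
    ```

    Under this reading, a student in a 100-assignment course could unlock at most 10 model solutions, and only by first earning coins through completed work.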

    What Do We Think We Think We Are Doing?: Metacognition and Self-Regulation in Programming

    Metacognition and self-regulation are popular areas of interest in programming education, and they have been extensively researched outside of computing. While computing education researchers should draw upon this prior work, programming education is unique enough that we should explore the extent to which prior work applies to our context. The goal of this systematic review is to support research on metacognition and self-regulation in programming education by synthesizing relevant theories, measurements, and prior work on these topics. By reviewing papers that mention metacognition or self-regulation in the context of programming, we aim to provide a benchmark of our current progress towards understanding these topics and recommendations for future research. In our results, we discuss eight common theories that are widely used outside of computing education research, half of which are commonly used in computing education research. We also highlight 11 theories on related constructs (e.g., self-efficacy) that have been used successfully to understand programming education. Towards measuring metacognition and self-regulation in learners, we discuss seven instruments and protocols that have been used and highlight their strengths and weaknesses. To benchmark the current state of research, we examined papers that primarily studied metacognition and self-regulation in programming education and synthesized the reported interventions and results from that research. While the primary intended contribution of this paper is to support research, readers will also learn about developing and supporting metacognition and self-regulation of students in programming courses.

    Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs

    Teaching is the process of conveying knowledge and skills to learners. It involves preventing misunderstandings or correcting misconceptions that learners have acquired. Thus, effective teaching relies on solid knowledge of the discipline, but also a good grasp of where learners are likely to trip up or misunderstand. In programming, there is much opportunity for misunderstanding, and the penalties are harsh: failing to produce the correct syntax for a program, for example, can completely prevent any progress in learning how to program. Because programming is inherently computer-based, we have an opportunity to automatically observe programming behaviour -- more closely even than an educator in the room at the time. By observing students' programming behaviour, and surveying educators, we can ask: do educators have an accurate understanding of the mistakes that students are likely to make? In this study, we combined two years of the Blackbox dataset (with more than 900 thousand users and almost 100 million compilation events) with a survey of 76 educators to investigate which mistakes students make while learning to program Java, and whether the educators could make an accurate estimate of which mistakes were most common. We find that educators' estimates do not agree with one another or the student data, and discuss the implications of these results

    Automatic question generation about introductory programming code

    Many students who learn to program end up writing code they do not understand. Most of the available code evaluation systems evaluate the submitted solution functionally, not the knowledge of the person who submitted it. This dissertation proposes a system that generates questions about the code submitted by the student, analyses their answers and returns the correct answers. In this way, students reflect on the code they have written and the teachers of the programming courses can better pinpoint their difficulties. We carried out an experiment with undergraduate and master's students in Computer Science and related degrees in order to understand their difficulties and test the prototype's robustness. We concluded that most students, although understanding simple details of the code they write, do not understand the behaviour of the program entirely, especially with respect to program state. Improvements to the prototype and to the conduct of future experiments are also suggested.

    Fostering Program Comprehension in Novice Programmers - Learning Activities and Learning Trajectories

    This working group asserts that Program Comprehension (ProgComp) plays a critical part in the process of writing programs. For example, this paper is written from a basic draft that was edited and revised until it clearly presented our idea. Similarly, a program is written incrementally, with each step tested, debugged and extended until the program achieves its goal. Novice programmers should develop program comprehension skills as they learn to code so that they are able both to read and reason about code created by others, and to reflect on their code when writing, debugging or extending it. To foster such competencies our group identified two main goals: (g1) to collect and define learning activities that explicitly address key components of program comprehension and (g2) to define tentative theoretical learning trajectories that will guide teachers as they select and sequence those learning activities in their CS0/CS1/CS2 or K-12 courses. The WG has completed the first goal and laid down a strong foundation towards the second goal, as presented in this report. After a thorough literature review, a detailed description of the Block Model is provided, as this model has been used with a dual purpose: to classify and present an extensive list of ProgComp tasks, and to describe a possible learning trajectory for a complex task, covering different cells of the Block Model matrix. The latter is intended to help instructors decompose complex tasks and identify which aspects of ProgComp are being fostered.

    Emergence of computing education as a research discipline

    Get PDF
    This thesis investigates the changing nature and status of computing education research (CER) over a number of years, specifically addressing the question of whether computing education can legitimately be considered a research discipline. The principal approach to addressing this question is an examination of the published literature in computing education conferences and journals. A classification system was devised for this literature, one goal of the system being to clearly identify some publications as research – once a suitable definition of research was established. When the system is applied to a corpus of publications, it becomes possible to determine the proportion of those publications that are classified as research, and thence to detect trends over time and similarities and differences between publication venues. The classification system has been applied to all of the papers over several years in a number of major computing education conferences and journals. Much of the classification was done by the author alone, and the remainder by a team that he formed in order to assess the inter-rater reliability of the classification system. This classification work led to two subsequent projects, led by Associate Professor Judy Sheard and Professor Lauri Malmi, that devised and applied further classification systems to examine the research approaches and methods used in the work reported in computing education publications. Classification of nearly 2000 publications over ranges of 3-10 years uncovers both strong similarities and distinct differences between publication venues. It also establishes clear evidence of a substantial growth in the proportion of research papers over the years in question. These findings are considered in the light of published perspectives on what constitutes a discipline of research, and lead to a confident assertion that computing education can now rightly be considered a discipline of research